Kidney disease

Kidney disease, or renal disease, technically referred to as nephropathy, is damage to or disease of a kidney. Nephritis is an inflammatory kidney disease and has several types according to the location of the inflammation. Inflammation can be diagnosed by blood tests. Nephrosis is non-inflammatory kidney disease. Nephritis and nephrosis can give rise to nephritic syndrome and nephrotic syndrome respectively. Kidney disease usually causes a loss of kidney function to some degree and can result in kidney failure, the complete loss of kidney function. Kidney failure is known as the end stage of kidney disease, at which point dialysis or a kidney transplant is the only treatment option.
Chronic kidney disease is defined as prolonged kidney abnormalities (functional and/or structural in nature) that last for more than three months. Acute kidney disease is now termed acute kidney injury and is marked by a sudden reduction in kidney function over seven days. In 2007, about one in eight Americans had chronic kidney disease; the rate has increased over time, with about one in seven Americans estimated to have CKD as of 2021.

With the increasing prevalence of chronic kidney disease, many treatment plans are available. Individuals can get blood or urine tests to assess the health and function of the kidneys. For severe kidney disease, dialysis and kidney transplants are the most common options. Dialysis is a treatment in which a machine acts as an artificial kidney, removing waste products and fluid from an individual's blood. A kidney transplant is the surgical procedure of transplanting a healthy kidney from a donor into an individual with kidney disease. Both of these options have their pros and cons, and success varies by individual.
Causes
Causes of kidney disease include deposition of immunoglobulin A antibodies in the glomerulus, administration of analgesics, xanthine oxidase deficiency, toxicity of chemotherapy agents, and long-term exposure to lead or its salts. Chronic conditions that can produce nephropathy include systemic lupus erythematosus, diabetes mellitus and high blood pressure (hypertension), which lead to lupus nephritis, diabetic nephropathy and hypertensive nephropathy, respectively.
Analgesics
One cause of nephropathy is the long-term use of pain medications known as analgesics. The pain medicines that can cause kidney problems include aspirin, acetaminophen, and nonsteroidal anti-inflammatory drugs (NSAIDs). This form of nephropathy is "chronic analgesic nephritis," a chronic inflammatory change characterized by loss and atrophy of tubules and interstitial fibrosis and inflammation (BRS Pathology, 2nd edition).
Specifically, long-term use of the analgesic phenacetin has been linked to renal papillary necrosis (necrotizing papillitis).
Diabetes
Diabetic nephropathy is a progressive kidney disease caused by angiopathy of the capillaries in the glomeruli. It is characterized by nephrotic syndrome and diffuse scarring of the glomeruli. It is particularly associated with poorly managed diabetes mellitus and is a primary reason for dialysis in many developed countries. It is classified as a small blood vessel complication of diabetes.
Autosomal dominant polycystic kidney disease
Gabow (1990) describes autosomal dominant polycystic kidney disease as a genetic disorder: "Autosomal dominant polycystic kidney disease (ADPKD) is the most common genetic disease, affecting a half million Americans. The clinical phenotype can result from at least two different gene defects. One gene that can cause ADPKD has been located on the short arm of chromosome 16."
Long COVID and Kidney Disease
Yende and Parikh (2021) discuss the effects that COVID-19 can have on a person with a pre-existing kidney disease: people with "frailty, chronic diseases, disability and immunodeficiency are at increased risk of kidney disease and progression to kidney failure, and infection with SARS-CoV-2 can further increase this risk" (Long COVID and Kidney Disease, 2021).
Diet
Higher dietary intake of animal protein, animal fat, and cholesterol may increase risk for microalbuminuria, a sign of kidney function decline, and generally, diets higher in fruits, vegetables, and whole grains but lower in meat and sweets may be protective against kidney function decline. This may be because sources of animal protein, animal fat, and cholesterol, and sweets are more acid-producing, while fruits, vegetables, legumes, and whole grains are more base-producing.
IgA nephropathy
IgA nephropathy is the most common glomerulonephritis throughout the world. Primary IgA nephropathy is characterized by deposition of the IgA antibody in the glomerulus. The classic presentation (in 40–50% of cases) is episodic frank hematuria, which usually starts within a day or two of a non-specific upper respiratory tract infection (hence "synpharyngitic"), as opposed to post-streptococcal glomerulonephritis, which occurs some time (weeks) after the initial infection. Less commonly, gastrointestinal or urinary infection can be the inciting agent. All of these infections have in common the activation of mucosal defenses and hence IgA antibody production.
Iodinated contrast media
Kidney disease induced by iodinated contrast media (ICM) is called contrast-induced nephropathy (CIN) or contrast-induced acute kidney injury (AKI). Currently, the underlying mechanisms are unclear, but there is a body of evidence that several factors, including induction of apoptosis, play a role.
Lithium
Lithium, a medication commonly used to treat bipolar disorder and schizoaffective disorders, can cause nephrogenic diabetes insipidus; its long-term use can lead to nephropathy.
Lupus
Despite expensive treatments, lupus nephritis remains a major cause of morbidity and mortality in people with relapsing or refractory disease.
Xanthine oxidase deficiency
Another possible cause of kidney disease is decreased function of xanthine oxidase in the purine degradation pathway. Xanthine oxidase degrades hypoxanthine to xanthine and then to uric acid. Xanthine is not very soluble in water; therefore, an increase in xanthine forms crystals (which can lead to kidney stones) and results in damage to the kidney. Xanthine oxidase inhibitors, like allopurinol, can cause nephropathy.
Polycystic disease of the kidneys
Another possible cause of nephropathy is the formation of cysts, or pockets containing fluid, within the kidneys. These cysts become enlarged with the progression of aging, causing renal failure. Cysts may also form in other organs, including the liver, brain, and ovaries. Polycystic kidney disease is a genetic disease caused by mutations in the PKD1, PKD2, and PKHD1 genes. This disease affects about half a million people in the US. Polycystic kidneys are susceptible to infections and cancer.
Toxicity of chemotherapy agents
Nephropathy can be associated with some therapies used to treat cancer. The most common form of kidney disease in cancer patients is acute kidney injury (AKI), usually due to volume depletion from the vomiting and diarrhea that follow chemotherapy, or occasionally due to kidney toxicities of chemotherapeutic agents. Kidney failure from the breakdown of cancer cells, usually after chemotherapy, is unique to onconephrology. Several chemotherapeutic agents, for example cisplatin, are associated with acute and chronic kidney injuries. Newer agents such as anti-vascular endothelial growth factor (anti-VEGF) therapies are associated with similar injuries, as well as proteinuria, hypertension and thrombotic microangiopathy.
Diagnosis
The standard diagnostic workup of suspected kidney disease includes a medical history, physical examination, a urine test, and an ultrasound of the kidneys (renal ultrasonography). An ultrasound is essential in the diagnosis and management of kidney disease.
Treatment
Treatment approaches for kidney disease focus on managing the symptoms, controlling the progression, and also treating co-morbidities that a person may have.
Dialysis
Transplantation
Millions of people across the world have kidney disease, and several thousand of them will need dialysis or a kidney transplant at the disease's end stage. In the United States, as of 2008, 16,500 people needed a kidney transplant. Of those, 5,000 died while waiting for a transplant. Currently, there is a shortage of donors: in 2007 there were only 64,606 kidney transplants performed in the world. This shortage is leading some countries to place monetary value on kidneys. Countries such as Iran and Singapore are eliminating their waiting lists by paying citizens to donate. The black market also accounts for 5–10 percent of transplants that occur worldwide; buying an organ through the black market is illegal in the United States. To be put on the waiting list for a kidney transplant, patients must first be referred by a physician, then choose and contact a donor hospital. Once they choose a donor hospital, patients must receive an evaluation to make sure they are suitable candidates for a transplant. To be a match for a kidney transplant, patients must match blood type and human leukocyte antigen factors with their donors. They must also have no antibody reactions against the donor's kidney.
Prognosis
Kidney disease can have serious consequences if it cannot be controlled effectively. Generally, the progression of kidney disease is from mild to serious. Some kidney diseases can cause kidney failure.
Notable people
Nathan W. Levin, American physician, organization founder and author
See also
Hematologic Diseases Information Service
Mesoamerican nephropathy, an enigmatic chronic kidney disease of Central America
Protein toxicity
References
== External links ==
Pseudallescheria boydii

Pseudallescheria boydii is a species of fungus classified in the Ascomycota. It is associated with some forms of eumycetoma/maduromycosis and is the causative agent of pseudallescheriasis. Typically found in stagnant and polluted water, it has been implicated in the infection of immunocompromised and near-drowned pneumonia patients. Treatment of infections with P. boydii is complicated by resistance to many of the standard antifungal agents normally used to treat infections by filamentous fungi. Pseudallescheria boydii fungal infection was the cause of death in three athletes submerged in the Yarkon River after a bridge collapsed during the 1997 Maccabiah Games.
Taxonomy
The fungus was originally described by American mycologist Cornelius Lott Shear in 1922 as a species of Allescheria. Shear obtained cultures from a patient of the Medical Department of the University of Texas. The microbe was apparently associated with a penetrating thorn injury the patient had incurred in his ankle while running barefoot 12 years before. The diseased area was found to contain hyphae-containing granules that, when cultured, led to the growth of the organism. Shear considered the fungus most closely related to Eurotiopsis gayoni (now called Allescheria gayoni). The specific epithet boydii refers to Dr. Mark F. Boyd, who sent Shear the specimen. David Malloch moved the species to the newly created genus Petriellidium in 1970; the genus name honours Lionello Petri (1875–1946), an Italian botanist and phytopathologist from Florence. The species was then transferred to the genus Pseudallescheria in 1982, when examination of the type specimens of Petriellidium and Pseudallescheria revealed that they belonged to the same genus.
Ecology
An ability to tolerate minimal aeration and high osmotic pressure enables P. boydii to grow on soil, polluted and stagnant water and manure. Although this fungus is commonly found in temperate climates, it is thermotolerant and can survive in tropical climates and in environments with low oxygen pressure. Growth of P. boydii can be seen in environments where nitrogen-containing compounds are common, usually due to human pollution. Its ability to use natural gas and other volatile organic compounds suggests a capacity for bioremediation.
Growth and morphology
Pseudallescheria boydii is a saprotrophic fungus with broad hyphae 2–5 μm in width. Colonies change in colour from white to pale brown and develop a cottony texture with maturity. After a 2–3 week incubation period, cleistothecia may form containing asci filled with eight fusiform, one-celled ascospores measuring 12–18 × 9–13 μm. This fungus grows on most standard media, maturing in 7 days. Its primary nutrients are the sugars xylose, arabinose, glucose, sucrose, ribitol, xylitol and L-arabinitol. It cannot assimilate maltose or lactose; however, it is able to assimilate urea, asparagine, potassium nitrate and ammonium nitrate. The optimal temperature for growth is 25 °C (77 °F) and the fungus is generally considered to be mesophilic, although it can grow at higher temperatures (up to 37 °C (99 °F)) as well. Asexual reproduction manifests in one of two forms: the Scedosporium type (the most common) and the Graphium type. Scedosporium apiospermum forms greyish-white colonies with a grey-black reverse. The conidia are single-celled, pale brown and oval in form. Their size ranges from 4–9 × 6–10 μm and their development is annellidic.
Pathogenicity
Pseudallescheria boydii is an emerging opportunistic pathogen. Immune response is characterized by TLR2 recognition of P. boydii-derived α-glucans, while TLR4 mediates the recognition of P. boydii-derived rhamnomannans. Human infection takes one of two forms: mycetoma (99% of infections), a chronic, subcutaneous disease, and pseudallescheriasis, which includes all other forms of the disease, commonly presenting in the central nervous system, lungs, joints and bone. The former can also be distinguished by the presence of sclerotia, or granules, which are typically absent in pseudallescheriasis-type infections. Infection is initiated via inhalation or traumatic implantation in the skin. Infection can lead to arthritis, otitis, endocarditis, sinusitis, and other manifestations. Masses of hyphae can form "fungus balls" in the lungs. While "fungus balls" can also form in other organs, they are commonly derived from host necrotic tissue resulting from nodular infarction and thrombosis of lung vessels following infection.

This species is second in prevalence after Aspergillus fumigatus as a fungal pathogen in cystic fibrosis patients. It causes allergic bronchopulmonary disease and chronic lung lesions that resemble aspergillosis. Infections can also occur in immunocompetent individuals, usually in the lungs and upper respiratory tract. Infections in the CNS, which are rare, present as neutrophilic meningitis or multiple brain abscesses and have a mortality rate of up to 75%. Infections have also been observed in animals, notably corneal infection, abdominal mycetoma and disseminated infections in dogs and horses. Transient colonization is more likely than disease. However, invasive pseudallescheriasis can be found in patients with prolonged neutropenia, high-dose corticosteroid therapy and allotransplantation of bone marrow. Pseudallescheria boydii has also been implicated in pneumonia subsequent to near-drowning events, with infection developing anywhere between a few weeks and several months after exposure and yielding high mortality. Dissemination of the organism to the central nervous system has been observed in some cases. This species is also known as a non-invasive colonist of the external ear and airways of patients with poor lung or sinus clearance, and the first documented case of human pseudallescheriasis involved the ear canal. It has also been implicated in infection of joints following traumatic injury, and these infections can progress to osteomyelitis. Infections of the skin and cornea have also been reported. Typical host-related risk factors for infection include lymphopenia, steroid treatment, serum albumin levels of < 3 mg/dL and neutropenia.
Diagnosis
Detection and diagnosis of S. apiospermum is possible through isolation of the fungus in culture or through cytology and histopathology in the tissues of diseased individuals. In mycetoma-type infections, a confluence of symptoms is necessary for diagnosis, including tumefaction, draining sinuses and extrusion of grains. Furthermore, P. boydii grains and hyphae should be cultured and observed microscopically after staining with H&E, periodic acid–Schiff stain, Tissue Gram or Grocott's methenamine silver stain. A radiological diagnosis may be helpful in elucidating the extent of the disease in terms of bone and soft tissue involvement. Scedosporium-caused eumycetomas have been found to have thick-walled cavities and grains appearing as hyperreflective echoes on scans, while actinomycetomas show fine echoes at the bottom of cavities.

Direct detection is possible in samples histochemically stained in 20% KOH followed by fluorescence microscopy with antibody. The characteristic shape, texture and colour of tissues can help identify S. apiospermum grains, which are often surrounded by an eosinophilic zone. Histopathologically, hyalohyphomycotic fungi like Scedosporium spp., Aspergillus spp., Fusarium spp. and Petriella spp. are similar in that they show septation of hyphae at regular intervals, have dichotomous branching and invade blood vessels. However, Scedosporium presents more irregular branching, sometimes with terminal or intercalary chlamydospores. In serum, Scedosporium infections can be detected by counterimmunoelectrophoresis. Molecular diagnostics appear to be promising in complementing current conventional diagnostic methods.

Culture detection is accomplished by rinsing "grains" in 70% ethanol and sterile saline solution to avoid bacterial contamination prior to inoculation on growth medium. Selective growth of Scedosporium can be achieved on Leonian's agar supplemented with 10 μg/mL benomyl, or on media containing cycloheximide or amphotericin B. Optimal incubation is at a temperature of 25–35 °C (77–95 °F).
Treatment
Pseudallescheria boydii is resistant to amphotericin B and nearly all other antifungal drugs. Consequently, there is currently no consistently effective antifungal therapy for this agent. Miconazole has shown the best in vivo activity; however, itraconazole, fluconazole, ketoconazole and voriconazole have also been used in treatment, albeit with less success. In an in vitro environment, terbinafine has been found to work in synergy with azoles against P. boydii. Echinocandins, such as caspofungin and sordarins, have shown promise in in vitro assays. CMT-3, a chemically modified tetracycline, has also shown to be active in vitro against P. boydii.
Epidemiology
In the United States, P. boydii is the most common causal agent of eumycetoma and tends to be more common in men than in women, particularly in the 20- to 45-year-old age group. The reported incidence of infection by S. apiospermum in the United States between 1993 and 1998 was 0.82; this figure increased to 1.33 by 2005. Pseudallescheria boydii infection was implicated in the deaths of three athletes injured during the opening ceremony of the 1997 Maccabiah Games, when the Maccabiah bridge collapsed into the Yarkon River.
== References ==
Polydipsia

Polydipsia is excessive thirst or excess drinking. The word derives from the Greek πολυδίψιος (poludípsios) "very thirsty", which is derived from πολύς (polús, "much, many") + δίψα (dípsa, "thirst"). Polydipsia is a nonspecific symptom in various medical disorders. It also occurs as an abnormal behaviour in some non-human animals, such as in birds.
Causes
Diabetes
Polydipsia can be characteristic of diabetes mellitus, often as an initial symptom. It is observed in cases of poorly controlled diabetes, which is sometimes the result of low patient adherence to anti-diabetic medication. Diabetes insipidus ("tasteless" diabetes, as opposed to diabetes mellitus) can also cause polydipsia.
Other physiological causes
It can also be caused by a change in the osmolality of the extracellular fluids of the body, hypokalemia, decreased blood volume (as occurs during major hemorrhage), and other conditions that create a water deficit. This is usually a result of osmotic diuresis.
Polydipsia is also a symptom of anticholinergic poisoning. Zinc is known to reduce symptoms of polydipsia by causing the body to absorb fluids more efficiently (reducing diarrhea and inducing constipation) and to retain more sodium; thus a zinc deficiency can be a possible cause. The combination of polydipsia and (nocturnal) polyuria is also seen in (primary) hyperaldosteronism (which often goes with hypokalemia).
Antipsychotics can have side effects such as dry mouth that may make the patient feel thirsty.
Primary polydipsia
Primary polydipsia describes excessive thirst and water intake in the absence of a physiological stimulus to drink. This includes both psychogenic primary polydipsia and non-psychogenic primary polydipsia, such as in patients with autoimmune chronic hepatitis with severely elevated globulin levels.

Psychogenic polydipsia is excessive water intake seen in some patients with mental illnesses such as schizophrenia, or with developmental disabilities. It should be taken very seriously, as the amount of water ingested exceeds the amount that can be excreted by the kidneys, and it can on rare occasions be life-threatening as the body's serum sodium level is diluted to an extent that seizures and cardiac arrest can occur.
While psychogenic polydipsia is generally not found outside the population of serious mental disorders, there is some anecdotal evidence of a milder form (typically called habit polydipsia or habit drinking) that can be found in the absence of psychosis or other mental conditions. The excessive levels of fluid intake may result in a false diagnosis of diabetes insipidus, since the chronic ingestion of excessive water can produce diagnostic results that closely mimic those of mild diabetes insipidus. As discussed in the entry on diabetes insipidus, "Habit drinking (in its severest form termed psychogenic polydipsia) is the most common imitator of diabetes insipidus at all ages. While many adult cases in the medical literature are associated with mental disorders, most patients with habit polydipsia have no other detectable disease. The distinction is made during the water deprivation test, as some degree of urinary concentration above isosmolar is usually obtained before the patient becomes dehydrated." However, prior to a water deprivation test, consideration should be given to a psychiatric consult to see whether it is possible to rule out psychogenic polydipsia or habit polydipsia.
Diagnosis
Polydipsia is a symptom (evidence of a disease state), not a disease in itself. As it is often accompanied by polyuria (excessive urination) and low sodium levels, investigations directed at diagnosing diabetes insipidus and diabetes mellitus can be useful. Blood serum tests can also provide useful information about the osmolality of the body's extracellular fluids. A decrease in osmolality caused by excess water intake will decrease the serum concentration of red blood cells, blood urea nitrogen (BUN), and sodium.
See also
References
== External links ==
Nephroptosis

Nephroptosis is a rare and abnormal condition in which the kidney drops down into the pelvis when the patient stands up. It is more common in women than in men. It has been one of the most controversial conditions in terms of both its diagnosis and its treatment.
Symptoms and signs
Nephroptosis is asymptomatic in most persons. However, it can be characterized by violent attacks of colicky flank pain, nausea, chills, hypertension, hematuria and proteinuria. Persons with symptomatic nephroptosis often complain of sharp pains that radiate into the groin. Many also describe a dragging, weighted feeling in the abdomen. Pain is typically relieved by lying down. It is believed that flank pain on standing that is relieved by lying down is due to movement of the kidney causing intermittent renal tract obstruction. The attack of colicky pain is called Dietl's crisis or renal paroxysm.
Cause
It is believed to result from deficiency of supporting inferior pararenal fasciae.
Diagnosis
Diagnosis is suspected based on patient symptoms and is confirmed during intravenous urography by obtaining erect and supine films. A renal DMSA scan may show decreased counts in the sitting position compared with the supine scan.
Treatment
Nephropexy was performed in the past to stabilize the kidney, but surgery is no longer recommended in asymptomatic patients, and nephropexy does not guarantee that the symptoms will go away. Laparoscopic nephropexy has recently become available for selected symptomatic patients.
References
Further reading
Barber N, Thompson P (2004). "Nephroptosis and nephropexy--hung up on the past?". Eur Urol. 46 (4): 428–33. doi:10.1016/j.eururo.2004.03.023. PMID 15363554.
== External links ==
Mosaic (genetics)

Mosaicism or genetic mosaicism is a condition in multicellular organisms in which a single organism possesses more than one genetic line as the result of genetic mutation. This means that various genetic lines resulted from a single fertilized egg. Genetic mosaics may often be confused with chimerism, in which two or more genotypes arise in one individual similarly to mosaicism. In chimerism, though, the two genotypes arise from the fusion of more than one fertilized zygote in the early stages of embryonic development, rather than from a mutation or chromosome loss.
Genetic mosaicism can result from many different mechanisms including chromosome nondisjunction, anaphase lag, and endoreplication. Anaphase lagging is the most common way by which mosaicism arises in the preimplantation embryo. Mosaicism can also result from a mutation in one cell during development, in which case the mutation will be passed on only to its daughter cells (and will be present only in certain adult cells). Somatic mosaicism is not generally inheritable as it does not generally affect germ cells.
History
In 1929, Alfred Sturtevant studied mosaicism in Drosophila, a genus of fly. After Muller in 1930 demonstrated that mosaicism in Drosophila is always associated with chromosomal rearrangements, and Schultz in 1936 showed that in all cases studied these rearrangements were associated with heterochromatic inert regions, several hypotheses on the nature of such mosaicism were proposed. One hypothesis assumed that mosaicism appears as the result of a break and loss of chromosome segments. Curt Stern in 1935 assumed that the structural changes in the chromosomes took place as a result of somatic crossing over, producing mutations or small chromosomal rearrangements in somatic cells. On this view, the inert region causes an increase in the frequency of mutations or small chromosomal rearrangements in active segments adjacent to inert regions.

In the 1930s, Stern demonstrated that genetic recombination, normal in meiosis, can also take place in mitosis. When it does, it results in somatic (body) mosaics. These organisms contain two or more genetically distinct types of tissue. The term somatic mosaicism was used by C. W. Cotterman in 1956 in his seminal paper on antigenic variation.

In 1944, Belgovskii proposed that mosaicism could not account for certain mosaic expressions caused by chromosomal rearrangements involving heterochromatic inert regions. The associated weakening of biochemical activity led to what he called a genetic chimera.
Types
Germline mosaicism
Germline or gonadal mosaicism is a particular form of mosaicism wherein some gametes—i.e., sperm or oocytes—carry a mutation, but the rest are normal. The cause is usually a mutation that occurred in an early stem cell that gave rise to all or part of the gametes.
Somatic mosaicism
Somatic mosaicism occurs when the somatic cells of the body are of more than one genotype. In the more common mosaics, different genotypes arise from a single fertilized egg cell, due to mitotic errors at first or later cleavages.
Somatic mutation leading to mosaicism is prevalent in the beginning and end stages of human life. Somatic mosaics are common in embryogenesis due to retrotransposition of long interspersed nuclear element-1 (LINE-1 or L1) and Alu transposable elements. In early development, DNA from undifferentiated cell types may be more susceptible to mobile element invasion due to long, unmethylated regions in the genome. Further, the accumulation of DNA copy errors and damage over a lifetime leads to greater occurrences of mosaic tissues in aging humans. As longevity has increased dramatically over the last century, the human genome may not have had time to adapt to the cumulative effects of mutagenesis. Thus, cancer research has shown that somatic mutations are increasingly present throughout a lifetime and are responsible for most leukemias, lymphomas, and solid tumors.
Trisomies, monosomies and related conditions
The most common form of mosaicism found through prenatal diagnosis involves trisomies. Although most forms of trisomy are due to problems in meiosis and affect all cells of the organism, some cases occur where the trisomy occurs in only a selection of the cells. This may be caused by a nondisjunction event in an early mitosis, resulting in a loss of a chromosome from some trisomic cells. Generally, this leads to a milder phenotype than in nonmosaic patients with the same disorder.
In rare cases, intersex conditions can be caused by mosaicism where some cells in the body have XX and others XY chromosomes (46, XX/XY). In the fruit fly Drosophila melanogaster, where a fly possessing two X chromosomes is a female and a fly possessing a single X chromosome is a sterile male, a loss of an X chromosome early in embryonic development can result in sexual mosaics, or gynandromorphs. Likewise, a loss of the Y chromosome can result in XY/X mosaic males.

An example of this is one of the milder forms of Klinefelter syndrome, called 46,XY/47,XXY mosaic, wherein some of the patient's cells contain XY chromosomes and some contain XXY chromosomes. The 46/47 annotation indicates that the XY cells have the normal number of 46 total chromosomes, and the XXY cells have a total of 47 chromosomes.
Monosomies can also present with some form of mosaicism. The only non-lethal full monosomy occurring in humans is the one causing Turner syndrome. Around 30% of Turner syndrome cases demonstrate mosaicism, while complete monosomy (45, X) occurs in about 50–60% of cases.
Mosaicism need not necessarily be deleterious, though. Revertant somatic mosaicism is a rare recombination event with a spontaneous correction of a mutant, pathogenic allele. In revertant mosaicism, the healthy tissue formed by mitotic recombination can outcompete the original, surrounding mutant cells in tissues such as blood and epithelia that regenerate often. In the skin disorder ichthyosis with confetti, normal skin spots appear early in life and increase in number and size over time.

Other endogenous factors can also lead to mosaicism, including mobile elements, DNA polymerase slippage, and unbalanced chromosomal segregation. Exogenous factors include nicotine and UV radiation. Somatic mosaics have been created in Drosophila using X-ray treatment, and the use of irradiation to induce somatic mutation has been a useful technique in the study of genetics.

True mosaicism should not be mistaken for the phenomenon of X-inactivation, where all cells in an organism have the same genotype, but a different copy of the X chromosome is expressed in different cells. The latter is the case in normal (XX) female mammals, although it is not always visible from the phenotype (as it is in calico cats). However, all multicellular organisms are likely to be somatic mosaics to some extent.
Gonosomal mosaicism
Gonosomal mosaicism is a type of somatic mosaicism that occurs very early in the organism's development and thus is present within both germline and somatic cells. Somatic mosaicism is not generally inheritable, as it does not usually affect germ cells. In the instance of gonosomal mosaicism, however, organisms have the potential to pass the genetic alteration to offspring, because the altered allele is present in both somatic and germline cells.
Brain cell mosaicism
A frequent type of neuronal genomic mosaicism is copy number variation. Possible sources of such variation have been suggested to be incorrect repair of DNA damage and somatic recombination.
Mitotic recombination
One basic mechanism that can produce mosaic tissue is mitotic recombination, or somatic crossover. It was first discovered by Curt Stern in Drosophila in 1936. The amount of tissue that is mosaic depends on where in the tree of cell division the exchange takes place. A phenotypic character called "twin spot" seen in Drosophila is a result of mitotic recombination. However, it also depends on the allelic status of the genes undergoing recombination. Twin spot occurs only if the heterozygous genes are linked in repulsion, i.e. the trans phase. The recombination needs to occur between the centromere and the adjacent gene. This gives an appearance of yellow patches on the wild-type background in Drosophila. Another example of mitotic recombination is Bloom's syndrome, which is caused by mutation in the BLM gene. The resulting BLM protein is defective. The defect in RecQ, a helicase, leads to defective unwinding of DNA during replication and is thus associated with the occurrence of this disease.
Use in experimental biology
Genetic mosaics are a particularly powerful tool when used in the commonly studied fruit fly, where specially selected strains frequently lose an X or a Y chromosome in one of the first embryonic cell divisions. These mosaics can then be used to analyze such things as courtship behavior and female sexual attraction.

More recently, the use of a transgene incorporated into the Drosophila genome has made the system far more flexible. The flip recombinase (or FLP) is a gene from the commonly studied yeast Saccharomyces cerevisiae that recognizes "flip recombinase target" (FRT) sites, which are short sequences of DNA, and induces recombination between them. FRT sites have been inserted transgenically near the centromere of each chromosome arm of D. melanogaster. The FLP gene can then be induced selectively, commonly using either the heat shock promoter or the GAL4/UAS system. The resulting clones can be identified either negatively or positively.
In negatively marked clones, the fly is transheterozygous for a gene encoding a visible marker (commonly the green fluorescent protein) and an allele of a gene to be studied (both on chromosomes bearing FRT sites). After induction of FLP expression, cells that undergo recombination will have progeny homozygous for either the marker or the allele being studied. Therefore, the cells that do not carry the marker (which are dark) can be identified as carrying a mutation.
Using negatively marked clones is sometimes inconvenient, especially when generating very small patches of cells, where seeing a dark spot on a bright background is more difficult than a bright spot on a dark background. Creating positively marked clones is possible using the so-called MARCM ("mosaic analysis with a repressible cell marker") system, developed by Liqun Luo, a professor at Stanford University, and his postdoctoral student Tzumin Lee, who now leads a group at Janelia Farm Research Campus. This system builds on the GAL4/UAS system, which is used to express GFP in specific cells. However, a globally expressed GAL80 gene is used to repress the action of GAL4, preventing the expression of GFP. Instead of using GFP to mark the wild-type chromosome as above, GAL80 serves this purpose, so that when it is removed by mitotic recombination, GAL4 is allowed to function, and GFP turns on. This results in the cells of interest being marked brightly in a dark background.
See also
Extrachromosomal array
Heterochromia
Parasitic twin
Vanishing twin
X0/XY mosaic
Human somatic variation
References
Further reading
Zimmer, Carl (21 May 2018). "Every Cell in Your Body Has the Same DNA. Except It Doesn't". The New York Times. Archived from the original on 23 May 2018. Retrieved 23 May 2018.
"From Many, One -- Diverse mammals, including humans, have been found to carry distinct genomes in their cells. What does such genetic chimerism mean for health and disease?". The Scientist. Archived from the original on 25 April 2017. Retrieved 23 May 2018. |
Fracture

Fracture is the separation of an object or material into two or more pieces under the action of stress. The fracture of a solid usually occurs due to the development of certain displacement discontinuity surfaces within the solid. If a displacement develops perpendicular to the surface, it is called a normal tensile crack or simply a crack; if a displacement develops tangentially, it is called a shear crack, slip band or dislocation.

Brittle fractures occur with no apparent deformation before fracture; ductile fractures occur after visible deformation. Fracture strength, or breaking strength, is the stress when a specimen fails or fractures. The detailed understanding of how a fracture occurs and develops in materials is the object of fracture mechanics.
Strength
Fracture strength, also known as breaking strength, is the stress at which a specimen fails via fracture. This is usually determined for a given specimen by a tensile test, which charts the stress–strain curve (see image). The final recorded point is the fracture strength.
Ductile materials have a fracture strength lower than the ultimate tensile strength (UTS), whereas in brittle materials the fracture strength is equivalent to the UTS. If a ductile material reaches its ultimate tensile strength in a load-controlled situation, it will continue to deform, with no additional load application, until it ruptures. However, if the loading is displacement-controlled, the deformation of the material may relieve the load, preventing rupture.
The statistics of fracture in random materials have very intriguing behavior, which was noted by architects and engineers quite early. Indeed, fracture or breakdown studies might be the oldest physical science studies, and they remain intriguing and very much alive. Leonardo da Vinci, more than 500 years ago, observed that the tensile strengths of nominally identical specimens of iron wire decrease with increasing length of the wires (see, e.g., recent discussions). Similar observations were made by Galileo Galilei more than 400 years ago. This is the manifestation of the extreme statistics of failure: a bigger sample volume can have larger defects, due to cumulative fluctuations, where failures nucleate, inducing a lower strength of the sample.
Types
There are two types of fracture: brittle fracture, which occurs without plastic deformation prior to failure, and ductile fracture, which occurs with it.
Brittle
In brittle fracture, no apparent plastic deformation takes place before fracture. Brittle fracture typically involves little energy absorption and occurs at high speeds, up to 2,133.6 m/s (7,000 ft/s) in steel. In most cases brittle fracture will continue even when loading is discontinued.

In brittle crystalline materials, fracture can occur by cleavage as the result of tensile stress acting normal to crystallographic planes with low bonding (cleavage planes). In amorphous solids, by contrast, the lack of a crystalline structure results in a conchoidal fracture, with cracks proceeding normal to the applied tension.
The fracture strength (or micro-crack nucleation stress) of a material was first theoretically estimated by Alan Arnold Griffith in 1921:
$$\sigma_{\mathrm{theoretical}} = \sqrt{\frac{E\gamma}{r_o}}$$

where E is the Young's modulus of the material, γ is the surface energy, and r_o is the micro-crack length (or equilibrium distance between atomic centers in a crystalline solid).

On the other hand, a crack introduces a stress concentration modeled by

$$\sigma_{\mathrm{elliptical\ crack}} = \sigma_{\mathrm{applied}}\left(1 + 2\sqrt{\frac{a}{\rho}}\right) = 2\sigma_{\mathrm{applied}}\sqrt{\frac{a}{\rho}} \quad \text{(for sharp cracks)}$$

where σ_applied is the loading stress, a is half the length of the crack, and ρ is the radius of curvature at the crack tip.

Putting these two equations together gives

$$\sigma_{\mathrm{fracture}} = \sqrt{\frac{E\gamma\rho}{4ar_o}}$$

Sharp cracks (small ρ) and large defects (large a) both lower the fracture strength of the material.
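As a rough numeric illustration of these relations, the sketch below plugs order-of-magnitude values typical of a brittle, glass-like solid into the two strength formulas; all numbers are illustrative assumptions, not values from the source.

```python
import math

# Illustrative (assumed) values for a brittle, glass-like solid:
E = 70e9      # Young's modulus, Pa
gamma = 1.0   # surface energy, J/m^2
r_o = 3e-10   # equilibrium interatomic spacing, m
a = 1e-6      # half-length of a pre-existing crack, m
rho = r_o     # crack-tip radius of curvature, m (atomically sharp crack)

# Theoretical (defect-free) strength: sigma_th = sqrt(E*gamma/r_o)
sigma_theoretical = math.sqrt(E * gamma / r_o)

# Griffith fracture strength with a sharp crack:
# sigma_f = sqrt(E*gamma*rho / (4*a*r_o))
sigma_fracture = math.sqrt(E * gamma * rho / (4 * a * r_o))

print(f"theoretical strength ~ {sigma_theoretical / 1e9:.1f} GPa")  # ~15 GPa
print(f"with a 2 um crack    ~ {sigma_fracture / 1e6:.0f} MPa")     # ~130 MPa
```

A micrometre-scale flaw is enough to knock the strength down by roughly two orders of magnitude, which is why measured strengths of brittle solids fall so far below the theoretical value.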
Recently, scientists have discovered supersonic fracture, the phenomenon of crack propagation faster than the speed of sound in a material. This phenomenon has also been verified by experiments on fracture in rubber-like materials.
The basic sequence in a typical brittle fracture is: introduction of a flaw either before or after the material is put in service, slow and stable crack propagation under recurring loading, and sudden rapid failure when the crack reaches the critical crack length defined by fracture mechanics. Brittle fracture may be avoided by controlling three primary factors: material fracture toughness (Kc), nominal stress level (σ), and introduced flaw size (a). Residual stresses, temperature, loading rate, and stress concentrations also contribute to brittle fracture by influencing the three primary factors.

Under certain conditions, ductile materials can exhibit brittle behavior. Rapid loading, low temperature, and triaxial stress constraint conditions may cause ductile materials to fail without prior deformation.
Ductile
In ductile fracture, extensive plastic deformation (necking) takes place before fracture. The terms "rupture" and "ductile rupture" describe the ultimate failure of ductile materials loaded in tension. The extensive plasticity causes the crack to propagate slowly due to the absorption of a large amount of energy before fracture.
Because ductile rupture involves a high degree of plastic deformation, the fracture behavior of a propagating crack as modelled above changes fundamentally. Some of the energy from stress concentrations at the crack tips is dissipated by plastic deformation ahead of the crack as it propagates.
The basic steps in ductile fracture are void formation, void coalescence (also known as crack formation), crack propagation, and failure, often resulting in a cup-and-cone shaped failure surface. Voids typically coalesce around precipitates, secondary phases, inclusions, and at grain boundaries in the material. Ductile fracture is typically transgranular and deformation due to dislocation slip can cause the shear lip characteristic of cup and cone fracture.
Characteristics
The manner in which a crack propagates through a material gives insight into the mode of fracture. With ductile fracture, a crack moves slowly and is accompanied by a large amount of plastic deformation around the crack tip. A ductile crack will usually not propagate unless an increased stress is applied, and it will generally cease propagating when loading is removed. In a ductile material, a crack may progress to a section of the material where stresses are slightly lower and stop due to the blunting effect of plastic deformations at the crack tip. On the other hand, with brittle fracture, cracks spread very rapidly with little or no plastic deformation. The cracks that propagate in a brittle material will continue to grow once initiated.
Crack propagation is also categorized by the crack characteristics at the microscopic level. A crack that passes through the grains within the material is undergoing transgranular fracture. A crack that propagates along the grain boundaries is termed an intergranular fracture. Typically, the bonds between material grains are stronger at room temperature than the material itself, so transgranular fracture is more likely to occur. When temperatures increase enough to weaken the grain bonds, intergranular fracture is the more common fracture mode.
Testing
Fracture in materials is studied and quantified in multiple ways. Fracture is largely determined by the fracture toughness (K_c), so fracture testing is often done to determine this. The two most widely used techniques for determining fracture toughness are the three-point flexural test and the compact tension test.
By performing the compact tension and three-point flexural tests, one is able to determine the fracture toughness through the following equation:

$$K_c = \sigma_F \sqrt{\pi c}\; f(c/a)$$
where f(c/a) is an empirically derived function that captures the test sample geometry, σ_F is the fracture stress, and c is the crack length.

To accurately attain K_c, the value of c must be precisely measured. This is done by taking the test piece with its fabricated notch of length c′ and sharpening this notch to better emulate a crack tip found in real-world materials. Cyclically prestressing the sample can then induce a fatigue crack which extends the crack from the fabricated notch length c′ to c. This value of c is used in the above equation for determining K_c.

Following this test, the sample can then be reoriented such that further loading of a load (F) will extend this crack, and thus a load-versus-sample-deflection curve can be obtained. With this curve, the slope of the linear portion, which is the inverse of the compliance of the material, can be obtained. This is then used to derive f(c/a) as defined above. With the knowledge of all these variables, K_c can then be calculated.
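As a minimal sketch of the final calculation, the function below evaluates the K_c expression for one hypothetical specimen; the stress, crack length and geometry factor are assumed illustrative values, since the real f(c/a) must come from the specimen geometry and the compliance measurement described above.

```python
import math

def fracture_toughness(sigma_f, c, geometry_factor):
    """K_c = sigma_F * sqrt(pi * c) * f(c/a).

    sigma_f         -- fracture stress, Pa
    c               -- fatigue-sharpened crack length, m
    geometry_factor -- dimensionless f(c/a) for this specimen geometry
    """
    return sigma_f * math.sqrt(math.pi * c) * geometry_factor

# Hypothetical single measurement:
K_c = fracture_toughness(sigma_f=300e6, c=2e-3, geometry_factor=1.12)
print(f"K_c ~ {K_c / 1e6:.1f} MPa*sqrt(m)")  # ~26.6 MPa*sqrt(m)
```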
Ceramics and inorganic glasses
Ceramics and inorganic glasses have fracturing behavior that differs from that of metallic materials. Ceramics have high strengths and perform well at high temperatures because their strength is largely independent of temperature. Ceramics have low toughness as determined by testing under a tensile load; often, ceramics have K_c values that are ~5% of those found in metals. However, ceramics are usually loaded in compression in everyday use, so the compressive strength is often referred to as the strength; this strength can often exceed that of most metals. However, ceramics are brittle, and thus most work done revolves around preventing brittle fracture. Because of how ceramics are manufactured and processed, there are often preexisting defects in the material that introduce a high degree of variability in Mode I brittle fracture. Thus, there is a probabilistic nature to be accounted for in the design of ceramics. The Weibull distribution predicts the survival probability of a fraction of samples with a certain volume that survive a tensile stress σ, and is often used to better assess the success of a ceramic in avoiding fracture.
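A minimal sketch of that idea, assuming the common two-parameter Weibull form P_s = exp[-(V/V_0)(σ/σ_0)^m]; the characteristic strength σ_0 and Weibull modulus m below are illustrative assumptions, not values from the source:

```python
import math

def weibull_survival(sigma, sigma_0, m, V=1.0, V_0=1.0):
    """Survival probability of a sample of volume V at tensile stress sigma,
    two-parameter Weibull form: P_s = exp(-(V/V_0) * (sigma/sigma_0)**m)."""
    return math.exp(-(V / V_0) * (sigma / sigma_0) ** m)

# Assumed parameters: characteristic strength 350 MPa, Weibull modulus 10.
for stress_mpa in (200, 300, 350, 400):
    p = weibull_survival(stress_mpa, sigma_0=350, m=10)
    print(f"{stress_mpa} MPa -> survival probability {p:.3f}")
```

A low Weibull modulus means widely scattered strengths (unreliable parts), while a high modulus means failure stresses cluster tightly around σ_0.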
Fiber bundles
To model fracture of a bundle of fibers, the Fiber Bundle Model was introduced by Thomas Pierce in 1926 as a model to understand the strength of composite materials. The bundle consists of a large number of parallel Hookean springs of identical length, each having an identical spring constant but a different breaking stress. All these springs are suspended from a rigid horizontal platform. The load is attached to a horizontal platform connected to the lower ends of the springs. When this lower platform is absolutely rigid, the load at any point of time is shared equally (irrespective of how many fibers or springs have broken and where) by all the surviving fibers. This mode of load-sharing is called the Equal-Load-Sharing mode. The lower platform can also be assumed to have finite rigidity, so that local deformation of the platform occurs wherever springs fail and the surviving neighboring fibers have to share a larger fraction of the load transferred from the failed fiber. The extreme case is the local load-sharing model, where the load of the failed spring or fiber is shared (usually equally) by the surviving nearest-neighbor fibers.
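A minimal simulation sketch of the equal-load-sharing case follows; the uniform threshold distribution is an illustrative assumption. With thresholds sorted ascending, once the k weakest fibers have failed, the bundle can hold a total load of threshold_k times the number of survivors, and the bundle strength is the maximum of this over k.

```python
import numpy as np

rng = np.random.default_rng(0)

def bundle_strength(n_fibers=100_000):
    """Per-fiber critical load of an equal-load-sharing fiber bundle.

    Each fiber gets a random breaking threshold (uniform on [0, 1], an
    illustrative assumption). With thresholds sorted ascending, when the
    k weakest fibers have broken, the n - k survivors jointly carry a
    total load of thresholds[k] * (n - k); the bundle strength is the
    maximum of this quantity over k.
    """
    thresholds = np.sort(rng.uniform(0.0, 1.0, n_fibers))
    survivors = n_fibers - np.arange(n_fibers)  # n, n-1, ..., 1
    capacity = np.max(thresholds * survivors)   # total load the bundle holds
    return capacity / n_fibers                  # critical load per fiber

print(f"per-fiber bundle strength ~ {bundle_strength():.3f}")
# -> ~0.25, the known result max over x of x*(1 - x) for uniform thresholds
```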
Disasters
Failures caused by brittle fracture have not been limited to any particular category of engineered structure. Though brittle fracture is less common than other types of failure, the impacts to life and property can be more severe. The following notable historic failures were attributed to brittle fracture:
Pressure vessels: Great Molasses Flood in 1919, New Jersey molasses tank failure in 1973
Bridges: King Street Bridge span collapse in 1962, Silver Bridge collapse in 1967, partial failure of the Hoan Bridge in 2000
Ships: Titanic in 1912, Liberty ships during World War II, SS Schenectady in 1943
See also
Notes
References
Further reading
Dieter, G. E. (1988) Mechanical Metallurgy ISBN 0-07-100406-8
A. Garcimartin, A. Guarino, L. Bellon and S. Ciliberto (1997). "Statistical Properties of Fracture Precursors". Physical Review Letters, 79, 3202.
Callister, Jr., William D. (2002) Materials Science and Engineering: An Introduction. ISBN 0-471-13576-3
Peter Rhys Lewis, Colin Gagg, Ken Reynolds, CRC Press (2004), Forensic Materials Engineering: Case Studies.
External links
Virtual museum of failed products at http://materials.open.ac.uk/mem/index.html
Fracture and Reconstruction of a Clay Bowl
Ductile fracture
Stevens–Johnson syndrome

Stevens–Johnson syndrome (SJS) is a type of severe skin reaction. Together with toxic epidermal necrolysis (TEN) and Stevens–Johnson/toxic epidermal necrolysis (SJS/TEN), it forms a spectrum of disease, with SJS being less severe. Erythema multiforme (EM) is generally considered a separate condition. Early symptoms of SJS include fever and flu-like symptoms. A few days later, the skin begins to blister and peel, forming painful raw areas. Mucous membranes, such as the mouth, are also typically involved. Complications include dehydration, sepsis, pneumonia and multiple organ failure.

The most common cause is certain medications such as lamotrigine, carbamazepine, allopurinol, sulfonamide antibiotics and nevirapine. Other causes can include infections such as Mycoplasma pneumoniae and cytomegalovirus, or the cause may remain unknown. Risk factors include HIV/AIDS and systemic lupus erythematosus.

The diagnosis of Stevens–Johnson syndrome is based on involvement of less than 10% of the skin. It is known as TEN when more than 30% of the skin is involved, and it is considered an intermediate form when 10–30% is involved. SJS/TEN reactions are believed to follow a type IV hypersensitivity mechanism. SJS is also included, with drug reaction with eosinophilia and systemic symptoms (DRESS syndrome), acute generalized exanthematous pustulosis (AGEP) and toxic epidermal necrolysis, in a group of conditions known as severe cutaneous adverse reactions (SCARs).

Treatment typically takes place in hospital, such as in a burn unit or intensive care unit. Efforts may include stopping the cause, pain medication, antihistamines, antibiotics, intravenous immunoglobulins or corticosteroids. Together with TEN, SJS affects 1 to 2 people per million per year. Typical onset is under the age of 30. Skin usually regrows over two to three weeks; however, complete recovery can take months. Overall, the risk of death with SJS is 5 to 10%.
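The 10%/30% body-surface-area cutoffs above amount to a simple classification rule. A minimal sketch follows; the function name and interface are hypothetical, and real staging is a clinical judgment, not a lookup:

```python
def sjs_ten_spectrum(skin_detachment_pct: float) -> str:
    """Classify by percentage of body surface area with skin detachment,
    using the cutoffs described above: <10% SJS, 10-30% overlap, >30% TEN."""
    if skin_detachment_pct < 10:
        return "SJS"
    elif skin_detachment_pct <= 30:
        return "SJS/TEN overlap"
    else:
        return "TEN"

for pct in (5, 15, 45):
    print(pct, "->", sjs_ten_spectrum(pct))
```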
Signs and symptoms
SJS usually begins with fever, sore throat, and fatigue, which is commonly misdiagnosed and therefore treated with antibiotics. SJS, SJS/TEN, and TEN are often heralded by fever, sore throat, cough, and burning eyes for 1 to 3 days. Patients with these disorders frequently experience burning pain of their skin at the start of disease. Ulcers and other lesions begin to appear in the mucous membranes, almost always in the mouth and lips but also in the genital and anal regions. Those in the mouth are usually extremely painful and reduce the patient's ability to eat or drink. Conjunctivitis occurs in about 30% of children who develop SJS. A rash of round lesions about an inch across arises on the face, trunk, arms and legs, and soles of the feet, but usually not the scalp.
Causes
SJS is thought to arise from a disorder of the immune system. The immune reaction can be triggered by drugs or infections. Genetic factors are associated with a predisposition to SJS. The cause of SJS is unknown in one-quarter to one-half of cases. SJS, SJS/TEN, and TEN are considered a single disease with common causes and mechanisms.

Individuals expressing certain human leukocyte antigen (HLA) serotypes (i.e. genetic alleles), genetically based T-cell receptors, or variations in their efficiency to absorb, distribute to tissues, metabolize, or excrete a drug (this combination is termed ADME) are predisposed to develop SJS.
Medications
Although SJS can be caused by viral infections and malignancies, the main cause is medications. A leading cause appears to be the use of antibiotics, particularly sulfa drugs. Between 100 and 200 different drugs may be associated with SJS. No reliable test exists to establish a link between a particular drug and SJS for an individual case. Determining which drug is the cause is based on the time interval between first use of the drug and the beginning of the skin reaction. Drugs discontinued more than 1 month prior to onset of mucocutaneous physical findings are highly unlikely to cause SJS and TEN. SJS and TEN most often begin between 4 and 28 days after culprit drug administration. A published algorithm (ALDEN) to assess drug causality gives structured assistance in identifying the responsible medication.

SJS may be caused by the medications rivaroxaban, vancomycin, allopurinol, valproate, levofloxacin, diclofenac, etravirine, isotretinoin, fluconazole, valdecoxib, sitagliptin, oseltamivir, penicillins, barbiturates, sulfonamides, phenytoin, azithromycin, oxcarbazepine, zonisamide, modafinil, lamotrigine, nevirapine, pyrimethamine, ibuprofen, ethosuximide, carbamazepine, bupropion, telaprevir, and nystatin.

Medications that have traditionally been known to lead to SJS, erythema multiforme, and toxic epidermal necrolysis include sulfonamide antibiotics, penicillin antibiotics, cefixime (an antibiotic), barbiturates (sedatives), lamotrigine, phenytoin (e.g., Dilantin) (anticonvulsants) and trimethoprim. Combining lamotrigine with sodium valproate increases the risk of SJS.

Nonsteroidal anti-inflammatory drugs (NSAIDs) are a rare cause of SJS in adults; the risk is higher for older patients, women, and those initiating treatment. Typically, the symptoms of drug-induced SJS arise within a week of starting the medication. Similar to NSAIDs, paracetamol (acetaminophen) has also caused rare cases of SJS. People with systemic lupus erythematosus or HIV infection are more susceptible to drug-induced SJS.
Infections
The second most common cause of SJS and TEN is infection, particularly in children. This includes upper respiratory infections, otitis media, pharyngitis, and Epstein–Barr virus, Mycoplasma pneumoniae and cytomegalovirus infections. The routine use of medicines such as antibiotics, antipyretics and analgesics to manage infections can make it difficult to identify whether cases were caused by the infection or by the medicines taken.

Viral diseases reported to cause SJS include herpes simplex virus (possibly; this is debated), AIDS, coxsackievirus, influenza, hepatitis, and mumps. In pediatric cases, Epstein–Barr virus and enteroviruses have been associated with SJS. Recent upper respiratory tract infections have been reported by more than half of patients with SJS.

Bacterial infections linked to SJS include group A beta-hemolytic streptococci, diphtheria, brucellosis, lymphogranuloma venereum, mycobacteria, Mycoplasma pneumoniae, rickettsial infections, tularemia, and typhoid. Fungal infections with coccidioidomycosis, dermatophytosis and histoplasmosis are also considered possible causes. Malaria and trichomoniasis, protozoal infections, have also been reported as causes.
Pathophysiology
SJS is a type IV hypersensitivity reaction in which a drug or its metabolite stimulates cytotoxic T cells (i.e. CD8+ T cells) and T helper cells (i.e. CD4+ T cells) to initiate autoimmune reactions that attack self tissues. In particular, it is a type IV, subtype IVc, delayed hypersensitivity reaction dependent in part on the tissue-injuring actions of natural killer cells. This contrasts with the other types of SCARs disorders: the DRESS syndrome, a type IV, subtype IVb, hypersensitivity drug reaction dependent in part on the tissue-injuring actions of eosinophils, and acute generalized exanthematous pustulosis, a type IV, subtype IVd, hypersensitivity reaction dependent in part on the tissue-injuring actions of neutrophils.

Like other SCARs-inducing drugs, SJS-inducing drugs or their metabolites stimulate CD8+ T cells or CD4+ T cells to initiate autoimmune responses. Studies indicate that the mechanism by which a drug or its metabolites accomplishes this involves subverting the antigen presentation pathways of the innate immune system. The drug or metabolite covalently binds with a host protein to form a non-self, drug-related epitope. An antigen-presenting cell (APC) takes up these altered proteins; digests them into small peptides; places the peptides in a groove on the human leukocyte antigen (HLA) component of its major histocompatibility complex (MHC); and presents the MHC-associated peptides to T-cell receptors on CD8+ T cells or CD4+ T cells. Those peptides expressing a drug-related, non-self epitope on one of the various HLA protein forms (HLA-A, HLA-B, HLA-C, HLA-DM, HLA-DO, HLA-DP, HLA-DQ, or HLA-DR) can bind to a T-cell receptor and thereby stimulate the receptor-bearing parent T cell to initiate attacks on self tissues. Alternatively, a drug or its metabolite may stimulate these T cells by inserting into the groove on an HLA protein to serve as a non-self epitope, or by binding outside of this groove to alter an HLA protein so that it forms a non-self epitope. In all these cases, however, a non-self epitope must bind to a specific HLA serotype (i.e. variation) in order to stimulate T cells. Since the human population expresses some 13,000 different HLA serotypes, while an individual expresses only a fraction of them, and since an SJS-inducing drug or metabolite interacts with only one or a few HLA serotypes, a drug's ability to induce SCARs is limited to those individuals who express HLA serotypes targeted by the drug or its metabolite. Accordingly, only rare individuals are predisposed to develop SCARs in response to a particular drug on the basis of their expression of HLA serotypes. Studies have identified several HLA serotypes associated with development of SJS, SJS/TEN, or TEN in response to certain drugs; in general, these associations are restricted to the populations studied.

In some East Asian populations studied (Han Chinese and Thai), carbamazepine- and phenytoin-induced SJS is strongly associated with HLA-B*15:02 (HLA-B75), an HLA-B serotype of the broader serotype HLA-B15. A study in Europe suggested the gene marker is relevant only for East Asians. This has clinical relevance: it is agreed that, before starting a medication such as allopurinol in a patient of Chinese descent, HLA-B*58:01 testing should be considered. Based on the Asian findings, similar studies in Europe showed that 61% of allopurinol-induced SJS/TEN patients carried HLA-B58 (the phenotype frequency of the B*58:01 allele in Europeans is typically 3%).
One study concluded: "Even when HLA-B alleles behave as strong risk factors, as for allopurinol, they are neither sufficient nor necessary to explain the disease." Other HLA associations with the development of SJS, SJS/TEN, or TEN and the intake of specific drugs, as determined in certain populations, are given in HLA associations with SCARs.
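To illustrate the strength of such an association, the carrier frequency among cases can be compared with the background frequency quoted above. The following rough calculation is illustrative only, assuming the quoted figures (61% of allopurinol-induced SJS/TEN cases carrying the allele versus a 3% background phenotype frequency) are representative:

  # Rough odds-ratio estimate for HLA-B*58:01 carriage in
  # allopurinol-induced SJS/TEN, using the frequencies quoted above.
  # Illustrative only: real studies adjust for sampling and controls.
  carrier_in_cases = 0.61   # fraction of cases carrying the allele
  carrier_in_popln = 0.03   # background phenotype frequency (Europeans)

  odds_cases = carrier_in_cases / (1 - carrier_in_cases)
  odds_popln = carrier_in_popln / (1 - carrier_in_popln)
  print(round(odds_cases / odds_popln))  # prints 51

Even an enrichment of this magnitude leaves the allele, as the study quoted above notes, neither sufficient nor necessary: most carriers never develop the reaction.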
T-cell receptors
In addition to acting through HLA proteins to bind with a T-cell receptor, a drug or its metabolite may bypass HLA proteins and bind directly to a T-cell receptor, thereby stimulating CD8+ T or CD4+ T cells to initiate autoimmune responses. In either case, this binding appears to develop only on certain T-cell receptors. Since the genes for these receptors are highly edited, i.e. altered to encode proteins with different amino acid sequences, and since the human population may express more than 100 trillion different T-cell receptors (i.e. with different amino acid sequences) while an individual expresses only a fraction of these, a drug's or its metabolite's ability to induce SJS by interacting with a T-cell receptor is limited to those individuals whose T cells express a T-cell receptor that can interact with the drug or its metabolite. Thus, only rare individuals are predisposed to develop SJS in response to a particular drug on the basis of their expression of specific T-cell receptor types. While the evidence supporting this T-cell receptor selectivity is limited, one study identified the preferential presence of the TCR-V-b and complementarity-determining region 3 in T-cell receptors found on the T cells in the blisters of patients with allopurinol-induced DRESS syndrome. This finding is compatible with the notion that specific types of T-cell receptors are involved in the development of specific drug-induced SCARs.
ADME
Variations in ADME, i.e. an individual's efficiency in absorbing, tissue-distributing, metabolizing, or excreting a drug, have been found to occur in various severe cutaneous adverse reactions (SCARs) as well as other types of adverse drug reactions. These variations influence the levels and duration of a drug or its metabolite in tissues, and thereby impact the drug's or metabolite's ability to evoke these reactions. For example, CYP2C9 is an important drug-metabolizing cytochrome P450; it metabolizes and thereby inactivates phenytoin. Taiwanese, Japanese, and Malaysian individuals expressing the CYP2C9*3 variant of CYP2C9, which has reduced metabolic activity compared to the wild type (i.e. CYP2C9*1) cytochrome, have increased blood levels of phenytoin and a high incidence of SJS (as well as SJS/TEN and TEN) when taking the drug. In addition to abnormalities in drug-metabolizing enzymes, dysfunctions of the kidney, liver, or GI tract that increase the levels of a SCARs-inducing drug or metabolite are suggested to promote SCARs responses. These ADME abnormalities, it is also suggested, may interact with particular HLA proteins and T-cell receptors to promote a SCARs disorder.
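The pharmacokinetic logic of the CYP2C9 example can be sketched with the standard steady-state relationship, in which the average plasma concentration equals the dosing rate divided by clearance, so reduced metabolic activity raises drug levels. The numbers below are illustrative assumptions, not published values for phenytoin or CYP2C9*3:

  # Minimal steady-state sketch: C_ss = dosing_rate / clearance.
  # Clearance values are illustrative assumptions only.
  def steady_state_conc(dosing_rate_mg_per_h, clearance_l_per_h):
      """Average steady-state plasma concentration (mg/L)."""
      return dosing_rate_mg_per_h / clearance_l_per_h

  dose = 12.5          # mg/h (300 mg/day), hypothetical
  cl_wild_type = 2.0   # L/h for CYP2C9*1 carriers, assumed
  cl_variant = 1.0     # L/h for CYP2C9*3 (reduced activity), assumed

  print(steady_state_conc(dose, cl_wild_type))  # 6.25 mg/L
  print(steady_state_conc(dose, cl_variant))    # 12.5 mg/L: doubled exposure

In reality phenytoin follows saturable (Michaelis–Menten) kinetics, which makes the effect of reduced metabolism on blood levels even more pronounced than this linear sketch suggests.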
Diagnosis
SJS is diagnosed when less than 10% of the body surface area is involved. It is known as TEN when more than 30% of the body surface area is involved, and as an intermediate form (SJS/TEN overlap) with 10 to 30% involvement. A positive Nikolsky's sign is helpful in the diagnosis of SJS and TEN. A skin biopsy is helpful, but not required, to establish a diagnosis of SJS and TEN.
Pathology
SJS, like TEN and erythema multiforme, is characterized by confluent epidermal necrosis with minimal associated inflammation. The acuity is apparent from the (normal) basket weave-like pattern of the stratum corneum.
Classification
Stevens–Johnson syndrome (SJS) is a milder form of toxic epidermal necrolysis (TEN). These conditions were first recognised in 1922. A classification first published in 1993, which has been adopted as a consensus definition, identifies Stevens–Johnson syndrome, toxic epidermal necrolysis, and SJS/TEN overlap. All three are part of a spectrum of severe cutaneous adverse reactions (SCARs) which affect skin and mucous membranes. The distinction between SJS, SJS/TEN overlap, and TEN is based on the type of lesions and the amount of body surface area with blisters and erosions. It is agreed that the most reliable method to classify EM, SJS, and TEN is based on lesion morphology and the extent of epidermal detachment. Blisters and erosions cover between 3% and 10% of the body in SJS, 11–30% in SJS/TEN overlap, and over 30% in TEN. The skin pattern most commonly associated with SJS is widespread, often joined or touching (confluent), purpuric spots (macules) or flat small blisters or large blisters which may also join. These occur primarily on the torso.

SJS, TEN, and SJS/TEN overlap can be mistaken for erythema multiforme. Erythema multiforme, which is also within the SCAR spectrum, differs in clinical pattern and etiology.
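Because the SJS/TEN distinction reduces to percentage bands of detached body surface area, it can be expressed as a simple threshold rule. The sketch below encodes the bands given above; exact boundary handling is a simplifying assumption, and clinical grading also weighs lesion morphology, not BSA alone:

  # Classify along the SJS/TEN spectrum by % body surface area (BSA)
  # with blisters and erosions: SJS up to 10%, SJS/TEN overlap
  # 11-30%, TEN over 30%. Boundaries simplified for illustration.
  def classify_scar(bsa_detached_percent):
      if bsa_detached_percent <= 10:
          return "SJS"
      if bsa_detached_percent <= 30:
          return "SJS/TEN overlap"
      return "TEN"

  for bsa in (5, 20, 45):
      print(bsa, "->", classify_scar(bsa))  # SJS, overlap, TEN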
Prevention
Screening individuals for certain predisposing gene variants before initiating treatment with particular SJS-, TEN/SJS-, or TEN-inducing drugs is recommended or under study. These recommendations are typically limited to specific populations with a significant likelihood of carrying the indicated gene variant, since screening populations with extremely low incidence of the variant is considered cost-ineffective. Individuals expressing the HLA allele associated with sensitivity to an indicated drug should not be treated with the drug. These recommendations include the following. Before treatment with carbamazepine, the Taiwanese and US Food and Drug Administrations recommend screening for HLA-B*15:02 in certain Asian groups; this has been implemented in Taiwan, Hong Kong, Singapore, and many medical centers in Thailand and mainland China. Before treatment with allopurinol, the American College of Rheumatology guidelines for managing gout recommend HLA-B*58:01 screening; this is provided in many medical centers in Taiwan, Hong Kong, Thailand, and mainland China. Before treatment with abacavir, the US Food and Drug Administration recommends screening for HLA-B*57:01 in Caucasian populations; this screening is widely implemented, and it has also been suggested that all individuals found to express this HLA serotype avoid treatment with abacavir. Current trials are underway in Taiwan to define the cost-effectiveness of avoiding phenytoin in SJS, SJS/TEN, and TEN for individuals expressing the CYP2C9*3 allele of CYP2C9.
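The screening recommendations above amount to a drug-to-allele lookup keyed to an at-risk population. A minimal sketch of such a decision aid follows; the table simply restates the pairings in this section, and any real clinical use would need current, authoritative guidelines:

  # Drug -> (risk allele, population) lookup restating the
  # screening pairings above. Illustrative only, not clinical advice.
  SCREENING_TABLE = {
      "carbamazepine": ("HLA-B*15:02", "certain Asian groups"),
      "allopurinol":   ("HLA-B*58:01", "per ACR gout guidelines"),
      "abacavir":      ("HLA-B*57:01", "Caucasian populations"),
  }

  def screening_advice(drug):
      entry = SCREENING_TABLE.get(drug.lower())
      if entry is None:
          return f"No screening pairing listed here for {drug}."
      allele, population = entry
      return f"Before {drug}: consider {allele} screening ({population})."

  print(screening_advice("abacavir"))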
Treatment
SJS constitutes a dermatological emergency. Patients with documented Mycoplasma infections can be treated with an oral macrolide or oral doxycycline.

Initially, treatment is similar to that for patients with thermal burns, and continued care can only be supportive (e.g., intravenous fluids and nasogastric or parenteral feeding) and symptomatic (e.g., analgesic mouth rinses for mouth ulcers). Dermatologists and surgeons tend to disagree about whether the skin should be debrided.

Beyond this kind of supportive care, no treatment for SJS is accepted. Treatment with corticosteroids is controversial: early retrospective studies suggested corticosteroids increased hospital stays and complication rates. No randomized trials of corticosteroids have been conducted for SJS, and it can be managed successfully without them. Other agents have been used, including cyclophosphamide and ciclosporin, but none has exhibited much therapeutic success. Intravenous immunoglobulin treatment has shown some promise in reducing the length of the reaction and improving symptoms. Other common supportive measures include the use of topical pain anesthetics and antiseptics, maintaining a warm environment, and intravenous analgesics.
An ophthalmologist should be consulted immediately, as SJS frequently causes the formation of scar tissue inside the eyelids, leading to corneal vascularization, impaired vision, and a host of other ocular problems. Those with chronic ocular surface disease caused by SJS may find some improvement with PROSE treatment (prosthetic replacement of the ocular surface ecosystem treatment).
Prognosis
SJS (with less than 10% of body surface area involved) has a mortality rate of around 5%. The mortality rate for toxic epidermal necrolysis (TEN) is 30–40%. The risk of death can be estimated using the SCORTEN scale, which takes a number of prognostic indicators into account; it is helpful to calculate a SCORTEN within the first 3 days of hospitalization. Other outcomes include organ damage or failure, corneal scratching, and blindness. Restrictive lung disease may develop in patients with SJS and TEN after initial acute pulmonary involvement. Patients with SJS or TEN caused by a drug have a better prognosis the earlier the causative drug is withdrawn.
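SCORTEN itself is an additive score: one point for each prognostic factor present, with higher totals mapping to higher predicted mortality. The sketch below uses the commonly cited seven criteria and mortality bands; treat both the thresholds and the mapping as an approximate summary from the literature rather than a validated clinical tool:

  # SCORTEN sketch: one point per risk factor present.
  # Criteria and mortality bands are approximate summaries only.
  def scorten(age, heart_rate, has_malignancy, bsa_detached_pct,
              urea_mmol_l, bicarb_mmol_l, glucose_mmol_l):
      return sum([
          age > 40,
          heart_rate > 120,
          has_malignancy,
          bsa_detached_pct > 10,
          urea_mmol_l > 10,
          bicarb_mmol_l < 20,
          glucose_mmol_l > 14,
      ])

  MORTALITY = {0: "~3%", 1: "~3%", 2: "~12%", 3: "~35%", 4: "~58%"}
  score = scorten(55, 130, False, 15, 12.0, 18.0, 16.0)
  print(score, MORTALITY.get(score, ">=90%"))  # 6 >=90%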
Epidemiology
SJS is a rare condition, with a reported incidence of around 2.6 to 6.1 cases per million people per year. In the United States, about 300 new diagnoses are made each year. The condition is more common in adults than in children.
History
SJS is named for Albert Mason Stevens and Frank Chambliss Johnson, American pediatricians who jointly published a description of the disorder in the American Journal of Diseases of Children in 1922.
Notable cases
Ab-Soul, American hip hop recording artist and member of Black Hippy
Padma Lakshmi, actress, model, television personality, and cookbook writer
Manute Bol, former NBA player. Bol died from complications of Stevens–Johnson syndrome as well as kidney failure.
Gene Sauers, three-time PGA Tour winner
Samantha Reckis, a seven-year-old Plymouth, Massachusetts girl who lost the skin covering 95% of her body after taking children's Motrin in 2003. In 2013, a jury awarded her $63 million in a lawsuit against Johnson & Johnson, one of the largest awards of its kind. The decision was upheld in 2015.
Karen Elaine Morton, a model and actress who appeared in Tommy Tutone's "867-5309/Jenny" video and was Playmate of the Month in the July 1978 issue of Playboy magazine.
Research
In 2015, the NIH and the Food and Drug Administration (FDA) organized a workshop entitled "Research Directions in Genetically-Mediated Stevens–Johnson Syndrome/Toxic Epidermal Necrolysis".
References
External links
Bentley, John; Sie, David (8 October 2014). "Stevens-Johnson syndrome and toxic epidermal necrolysis". The Pharmaceutical Journal. 293 (7832). Retrieved 8 October 2014. |
Axenfeld–Rieger syndrome | Axenfeld–Rieger syndrome is a rare autosomal dominant disorder which affects the development of the teeth, eyes, and abdominal region. Axenfeld–Rieger syndrome is part of the so-called iridocorneal or anterior segment dysgenesis syndromes, which were formerly known as anterior segment cleavage syndromes, anterior chamber segmentation syndromes, or mesodermal dysgenesis. Although the exact classification of this set of signs and symptoms is somewhat confusing in the current scientific literature, most authors agree with the classification cited here. Axenfeld anomaly is the development of a posterior embryotoxon associated with strands of the iris adhering to an anteriorly displaced Schwalbe line; when glaucoma is added, it is called Axenfeld syndrome. Rieger's anomaly is defined by a range of congenital anomalies of the iris, such as iris hypoplasia, corectopia, or polycoria. When systemic findings such as bone, facial, and/or dental defects are added to Rieger's anomaly, it is known as Rieger syndrome. The combination of both entities gives rise to the Axenfeld–Rieger anomaly when there are no systemic abnormalities, and to Axenfeld–Rieger syndrome when there are.

Axenfeld–Rieger syndrome is a rare disease that affects the eye bilaterally, with an estimated prevalence of 1 in 200,000 people and no gender predilection, and is characterized by autosomal dominant inheritance with complete penetrance and variable expressivity. The genes that have been identified, in approximately 50% of cases, are PITX2 and FOXC1. Given the important hereditary factor, it is important to evaluate the closest members of the family.

To explain the ocular alterations, there is a theory of the mechanism postulated by Shields et al., which implies an arrest in the migration of neural crest cells towards the third trimester of gestation, leading to the persistence of primordial endothelial tissue in the iris and anterior chamber angle. Contraction of these membranes after birth leads to the progressive changes seen in some patients. This primordial endothelium also generates an excessive and atypical basement membrane, especially near the limbal corneal junction, which accounts for the prominent Schwalbe line. Secondary glaucoma, where present, would be the consequence of dysgenesis in the chamber sinus.
Signs and symptoms
Disease manifestations:
The age at diagnosis varies with the intensity of the symptoms, which range from asymptomatic to florid. The disease is characterized by ocular and systemic manifestations affecting multiple organs that share an origin in the neural crest.
Eye manifestations:
Bilateral ocular manifestations are usually pathognomonic of the disease. Children who develop glaucoma may present with signs and symptoms of buphthalmos, photosensitivity, tearing, and corneal decompensation, which, associated with poor vision, may be accompanied by strabismus. Adults are more likely to be asymptomatic, so an ophthalmological examination may be required to detect the problem. Using a slit lamp, a posterior embryotoxon, characterized by a prominent, anteriorly displaced Schwalbe's ring near the temporal corneal limbus, can be revealed. The unexpected finding of a posterior embryotoxon as a single whitish irregular arcuate ridge on routine examination is not necessarily diagnostic of ARS, as this occurs in an estimated 8% to 15% of the normal population. On gonioscopy, the posterior embryotoxon may be more extensive and present through 360°, with a variable thickness of the annulus, and may unusually be detached and hanging within the anterior chamber. Regarding the iris, peripheral extensions to Schwalbe's line may be seen; these can be thin or thick, may extend over the trabecular meshwork obscuring the scleral spur, and may even pull on the iris and produce corectopia, with iris tissue changes ranging from mild stromal atrophy to uveal ectropion, pseudopolycoria, or even absence of the iris. These chamber sinus anomalies predispose half of cases to open-angle glaucoma, which can manifest throughout life and therefore requires regular ophthalmological check-ups. Other related anomalies are strabismus, due to alteration in the insertions of the extraocular muscles or secondary to amblyopia, with a predisposition to exotropia, and retinal detachment.

Systemic manifestations:
Among pathologies affecting the extraocular organs, the greatest attention must be paid to anomalies of the cardiovascular system, since these represent the most worrying associations because of their systemic repercussions. They occur in different cardiac structures, such as heart valve defects, tetralogy of Fallot, atrial septal defects, or persistent truncus arteriosus. Other alterations described are craniofacial anomalies associated with hypoplasia of the midface, hypertelorism, telecanthus, maxillary hypoplasia, a short nasolabial fold, a thin upper lip, and a larger, everted lower lip, which are typical facial characteristics, although variably expressed. Maxillary hypoplasia and poor tooth development produce a prognathic profile. Inspection of the oral cavity may show microdontia, hypodontia, oligodontia, and a thickened frenulum. The crowns of the anterior teeth may be conical or peg-shaped and the roots may be shortened, the gingival attachments may be reduced, and the enamel may be hypoplastic, contributing to poor dental health. Other associations have been described, including umbilical, auditory, pituitary, psychomotor, size, urethral, and anal anomalies, as well as albinism.
Pathophysiology
The molecular genetics of Axenfeld–Rieger syndrome are poorly understood, but center on three genes identified by cloning of chromosomal breakpoints from patients. The disorder is inherited as an autosomal dominant trait, which means the defective gene is located on an autosome and only one copy of the gene is sufficient to cause the disorder when inherited from an affected parent. Each child of an affected parent therefore has a 50% chance of inheriting the condition.
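The 50% figure follows from enumerating the gametes: an affected heterozygous parent passes either the defective allele or the normal one with equal probability. A minimal enumeration, purely illustrative, with 'D' standing for the dominant disease allele:

  # Cross an affected heterozygote (Dd) with an unaffected (dd) partner
  # and count affected offspring genotypes. Illustrative only.
  from itertools import product

  affected_parent = ["D", "d"]
  unaffected_parent = ["d", "d"]

  offspring = ["".join(sorted(pair))
               for pair in product(affected_parent, unaffected_parent)]
  affected = sum("D" in genotype for genotype in offspring)
  print(offspring)                                # ['Dd', 'Dd', 'dd', 'dd']
  print(f"{affected}/{len(offspring)} affected")  # 2/4 affected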
Diagnosis
Although most recognized for its correlation with the onset of glaucoma, the malformation is not limited to the eye: Axenfeld–Rieger syndrome, when associated with the PITX2 genetic mutation, usually presents with congenital malformations of the face, teeth, and skeletal system. The most characteristic feature affecting the eye is a distinct corneal posterior arcuate ring, known as an "embryotoxon". In severe cases, the iris may be adherent to the cornea anterior to Schwalbe's line. One of the three known genetic mutations which cause Rieger syndrome can be identified through analysis of genetic samples. About 40% of people with Axenfeld–Rieger syndrome have displayed mutations in the genes PITX2, FOXC1, and PAX6. The difference between type 1, 2, and 3 Axenfeld–Rieger syndrome is the genetic cause; all three types display the same symptoms and abnormalities.
Classification
The OMIM classification is as follows: type 1, associated with mutations in PITX2; type 2, mapped to chromosome 13q14 (gene unknown); and type 3, associated with mutations in FOXC1.
Detection of any of these mutations can give patients a clear diagnosis and prenatal procedures such as preimplantation genetic diagnosis, chorionic villus sampling and amniocentesis can be offered to patients and prospective parents.
Management
One of the surgical techniques used to treat this rare disease is phakic retroiridian pupilloplasty, an original surgical technique involving the creation of a sclerocorneal incision through a peripheral iridotomy, with the surgeon working behind the iris and creating a neopupil with an anterior chamber vitrectome. It requires very few follow-up visits, and the patient's recovery is fast.
Eponym
It is named after the German ophthalmologist Theodor Axenfeld, who studied anterior segment disorders, especially those such as Rieger syndrome and the Axenfeld anomaly.
Axenfeld–Rieger syndrome is characterized by abnormalities of the eyes, teeth, and facial structure. Rieger syndrome, by medical definition, is determined by the presence of malformed teeth, an underdeveloped anterior segment of the eyes, and cardiac problems associated with the Axenfeld anomaly. The term "Rieger syndrome" is sometimes used to indicate an association with glaucoma, which occurs in up to 50% of patients with Rieger syndrome. Glaucoma typically develops during late childhood or adolescence, but often occurs in infancy. In addition, a prominent Schwalbe's line, an opaque ring around the cornea known as posterior embryotoxon, may arise with hypoplasia of the iris. Below-average height and stature, stunted development of the mid-facial features, and mental deficiencies may also be observed in patients.
References
Further reading
Amendt, Brad A., ed. (2005). The Molecular Mechanisms of Axenfeld-Rieger Syndrome. Medical Intelligence Unit. Springer. doi:10.1007/0-387-28672-1. ISBN 978-0-387-28672-3.
Agarwal, Sunita; Agarwal, Athiya; Apple, David J., eds. (2002). "Axenfeld-Rieger Syndrome". Textbook of Ophthalmology. Jaypee Brothers. pp. 1049–51. ISBN 978-81-7179-884-1.
Shields, M. Bruce; Buckley, Edward; Klintworth, Gordon K.; Thresher, Randy (1985). "Axenfeld-Rieger syndrome. A spectrum of developmental disorders". Survey of Ophthalmology. 29 (6): 387–409. doi:10.1016/0039-6257(85)90205-X. PMID 3892740.
Alward, Wallace L.M. (2000). "Axenfeld-Rieger syndrome in the age of molecular genetics". American Journal of Ophthalmology. 130 (1): 107–15. doi:10.1016/S0002-9394(00)00525-0. PMID 11004268.
External links
Axenfeld Rieger syndrome at NIH's Office of Rare Diseases
Axenfeld Rieger anomaly with cardiac defects and sensorineural hearing loss at NIH's Office of Rare Diseases |
Sodoku | Sodoku (鼠毒) is a bacterial zoonotic disease. It is caused by the Gram-negative rod Spirillum minus (also known as Spirillum minor). It is a form of rat-bite fever (RBF).
Signs and symptoms
The initial scratch or wound caused by a bite from a carrier rodent results in mild inflammatory reactions and ulcerations. The wounds may heal initially, but reappear with the onset of symptoms. The symptoms include recurring fever, with a body temperature of 101–104°F (38–40°C). The fever lasts for 2–4 days, but generally recurs at intervals of 4–8 weeks. This cycle may continue for months or years. Other symptoms include regional lymphadenitis, malaise, and headache. Complications include myocarditis, endocarditis, hepatitis, splenomegaly, and meningitis.
Causes
The infection is acquired through rat bites or scratches. It can occur as a nosocomial infection (i.e., acquired in hospitals), or through exposure or close association with animals preying on rats, mice, squirrels, etc. Sodoku is mostly seen in Asia and Africa. Local transmission has been reported in the US. The incubation period is 4 to 28 days.
Prognosis
Mortality is 6–10%.
References
External links |
Wheeze | A wheeze is a continuous, coarse, whistling sound produced in the respiratory airways during breathing. For wheezes to occur, some part of the respiratory tree must be narrowed or obstructed (for example narrowing of the lower respiratory tract in an asthmatic attack), or airflow velocity within the respiratory tree must be heightened. Wheezing is commonly experienced by persons with a lung disease; the most common cause of recurrent wheezing is asthma, though it can also be a symptom of lung cancer, congestive heart failure, and certain types of heart diseases.
The differential diagnosis of wheezing is wide, and the reason for wheezing in a given patient is determined by considering the characteristics of the wheezes and the historical and clinical findings made by the examining physician.
Characteristics
Wheeze
Wheezes occupy different portions of the respiratory cycle depending on the site of airway obstruction and its nature. The fraction of the respiratory cycle during which a wheeze is produced roughly corresponds to the degree of airway obstruction. Bronchiolar disease usually causes wheezing that occurs in the expiratory phase of respiration. As a rule, extrathoracic airway obstruction produces inspiratory sounds. Intrathoracic major airway obstruction produces inspiratory as well as expiratory sounds. Distal airway obstruction predominantly produces expiratory sounds.

The presence of expiratory-phase wheezing signifies that the patient's peak expiratory flow rate is less than 50% of normal. Wheezing heard in the inspiratory phase, on the other hand, is often a sign of a stiff stenosis, usually caused by tumors, foreign bodies, or scarring. This is especially true if the wheeze is monotonal, occurs throughout the inspiratory phase (i.e. is "holoinspiratory"), and is heard more proximally, in the trachea. Inspiratory wheezing also occurs in hypersensitivity pneumonitis. Wheezes heard at the end of both the expiratory and inspiratory phases usually signify the periodic opening of deflated alveoli, as occurs in some diseases that lead to collapse of parts of the lungs.
The location of the wheeze can also be an important clue to the diagnosis. Diffuse processes that affect most parts of the lungs are more likely to produce wheezing that may be heard throughout the chest via a stethoscope. Localized processes, such as the occlusion of a portion of the respiratory tree, are more likely to produce wheezing at that location, hence the sound will be loudest and radiate outwardly. The pitch of a wheeze does not reliably predict the degree of narrowing in the affected airway.
Stridor
A special type of wheeze is stridor. Stridor — the word is from the Latin, strīdor — is a harsh, high-pitched, vibrating sound that is heard in respiratory tract obstruction. Stridor heard solely in the inspiratory phase of respiration usually indicates an upper respiratory tract obstruction, "as with aspiration of a foreign body (such as the fabled pediatric peanut)." Stridor in the inspiratory phase is usually heard with obstruction in the upper airways, such as the trachea, epiglottis, or larynx; because a block here means that no air may reach either lung, this condition is a medical emergency. Biphasic stridor (occurring during both the inspiratory and expiratory phases) indicates narrowing at the level of the glottis or subglottis, the point between the upper and lower airways.
See also
Crackles (also called "crepitations" or "rales")
Rhonchi
Squawk (sound)
References
Further reading
Godfrey S, Uwyyed K, Springer C, Avital A (Mar 2004). "Is clinical wheezing reliable as the endpoint for bronchial challenges in preschool children?". Pediatric Pulmonology. 37 (3): 193–200. doi:10.1002/ppul.10434. PMID 14966812. S2CID 25264776.
External links
Audio Breath Sounds - Multiple case studies with audio files of lung sounds.
R.A.L.E. Repository - sound files of breath sounds |
Homelessness | Homelessness or houselessness – also known as a state of being unhoused or unsheltered – is the condition of lacking stable, safe, and adequate housing. People can be categorized as homeless if they are:
living on the streets, also known as rough sleeping (primary homelessness);
moving between temporary shelters, including houses of friends, family, and emergency accommodation (secondary homelessness);
living in private boarding houses without a private bathroom or security of tenure (tertiary homelessness);
without a permanent house or place to live safely; and
internally displaced persons, persons compelled to leave their places of domicile, who remain as refugees within their country's borders.

The rights of people experiencing homelessness vary from country to country. United States government homeless enumeration studies also include people who sleep in a public or private place not designed for use as a regular sleeping accommodation for human beings. Homelessness and poverty are interrelated. There is no methodological consensus on counting homeless people and identifying their needs; therefore, in most cities, only estimated homeless populations are known.
In 2005, an estimated 100 million people worldwide were homeless, and as many as one billion people (one in 6.5 at the time) live as squatters, refugees, or in temporary shelter, all lacking adequate housing.
Scarce and expensive housing is the main cause of rising homelessness in the United States.
United Nations definition
In 2004, the United Nations Department of Economic and Social Affairs defined a homeless household as a household without a shelter that would fall within the scope of living quarters, due to a lack of a steady income; its members carry their few possessions with them, sleeping in the streets, in doorways or on piers, or in another space, on a more or less random basis.

In 2009, at the United Nations Economic Commission for Europe Conference of European Statisticians (CES), held in Geneva, Switzerland, the Group of Experts on Population and Housing Censuses defined homelessness as:
In its Recommendations for the Censuses of Population and Housing, the CES identifies homeless people under two broad groups: (a) Primary homelessness (or rooflessness). This category includes persons living in the streets without a shelter that would fall within the scope of living quarters; (b) Secondary homelessness. This category may include persons with no place of usual residence who move frequently between various types of accommodations (including dwellings, shelters, and institutions for the homeless or other living quarters). This category includes persons living in private dwellings but reporting no usual address on their census form. The CES acknowledges that the above approach does not provide a full definition of the homeless.
Article 25 of the Universal Declaration of Human Rights, adopted 10 December 1948 by the UN General Assembly, contains this text regarding housing and quality of living:
Everyone has the right to a standard of living adequate for the health and well-being of himself and of his family, including food, clothing, housing, and medical care and necessary social services, and the right to security in the event of unemployment, sickness, disability, widowhood, old age or other lack of livelihood in circumstances beyond his control.
The ETHOS Typology of Homelessness and Housing Exclusion was developed as a means of improving understanding and measurement of homelessness in Europe, and to provide a common "language" for transnational exchanges on homelessness. The ETHOS approach confirms that homelessness is a process (rather than a static phenomenon) that affects many vulnerable households at different points in their lives.

The typology was launched in 2005 and is used for different purposes: as a framework for debate, for data collection, for policy-making, for monitoring, and in the media. The typology is an open exercise that abstracts from existing legal definitions in the EU member states. It exists in 25 language versions, the translations being provided mainly by volunteer translators.
Despite all the international agreements and efforts, many countries, and many individuals in power, do not consider housing a human right. Former US President Jimmy Carter addressed this issue in a 2017 interview, saying, "A lot of people don't look at housing as a human right, but it is." His view contrasts with that of many Americans, especially those in power, who do not believe this right is in line with the US Constitution.
Other terms
Recent homeless enumeration survey documentation utilizes the term unsheltered homeless. The common colloquial term street people does not fully encompass all unsheltered people, in that many such persons do not spend their time in urban street environments. Many shun such locales, because homeless people in urban environments may face the risk of being robbed or assaulted. Some people convert unoccupied or abandoned buildings ("squatting"), or inhabit mountainous areas or, more often, lowland meadows, creek banks, and beaches. Many jurisdictions have developed programs to provide short-term emergency shelter during particularly cold spells, often in churches or other institutional properties. These are referred to as warming centers, and are credited by their advocates as lifesaving.
History
Early history through the 19th century
United Kingdom
Following the Peasants' Revolt, English constables were authorized under a 1383 statute to collar vagabonds and force them to show means of support; if they could not, the penalty was gaol. Vagabonds could be sentenced to the stocks for three days and nights; in 1530, whipping was added. The presumption was that vagabonds were unlicensed beggars. In 1547, a bill was passed that subjected vagrants to some of the more extreme provisions of the criminal law, namely two years' servitude and branding with a "V" as the penalty for the first offense, and death for the second. Large numbers of vagabonds were among the convicts transported to the American colonies in the 18th century. During the 16th century in England, the state first tried to give housing to vagrants instead of punishing them, by introducing bridewells to take vagrants and train them for a profession. In the 17th and 18th centuries, these were replaced by workhouses, but these were intended to discourage too much reliance on state help.
United States
In the Antebellum South, the availability of slave labor made it difficult for poor white people to find work, and to prevent poor white people from cooperating with enslaved black people, slaveowners policed poor whites with vagrancy laws. After the American Civil War, a large number of homeless men formed part of a counterculture known as "hobohemia" all over the United States. In smaller towns, hobos temporarily lived near train tracks and hopped onto trains to various destinations. The growing movement toward social concern sparked the development of rescue missions, such as the first US rescue mission, the New York City Rescue Mission, founded in 1872 by Jerry and Maria McAuley.
Modern
20th century
The Great Depression of the 1930s caused a devastating epidemic of poverty, hunger, and homelessness in the United States. When Franklin D. Roosevelt took over the presidency from Herbert Hoover in 1933, he passed the New Deal, which greatly expanded social welfare, including providing funds to build public housing.

Jacob Riis's How the Other Half Lives (1890) and Jack London's The People of the Abyss (1903) discussed homelessness and raised public awareness, which caused some changes in building codes and some social conditions. In England, dormitory housing called "spikes" was provided by local boroughs; by the 1930s, there were 30,000 people living in these facilities. In 1933, George Orwell wrote about poverty in London and Paris in his book Down and Out in Paris and London. In general, in most countries, many towns and cities had an area that contained the poor, transients, and afflicted, such as a "skid row". In New York City, for example, there was an area known as "the Bowery" where, traditionally, people with an alcohol use disorder were to be found sleeping on the streets, bottle in hand.
In the 1960s in the UK, the nature and growing scale of homelessness changed in England as public concern grew. The number of people living "rough" in the streets had increased dramatically. However, beginning with the Conservative administration's Rough Sleeper Initiative, the number of people sleeping rough in London fell dramatically. This initiative was supported further by the incoming Labour administration from 1997 onwards, with the publication of the Coming in from the Cold strategy by the Rough Sleepers Unit, which proposed and delivered a massive increase in the number of hostel bed spaces in the capital and an increase in funding for street outreach teams, who work with rough sleepers to enable them to access services.

Scotland saw a slightly different picture, with the impact of the right to buy resulting in a devastating drop in available social housing from which it has never recovered. The 1980s and 1990s saw ever-increasing numbers of people becoming homeless, with very few rights providing access to change.
2000s
However, this picture changed in Scotland from 2001, as the Scottish Parliament came into being. It was agreed by all parties that a ten-year plan to eradicate homelessness by the end of 2012 would be implemented. The minister for housing met with the third sector and local authorities every six weeks to check on progress, while consultations brought about legislative change alongside work to prevent homelessness. There was a peak in applications around 2005, but from then onwards figures dropped year on year for the next eight years. However, with the focus on the broader numbers of people experiencing homelessness, many people with higher levels of need got caught in the system. Work from 2017 started to address this, with a framework now in place to work towards a day when everyone in Scotland has a home suitable to meet their needs.
In 2002, research showed that children and families were the largest growing segment of the homeless population in the United States, and this has presented new challenges to agencies.
In the US, the government asked many major cities to come up with a ten-year plan to end homelessness. One of the results of this was a "Housing First" solution. The Housing First program offers homeless people access to housing without having to undergo tests for sobriety and drug usage. The Housing First program seems to benefit homeless people in every aspect except for substance abuse, for which the program offers little accountability. An emerging consensus is that the Housing First program still gives clients a higher chance at retaining their housing once they get it. A few critical voices argue that it misuses resources and does more harm than good; they suggest that it encourages rent-seeking and that there is not yet enough evidence-based research on the effects of this program on the homeless population. Some formerly homeless people, who were finally able to obtain housing and other assets which helped to return to a normal lifestyle, have donated money and volunteer services to the organizations that provided aid to them during their homelessness. Alternatively, some social service entities that help homeless people now employ formerly homeless individuals to assist in the care process.
Homelessness has migrated toward rural and suburban areas. The number of homeless people has not changed dramatically, but the number of homeless families has increased, according to a HUD report. The United States Congress appropriated $25 million in the McKinney–Vento Homeless Assistance Grants for 2008 to show the effectiveness of rapid re-housing programs in reducing family homelessness. In February 2009, President Obama signed the American Recovery and Reinvestment Act of 2009, part of which addressed homelessness prevention, allocating $1.5 billion for a Homeless Prevention Fund. The Emergency Shelter Grant (ESG) program's name was changed to the Emergency Solutions Grant (ESG) program, and funds were re-allocated to assist with homelessness prevention and rapid re-housing for families and individuals.
Causes
Major reasons for homelessness include:
Rent and eviction
Gentrification is a process in which a formerly affordable neighborhood becomes popular with wealthier people, raising housing prices and pushing poorer residents out. Gentrification can cause or influence evictions, foreclosures, and rent regulation.
Increased wealth disparity and income inequality cause distortions in the housing market that push rent burdens higher, making housing unaffordable.

In many countries, people lose their homes by government order to make way for newer upscale high-rise buildings, roadways, and other governmental needs. The compensation may be minimal, in which case the former occupants cannot find appropriate new housing and become homeless.
Mortgage foreclosures, where mortgage holders see the best solution to a loan default as taking and selling the house to pay off the debt, can leave people homeless. Foreclosures on landlords often lead to the eviction of their tenants. "The Sarasota, Florida, Herald Tribune noted that, by some estimates, more than 311,000 tenants nationwide have been evicted from homes this year after lenders took over the properties." Rent regulation also has a small effect on shelter and street populations, largely because rent control reduces the quality and quantity of housing. For example, a 2019 study found that San Francisco's rent control laws reduced tenant displacement from rent-controlled units in the short term, but resulted in landlords removing 30% of the rent-controlled units from the rental market (by conversion to condos or TICs), which led to a 15% citywide decrease in total rental units and a 7% increase in citywide rents.
Economics
A substantial percentage of the US homeless population are individuals who are chronically unemployed or have trouble managing their expenses, which can lead to poverty. Factors that can lead to economic struggle include neighborhood gentrification (as previously discussed), drug or gambling addiction, job loss, debt, chronic overspending, loss of money and/or assets due to divorce, death of a breadwinning spouse, being denied jobs due to discrimination, living off welfare or disability income, and many others.
Poverty
Poverty is a significant factor in homelessness; alleviation of poverty, as a result, plays an essential role in eliminating it. Some non-governmental organizations (NGOs) have studied unconditional cash transfers (UCTs) to low-income families and individuals as a way to reduce poverty in developing countries. Despite initial concern about the potential negative effects of UCTs on recipients, the researchers found promising results. A study in Kenya found that the assisted households increased their consumption and savings: while the families spent more on food and food security, they did not incur expenses on unnecessary goods or services. This suggests that a proper approach to poverty could effectively eliminate this factor as part of a solution to homelessness. Providing access to education and employment for low-income families and individuals must also be considered to combat poverty and prevent homelessness.
Physical and mental health
Many researchers have identified that "homelessness is closely connected to decline in physical and mental health". Most people who use homeless shelters on a frequent basis face multiple disadvantages, such as an increased prevalence of physical and mental health problems, disabilities, addiction, poverty, and discrimination. Studies have shown that homeless people have high levels of morbidity and mortality, and that they suffer from a wide range of health problems, including severe infectious diseases (such as tuberculosis, HIV/AIDS, and STDs), addictions, and mental illnesses. Notably, the lack of access to healthcare for homeless people adversely affects the healthcare system as well. Homeless people often obtain their care in emergency departments, partly because they lack adequate health insurance, especially for medications. Statistics demonstrate that they are admitted to hospital five times more often than the general public and stay under care much longer. Their prolonged care imposes a significant cost on the healthcare system and deprives others of timely healthcare. Studies show that preventive and primary care, which homeless people are often not receiving, substantially lower overall healthcare costs. In terms of providing adequate treatment to homeless people with mental illness, the healthcare system's performance has not been promising either. A comprehensive solution to homelessness must therefore include an effective approach to providing healthcare to homeless people.
Disabilities, especially where disability services are non-existent, inconvenient, or poorly performing, can impact a person's ability to keep up with house payments, mortgage, or rent, especially if they are unable to work. Traumatic brain injury is one disability that can account for homelessness: according to a Canadian survey, traumatic brain injury is widespread among homeless people and, for around 70% of respondents, can be attributed to a time "before the onset of homelessness".

Being afflicted with a mental disorder, including a substance use disorder, where mental health services are unavailable or difficult to access, can also drive homelessness for the same reasons as disabilities. A United States federal survey done in 2005 indicated that at least one-third of homeless men and women have serious psychiatric disorders or problems. Autism spectrum disorders and schizophrenia are the two most common mental disabilities among the US homeless. Personality disorders are also very prevalent, especially Cluster A. Substance abuse can also cause homelessness through behavioral patterns associated with addiction that alienate an addicted individual's family and friends, who could otherwise provide support during difficult economic times.
Discrimination
A history of experiencing domestic violence can also contribute to homelessness. Compared to housed women, homeless women are more likely to report childhood histories of abuse, as well as more current physical abuse by male partners.

Gender disparities also influence the demographics of homelessness. The experiences of homeless women and women in poverty are often overlooked; however, they experience specific gender-based victimization. As individuals with little to no physical or material capital, homeless women are particularly targeted by male law enforcement and men living on the street. It has been found that "street-based homelessness dominates mainstream understanding of homelessness and it is indeed an environment in which males have far greater power (O'Grady and Gaetz, 2004)." Women on the street are often motivated to gain capital through affiliation and relationships with men, rather than facing homelessness alone. Within these relationships, women are still likely to be physically and sexually abused.

Social exclusion related to sexual orientation, gender identity or expression, or sex characteristics can also contribute to homelessness through discrimination. Relationship breakdown, particularly between young people and their parents, such as disownment due to sexuality or gender identity, is one example. Former imprisonment status and a criminal history can also affect securing housing.
Human and natural disasters
Natural disasters, including but not limited to earthquakes, hurricanes, tsunamis, tornadoes, and volcanic eruptions, can cause homelessness. An example is the 1999 Athens earthquake in Greece, in which many middle-class people became homeless, with some of them living in containers, especially in the Nea Ionia earthquake survivors' container city provided by the government; in most cases, their only property that survived the quake was their car. Such people are known in Greece as seismopathis, meaning earthquake-struck.

War or armed conflict can create refugees fleeing the violence. Whether these migrants are domestic or foreign to the country, their number can outstrip the supply of affordable housing, leaving some section of this population homeless.
Foster care
Transitions from foster care and other public systems can also impact homelessness; specifically, youth who have been involved in, or are part of, the foster care system are more likely to become homeless. Most leaving the system have no support and no income, making it nearly impossible to break the cycle and forcing them to live on the streets. There is also a lack of shelter beds for youth, and various shelters have strict admission policies.
Choice
Although uncommon, some people choose homelessness as a lifestyle. There are different reasons why someone would make this choice. They may not want to contribute to a capitalist society, which includes having a job, spending and owing money, and paying taxes to the government. The main aspect of freeganism is anti-consumerism and avoiding spending excessive amounts of money at all costs. Some see homelessness as freer than living in a house or apartment, and prefer being in nature and away from other people. Some may have had a traumatic experience in a house or apartment, such as a fire, and feel safer outside, where they can survey their surroundings.
Challenges
The basic problem of homelessness is the need for personal shelter, warmth, and safety. Other difficulties include:
Hygiene and sanitary facilities
Hostility from the public and laws against urban vagrancy
Cleaning and drying of clothes
Obtaining, preparing, and storing food
Keeping contact with friends, family, and government service providers without a permanent location or mailing address
Medical problems, including issues caused by an individuals homeless state (e.g., hypothermia or frostbite from sleeping outside in cold weather), or issues which are exacerbated by homelessness due to lack of access to treatment (e.g., mental health and the individual not having a place to store prescription drugs)
Personal security, quiet, and privacy, especially for sleeping, bathing, and other hygiene activities
Safekeeping of bedding, clothing, and possessions, which may have to be carried at all times

People experiencing homelessness face many problems beyond the lack of a safe and suitable home. They are often faced with reduced access to private and public services and vital necessities:
General rejection or discrimination from other people
Increased risk of suffering violence and abuse
Limited access to education
Loss of usual relationships with the mainstream
Not being seen as suitable for employment
Reduced access to banking services
Reduced access to communications technology
Reduced access to healthcare and dental services
Targeting by municipalities to exclude from public space
Implementation of hostile architecture
Difficulty forming trust in relation to services, systems, and other people, exacerbating pre-existing difficulty accessing aid and escaping homelessness, particularly present in the chronically homeless

Statistics from the past twenty years in Scotland demonstrate that the biggest cause of homelessness is varying forms of relationship breakdown.

There is sometimes corruption and theft by the employees of a shelter, as evidenced by a 2011 investigative report by FOX 25 TV in Boston, in which a number of Boston public shelter employees were found to have been stealing large amounts of food over a period of time from the shelter's kitchen for their private use and catering. Homeless people are often obliged to adopt various strategies of self-presentation in order to maintain a sense of dignity, which constrains their interaction with passers-by and leads to suspicion and stigmatization by the mainstream public.

Homelessness is also a risk factor for depression caused by prejudice. When someone is prejudiced against people who are homeless and then becomes homeless themselves, their anti-homelessness prejudice turns inward, causing depression. "Mental disorders, physical disability, homelessness, and having a sexually transmitted infection are all stigmatized statuses someone can gain despite having negative stereotypes about those groups." Difficulties can compound exponentially. A study found that in the city of Hong Kong over half of the homeless people in the city (56%) had some degree of mental illness, and only 13% of those were receiving treatment, leaving a large portion of the homeless population untreated for their mental illness.

The issue of anti-homeless architecture came to light in 2014, after a photo of hostile features (spikes on the floor) in London took social media by storm. The photo was a classic example of hostile architecture: structures intended to discourage people from attempting to access or use public space in irregular ways. Although this has only recently come to light, hostile architecture has been around for a long time in many places. An example is a low overpass that was put in place between New York City and Long Island: Robert Moses, an urban planner, designed it this way in an attempt to prevent public buses from being able to pass through it.
Healthcare
Health care for homeless people is a major public health challenge. When compared to the general population, people who are homeless experience higher rates of adverse physical and mental health outcomes. Chronic disease severity, respiratory conditions, rates of mental health illnesses, and substance use are all often greater in homeless populations than in the general population. Homelessness is also associated with a high risk of suicide attempts. Homeless people are more likely to suffer injuries and medical problems from their lifestyle on the street, which includes poor nutrition, exposure to the severe elements of weather, and higher exposure to violence. Yet at the same time, they have reduced access to public medical services or clinics, in part because they often lack identification or registration for public healthcare services. There are significant challenges in treating homeless people who have psychiatric disorders, because clinical appointments may not be kept, their continuing whereabouts are unknown, their medicines may not be taken as prescribed, and their medical and psychiatric histories are not accurate, among other reasons. Because many homeless people have mental illnesses, this has presented a crisis in care.

The conditions affecting homeless people are somewhat specialized and have opened a new area of medicine tailored to this population. Skin conditions, including scabies, are common, because homeless people are exposed to extreme cold in the winter and have little access to bathing facilities. They have problems caring for their feet and have more severe dental problems than the general population. Diabetes, especially untreated, is widespread in the homeless population. Specialized medical textbooks have been written to address this for providers.

Due to the demand for free medical services by homeless people, it might take months to get a minimal dental appointment in a free-care clinic. Communicable diseases are of great concern, especially tuberculosis, which spreads more easily in crowded homeless shelters in high-density urban settings. There has been ongoing concern and study about the health and wellness of the older homeless population, typically ages 50 to 64 and older, as to whether they are significantly more sickly than their younger counterparts and whether they are under-served.

A 2011 study led by Dr. Rebecca T. Brown in Boston, conducted by the Institute for Aging Research (an affiliate of Harvard Medical School), Beth Israel Deaconess Medical Center, and the Boston Health Care for the Homeless Program, found that the elderly homeless population had "higher rates of geriatric syndromes, including functional decline, falls, frailty, and depression than seniors in the general population, and that many of these conditions may be easily treated if detected". The report was published in the Journal of General Internal Medicine. There are government avenues which provide resources for the development of healthcare for homeless people. In the United States, the Bureau of Primary Health Care has a program that provides grants to fund the delivery of healthcare to homeless people. According to 2011 UDS data, community health centers were able to provide service to 1,087,431 homeless individuals. There are also many nonprofit and religious organizations which provide healthcare services to homeless people. These organizations help meet the large need which exists for expanding healthcare for homeless people.
There have been significant numbers of unsheltered persons dying of hypothermia, adding impetus to the trend of establishing warming centers, as well as extending enumeration surveys with vulnerability indexes.
Effect on life expectancy
In 1999, Dr. Susan Barrow of the Columbia University Center for Homelessness Prevention Studies reported in a study that the "age-adjusted death rates of homeless men and women were four times those of the general U.S. population and two to three times those of the general population of New York City". A report commissioned by homeless charity Crisis in 2011 found that on average, homeless people in the UK have a life expectancy of 47 years, 30 years younger than the rest of the population.
Health impacts of extreme weather events
People experiencing homelessness are at significantly increased risk from the effects of extreme weather events, including extreme heat and cold, floods, storm surges, heavy rain, and droughts. While many factors contribute to these events, climate change is driving their increasing frequency and intensity. The homeless population is considerably more vulnerable to these weather events due to its higher rates of chronic disease and lower socioeconomic status. Despite having a minimal carbon footprint, homeless people experience a disproportionate burden of the effects of climate change.

Homeless persons have increased vulnerability to extreme weather events for many reasons. They are disadvantaged in most social determinants of health, including lack of housing and of access to adequate food and water, reduced access to health care, and difficulty in maintaining health care. They have significantly higher rates of chronic disease, including respiratory disease and infections, gastrointestinal disease, musculoskeletal problems, and mental illness. Self-reported rates of respiratory diseases (including asthma, chronic bronchitis, and emphysema) are double those of the general population.

The homeless population often lives in higher-risk urban areas with increased exposure and little protection from the elements, and has limited access to clean drinking water and other means of cooling down. The built environment in urban areas also contributes to the "heat island effect", the phenomenon whereby cities experience higher temperatures due to the predominance of dark, paved surfaces and a lack of vegetation. Homeless populations are often excluded from disaster planning efforts, further increasing their vulnerability when these events occur. Without the means to escape extreme temperatures and to seek proper shelter and cooling or warming resources, homeless people are often left to suffer the brunt of the extreme weather.
The health effects that result from extreme weather include exacerbation of chronic diseases and acute illnesses. Pre-existing conditions can be greatly exacerbated by extreme heat and cold, including cardiovascular, respiratory, skin and renal disease, often resulting in higher morbidity and mortality during extreme weather. Acute conditions such as sunburn, dehydration, heat stroke, and allergic reactions are also common. In addition, a rise in insect bites can lead to vector-borne infections. Mental health conditions can also be impacted by extreme weather events as a result of lack of sleep, increased alcohol consumption, reduced access to resources and reduced ability to adjust to the environmental changes. In fact, pre-existing psychiatric illness has been shown to triple the risk of death from extreme heat. Overall, extreme weather events appear to have a "magnifying effect" in exacerbating the underlying prevalent mental and physical health conditions of homeless populations.
Case study: Hurricane Katrina
In 2005, Hurricane Katrina, a category 5 hurricane, made landfall in Florida and Louisiana, particularly affecting the city of New Orleans and the surrounding areas. Hurricane Katrina was the deadliest hurricane in the US in seven decades, with more than 1,600 confirmed deaths and more than 1,000 people missing. The hurricane disproportionately affected marginalized individuals and individuals of lower socioeconomic status: 93% of shelter residents were African-American, 32% had household incomes below $10,000 per year, and 54% were uninsured. The storm nearly doubled the number of homeless people in New Orleans; while in most cities homeless people account for 1% of the population, in New Orleans they came to account for 4%. In addition to its devastating effects on infrastructure and the economy, the estimated prevalence of mental illness and the incidence of West Nile virus more than doubled in the hurricane-affected regions after Hurricane Katrina.
Legal documentation
Homeless people may find it difficult to document their date of birth or their address. Because homeless people usually have no place to store possessions, they often lose their belongings, including identification and other documents, or find them destroyed by police or others. Without a photo ID, homeless persons cannot get a job or access many social services, including healthcare. They can be denied access to even the most basic assistance: clothing closets, food pantries, certain public benefits, and, in some cases, emergency shelters. Obtaining replacement identification is difficult: without an address, birth certificates cannot be mailed; fees may be cost-prohibitive for impoverished persons; and some states will not issue birth certificates unless the person has photo identification, creating a Catch-22. This problem is far less acute in countries that provide health care free at the point of use, such as the UK, where hospitals are open-access day and night and make no charge for treatment. In the U.S., free-care clinics for homeless and other people do exist in major cities, but they often attract more demand than they can meet.
Victimization by violent crimes
Homeless people are often the victims of violent crime. A 2007 study found that the rate of violent crimes against homeless people in the United States is increasing. A study of women veterans found that homelessness is associated with domestic violence, both directly, as the result of leaving an abusive partner, and indirectly, due to trauma, mental health conditions, and substance abuse.
Stigma
Conditions such as alcoholism and mental illness are often associated with homelessness. Many people fear homeless people due to the stigma surrounding the homeless community. Surveys have revealed that before spending time with homeless people, most people fear them, but that after spending time with them the fear is lessened or gone. Another effect of this stigma is isolation.

The stigmas of homelessness can be divided into three broad categories: (1) attributing homelessness to personal incompetency and health conditions (e.g., unemployment, mental health issues, substance abuse); (2) seeing homeless people as posing threats to one's personal safety; and (3) de-sanitizing homeless people (i.e., seeing them as pathogens). Past research has shown that these stigmas are reinforced by the condition of homelessness itself and have a negative impact on effective public policymaking aimed at reducing homelessness. When a person lives on the street, many aspects of their personal situation, such as mental health issues and alcoholism, are more likely to be exposed to the public than they would be for people who are housed and have access to resources that help them manage their personal crises. Such lack of privacy inevitably reinforces stigma by making stereotyped behavior more visible to the public. Furthermore, the media often present these personal crises as the direct cause of crimes, further leading the public to believe that homeless people are a threat to their personal safety. Many also believe that contact with homeless people increases their chance of contracting diseases, given that homeless people lack access to stable, sanitary living conditions. These types of stigmas are intertwined with one another in shaping public opinion on policies related to the homeless population, resulting in many ineffective policies that do not reduce homelessness at all. An example of such ineffective but somewhat popular policies is imposing bans on sleeping on the streets.

Relying on the well-known contact hypothesis, researchers argue that increasing contact between the homeless and non-homeless populations is likely to change public opinion of this out-group and make the public better informed when it comes to policymaking. While some believe that the contact hypothesis is only valid when the context and type of contact are specified, in the case of reducing discrimination against the homeless population some survey data indicate that the context (e.g., the proportion of the homeless population in one's city) and type of contact (e.g., TV shows about homelessness or interpersonal conversations about it) do not produce much variation: all increase positive attitudes towards homeless people and towards public policies that aid this group. Given that the restrictions on contexts and types of contact needed to reduce stigma are minimal, this finding is informative for governments designing institutional support to reduce discrimination and gauging public opinion on proposed policies to reduce homelessness.
Global statistics
Demographics
In western countries such as the United States, the typical homeless person is male and single, with the Netherlands reporting 80% of homeless people aged 18–65 to be men. Some cities have particularly high percentages of males in homeless populations, with men comprising eighty-five percent of the homeless in Dublin. Non-white people are also overrepresented in homeless populations, with such groups two and one-half times more likely to be homeless in the U.S. The median age of homeless people is approximately 35.
Statistics for developed countries
In 2005, an estimated 100 million people worldwide were homeless. The following statistics indicate the approximate average number of homeless people at any one time. Each country has a different approach to counting homeless people, and estimates of homelessness made by different organizations vary wildly, so comparisons should be made with caution.
European Union: 3,000,000 (UN-HABITAT 2004)
England: 11,580 single households were assessed as rough sleeping at the point of approach in 2021, up 39% from 2019–20, with 119,400 households owed a prevention duty in 2020–21
Scotland: 27,571 households were assessed as homeless in 2020/21, a decrease of 13% compared to 2019/20
Canada: 150,000
Australia: On census night in 2006 there were 105,000 people homeless across Australia, an increase from the 99,900 Australians who were counted as homeless in the 2001 census
United States: The HUD 2018 Annual Homeless Assessment Report (AHAR) to Congress reported that on a single night, roughly 553,000 people were experiencing homelessness in the United States. According to HUD's July 2010 fifth Annual Homeless Assessment Report to Congress, single-point analysis reported to HUD showed 649,917 people experiencing homelessness on a single night in January 2010, up from 643,067 in January 2009. The unsheltered count increased by 2.8 percent while the sheltered count remained the same. HUD also reported that the number of chronically homeless people (persons with severe disabilities and long homeless histories) decreased one percent between 2009 and 2010, from 110,917 to 109,812, and had decreased by 11 percent since 2007, mostly due to the expansion of permanent supportive housing programs. The change in numbers reflected changes in the prevalence of homelessness in local communities rather than other factors. According to HUD's July 2010 Homeless Assessment Report to Congress, more than 1.59 million people spent at least one night in an emergency shelter or transitional housing program during the 2010 reporting period, a 2.2 percent increase from 2009. Most users of homeless shelters used only an emergency shelter, while 17 percent used only transitional housing, and fewer than 5 percent used both during the reporting period. Since 2007, the annual number of people using homeless shelters in cities had decreased from 1.22 million to 1.02 million, a 17 percent decrease, while the number using homeless shelters in suburban and rural areas increased 57 percent, from 367,000 to 576,000. In the U.S., the federal government's HUD agency has required federally funded organizations to use a computer tracking system for homeless people and their statistics, called HMIS (Homeless Management Information System). There has been some opposition to this kind of tracking by privacy advocacy groups, such as EPIC. However, HUD considers its reporting techniques to be reasonably accurate for homeless people in shelters and programs in its Annual Homeless Assessment Report to Congress. Accurately determining and counting the number of homeless people is difficult in general due to their transient lifestyles; the so-called "hidden homeless" remain out of sight of the general population, perhaps staying on private property. Various countries, states, and cities have devised differing means and techniques to calculate an approximate count. For example, a one-night "homeless census count", called a point-in-time (PIT) count, usually held in early winter, is a technique used by a number of American cities, such as Boston. Los Angeles uses a mixed set of techniques for counting, including the PIT street count. In 2003, the United States Department of Housing and Urban Development (HUD) began requiring a PIT count in all "Continuum of Care" communities, which have to report a count of people, their housing status, and the geographic locations of individuals counted. Some communities provide sub-population information to the PIT, such as information on veterans, youth, and elderly individuals, as is done in Boston.
Japan: 20,000–100,000 (some figures put it at 200,000–400,000). Reports show that homelessness has been on the rise in Japan since the mid-1990s. There are more homeless men than homeless women in Japan because it is usually easier for women to get a job and they are less isolated than men; Japanese families also usually provide more support for women than they do for men.
Developing and undeveloped countries
The number of homeless people worldwide grew steadily in 2005. In some developing countries, such as Nigeria and South Africa, homelessness is rampant, with millions of children living and working on the streets. Homelessness has become a problem in China, India, Thailand, Indonesia, and the Philippines despite their growing prosperity, partly due to migrant workers who have trouble finding permanent homes.

Estimates of the true number of homeless people worldwide vary between 100 million and 1 billion, depending on the exact definition used. Refugees, asylum-seekers, and internally displaced persons can also be considered homeless in that they, too, experience "marginalization, minority status, socioeconomic disadvantage, poor physical health, collapse of social supports, psychological distress, and difficulty adapting to host cultures", much like the domestic homeless.

In the past twenty years, scholars such as Tipple and Speak have begun to refer to homelessness as the "antithesis or absence of home" rather than as rooflessness or the "lack of physical shelter". This complication in the homelessness debate reinforces the idea that a home actually consists of adequate shelter, an experienced and dynamic place that serves as a "base" for nurturing human relationships and the "free development of individuals" and their identity. The home is thus perceived to be an extension of one's self and identity. By contrast, the homeless experience, according to Moore, constitutes a "lack of belonging" and a loss of identity that leads to individuals or communities feeling "out of place" once they can no longer call a place of their own home.

This new perspective on homelessness sheds light on the plight of refugees, a population of stateless people who are not normally included in the mainstream definition of homelessness. It has also created problems for researchers, because "counting" homeless people across the globe relies heavily on who is considered a homeless person. Homeless individuals, and by extension refugees, can be seen as lacking the "crucible of our modern society" and a way of actively belonging to and engaging with their respective communities or cultures. As Casavant demonstrates, a spectrum of definitions for homelessness, called the "continuum of homelessness", should count refugees as homeless individuals, because they not only lose their home but are also afflicted with a myriad of problems that parallel those affecting the domestic homeless, such as "[a lack of] stable, safe and healthy housing, an extremely low income, adverse discrimination in access to services, with problems of mental health, alcohol, and drug abuse or social disorganization". Refugees, like the domestic homeless, lose their source of identity and way of connecting with their culture for an indefinite period of time.
Thus, the current definition of homelessness allows people to assume that homeless people, including refugees, are merely "without a place to live", when that is not the case. As numerous studies show, forced migration and displacement bring with them a host of other problems, including socioeconomic instability and "increased stress, isolation, and new responsibilities" in a completely new environment.

For people in Russia, especially the youth, alcohol and substance use is a major cause of becoming and remaining homeless. The United Nations Centre for Human Settlements (UN-Habitat) wrote in its Global Report on Human Settlements in 1995: "Homelessness is a problem in developed as well as in developing countries. In London, for example, life expectancy among homeless people is more than 25 years lower than the national average."
Poor urban housing conditions are a global problem, but conditions are worst in developing countries. Habitat says that today 600 million people live in life- and health-threatening homes in Africa, Asia, and Latin America. In some African countries, such as Malawi, more than three in four young people had insufficient means of shelter and sanitation. "The threat of mass homelessness is greatest in those regions because that is where population is growing fastest. By 2015, the 10 largest cities in the world will be in Asia, Latin America, and Africa. Nine of them will be in developing countries: Mumbai, India – 27.4 million; Lagos, Nigeria – 24.4; Shanghai, China – 23.4; Jakarta, Indonesia – 21.2; São Paulo, Brazil – 20.8; Karachi, Pakistan – 20.6; Beijing, China – 19.4; Dhaka, Bangladesh – 19; Mexico City, Mexico – 18.8. The only city in a developed country that will be in the top ten is Tokyo, Japan – 28.7 million."

In 2008, Dr. Anna Tibaijuka, executive director of UN-HABITAT, referring to the report "State of the World's Cities Report 2008/2009", said that the ongoing world economic crisis should be viewed as a "housing finance crisis" in which the poorest of the poor were left to fend for themselves.
Refuges and alternative accommodation
There are various places where a homeless person might seek refuge:
24-hour Internet cafes are now used by over 5,000 Japanese "Net cafe refugees". An estimated 75% of Japan's 3,200 all-night internet cafes cater to regular overnight guests, who in some cases have become the cafes' main source of income.
24-hour McDonalds restaurants are used by "McRefugees" in Japan, China and Hong Kong. There are about 250 McRefugees in Hong Kong.
Couch surfing: temporary sleeping arrangements in the dwellings of friends or family members. This can also include housing in exchange for labor or sex. Couch surfers may be harder to recognize than street homeless people and are often omitted from housing counts.
Homeless shelters: including emergency cold-weather shelters opened by churches or community agencies, which may consist of cots in a heated warehouse, or temporary Christmas Shelters. More elaborate homeless shelters such as Pinellas Hope in Florida provide residents with a recreation tent, a dining tent, laundry facilities, outdoor tents, casitas, and shuttle services that help inhabitants get to their jobs each day.
Inexpensive boarding houses: also called flophouses, these offer cheap, low-quality temporary lodging.
Inexpensive motels offer cheap, low-quality temporary lodging. However, some who can afford housing live in a motel by choice. For example, David and Jean Davidson spent 22 years at various UK Travelodges.
Public places: Parks, bus or train stations, public libraries, airports, public transportation vehicles (by continual riding where unlimited passes are available), hospital lobbies or waiting areas, college campuses, and 24-hour businesses such as coffee shops. Many public places use security guards or police to prevent people from loitering or sleeping at these locations for a variety of reasons, including image, safety, and comfort of patrons.
Shantytowns: ad hoc dwelling sites of improvised shelters and shacks, usually near rail yards, interstates, and major transportation corridors. Some shantytowns have interstitial tenting areas, but the predominant feature consists of hard structures. Each pad or site tends to accumulate roofing, sheathing, plywood, and nailed two-by-fours.
Single room occupancy (more commonly abbreviated to SRO, also called a "residential hotel"): a form of housing typically aimed at residents with low or minimal incomes who rent small, furnished single rooms with a bed, chair, and sometimes a small desk. SRO units are rented out as permanent or primary residences to individuals within a multi-tenant building where tenants share a kitchen, toilets, or bathrooms. In the 2010s, some SRO units may have a small refrigerator, microwave, and sink.
Squatting: living in an unoccupied structure without payment and without the owner's knowledge or permission. Often these buildings are long abandoned and not safe to occupy.
Tent cities: ad hoc campsites of tents and improvised shelters made of tarpaulins and blankets, often near industrial and institutionally zoned real estate such as rail yards, highways, and major transportation corridors. A few more elaborate tent cities, such as Dignity Village, are hybrids of tent cities and shantytowns. Tent cities frequently consist only of tents and improvised fabric structures, with no semi-permanent structures at all.
Outdoors: on the ground or in a sleeping bag, tent, or improvised shelter, such as a large cardboard box, under a bridge, in an urban doorway, in a park or a vacant lot.
Tunnels such as abandoned subway, maintenance, or train tunnels are popular among the long-term or permanent homeless. In some places, such as New York City, the inhabitants of such refuges are called "Mole People". Natural caves beneath urban centers also provide places where people can congregate, while leaking water pipes, electric wires, and steam pipes supply some of the essentials of living.
Vehicles: cars or trucks used as temporary or sometimes long-term living quarters, for example by those recently evicted from a home. Some people live in recreational vehicles (RVs), school buses, vans, sport utility vehicles, covered pickup trucks, station wagons, sedans, or hatchbacks. The vehicular homeless, according to homeless advocates and researchers, comprise the fastest-growing segment of the homeless population. Many cities have safe parking programs in which lawful sites are permitted at churches or in other out-of-the-way locations. For example, because it is illegal to sleep in a vehicle parked on the street in Santa Barbara, the New Beginnings Counseling Center worked with the city to make designated parking lots available to homeless people.
Other housing options
Transitional housing
Transitional housing provides temporary housing for certain segments of the homeless population, including the working homeless, and is meant to transition residents into permanent, affordable housing. It usually takes the form of a room or apartment in a residence with support services. The transitional time can be relatively short, for example one or two years, in which the person must file for and obtain permanent housing along with gainful employment or income, even if from Social Security or other assistance. Transitional housing programs sometimes charge a room and board fee, perhaps 30% of an individual's income, which is sometimes partially or fully refunded after the person procures a permanent residence. In the U.S., federal funding for transitional housing programs was originally allocated in the McKinney–Vento Homeless Assistance Act of 1986.
Foyers
Foyers are a specific type of transitional housing designed for homeless or at-risk teens. Foyers are generally institutions that provide affordable accommodation as well as support and training services for residents. They were pioneered in the 1990s in the United Kingdom, but have been adopted in parts of Australia and the United States as well.
Supportive housing
Supportive housing is a combination of housing and services intended as a cost-effective way to help people live more stable, productive lives. Supportive housing works well for those who face the most complex challenges: individuals and families confronted with homelessness who also have very low incomes or serious, persistent issues such as substance use disorder, alcohol use disorder, mental illness, HIV/AIDS, or other serious challenges. A 2021 systematic review of 28 interventions, mostly in North America, showed that interventions with the highest levels of support led to improved housing stability and health outcomes.
Government initiatives
In South Australia, the state government of Premier Mike Rann (2002–2011) committed substantial funding to a series of initiatives designed to combat homelessness. Advised by Social Inclusion Commissioner David Cappo and the founder of New York's Common Ground program, Rosanne Haggerty, the Rann government established Common Ground Adelaide, building high-quality inner-city apartments (combined with intensive support) for "rough sleeping" homeless people. The government also funded the Street to Home program and a hospital liaison service designed to assist homeless people admitted to the emergency departments of Adelaide's major public hospitals. Rather than being released back into homelessness, patients identified as rough sleepers were found accommodation backed by professional support. Common Ground and Street to Home now operate in other states across Australia.
Assistance and resources
Most countries provide a variety of services to assist homeless people. Provision of food, shelter, and clothing may be organized and run by community organizations, often with the help of volunteers, or by government departments. Assistance programs may be supported by government, charities, churches, and individual donors. However, not all homeless people can access these resources. In 1998, a study by Koegel and Schoeni of a homeless population in Los Angeles, California, found that a significant minority of homeless people did not participate in government assistance programs, with high transaction costs being a likely contributing factor.
Social supports
While some homeless people are known to have a community with one another, providing each other various types of support, people who are not homeless also may provide them friendship, relational care, and other forms of assistance. Such social supports may occur through a formal process, such as under the auspices of a non-governmental organization, religious organization, or homeless ministry, or may be done on an individual basis.
Income
Employment
The United States Department of Labor has sought to address one of the main causes of homelessness, a lack of meaningful and sustainable employment, through targeted training programs and increased access to employment opportunities that can help homeless people develop sustainable lifestyles. This has included the development of the United States Interagency Council on Homelessness, which addresses homelessness on the federal level in addition to connecting homeless individuals to resources at the state level. All individuals who are in need of assistance are able, in theory, to access employment and training services under the Workforce Investment Act (WIA), although this is contingent upon funding and program support by the government.
Income sources outside of regular employment
Waste management
Homeless people can also use waste management services to earn money. Some homeless people find returnable bottles and cans and bring them to recycling centers to earn money. They can sort organic trash from other trash, or separate trash made of the same material (for example, different types of plastics or different types of metal). In addition, rather than picking waste at landfills, they can collect litter found on or beside the road to earn an income.
Street newspapers
Street newspapers are newspapers or magazines sold by homeless or poor individuals and produced mainly to support these populations. Most such newspapers primarily provide coverage about homelessness and poverty-related issues and seek to strengthen social networks within homeless communities, making them a tool for allowing homeless individuals to work.
Medicine
The 2010 passage of the Patient Protection and Affordable Care Act could provide new healthcare options for homeless people in the United States, particularly through the optional expansion of Medicaid. A 2013 Yale study indicated that a substantial proportion of the chronically homeless population in America would be able to obtain Medicaid coverage if states expanded Medicaid under the Affordable Care Act.

In 1985, the Boston Health Care for the Homeless Program was founded to assist the growing number of homeless people living on the streets and in shelters in Boston who were suffering from a lack of effective medical services. In 2004, Boston Health Care for the Homeless, in conjunction with the National Health Care for the Homeless Council, published a medical manual called The Health Care of Homeless Persons, edited by James J. O'Connell, M.D., specifically for the treatment of the homeless population. In June 2008, the Jean Yawkey Place, a four-story, 7,214.2-square-metre (77,653 sq ft) building, was opened in Boston by the Boston Health Care for the Homeless Program. It is a full-service building on the Boston Medical Center campus dedicated to providing healthcare for homeless people. It also contains a long-term care facility, the Barbara McInnis House, which expanded to 104 beds and is the first and largest medical respite program for homeless people in the United States.

In Los Angeles, a collaboration between the Ostrow School of Dentistry of the University of Southern California and the Union Rescue Mission shelter offers homeless people in the Skid Row area free dental services. Studies of intensive mental health interventions have demonstrated some improvements in housing stability and have shown such interventions to be economically beneficial in cost analyses.
Housing
Permanent supportive housing (PSH) interventions appear to improve housing stability for people experiencing homelessness, even over the long term.
Savings from housing homeless in the US
In 2013, a Central Florida Commission on Homelessness study indicated that the region spends $31,000 a year per homeless person to cover "salaries of law enforcement officers to arrest and transport homeless individuals – largely for nonviolent offenses such as trespassing, public intoxication or sleeping in parks – as well as the cost of jail stays, emergency room visits and hospitalization for medical and psychiatric issues". This did not include "money spent by nonprofit agencies to feed, clothe and sometimes shelter these individuals". In contrast, the report estimated the cost of permanent supportive housing at "$10,051 per person per year" and concluded that "[h]ousing even half of the region's chronically homeless population would save taxpayers $149 million over the next decade – even allowing for 10 percent to end up back on the streets again." This particular study followed 107 long-term homeless residents living in Orange, Osceola, or Seminole Counties. Similar studies have shown large financial savings in Charlotte and southeastern Colorado from focusing on simply housing the homeless. In general, however, housing interventions have shown mixed economic results in cost-analysis studies.
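The arithmetic behind the report's headline figure can be roughly sanity-checked from the numbers quoted above. The short Python sketch below is illustrative only: the study's actual cost model and population counts are not given in this text, so the derived head-counts are assumptions rather than figures from the report, and the sketch simplifies by assuming that people who return to the streets contribute no savings.

# Back-of-envelope check of the Central Florida figures quoted above.
# Derived head-counts are illustrative assumptions, not figures from the report.
status_quo_cost = 31_000   # reported annual public cost per chronically homeless person ($)
housing_cost = 10_051      # reported annual cost of permanent supportive housing ($/person)
years = 10                 # the report projects savings over a decade
recidivism = 0.10          # the report allows for 10% returning to the streets

annual_saving = status_quo_cost - housing_cost   # $20,949 saved per housed person per year
decade_saving = annual_saving * years            # $209,490 saved per housed person per decade

reported_total_saving = 149_000_000              # reported decade savings ($)

# People who must stay housed for the full decade to yield the reported total
# (simplification: assumes those who return to the streets contribute no savings):
stably_housed = reported_total_saving / decade_saving    # ~711 people
initially_housed = stably_housed / (1 - recidivism)      # ~790 people initially housed

print(f"Annual saving per person: ${annual_saving:,}")
print(f"Implied number stably housed: {stably_housed:,.0f}")
print(f"Implied number initially housed: {initially_housed:,.0f}")

Run as written, the sketch implies that roughly 700–800 people would need to be housed, a plausible order of magnitude given the 107-person cohort the study followed and its regional scope, though the report's own accounting may differ.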
Innovative solutions
Los Angeles conducted a competition, promoted by Mayor Eric Garcetti, soliciting ideas from developers for using bond money more efficiently in building housing for the city's homeless population. The top five winners were announced on 1 February 2019. The concepts included using assembly-ready molded polymer panels that can be put together with basic tools; prefabricated, stackable five-story houses; erecting privately financed modular buildings on properties that do not require City Council approval; using bond money to convert residential garages into small apartments dedicated to homeless rentals; and redeveloping bungalow-court units, the small, iconic low-income buildings that housed 7% of the city's population in the 1920s.

In the neighborhood of Westlake, Los Angeles, the city is funding its first transitional homeless housing building built using "Cargotecture", or "architecture built from repurposed shipping containers". The Hope on Alvarado micro-apartment building will consist of four stories of 84 containers stacked together like Lego bricks on top of a traditionally constructed ground floor. Completion is anticipated by the end of 2019.
Political action
Voting for elected officials is important for the homeless population to have a voice in the democratic process.

There are also many community organizations and social movements around the world which are taking action to reduce homelessness. They have sought to counteract its causes and reduce its consequences by starting initiatives that help homeless people transition to self-sufficiency. Social movements and initiatives tend to follow a grassroots, community-based model of organization, generally characterized by a loose, informal, and decentralized structure with an emphasis on radical protest politics. By contrast, an interest group aims to influence government policies by relying on a more formal organizational structure. These groups share a common element: they are both made up of and run by a mix of allies of the homeless population and former or current members of the homeless population. Both grassroots groups and interest groups aim to break the stereotyped images of homeless people as weak and hapless or as defiant criminals and drug addicts, and to ensure that the voice of homeless people and their representatives is clearly heard by policymakers.
Organizing in homeless shelters
Homeless shelters can become grounds for community organization and the recruitment of homeless individuals into social movements for their own cause. Cooperation between the shelter and an elected representative from the homeless community at each shelter can serve as the backbone of this type of initiative. The representative presents and forwards problems, raises concerns, and provides new ideas to the director and staff of the shelters. Examples of possible problems are ways to deal with substance use disorders among certain shelter users and the resolution of interpersonal conflicts. SAND, the Danish National Organization for Homeless People, is one example of an organization that uses this empowerment approach. Issues reported at the homeless shelters are then addressed by SAND at the regional or national level. To open further dialogue, SAND organizes regional discussion forums where staff and leaders from the shelters, homeless representatives, and local authorities meet to discuss issues and good practices at the shelters.
Veteran specific
There are a number of homeless organizations that support homeless veterans, an issue most commonly seen in the United States. Non-governmental organizations house or redirect homeless veterans to care facilities.

The SSI/SSDI Outreach, Access, and Recovery (SOAR) program is a national project funded by the Substance Abuse and Mental Health Services Administration. It is designed to increase access to SSI/SSDI for eligible adults who are homeless or at risk of becoming homeless and have a mental illness or a co-occurring substance use disorder. Using a three-pronged approach of strategic planning, training, and technical assistance (TA), the SOAR TA Center coordinates this effort at the state and community levels.

The United States Department of Housing and Urban Development and the Veterans Administration have a special Section 8 housing voucher program called VASH (Veterans Administration Supported Housing), or HUD-VASH, which distributes a certain number of Section 8 subsidized housing vouchers to eligible homeless and otherwise vulnerable U.S. armed forces veterans. The HUD-VASH program has shown success in housing many homeless veterans. The support available to homeless veterans varies internationally, however. For example, in England, where there is a national right to housing, veterans are only prioritized by local authority homelessness teams if they are found to be vulnerable due to having served in the Armed Forces.

Under the Department of Labor, the Veterans' Employment and Training Service (VETS) offers a variety of programs targeted at ending homelessness among veterans. The Homeless Veterans Reintegration Program (HVRP) is the only national program exclusively focused on assisting veterans as they reenter the workforce. The VETS program also has an Incarcerated Veterans Transition Program, as well as services unique to female veterans. Mainstream programs initiated by the Department of Labor have included the Workforce Investment Act, One-Stop Career Centers, and a Community Voice Mail system that helps to connect homeless individuals around the United States with local resources. Targeted labor programs have included the Homeless Veterans Reintegration Project, the Disability Program Navigator Initiative, efforts to end chronic homelessness through employment and housing projects, Job Corps, and the Veterans Workforce Investment Program (VWIP).
By location
Africa
Egypt
Homelessness in Egypt is a significant social issue affecting some 12 million people in the country. Egypt has over 1,200 areas designated for irregular dwellings that do not conform to standard building laws, allowing homeless people to build shacks and other shelters for themselves. Reportedly, homelessness in Egypt is defined to include those living in marginal housing, though some scholars have stated that there is no agreed-upon definition of homelessness in Egypt, due to the difficulties the government would face if an official definition were accepted.

According to UNICEF, there are 1 million children living on the streets in Egypt; other researchers estimate the number at some 3 million. Homelessness NGOs assisting street children include the Hope Village Society and NAFAS. Other NGOs, such as Plan International Egypt, work to reintegrate street children back into their families.
South Africa
Homelessness in South Africa dates back to the apartheid period. Increasing unemployment, a lack of affordable housing, social disintegration, and social and economic policies have all been identified as contributing factors. Some scholars argue that solutions to homelessness in South Africa lie more within the private sphere than in the legal and political spheres.

There is no national census of homeless people in South Africa; researchers instead rely on individual studies of homeless persons in particular cities. The South African homeless population has been estimated at 200,000 people from a diverse range of backgrounds.
One study found that three out of four South African metropolitan municipalities viewed homelessness primarily as a social dependency issue, responding with social interventions. At the same time, homeless South Africans indicated that the most important thing the municipality could assist them with was employment and well-located affordable housing.
Asia
China
In 2011, there were approximately 2.41 million homeless adults and 179,000 homeless children living in the country. However, one publication estimated that there were one million homeless children in China in 2012.

Housing in China is highly regulated by the Hukou system, which gives rise to a large number of migrant workers, numbering 290.77 million in 2019. These migrant workers hold rural Hukou but move to the cities in order to find better jobs; due to their rural Hukou, they are entitled to fewer privileges than those with urban Hukou. According to Huili et al., these migrant workers "live in overcrowded and unsanitary conditions" and are always at risk of displacement to make way for new real estate developments. In 2017, the government responded to a deadly fire in a crowded building in Beijing by cracking down on dense, illegal shared accommodations and evicting the residents, leaving many migrant laborers homeless. This came in the context of larger attempts by the government to limit population growth in Beijing, often targeting migrant laborers. However, according to official government statistics, migrant workers in China have an average of 20.4 square metres (220 sq ft) of living space per capita, and the vast majority of migrant workers have basic living facilities such as heating, bathing, refrigerators, and washing machines.
Several natural disasters have led to homelessness in China. The 2000 Yunnan earthquake left 92,479 people homeless and destroyed over 41,000 homes.

Homelessness among people with mental health problems is much less common in China than in high-income countries, due to stronger family ties, but it is increasing due to migration within families and as a result of the one-child policy. A study in Xiangtan found at least 2,439 people with schizophrenia who had been homeless, out of a total population of 2.8 million. It found that "homelessness was more common in individuals from rural communities (where social support services are limited), among those who wander away from their communities (i.e., those not from Xiangtan municipality), and among those with limited education (who are less able to mobilize social supports). Homelessness was also associated with greater age; [the cause] may be that older patients have burned their bridges with relatives and, thus, end up on the streets."

During the Cultural Revolution, a large proportion of child welfare homes were closed down, leaving their inhabitants homeless. By the late 1990s, many new homes had been set up to accommodate abandoned children, and in 1999 the Ministry of Civil Affairs estimated the number of abandoned children in welfare homes at 66,000.

According to the Ministry of Civil Affairs, China had approximately 2,000 shelters and 20,000 social workers to aid approximately 3 million homeless people in 2014. From 2017 to 2019, the government of Guangdong Province assisted 5,388 homeless people in reuniting with relatives elsewhere in China; the Guangdong government assisted more than 150,000 people over a three-year period.

In 2020, in the wake of the COVID-19 pandemic, the Chinese Ministry of Civil Affairs announced several actions of the Central Committee in response to homelessness, including increasing support services and reuniting homeless people with their families. In Wuhan, the situation for homeless people was particularly bad, as the lockdown made it impossible for homeless migrants to return to other parts of the country. The Wuhan Civil Affairs Bureau set up 69 shelters in the city to house 4,843 people.
India
The Universal Declaration of Human Rights defines "homeless" as those who do not live in a regular residence due to a lack of adequate housing, safety, and availability. The United Nations Economic and Social Council has a broader definition of homelessness: "When we are talking about housing, we are not just talking about four walls and a roof. The right to adequate housing is about security of tenure, affordability, access to services and cultural adequacy. It is about protection from forced eviction and displacement, fighting homelessness, poverty and exclusion." India defines homeless people as those who do not live in census houses but rather stay on pavements, roadsides, railway platforms, staircases, temples, streets, in pipes, or in other open spaces. According to the 2011 census, there are 1.77 million homeless people in India, or 0.15% of the country's total population, consisting of single men, women, mothers, the elderly, and the disabled. However, it is argued that the numbers are far greater than the point-in-time method accounts for. For example, while the 2011 census counted 46,724 homeless individuals in Delhi, the Indo-Global Social Service Society counted 88,410, and the Delhi Development Authority counted 150,000. Furthermore, there is a high proportion of mentally ill people and street children in the homeless population. There are 18 million street children in India, the largest number of any country in the world, 11 million of them urban. Finally, more than three million men and women are homeless in India's capital city of New Delhi; the same population in Canada would make up approximately 30 electoral districts. An average homeless family of four in India spans five generations of homelessness.

There is a shortage of 18.78 million houses in the country, even though the total number of houses increased from 52.06 million to 78.48 million (as per the 2011 census). India ranked as the 124th wealthiest country in the world as of 2003. More than 90 million people in India make less than US$1 per day, putting them below the global poverty threshold. The ability of the Government of India to tackle urban homelessness and poverty may be affected in the future by both external and internal factors. The number of people living in slums in India has more than doubled in the past two decades and now exceeds the entire population of Britain, the Indian government has announced. About 78 million people in India live in slums and tenements, and 17% of the world's slum dwellers reside in India. Following the release of Slumdog Millionaire in 2008, Mumbai became a slum tourism destination, where homeless people and slum dwellers alike could be openly viewed by tourists.
Israel
Homelessness in Israel is a phenomenon that mostly developed after the mid-1980s. It increased following the wave of Soviet immigration in 1991; as many as 70 percent of homeless people in Tel Aviv are immigrants from the former Soviet Union, nearly all of them men. According to homeless shelter founder Gilad Harish, "when the recession hit Israel in the early '90s, the principle of last in, first out kicked in, and many Russian immigrants lost their jobs. Being new to the country, they didn't have a strong family support system to fall back on like other Israelis do. Some ended up on the street with nowhere to go."

The number of homeless people in Israel grew in the 2000s, and the Association for Civil Rights in Israel claimed that the authorities were ignoring the issue. Some 2,000 families in Israel lose their homes every year after defaulting on their mortgage loans. However, a law amendment passed in 2009 protects the rights of mortgage debtors and ensures that they are not evicted after failing to meet mortgage payments. The amendment is part of a wider reform of the law, following a lengthy battle by the Association for Civil Rights in Israel and other human rights groups.

In 2007, the number of homeless youth was on the rise: more than 25% of all homeless youth that year were girls, compared to 15% in 2004. A report by Elem, a non-profit organization that helps youth at risk, pointed to a 5% rise in the number of youths either homeless or wandering the streets late at night, whether because their parents worked or due to strained relations at home. The organization estimated that in 2007 it provided programs or temporary shelter to roughly 32,000 youths in some 30 locations countrywide.

In 2014, the number of homeless individuals in Israel was estimated at 1,831, about 600 of whom were living on the streets of Tel Aviv. This amounts to 0.02% of the country's population, a low figure compared to other developed nations. In July 2015, the Welfare Ministry estimated the number of homeless people at between 800 and 900, including 450 receiving services and treatment from their municipalities while continuing to live on the streets; Elem claimed the true figure was much higher. In December 2015, a large study by the Welfare Ministry found that 2,300 people in Israel were homeless.

Homeless people in Israel are entitled to a monthly government stipend of NIS 1,000. In addition, there are both state-run homeless shelters operated by the Welfare Ministry and privately run shelters.
Adi Nes, an Israeli photographer, has brought public attention to the issue by taking pictures of Israel's homeless.
Japan
Homelessness in Japan (ホームレス, 浮浪者) is a social issue primarily affecting middle-aged and elderly males. Homelessness is thought to have peaked in the 1990s as a consequence of the collapse of the Japanese asset price bubble and has largely fallen since then.
According to the "Special Act in regards to Supporting the Autonomy of the Homeless Population" (Japanese: ホームレスの自立の支援等に関する特別措置法)), the term "homeless" is defined as "those who utilize city parks, river banks, roads, train stations, and other facilities as their place of stay in order to live their daily lives".Names for the homeless in Japan include hōmuresu (ホームレス, from the English "homeless"), furousha (浮浪者, meaning "wandering person"), kojiki (乞食, meaning beggar), and runpen (ルンペン, from German [[wikt:Lumpen|Lumpen]]). More recently, nojukusha (野宿者, "person who sleeps outside") and nojuku roudousha (野宿労働者, "laborer who sleeps outside") have been used to avoid negative connotations associated with the word "homeless".
Philippines
There are approximately 4.5 million homeless people in the Philippines, about 3 million of them in Manila.
Europe
One fifth of the total population of the European Union – totalling 91.4 million people – is still at risk of poverty or social exclusion, and access to housing remains difficult for many Europeans. According to a Eurostat survey, three in 100 people say they have at some point had to live with relatives on a temporary basis, while one in 100 say they have lived on the street, in emergency or temporary accommodation, or in a place not suitable for housing.
Homelessness in Denmark (6,431)
Homelessness in Finland (4,300)
Homelessness in France (300,000)
Homelessness in Germany (678,000)
Homelessness in Greece (40,000)
Homelessness in Hungary (30,000)
Homelessness in Ireland (8,313)
Homelessness in the Netherlands (39,300)
Homelessness in Portugal (8,209)
Homelessness in Spain (40,000)
Homelessness in Sweden (34,000)
Switzerland
Homelessness in Switzerland is a known social issue; however, there are few estimates of the number of Swiss people affected. Homelessness is less visible in Switzerland than in many other Western countries. The majority of homeless people in Geneva are Swiss or French, with a minority from other countries.

One Swiss study found that 1.6 percent of all patients admitted to psychiatric wards were homeless, and reported that social factors and psychopathology contribute independently to the risk of homelessness.

In 2014, Swiss authorities reportedly began allowing homeless people to sleep in fallout shelters built during the Cold War. There are a number of centers providing food for the homeless, including the Suneboge community center.
United Kingdom
Homelessness across the UK is a devolved matter, resulting in different legislation, frameworks, and even definitions, from country to country.
Since the late 1990s, housing policy has been a devolved matter, and state support for homeless people, together with legal rights in housing, have therefore diverged to a certain degree. A national service, called Streetlink, was established in 2012 to help members of the public obtain near-immediate assistance for specific rough sleepers, with the support of the Government (as housing is a devolved matter, the service currently only extends to England).
The annual number of homeless households in England peaked in 2003–04 at 135,420 before falling to a low of 40,020 in 2009–10. In 2017–18 there were 56,600 homeless households, roughly 60 per cent below the 2003–04 peak and 40 per cent above the 2009–10 low. The UK has more than 120,000 children in temporary accommodation, a number which has increased from 69,050 children in 2010.

In 2007, the official figures for England were that an average of 498 people slept rough each night, with 248 of those in London. Homelessness in England has been rising since 2010; by 2016 it was estimated that the number sleeping rough had more than doubled since 2010. The National Audit Office reported that, in relation to homelessness in England between 2010 and 2017, there was a 60% rise in households living in temporary accommodation and a 134% rise in rough sleepers. An estimated 4,751 people bedded down outside overnight in England in 2017, up 15% on the previous year. The housing charity Shelter, using data from four sets of official 2016 statistics, calculated that 254,514 people in England were homeless.

The Homelessness Reduction Act 2017 places a new duty on local authorities in England to assist people threatened with homelessness within 56 days and to assess, prevent, and relieve homelessness for all eligible applicants, including single homeless people, from April 2018. Before the 2017 HRA, homeless households were defined and measured as those owed a main homelessness duty by local authorities. Since 2018, the definition of homeless households has broadened, as households are now also owed a new relief duty and a prevention duty. The main homelessness duty definition was not changed by the 2017 HRA; however, these households are now only owed a main duty if their homelessness has not been successfully prevented or relieved. In 2019–20, 288,470 households were owed the new prevention or relief duties, four times the number of households owed the main duty in 2017–18, prior to implementation of the Homelessness Reduction Act.

The picture in Scotland is considerably different, with law entitling everyone to a roof over their head if they are homeless. This accommodation is often somewhere temporary until something permanent becomes available, though over the course of 2022 this will change, reducing the use of temporary accommodation in line with the Homeless and Rough Sleeping Action Group (HARSAG) recommendations. Currently, people spend an average of 199 days (April 2020 to March 2021) in temporary accommodation before being housed somewhere permanent. Most recently updated in October 2020, Scotland's Ending Homelessness Together action plan aims to eradicate homelessness. It is anticipated that with this plan, alongside a focus on prevention and local authorities working with the third sector on plans known as Rapid Rehousing Transition Plans, people will no longer be homeless for any length of time.
In 2020/21, there were 42,149 people in homeless households in Scotland – 30,345 adults and 11,804 children. This was a drop of 9% from the previous year, though it is unclear whether this was partly due to statistics being collected differently during the start of the pandemic.
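As a rough consistency check, the percentage changes quoted for England above can be re-derived from the raw household counts. The short Python sketch below is illustrative only and simply confirms that the source's "60 per cent" and "40 per cent" figures are rounded values.

# Re-derive the England homeless-household percentage changes quoted above.
peak_2003_04 = 135_420   # annual homeless households at the 2003-04 peak
low_2009_10 = 40_020     # annual homeless households at the 2009-10 low
count_2017_18 = 56_600   # annual homeless households in 2017-18

below_peak = 1 - count_2017_18 / peak_2003_04   # ~0.58, i.e. roughly 60% below the peak
above_low = count_2017_18 / low_2009_10 - 1     # ~0.41, i.e. roughly 40% above the low

print(f"2017-18 count is {below_peak:.0%} below the 2003-04 peak")  # "58% below"
print(f"2017-18 count is {above_low:.0%} above the 2009-10 low")    # "41% above"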
North America
Canada
United States
After Franklin D. Roosevelt took over the presidency from Herbert Hoover in 1933, he oversaw passage of the New Deal, which greatly expanded social welfare, including providing funds to build public housing. However, the number of homeless people grew again in the 1980s, when the Reagan administration sharply cut the public housing budget, including the federally funded affordable housing production put in place by the New Deal. By the mid-1980s, there was a dramatic increase in family homelessness. Tied into this was an increasing number of impoverished and runaway children, teenagers, and young adults, which created a new substratum of the homeless population (street children or street youth).

In 2015, the United States reported that there were 564,708 homeless people within its borders, one of the higher reported figures worldwide.

Housing First is an initiative to help homeless people reintegrate into society and out of homeless shelters. It was initiated by the federal government's Interagency Council on Homelessness, which asks cities to come up with a plan to end chronic homelessness. The underlying belief is that if homeless people are given independent housing to start with, along with proper social supports, there would be no need for emergency homeless shelters, which it considers a good outcome; however, this is a controversial position.

There is evidence that the Housing First program works more efficiently than Treatment First programs. Studies show that the stability of housing provided through Housing First encourages homeless people to focus on other struggles they are facing, such as substance abuse. Treatment First programs, by contrast, promote an "all or nothing" approach that requires clients to participate in programs applicable to their struggles as a condition for housing assistance. Treatment First takes a less individualistic approach than Housing First, with solutions created under one standard rather than tailored to each client's specific needs.
In 2009, it was estimated that one out of 50 children, or 1.5 million children, in the United States would experience some form of homelessness each year.

In 2010 in New York City, where there were over 36,000 homeless people in 2009, a mobile video exhibit in the streets showed a homeless person on a screen and invited onlookers and passersby to text him a message from their cellphones; they could also donate money by cellphone to the organization Pathways to Housing. In September 2010, it was reported that the Housing First Initiative had significantly reduced the chronically homeless single-person population in Boston, Massachusetts, although homeless families were still increasing in number. Some shelters were reducing the number of beds due to lowered numbers of homeless people, and some emergency shelter facilities were closing, especially the emergency Boston Night Center. In 2011, the Department of Veterans Affairs Supportive Services for Veteran Families Initiative (SSVF) began funding private non-profit organizations and consumer cooperatives to provide supportive services to very low-income veteran families living in or transitioning to permanent housing.

In 2019, in an interview with CBS News, scholar Sara Goldrick-Rab said that her study on college student homelessness found that "[n]early one in ten college students said they were homeless in the last year, meaning they had at least one night where they did not know where they were going to sleep."
Puerto Rico
According to a count by the Puerto Rico Department of the Family, in January 2017 there were 3,501 homeless persons in the territory. The study shows that 26% of this population live in the capital, San Juan; the shares in other municipalities included Ponce with 6.3%, Arecibo with 6%, Caguas with 5.3%, and Mayagüez with 4.7%. The study determined that 76% of the homeless population were men and 24% were women, and that the average age for both men and women was 40 years. This steadily increasing population may have grown more drastically as a result of Hurricane María, which caused over 90 billion dollars in damage to the island of Puerto Rico.

Data provided by the Department of Community Social Development of San Juan indicate that in 1988 the number of homeless people in the municipality was 368, while in 2017 there were about 877 persons without a home. While the average age for the overall homeless population is 40 years for both women and men, in San Juan the median is 48 years for men and 43 years for women. Other data showed that more than 50% have university-level education. The data also revealed that 35% of men and 25% of women have relapsed more than four times after unsuccessful attempts to reinsert themselves socially. The reasons given for homelessness are varied, with the most common causes being drug abuse (30.6%), family problems (22.4%), financial or economic problems (15.0%), and others such as unemployment, mental health problems, domestic violence, evictions, or lack of support when released from prison.
Oceania
Australia
In Australia the Supported Accommodation Assistance Program (SAAP) was a joint Commonwealth and state government program which provided funding for more than 1,200 organizations aimed at assisting homeless people, or those in danger of becoming homeless, as well as women and children escaping domestic violence. These organizations provided accommodation such as refuges, shelters, and half-way houses, and offered a range of supported services. The Commonwealth assigned over $800 million between 2000 and 2005 for the continuation of SAAP. The program, governed by the Supported Assistance Act 1994, specified that "the overall aim of SAAP is to provide transitional supported accommodation and related support services, in order to help people who are homeless to achieve the maximum possible degree of self-reliance and independence. This legislation has been established to help the homeless people of the nation and help rebuild the lives of those in need. The cooperation of the states also helps enhance the meaning of the legislation and demonstrates their desire to improve the nation as best they can." In 2011, the Specialist Homelessness Services (SHS) program replaced the SAAP program.
Indonesia
Homelessness in Indonesia refers to the condition of people lacking a stable and appropriate place of housing. The number of homeless people in Indonesia is estimated at up to 3 million nationwide, over 28,000 in Jakarta alone. A number of terms are used to describe homeless people in Indonesia, including tunawisma, which is used by the government, and gelandangan, meaning "tramp".

Squatters and street homeless people are often targeted by police raids; officials say that homeless people "disturb the attractiveness of the city". One cause of homelessness in Indonesia is forced evictions: according to researchers, between the years 2000 and 2005 over 92,000 people were forcibly evicted from their homes.
New Zealand
Homelessness in New Zealand has been linked to the general issue of lack of suitable housing. The homeless population is generally measured through the country's census and by universities and other academic centres. In 2009, urban homelessness (rough sleepers or improvised dwellings) was estimated at less than 300, while rural homelessness (improvised dwellings) was estimated at between 500 and 1,000. An additional 8,000–20,000 live in "temporary accommodation unsuited for long-term habitation (caravans, campgrounds, substandard housing and boarding houses)". Homelessness in New Zealand has traditionally been reduced by the provision of state housing, similar to Germany and other developed countries. Statistical authorities in New Zealand have expanded their definition of homelessness to include people living in improvised shelters, people staying in camping grounds/motor camps, and people sharing accommodation with someone else's household.

The issue is believed to have become increasingly visible in recent years. Media in New Zealand have published accusatory accounts of the presence of homeless people in public spaces, positioning homeless men as disruptive threats, although community members have shown support through writing opinion pieces. In late January 2019, the New York Times reported rising housing prices to be a major factor in the increasing homelessness in New Zealand, noting that "smaller markets like Tauranga, a coastal city on the North Island with a population of 128,000, had seen an influx of people who had left Auckland in search of more affordable housing. Average property values in Tauranga had risen to $497,000 from $304,000 in the last five years, and Demographia now rated it among the 10 least affordable cities in the world — along with famously expensive locales such as Hong Kong, San Francisco, Sydney and Vancouver, British Columbia."

In mid-August 2019, the Associate Housing Minister Kris Faafoi and Social Development Minister Carmel Sepuloni announced that the Government would be launching a NZ$54 million programme to tackle homelessness in New Zealand. This includes investing $31 million over the next four years for 67 intensive case managers and navigators to work with homeless people, and a further $16 million for the Sustaining Tenancies Programme. This funding complements the Government's Housing First programme.
Russia and the USSR
After the abolition of serfdom in Russia in 1861, major cities experienced a large influx of former peasants who sought jobs as industrial workers in rapidly developing Russian industry. These people often lived in harsh conditions, sometimes renting a room shared between several families, and there were also large numbers of people without any shelter. Immediately after the October Revolution, a special program of "compression" (уплотнение) was introduced: people who had no shelter were settled in the flats of those who had large (four-, five-, or six-room) flats, with only one room left to the previous owners. The flats were declared state property. This led to a large number of shared flats where several families lived simultaneously. Nevertheless, the problem of complete homelessness was mostly solved, as anybody could apply for a room or a place in a dormitory (the number of shared flats steadily decreased after a large-scale residential building program was implemented starting in the 1960s).
By 1922 there were at least 7 million homeless children in Russia as a result of nearly a decade of devastation from World War I and the Russian Civil War. This led to the creation of a large number of orphanages. By the 1930s the USSR declared the abolition of homelessness, and every citizen was obliged to have a propiska, a place of permanent residency. Nobody could be stripped of their propiska without a substitute, or refuse it without confirmed permission (called an "order") to register in another place. Someone who wanted to move to another city or expand their living area had to find a partner willing to mutually exchange flats. The right to shelter was secured in the Soviet constitution; not having permanent residency was legally considered a crime.
After the breakup of the USSR, the problem of homelessness sharpened dramatically, partly because of the legal vacuum of the early 1990s, with some laws contradicting each other, and partly because of a high rate of fraud in the realty market. In 1991, articles 198 and 209 of the Russian criminal code, which had instituted a criminal penalty for not having permanent residence, were abolished. In Moscow, the first overnight shelter for homeless people was opened in 1992. In the late 1990s, certain amendments in law were implemented to reduce the rise in homelessness, such as the prohibition on selling a last flat with registered children. In 2002, there were 300,000 homeless people in Moscow.

Nevertheless, the state is still obliged to provide permanent shelter for free to anybody who needs better living conditions or has no permanent registration, because the right to shelter is still included in the constitution. Several projects of special cheap social flats for those who failed to repay mortgages have been proposed to facilitate the mortgage market. In 2022, it was reported that Russian authorities were targeting homeless people to conscript them into the war in Ukraine.
Popular culture
Homelessness in popular culture is depicted in various works.
Films
Modern Times, 1936 film, shows negative effects of vagrancy laws.
Cathy Come Home, 1966, shows the effects of homelessness on parenthood.
God Bless the Child, 1988, made-for-TV movie about a single mother (Mare Winningham) living on the streets of New York City with her young daughter.
Homeless Sam & Sally, a 2020 dark comedy film and a 2019 television series of the same name, about a mother named Sally Silver and her mentally ill son Sam Silver, who come up with ways to live normal lives while homeless in Koreatown, Los Angeles.
Dark Days, 2000, 81 minutes, documentary by Marc Singer, who followed the lives of people living in the Freedom Tunnel, an Amtrak tunnel in New York City.
Homeless to Harvard: The Liz Murray Story, 2003 film about a homeless girl, Liz Murray, who works her way up to admission to Harvard University.
66 Months, a 2011 British documentary film about a homeless man who makes it on his own for 6 years without any government programs helping him.
The Pursuit of Happyness, a 2006 biographical film in which a father struggles to find work, and he and his son end up homeless after an eviction and later a tax garnishment. After several weeks living from place to place in 1981 San Francisco, the father lands a permanent position in a brokerage firm after successfully completing an unpaid internship.
Same Kind of Different as Me, 2017 American film about a successful art dealer, his wife and an initially violent member of a homeless shelter community. It is based on the 2006 book of the same name.
Curly Sue, a 1991 comedy-drama film that focuses on a homeless con artist and his young friend, who get lucky with a roof over their heads by tricking a wealthy attorney.
Life Stinks, a 1991 comedy film about a wealthy businessman who bets a corporate rival that he can live his life as a homeless man, but finds out later in the story that being homeless isn't easy or fun.
The Saint of Fort Washington, a 1993 drama film where a homeless disabled man gets guidance from a friendly veteran as they cope with the realities of being on the streets.
See also
Ghost town repopulation
Grave dwellers
Homeless Jesus, a bronze sculpture by Canadian sculptor Timothy Schmalz depicting Jesus as a homeless person sleeping on a park bench, which since 2013 has been installed in many places across the world
Hunter-gatherers
Internally displaced person
Nomads
Right to housing
References
Further reading
External links
Homeless of New York – Article + Video Archived 26 March 2020 at the Wayback Machine – The Uncommon Magazine, by Avery Kim, 6 July 2016
Homeless Statistics for Australia, Canada, United Kingdom and the United States, all data from around the year 2001.
PBS, "Home at Last?", NOW series program, first aired on 2 February 2007. The topic was what will most help homeless people reenter the fabric of society.
Homelessness at Curlie
Homelessness in Europe – FEANTSA, the European Federation of National Organisations Working with the Homeless, is an umbrella of not-for-profit organizations which participate in or contribute to the fight against homelessness in Europe.
Policy Scotland – PolicyScotland.org works with organisations across the country to input to policy changes and implement good practice
Report Card on Child Homelessness by the American Institutes for Research. Summarized in Child homelessness on the rise in US Archived 29 November 2014 at the Wayback Machine (November 2014), Palm Beach Post
Utah found a brilliantly effective solution for homelessness (February 2015), Natasha Bertrand, Business Insider |
Glycogen storage disease type V | Glycogen storage disease type V (GSD5, GSD-V), also known as McArdle's disease, is a metabolic disorder, one of the metabolic myopathies, and more specifically a muscle glycogen storage disease, caused by a deficiency of myophosphorylase. Its incidence is reported as one in 100,000, roughly the same as glycogen storage disease type I.

The disease was first reported in 1951 by Dr. Brian McArdle of Guy's Hospital, London.
Signs and symptoms
The onset of this disease is usually noticed in childhood, but it is often not diagnosed until the third or fourth decade of life. Symptoms include exercise intolerance with muscle pain, early fatigue, painful cramps, an inappropriately rapid heart rate response to exercise, and may include myoglobin in the urine (often provoked by a bout of exercise). "In McArdle's, our heart rate tends to increase in what is called an inappropriate response. That is, after the start of exercise it increases much more quickly than would be expected in someone unaffected by McArdle's." Myoglobinuria may be seen due to the breakdown of skeletal muscle known as rhabdomyolysis, a condition in which muscle cells break down, sending their contents into the bloodstream. In a recent study of 269 GSD-V patients, 39.4% reported no previous episodes of myoglobinuria, and 6.8% had normal CK even with fixed muscle weakness, so an absence of myoglobinuria and a normal CK should not rule out the possibility of the disease.

As skeletal muscle relies predominantly on glycogenolysis for the first few minutes as it transitions from rest to activity, as well as throughout high-intensity aerobic activity and all anaerobic activity, individuals with GSD-V experience sinus tachycardia, tachypnea, and muscle fatigue and pain during these activities and time frames. Patients may exhibit a "second wind" phenomenon, characterized by the patient's better tolerance for aerobic exercise such as walking and cycling after approximately 10 minutes. This is attributed to the combination of increased blood flow and the ability of the body to find alternative sources of energy, like fatty acids and proteins. In the long term, patients may exhibit kidney failure due to the myoglobinuria, and with age, patients may exhibit progressively increasing weakness and substantial muscle loss.

Younger people may display unusual symptoms, such as difficulty in chewing, swallowing or utilizing normal oral motor functions. A number of comorbidities were found in GSD-V patients at a higher rate than in the general population, including (but not limited to): hypertension (17%), endocrine diseases (15.7%), musculoskeletal/rheumatic disease (12.9%), hyperuricemia/gout (11.6%), gastrointestinal diseases (11.2%), neurological disease (10%), respiratory disease (9.5%), and coronary artery disease (8.3%). Patients may have hypertrophy, particularly of the legs, and may have lower bone mineral content and density in the legs.

GSD-V patients may experience myogenic hyperuricemia (exercise-induced accelerated breakdown of purine nucleotides in skeletal muscle). The purine nucleotide cycle (PNC), part of protein metabolism, is activated when the ATP (energy) reservoir in muscle cells runs low. Adenine nucleotides are cycled between AMP (adenosine monophosphate), IMP (inosine monophosphate), and S-AMP (adenylosuccinate); the byproducts are fumarate (which goes on to produce ATP via oxidative phosphorylation), ammonia (from the conversion of AMP into IMP), and uric acid (from excess IMP).
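As a sketch of the cycle just described (these reactions and enzyme names are standard purine nucleotide cycle biochemistry, added here for orientation rather than taken from the source):

\[
\begin{aligned}
\text{AMP} + \mathrm{H_2O} &\rightarrow \text{IMP} + \mathrm{NH_3} && \text{(AMP deaminase)} \\
\text{IMP} + \text{aspartate} + \text{GTP} &\rightarrow \text{S-AMP} + \text{GDP} + \mathrm{P_i} && \text{(adenylosuccinate synthetase)} \\
\text{S-AMP} &\rightarrow \text{AMP} + \text{fumarate} && \text{(adenylosuccinate lyase)}
\end{aligned}
\]

Each turn of the cycle releases ammonia and fumarate, while excess IMP is degraded toward uric acid, consistent with the myogenic hyperuricemia described above.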
To avoid health complications, GSD-V patients need to get their ATP (energy) primarily from free fatty acids (lipid metabolism) rather than protein metabolism. Over-reliance on protein metabolism can best be avoided by not depleting the ATP reservoir, such as by not pushing through the pain and by not going too fast, too soon. "Be wary of pushing on when you feel pain start. This pain is a result of damaging muscles, and repeated damage will cause problems in the long term. But also this is counterproductive; it will stop you from getting into second wind. By pressing on despite the pain, you start your protein metabolism which then effectively blocks your glucose and fat metabolism. If you ever get into this situation, you need to stop completely for 30 minutes or more and then start the whole process again."

Patients may present at emergency rooms with severe fixed contractures of the muscles and often severe pain. These require urgent assessment for rhabdomyolysis, as in about 30% of cases it leads to acute kidney injury. Left untreated, this can be life-threatening. In a small number of cases compartment syndrome has developed, requiring prompt surgical referral.
Genetics
"GSDV is inherited in an autosomal recessive manner. At conception, each sibling of an affected individual has a 25% chance of being affected, a 50% chance of being a carrier, and a 25% chance of being unaffected and not a carrier."Two autosomal recessive forms of this disease occur, childhood-onset and adult-onset. The gene for myophosphorylase, PYGM (the muscle-type of the glycogen phosphorylase gene), is located on chromosome 11q13. According to the most recent publications, 95 different mutations have been reported. The forms of the mutations may vary between ethnic groups. For example, the R50X (Arg50Stop) mutation (previously referred to as R49X) is most common in North America and western Europe, and the Y84X mutation is most common among central Europeans.The exact method of protein disruption has been elucidated in certain mutations. For example, R138W is known to disrupt to pyridoxal phosphate binding site. In 2006, another mutation (c.13_14delCT) was discovered which may contribute to increased symptoms in addition to the common Arg50Stop mutation.
Myophosphorylase
Structure
The myophosphorylase structure consists of 842 amino acids, and the molecular weight of the unprocessed precursor is 97 kDa. The three-dimensional structure of this protein has been determined. The interactions of several amino acids in myophosphorylase's structure are known: Ser-14 is modified by phosphorylase kinase during activation of the enzyme, and Lys-680 is involved in binding pyridoxal phosphate, the active form of vitamin B6 and a cofactor required by myophosphorylase. By similarity, other sites have been estimated: Tyr-76 binds AMP, Cys-109 and Cys-143 are involved in subunit association, and Tyr-156 may be involved in allosteric control.
Function
Myophosphorylase is the form of glycogen phosphorylase found in muscle. It catalyses the following reaction:

((1→4)-alpha-D-glucosyl)(n) + phosphate = ((1→4)-alpha-D-glucosyl)(n-1) + alpha-D-glucose 1-phosphate
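Written in a more compact display notation (a sketch of the same reaction, abbreviating the chain of n (1→4)-linked glucosyl units as glycogen(n)):

\[
\text{glycogen}_{(n)} + \mathrm{P_i} \;\rightleftharpoons\; \text{glycogen}_{(n-1)} + \alpha\text{-D-glucose 1-phosphate}
\]

Because this is a phosphorolysis rather than a hydrolysis, the released glucose unit is already phosphorylated, so no ATP is spent trapping it inside the muscle cell.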
Failure of this enzyme ultimately impairs the operation of ATPases. This is due to the lack of normal pH fall during exercise, which impairs the creatine kinase equilibrium and exaggerates the rise of ADP.
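For context, the equilibrium referred to here is the standard creatine kinase reaction (this rendering is added for orientation and is not taken from the source):

\[
\text{phosphocreatine} + \text{ADP} + \mathrm{H^+} \;\rightleftharpoons\; \text{creatine} + \text{ATP}
\]

Since this reaction consumes H+, a failure of intramuscular pH to fall normally during exercise shifts the balance away from ATP regeneration, which is consistent with the exaggerated rise of ADP described above.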
Pathophysiology
Myophosphorylase is involved in the breakdown of glycogen to glucose for use in muscle. The enzyme removes 1,4-glycosyl residues from outer branches of glycogen and adds inorganic phosphate to form glucose-1-phosphate. Ordinarily, the removal of 1,4-glycosyl residues by myophosphorylase leads to the formation of glucose-1-phosphate during glycogen breakdown; the polar, phosphorylated glucose cannot cross the cell membrane and so is marked for intracellular catabolism. In McArdle's disease, deficiency of myophosphorylase leads to accumulation of intramuscular glycogen and a lack of glucose-1-phosphate for cellular fuel.
Myophosphorylase exists in the active form when phosphorylated. The enzyme phosphorylase kinase plays a role in phosphorylating glycogen phosphorylase to activate it, while another enzyme, protein phosphatase-1, inactivates glycogen phosphorylase through dephosphorylation.
Diagnosis
There are some laboratory tests that may aid in the diagnosis of GSD-V. A muscle biopsy will note the absence of myophosphorylase in muscle fibers. In some cases, acid-Schiff-stained glycogen can be seen with microscopy.

Genetic sequencing of the PYGM gene (which codes for the muscle isoform of glycogen phosphorylase) may be done to determine the presence of gene mutations, determining if McArdle's is present. This type of testing is considerably less invasive than a muscle biopsy.

The physician can also perform an ischemic forearm exercise test (described under History below). Some findings suggest a nonischemic test could be performed with similar results; the nonischemic version of this test would involve not cutting off the blood flow to the exercising arm. Findings consistent with McArdle's disease would include a failure of lactate to rise in venous blood and exaggerated ammonia levels. These findings would indicate a severe muscle glycolytic block.
Serum lactate may fail to rise in part because of increased uptake via the monocarboxylate transporter (MCT1), which is upregulated in skeletal muscle in McArdle disease. Lactate may be used as a fuel source once converted to pyruvate. Ammonia levels may rise given that ammonia is a by-product of the adenylate kinase pathway, an alternative pathway for ATP production. In this pathway, two ADP molecules combine to make ATP; AMP is deaminated in this process, producing inosine monophosphate (IMP) and ammonia (NH3), as sketched at the end of this section.

Physicians may also check resting levels of creatine kinase, which are moderately increased in 90% of patients. In some, the level is increased by multitudes: a person without GSD-V will have a CK between 60 and 400 IU/L, while a person with the syndrome may have a level of 5,000 IU/L at rest, which may increase to 35,000 IU/L or more with muscle exertion. This can help distinguish McArdle's syndrome from carnitine palmitoyltransferase II deficiency (CPT-II), a lipid-based metabolic disorder which prevents fatty acids from being transported into mitochondria for use as an energy source. Serum electrolytes and endocrine studies (such as thyroid function, parathyroid function and growth hormone levels) will also be completed. Urine studies are required only if rhabdomyolysis is suspected: urine volume, urine sediment and myoglobin levels would be ascertained, and serum myoglobin, creatine kinase, lactate dehydrogenase, electrolytes and renal function will be checked.

Physicians may also conduct an exercise stress test to check for an inappropriately rapid heart rate (sinus tachycardia) in response to exercise. Due to the rare nature of the disease, this inappropriately rapid heart rate may be misdiagnosed as inappropriate sinus tachycardia (which is a diagnosis of exclusion). The 12-minute walk test (12MWT) can be used to determine "second wind"; it requires a treadmill (no incline), a heart rate monitor, a stopwatch, a pain scale, and that the patient has rested for 30 minutes prior to the test to ensure that oxidative phosphorylation has stopped.
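As a brief sketch of the pathway just described (standard adenylate kinase and AMP deaminase reactions; the enzyme names are added here and are not taken from the source):

\[
\begin{aligned}
2\,\text{ADP} &\rightleftharpoons \text{ATP} + \text{AMP} && \text{(adenylate kinase)} \\
\text{AMP} + \mathrm{H_2O} &\rightarrow \text{IMP} + \mathrm{NH_3} && \text{(AMP deaminase)}
\end{aligned}
\]

With glycolysis blocked, flux through this pathway is exaggerated, which is why venous ammonia rises disproportionately while lactate stays flat in the forearm exercise test.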
Treatment
Supervised exercise programs have been shown in small studies to improve exercise capacity by several measures.

Oral sucrose treatment (for example, a sports drink with 75 grams of sucrose in 660 ml) taken 30 minutes prior to exercise has been shown to help improve exercise tolerance, including a lower heart rate and a lower perceived level of exertion compared with placebo.

A low-dose creatine treatment showed a significant improvement of muscle problems compared with placebo in a small clinical study.
History
The deficiency was the first metabolic myopathy to be recognized, when Dr. McArdle described the first case in a 30-year-old man who always experienced pain and weakness after exercise. Dr. McArdle noticed that this patient's cramps were electrically silent and that his venous lactate levels failed to increase upon ischemic exercise. (In the ischemic exercise test, the patient squeezes a hand dynamometer at maximal strength for a specific period of time, usually a minute, with a blood pressure cuff placed on the upper arm and set at 250 mmHg, blocking blood flow to the exercising arm.) Notably, this is the same phenomenon that occurs when muscle is poisoned by iodoacetate, a substance that blocks the breakdown of glycogen into glucose and prevents the formation of lactate. Dr. McArdle accurately concluded that the patient had a disorder of glycogen breakdown that specifically affected skeletal muscle. The associated enzyme deficiency was discovered in 1959 by W. F. H. M. Mommaerts et al.
References
External links
Euromac, an EU-funded consortium of medical and research institutes across Europe which is building a patient registry and raising standards of care for people with McArdle Disease.
International Association for Muscle Glycogen Storage Disease (IamGSD).
Walking With McArdle's - IamGSD videos
EUROMAC Introduction - Video about McArdle disease and the EUROMAC Registry of patients with McArdle disease and other rare glycogenoses |
Arthus reaction | In immunology, the Arthus reaction is a type of local type III hypersensitivity reaction. Type III hypersensitivity reactions are immune complex-mediated and involve the deposition of antigen/antibody complexes mainly in the vascular walls, serosa (pleura, pericardium, synovium), and glomeruli. This reaction is usually encountered in experimental settings following the injection of antigens.
History
The Arthus reaction was discovered by Nicolas Maurice Arthus in 1903. Arthus repeatedly injected horse serum subcutaneously into rabbits. After four injections, he found that there was edema and that the serum was absorbed slowly. Further injections eventually led to gangrene.
Process
The Arthus reaction involves the in situ formation of antigen/antibody complexes after the intradermal injection of an antigen. If the animal or patient was previously sensitized (has circulating antibody), an Arthus reaction occurs. As is typical of type III hypersensitivity mechanisms, the Arthus reaction manifests as local vasculitis due to deposition of IgG-based immune complexes in dermal blood vessels. Activation of complement primarily results in cleavage of soluble complement proteins to form C5a and C3a, which drive recruitment of polymorphonuclear leukocytes (PMNs) and local mast cell degranulation (requiring the binding of the immune complex onto FcγRIII), resulting in an inflammatory response. Further aggregation of immune complex-related processes induces local fibrinoid necrosis with ischemia-aggravating thrombosis in the tissue vessel walls. The end result is a localized area of redness and induration that typically lasts a day or so.
Arthus reactions have been infrequently reported after vaccinations containing diphtheria and tetanus toxoid. The CDC's description:
Arthus reactions (type III hypersensitivity reactions) are rarely reported after vaccination and can occur after tetanus toxoid–containing or diphtheria toxoid–containing vaccines. An Arthus reaction is a local vasculitis associated with deposition of immune complexes and activation of complement. Immune complexes form in the setting of high local concentration of vaccine antigens and high circulating antibody concentration. Arthus reactions are characterized by severe pain, swelling, induration, edema, hemorrhage, and occasionally by necrosis. These symptoms and signs usually occur 4–12 hours after vaccination. ACIP has recommended that persons who experienced an Arthus reaction after a dose of tetanus toxoid–containing vaccine should not receive Td more frequently than every 10 years, even for tetanus prophylaxis as part of wound management.
See also
Serum sickness
References
== External links == |
Benign tumor | A benign tumor is a mass of cells (tumor) that does not invade neighboring tissue or metastasize (spread throughout the body). Compared to malignant (cancerous) tumors, benign tumors generally have a slower growth rate. Benign tumors have relatively well differentiated cells. They are often surrounded by an outer surface (fibrous sheath of connective tissue) or stay contained within the epithelium. Common examples of benign tumors include moles and uterine fibroids.
Some forms of benign tumors may be harmful to health. Benign tumor growth causes a mass effect that can compress neighboring tissues. This can lead to nerve damage, blood flow reduction (ischemia), tissue death (necrosis), or organ damage. The health effects of benign tumor growth may be more prominent if the tumor is contained within an enclosed space such as the cranium, respiratory tract, sinus, or bones. For example, unlike most benign tumors elsewhere in the body, benign brain tumors can be life-threatening. Tumors may exhibit behaviors characteristic of their cell type of origin; as an example, endocrine tumors such as thyroid adenomas and adrenocortical adenomas may overproduce certain hormones.
Many types of benign tumors have the potential to become cancerous (malignant) through a process known as tumor progression. For this reason and other possible harms, some benign tumors are removed by surgery. When removed, benign tumors usually do not return. Exceptions to this rule may indicate malignant transformation.
Signs and symptoms
Benign tumors are very diverse; they may be asymptomatic or may cause specific symptoms, depending on their anatomic location and tissue type. They grow outward, producing large, rounded masses which can cause what is known as a "mass effect". This growth can cause compression of local tissues or organs, leading to many effects, such as blockage of ducts, reduced blood flow (ischaemia), tissue death (necrosis) and nerve pain or damage. Some tumors also produce hormones that can lead to life-threatening situations. Insulinomas can produce large amounts of insulin, causing hypoglycemia. Pituitary adenomas can cause elevated levels of hormones such as growth hormone and insulin-like growth factor-1, which cause acromegaly; prolactin; ACTH and cortisol, which cause Cushing's disease; TSH, which causes hyperthyroidism; and FSH and LH. Bowel intussusception can occur with various benign colonic tumors. Cosmetic effects can be caused by tumors, especially those of the skin, possibly causing psychological or social discomfort for the person with the tumor. Vascular tissue tumors can bleed, in some cases leading to anemia.
Causes
PTEN hamartoma syndrome
PTEN hamartoma syndrome encompasses hamartomatous disorders characterized by genetic mutations in the PTEN tumor suppressor gene, including Cowden syndrome, Bannayan–Riley–Ruvalcaba syndrome, Proteus syndrome and Proteus-like syndrome. Absent or dysfunctional PTEN protein allows cells to over-proliferate, causing hamartomas. Cowden syndrome is an autosomal dominant genetic disorder characterized by multiple benign hamartomas (trichilemmomas and mucocutaneous papillomatous papules) as well as a predisposition for cancers of multiple organs including the breast and thyroid. Bannayan–Riley–Ruvalcaba syndrome is a congenital disorder characterized by hamartomatous intestinal polyposis, macrocephaly, lipomatosis, hemangiomatosis and glans penis macules. Proteus syndrome is characterized by nevi, asymmetric overgrowth of various body parts, adipose tissue dysregulation, cystadenomas, adenomas, and vascular malformations.
Familial adenomatous polyposis
Familial adenomatous polyposis (FAP) is a familial cancer syndrome caused by mutations in the APC gene. In FAP, adenomatous polyps are present in the colon. The polyps progress into colon cancer unless removed. The APC gene is a tumor suppressor. Its protein product is involved in many cellular processes. Inactivation of the APC gene leads to the buildup of a protein called β-catenin. This protein activates two transcription factors: T-cell factor (TCF) and lymphoid enhancer factor (LEF). These factors cause the upregulation of many genes involved in cell proliferation, differentiation, migration and apoptosis (programmed cell death), causing the growth of benign tumors.
Tuberous sclerosis complex
Tuberous sclerosis complex (TSC) is an autosomal dominant genetic disorder caused by mutations in the genes TSC1 and TSC2. TSC1 produces the protein hamartin. TSC2 produces the protein tuberin. This disorder presents with many benign hamartomatous tumors including angiofibromas, renal angiomyolipomas, and pulmonary lymphangiomyomatosis. Tuberin and hamartin inhibit the mTOR protein in normal cellular physiology. Inactivation of the TSC tumor suppressors causes an increase in mTOR activity. This leads to the activation of genes and the production of proteins that increase cell growth.
Von Hippel–Lindau disease
Von Hippel–Lindau disease is a dominantly inherited cancer syndrome that significantly increases the risk of various tumors. This includes benign hemangioblastomas and malignant pheochromocytomas, renal cell carcinomas, pancreatic endocrine tumors, and endolymphatic sac tumors. It is caused by genetic mutations in the Von Hippel–Lindau tumor suppressor gene. The VHL protein (pVHL) is involved in cellular signaling in oxygen-starved (hypoxic) cells. One role of pVHL is to cause the cellular degradation of another protein, HIF1α. Dysfunctional pVHL leads to accumulation of HIF1α. This activates several genes responsible for the production of substances involved in cell growth and blood vessel production: VEGF, PDGFβ, TGFα and erythropoietin.
Bone tumors
Benign tumors of bone can be similar macroscopically and require a combination of clinical history with cytogenetic, molecular, and radiologic tests for diagnosis. Three common forms of benign bone tumors are giant cell tumor of bone, osteochondroma, and enchondroma; other forms of benign bone tumors exist but may be less prevalent.
Giant cell tumors
Giant cell tumors of bone frequently occur in long bone epiphyses of the appendicular skeleton or the sacrum of the axial skeleton. Local growth can cause destruction of neighboring cortical bone and soft tissue, leading to pain and limiting range of motion. The characteristic radiologic finding of giant cell tumors of bone is a lytic lesion that does not have marginal sclerosis of bone. On histology, giant cells of fused osteoclasts are seen as a response to neoplastic mononucleated cells. Notably, giant cells are not unique among benign bone tumors to giant cell tumors of bone. Molecular characteristics of the neoplastic cells causing giant cell tumors of bone indicate an origin of pluripotent mesenchymal stem cells that adopt preosteoblastic markers. Cytogenetic causes of giant cell tumors of bone involve telomeres. Treatment involves surgical curettage with adjuvant bisphosphonates.
Osteochondroma
Osteochondromas form cartilage-capped projections of bone. Structures such as the marrow cavity and cortical bone of the osteochondroma are contiguous with those of the originating bone. Sites of origin often involve the metaphyses of long bones. While many osteochondromas occur spontaneously, there are cases in which several osteochondromas occur in the same individual; these may be linked to a genetic condition known as hereditary multiple osteochondromas. An osteochondroma appears on X-ray as a projecting mass that often points away from joints. These tumors stop growing with the closure of the parent bone's growth plates. Failure to stop growing can be indicative of transformation to malignant chondrosarcoma. Treatment is not indicated unless the tumor is symptomatic, in which case surgical excision is often curative.
Enchondroma
Enchondromas are benign tumors of hyaline cartilage. Within a bone, enchondromas are often found in metaphyses. They can be found in many types of bone, including small bones, long bones, and the axial skeleton. X-ray of enchondromas shows well-defined borders and a stippled appearance. Presentation of multiple enchondromas is consistent with multiple enchondromatosis (Ollier Disease). Treatment of enchondromas involves surgical curettage and grafting.
Benign soft tissue tumors
Lipomas
Lipomas are benign, subcutaneous tumors of fat cells (adipocytes). They are usually painless, slow-growing, and mobile masses that can occur anywhere in the body where there are fat cells, but are typically found on the trunk and upper extremities. Although lipomas can develop at any age, they more commonly appear between the ages of 40 and 60. Lipomas affect about 1% of the population, with no documented sex bias, and about 1 in every 1,000 people will have a lipoma within their lifetime. The cause of lipomas is not well defined. Genetic or inherited causes of lipomas play a role in around 2–3% of patients. In individuals with inherited familial syndromes such as Proteus syndrome or familial multiple lipomatosis, it is common to see multiple lipomas across the body; these syndromes are also associated with specific symptoms and sub-populations. Mutations in chromosome 12 have been identified in around 65% of lipoma cases. Lipomas have also been shown to be more common in those with obesity, hyperlipidemia, and diabetes mellitus.

Lipomas are usually diagnosed clinically, although imaging (ultrasound, computed tomography, or magnetic resonance imaging) may be utilized to assist with the diagnosis of lipomas in atypical locations. The main treatment for lipomas is surgical excision, after which the tumor is examined with histopathology to confirm the diagnosis. The prognosis for benign lipomas is excellent, and recurrence after excision is rare but may occur if the removal was incomplete.
Mechanism
Benign vs malignant
One of the most important factors in classifying a tumor as benign or malignant is its invasive potential. If a tumor lacks the ability to invade adjacent tissues or spread to distant sites by metastasizing, it is benign, whereas invasive or metastatic tumors are malignant. For this reason, benign tumors are not classed as cancer. Benign tumors will grow in a contained area, usually encapsulated in a fibrous connective tissue capsule. The growth rates of benign and malignant tumors also differ; benign tumors generally grow more slowly than malignant tumors. Although benign tumors pose a lower health risk than malignant tumors, both can be life-threatening in certain situations. There are many general characteristics which apply to either benign or malignant tumors, but sometimes one type may show characteristics of the other. For example, benign tumors are mostly well differentiated and malignant tumors are often undifferentiated; however, undifferentiated benign tumors and differentiated malignant tumors can occur. Although benign tumors generally grow slowly, cases of fast-growing benign tumors have also been documented. Some malignant tumors are mostly non-metastatic, as in the case of basal-cell carcinoma. CT and chest radiography can be useful diagnostic exams for visualizing a benign tumor and differentiating it from a malignant tumor. The smaller the tumor on a radiograph, the more likely it is to be benign, as 80% of lung nodules less than 2 cm in diameter are benign. Most benign nodules are smooth radiopaque densities with clear margins, but these are not exclusive signs of benign tumors.
Multistage carcinogenesis
Tumors are formed by carcinogenesis, a process in which cellular alterations lead to the formation of cancer. Multistage carcinogenesis involves sequential genetic or epigenetic changes to a cell's DNA, where each step produces a more advanced tumor. It is often broken down into three stages: initiation, promotion and progression, and several mutations may occur at each stage. Initiation is where the first genetic mutation occurs in a cell. Promotion is the clonal expansion (repeated division) of this transformed cell into a visible tumor that is usually benign. Following promotion, progression may take place, where more genetic mutations are acquired in a sub-population of tumor cells. Progression changes the benign tumor into a malignant tumor. A prominent and well-studied example of this phenomenon is the tubular adenoma, a common type of colon polyp which is an important precursor to colon cancer. The cells in tubular adenomas, like most tumors that frequently progress to cancer, show certain abnormalities of cell maturation and appearance collectively known as dysplasia. These cellular abnormalities are not seen in benign tumors that rarely or never turn cancerous, but are seen in other pre-cancerous tissue abnormalities which do not form discrete masses, such as pre-cancerous lesions of the uterine cervix.
Diagnosis
Classification
Benign neoplasms are typically, but not always, composed of cells which bear a strong resemblance to a normal cell type in their organ of origin. These tumors are named for the cell or tissue type from which they originate. The suffix "-oma" (but not -carcinoma, -sarcoma, or -blastoma, which are generally cancers) is applied to indicate a benign tumor. For example, a lipoma is a common benign tumor of fat cells (lipocytes), and a chondroma is a benign tumor of cartilage-forming cells (chondrocytes). Adenomas are benign tumors of gland-forming cells, and are usually specified further by their cell or organ of origin, as in hepatic adenoma (a benign tumor of hepatocytes, or liver cells). Teratomas contain many cell types such as skin, nerve, brain and thyroid, among others, because they are derived from germ cells. Hamartomas are a group of benign tumors that have relatively normal cellular differentiation but exhibit disorganized tissue organization.

Exceptions to the nomenclature rules exist for historical reasons; malignant examples include melanoma (a cancer of pigmented skin cells, or melanocytes) and seminoma (a cancer of male reproductive cells).

Benign tumors do not encompass all benign growths. Skin tags, vocal cord polyps, and hyperplastic polyps of the colon are often referred to as benign, but they are overgrowths of normal tissue rather than neoplasms.
Treatment
Benign tumors typically need no treatment unless they cause problems such as seizures, discomfort or cosmetic concerns. Surgery is usually the most effective approach and is used to treat most benign tumors. In some cases, other treatments may be used. Adenomas of the rectum may be treated with sclerotherapy, in which chemicals are used to shrink blood vessels in order to cut off the blood supply. Most benign tumors do not respond to chemotherapy or radiation therapy, although there are exceptions; benign intracranial tumors are sometimes treated with radiation therapy and chemotherapy under certain circumstances. Radiation can also be used to treat hemangiomas in the rectum. Benign skin tumors are usually surgically resected, but other treatments such as cryotherapy, curettage, electrodesiccation, laser therapy, dermabrasion, chemical peels and topical medication are used.
Name
The word "benign" means "favourable, kind, fortunate, salutary, propitious". However, a "benign" tumour is not benign in the usual sense; the name merely specifies that it is not "malignant", i.e. cancerous. While benign tumours usually do not pose a serious health risk, they can be harmful or fatal.
References
== External links == |
Cardiovascular disease | Cardiovascular disease (CVD) is a class of diseases that involve the heart or blood vessels. CVD includes coronary artery diseases (CAD) such as angina and myocardial infarction (commonly known as a heart attack). Other CVDs include stroke, heart failure, hypertensive heart disease, rheumatic heart disease, cardiomyopathy, abnormal heart rhythms, congenital heart disease, valvular heart disease, carditis, aortic aneurysms, peripheral artery disease, thromboembolic disease, and venous thrombosis.

The underlying mechanisms vary depending on the disease. It is estimated that dietary risk factors are associated with 53% of CVD deaths. Coronary artery disease, stroke, and peripheral artery disease involve atherosclerosis. This may be caused by high blood pressure, smoking, diabetes mellitus, lack of exercise, obesity, high blood cholesterol, poor diet, excessive alcohol consumption, and poor sleep, among other things. High blood pressure is estimated to account for approximately 13% of CVD deaths, while tobacco accounts for 9%, diabetes 6%, lack of exercise 6%, and obesity 5%. Rheumatic heart disease may follow untreated strep throat.

It is estimated that up to 90% of CVD may be preventable. Prevention of CVD involves improving risk factors through healthy eating, exercise, avoidance of tobacco smoke and limiting alcohol intake. Treating risk factors, such as high blood pressure, blood lipids and diabetes, is also beneficial. Treating people who have strep throat with antibiotics can decrease the risk of rheumatic heart disease. The use of aspirin in people who are otherwise healthy is of unclear benefit.

Cardiovascular diseases are the leading cause of death worldwide except in Africa. Together, CVDs resulted in 17.9 million deaths (32.1%) in 2015, up from 12.3 million (25.8%) in 1990. Deaths, at a given age, from CVD are more common and have been increasing in much of the developing world, while rates have declined in most of the developed world since the 1970s. Coronary artery disease and stroke account for 80% of CVD deaths in males and 75% of CVD deaths in females. Most cardiovascular disease affects older adults. In the United States, 11% of people between 20 and 40 have CVD, as do 37% of those between 40 and 60, 71% of those between 60 and 80, and 85% of people over 80. The average age of death from coronary artery disease in the developed world is around 80, while it is around 68 in the developing world. CVD is typically diagnosed seven to ten years earlier in men than in women.
Types
There are many cardiovascular diseases involving the blood vessels. They are known as vascular diseases.
Coronary artery disease (also known as coronary heart disease and ischemic heart disease)
Peripheral arterial disease – disease of blood vessels that supply blood to the arms and legs
Cerebrovascular disease – disease of blood vessels that supply blood to the brain (includes stroke)
Renal artery stenosis
Aortic aneurysm
There are also many cardiovascular diseases that involve the heart.
Cardiomyopathy – diseases of cardiac muscle
Hypertensive heart disease – diseases of the heart secondary to high blood pressure or hypertension
Heart failure – a clinical syndrome caused by the inability of the heart to supply sufficient blood to the tissues to meet their metabolic requirements
Pulmonary heart disease – a failure at the right side of the heart with respiratory system involvement
Cardiac dysrhythmias – abnormalities of heart rhythm
Inflammatory heart disease
Endocarditis – inflammation of the inner layer of the heart, the endocardium. The structures most commonly involved are the heart valves.
Inflammatory cardiomegaly
Myocarditis – inflammation of the myocardium, the muscular part of the heart, caused most often by viral infection and less often by bacterial infections, certain medications, toxins, and autoimmune disorders. It is characterized in part by infiltration of the heart by lymphocyte and monocyte types of white blood cells.
Eosinophilic myocarditis - inflammation of the myocardium caused by pathologically activated eosinophilic white blood cells. This disorder differs from myocarditis in its causes and treatments.
Valvular heart disease
Congenital heart disease – heart structure malformations existing at birth
Rheumatic heart disease – damage to the heart muscle and valves due to rheumatic fever, caused by Streptococcus pyogenes, a group A streptococcal infection.
Risk factors
There are many risk factors for heart diseases: age, sex, tobacco use, physical inactivity, non-alcoholic fatty liver disease, excessive alcohol consumption, unhealthy diet, obesity, genetic predisposition and family history of cardiovascular disease, raised blood pressure (hypertension), raised blood sugar (diabetes mellitus), raised blood cholesterol (hyperlipidemia), undiagnosed celiac disease, psychosocial factors, poverty and low educational status, air pollution, and poor sleep. While the individual contribution of each risk factor varies between different communities or ethnic groups, the overall contribution of these risk factors is very consistent. Some of these risk factors, such as age, sex or family history/genetic predisposition, are immutable; however, many important cardiovascular risk factors are modifiable by lifestyle change, social change, or drug treatment (for example, prevention of hypertension, hyperlipidemia, and diabetes). People with obesity are at increased risk of atherosclerosis of the coronary arteries.
Genetics
Cardiovascular disease in a person's parents increases their risk by roughly three-fold, and genetics is an important risk factor for cardiovascular diseases. Genetic cardiovascular disease can occur either as a consequence of single-variant (Mendelian) or polygenic influences. There are more than 40 inherited cardiovascular diseases that can be traced to a single disease-causing DNA variant, although these conditions are rare. Most common cardiovascular diseases are non-Mendelian and are thought to be due to hundreds or thousands of genetic variants (known as single nucleotide polymorphisms), each associated with a small effect.
Age
Age is the most important risk factor in developing cardiovascular or heart diseases, with approximately a tripling of risk with each decade of life. Coronary fatty streaks can begin to form in adolescence. It is estimated that 82 percent of people who die of coronary heart disease are 65 or older. At the same time, the risk of stroke doubles every decade after age 55.

Multiple explanations have been proposed for why age increases the risk of cardiovascular disease. One of them relates to serum cholesterol level: in most populations, the serum total cholesterol level increases as age increases. In men, this increase levels off around age 45 to 50 years; in women, the increase continues sharply until age 60 to 65 years. Aging is also associated with changes in the mechanical and structural properties of the vascular wall, which leads to the loss of arterial elasticity and reduced arterial compliance and may subsequently lead to coronary artery disease.
Sex
Men are at greater risk of heart disease than pre-menopausal women. Once past menopause, it has been argued that a woman's risk is similar to a man's, although more recent data from the WHO and UN dispute this. If a female has diabetes, she is more likely to develop heart disease than a male with diabetes. Coronary heart diseases are 2 to 5 times more common among middle-aged men than women. In a study done by the World Health Organization, sex contributed to approximately 40% of the variation in sex ratios of coronary heart disease mortality; another study reported similar results, finding that sex differences explain nearly half the risk associated with cardiovascular diseases.

One of the proposed explanations for sex differences in cardiovascular diseases is hormonal difference. Among women, estrogen is the predominant sex hormone. Estrogen may have protective effects on glucose metabolism and the hemostatic system, and may have a direct effect in improving endothelial cell function. The production of estrogen decreases after menopause, and this may change female lipid metabolism toward a more atherogenic form by decreasing the HDL cholesterol level while increasing LDL and total cholesterol levels.

Among men and women, there are differences in body weight, height, body fat distribution, heart rate, stroke volume, and arterial compliance. In the very elderly, age-related large artery pulsatility and stiffness are more pronounced among women than men. This may be caused by women's smaller body size and arterial dimensions, which are independent of menopause.
Tobacco
Cigarettes are the major form of smoked tobacco. Risks to health from tobacco use result not only from direct consumption of tobacco, but also from exposure to second-hand smoke. Approximately 10% of cardiovascular disease is attributed to smoking; however, people who quit smoking by age 30 have almost as low a risk of death as never-smokers.
Physical inactivity
Insufficient physical activity (defined as less than 5 x 30 minutes of moderate activity per week, or less than 3 x 20 minutes of vigorous activity per week) is currently the fourth leading risk factor for mortality worldwide. In 2008, 31.3% of adults aged 15 or older (28.2% men and 34.4% women) were insufficiently physically active.
The risk of ischemic heart disease and diabetes mellitus is reduced by almost a third in adults who participate in 150 minutes of moderate physical activity each week (or equivalent). In addition, physical activity assists weight loss and improves blood glucose control, blood pressure, lipid profile and insulin sensitivity. These effects may, at least in part, explain its cardiovascular benefits.
Diet
High dietary intakes of saturated fat, trans-fats and salt, and low intake of fruits, vegetables and fish are linked to cardiovascular risk, although whether all of these associations are causal is disputed. The World Health Organization attributes approximately 1.7 million deaths worldwide to low fruit and vegetable consumption. Frequent consumption of high-energy foods, such as processed foods that are high in fats and sugars, promotes obesity and may increase cardiovascular risk. The amount of dietary salt consumed may also be an important determinant of blood pressure levels and overall cardiovascular risk. There is moderate-quality evidence that reducing saturated fat intake for at least two years reduces the risk of cardiovascular disease. High trans-fat intake has adverse effects on blood lipids and circulating inflammatory markers, and elimination of trans-fat from diets has been widely advocated. In 2018, the World Health Organization estimated that trans fats were the cause of more than half a million deaths per year. There is evidence that higher consumption of sugar is associated with higher blood pressure and unfavorable blood lipids, and sugar intake also increases the risk of diabetes mellitus. High consumption of processed meats is associated with an increased risk of cardiovascular disease, possibly in part due to increased dietary salt intake.
Alcohol
The relationship between alcohol consumption and cardiovascular disease is complex, and may depend on the amount of alcohol consumed. There is a direct relationship between high levels of drinking alcohol and cardiovascular disease. Drinking at low levels without episodes of heavy drinking may be associated with a reduced risk of cardiovascular disease, but there is evidence that associations between moderate alcohol consumption and protection from stroke are non-causal. At the population level, the health risks of drinking alcohol exceed any potential benefits.
Celiac disease
Untreated celiac disease can cause the development of many types of cardiovascular diseases, most of which improve or resolve with a gluten-free diet and intestinal healing. However, delays in recognition and diagnosis of celiac disease can cause irreversible heart damage.
Sleep
A lack of good sleep, in amount or quality, is documented as increasing cardiovascular risk in both adults and teens. Recommendations suggest that infants typically need 12 or more hours of sleep per day, adolescents at least eight or nine hours, and adults seven or eight. About one-third of adult Americans get less than the recommended seven hours of sleep per night, and in a study of teenagers, just 2.2 percent of those studied got enough sleep, many of whom did not get good-quality sleep. Studies have shown that short sleepers getting less than seven hours of sleep per night have a 10 to 30 percent higher risk of cardiovascular disease. Sleep disorders, such as sleep-disordered breathing and insomnia, are also associated with a higher cardiometabolic risk.
An estimated 50 to 70 million Americans have insomnia, sleep apnea or other chronic sleep disorders.
In addition, sleep research displays differences in race and class. Short sleep and poor sleep tend to be more frequently reported in ethnic minorities than in whites. African-Americans report experiencing short durations of sleep five times more often than whites, possibly as a result of social and environmental factors. Black children and children living in disadvantaged neighborhoods have much higher rates of sleep apnea.
Socioeconomic disadvantage
Cardiovascular disease affects low- and middle-income countries even more than high-income countries. There is relatively little information regarding social patterns of cardiovascular disease within low- and middle-income countries, but within high-income countries low income and low educational status are consistently associated with greater risk of cardiovascular disease. Policies that have resulted in increased socio-economic inequalities have been associated with greater subsequent socio-economic differences in cardiovascular disease, implying a cause-and-effect relationship. Psychosocial factors, environmental exposures, health behaviours, and health-care access and quality contribute to socio-economic differentials in cardiovascular disease. The Commission on Social Determinants of Health recommended that more equal distributions of power, wealth, education, housing, environmental factors, nutrition, and health care were needed to address inequalities in cardiovascular disease and non-communicable diseases.
Air pollution
Particulate matter has been studied for its short- and long-term exposure effects on cardiovascular disease. Currently, airborne particles under 2.5 micrometers in diameter (PM2.5) are the major focus, in which gradients are used to determine CVD risk. Overall, long-term PM exposure increased rates of atherosclerosis and inflammation. Regarding short-term exposure (2 hours), every 25 μg/m3 of PM2.5 resulted in a 48% increase in CVD mortality risk. In addition, after only 5 days of exposure, a rise in systolic (2.8 mmHg) and diastolic (2.7 mmHg) blood pressure occurred for every 10.5 μg/m3 of PM2.5. Other research has implicated PM2.5 in irregular heart rhythm, reduced heart rate variability (decreased vagal tone), and most notably heart failure. PM2.5 is also linked to carotid artery thickening and increased risk of acute myocardial infarction.
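The dose-response figures above can be read as simple linear scalings, which the sketch below does. Extrapolating linearly beyond the quoted exposure ranges is an assumption made for illustration, not a claim from the underlying studies.

def pm25_short_term_effects(pm25_ug_m3: float) -> dict:
    # Scales the figures quoted above: +48% CVD mortality risk per
    # 25 ug/m3 (2-hour exposure), and +2.8/+2.7 mmHg systolic/diastolic
    # blood pressure per 10.5 ug/m3 after 5 days of exposure.
    return {
        "cvd_mortality_risk_increase_pct": 48.0 * pm25_ug_m3 / 25.0,
        "systolic_bp_rise_mmHg": 2.8 * pm25_ug_m3 / 10.5,
        "diastolic_bp_rise_mmHg": 2.7 * pm25_ug_m3 / 10.5,
    }

print(pm25_short_term_effects(10.5))  # ~20% mortality risk increase, +2.8/+2.7 mmHg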
Cardiovascular risk assessment
Existing cardiovascular disease or a previous cardiovascular event, such as a heart attack or stroke, is the strongest predictor of a future cardiovascular event. Age, sex, smoking, blood pressure, blood lipids and diabetes are important predictors of future cardiovascular disease in people who are not known to have cardiovascular disease. These measures, and sometimes others, may be combined into composite risk scores to estimate an individual's future risk of cardiovascular disease. Numerous risk scores exist, although their respective merits are debated. Other diagnostic tests and biomarkers remain under evaluation, but currently these lack clear-cut evidence to support their routine use. They include family history, coronary artery calcification score, high-sensitivity C-reactive protein (hs-CRP), ankle–brachial pressure index, lipoprotein subclasses and particle concentration, lipoprotein(a), apolipoproteins A-I and B, fibrinogen, white blood cell count, homocysteine, N-terminal pro B-type natriuretic peptide (NT-proBNP), and markers of kidney function. High blood phosphorus is also linked to an increased risk.
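Composite risk scores of the kind described above typically take a weighted combination of the predictors and map it to a probability, most often with a logistic function. The sketch below shows that general shape with made-up coefficients; it is not the Framingham score or any other published model.

import math

# Placeholder coefficients; published scores fit these to cohort data.
COEFS = {"intercept": -9.0, "age": 0.06, "male": 0.5,
         "smoker": 0.7, "systolic_bp": 0.02, "diabetes": 0.6}

def ten_year_risk(age, male, smoker, systolic_bp, diabetes):
    # Generic logistic composite: risk = 1 / (1 + exp(-linear score)).
    z = (COEFS["intercept"] + COEFS["age"] * age + COEFS["male"] * male
         + COEFS["smoker"] * smoker + COEFS["systolic_bp"] * systolic_bp
         + COEFS["diabetes"] * diabetes)
    return 1.0 / (1.0 + math.exp(-z))

print(f"{ten_year_risk(60, 1, 1, 140, 0):.1%}")  # ~19.8% with these toy weights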
Depression and traumatic stress
There is evidence that mental health problems, in particular depression and traumatic stress, are linked to cardiovascular diseases. Whereas mental health problems are known to be associated with risk factors for cardiovascular diseases such as smoking, poor diet, and a sedentary lifestyle, these factors alone do not explain the increased risk of cardiovascular diseases seen in depression, stress, and anxiety. Moreover, posttraumatic stress disorder is independently associated with increased risk for incident coronary heart disease, even after adjusting for depression and other covariates.
Occupational exposure
Little is known about the relationship between work and cardiovascular disease, but links have been established between certain toxins, extreme heat and cold, exposure to tobacco smoke, and mental health concerns such as stress and depression.
Non-chemical risk factors
A 2015 SBU report looking at non-chemical factors found an association for those:
with mentally stressful work with a lack of control over their working situation — with an effort-reward imbalance
who experience low social support at work; who experience injustice or experience insufficient opportunities for personal development; or those who experience job insecurity
those who work night schedules; or have long working weeks
those who are exposed to noise
Specifically, the risk of stroke was also increased by exposure to ionizing radiation. Hypertension develops more often in those who experience job strain and shift work. Differences in risk between women and men are small; however, during working life men have and die of heart attacks or strokes twice as often as women.
Chemical risk factors
A 2017 SBU report found evidence that workplace exposure to silica dust, engine exhaust or welding fumes is associated with heart disease. Associations also exist for exposure to arsenic, benzopyrenes, lead, dynamite, carbon disulphide, carbon monoxide, metalworking fluids and occupational exposure to tobacco smoke. Working with the electrolytic production of aluminium or the production of paper when the sulphate pulping process is used is associated with heart disease. An association was also found between heart disease and exposure to compounds which are no longer permitted in certain work environments, such as phenoxy acids containing TCDD (dioxin) or asbestos. Workplace exposure to silica dust or asbestos is also associated with pulmonary heart disease. There is evidence that workplace exposure to lead, carbon disulphide, phenoxy acids containing TCDD, as well as working in an environment where aluminium is being electrolytically produced, is associated with stroke.
Somatic mutations
As of 2017, evidence suggests that certain leukemia-associated mutations in blood cells may also lead to increased risk of cardiovascular disease. Several large-scale research projects looking at human genetic data have found a robust link between the presence of these mutations, a condition known as clonal hematopoiesis, and cardiovascular disease-related incidents and mortality.
Radiation therapy
Radiation treatments for cancer can increase the risk of heart disease and death, as observed in breast cancer therapy. Therapeutic radiation increases the risk of a subsequent heart attack or stroke by 1.5 to 4 times; the increase depends on the dose strength, volume, and location.
Side-effects from radiation therapy for cardiovascular diseases have been termed radiation-induced heart disease or radiation-induced vascular disease. Symptoms are dose-dependent and include cardiomyopathy, myocardial fibrosis, valvular heart disease, coronary artery disease, heart arrhythmia and peripheral artery disease. Radiation-induced fibrosis, vascular cell damage and oxidative stress can lead to these and other late side-effect symptoms.
Pathophysiology
Population-based studies show that atherosclerosis, the major precursor of cardiovascular disease, begins in childhood. The Pathobiological Determinants of Atherosclerosis in Youth (PDAY) study demonstrated that intimal lesions appear in all the aortas and more than half of the right coronary arteries of youths aged 7–9 years. Obesity and diabetes mellitus are linked to cardiovascular disease, as are a history of chronic kidney disease and hypercholesterolaemia. In fact, cardiovascular disease is the most life-threatening of the diabetic complications and diabetics are two- to four-fold more likely to die of cardiovascular-related causes than nondiabetics.
Screening
Screening ECGs (either at rest or with exercise) are not recommended in those without symptoms who are at low risk. This includes those who are young without risk factors. In those at higher risk, the evidence for screening with ECGs is inconclusive. Additionally, echocardiography, myocardial perfusion imaging, and cardiac stress testing are not recommended in those at low risk who do not have symptoms. Some biomarkers may add to conventional cardiovascular risk factors in predicting the risk of future cardiovascular disease; however, the value of some biomarkers is questionable. Ankle-brachial index (ABI), high-sensitivity C-reactive protein (hsCRP), and coronary artery calcium are also of unclear benefit in those without symptoms as of 2018. The NIH recommends lipid testing in children beginning at the age of 2 if there is a family history of heart disease or lipid problems. It is hoped that early testing will improve lifestyle factors in those at risk, such as diet and exercise. Screening and selection for primary prevention interventions has traditionally been done through absolute risk using a variety of scores (e.g., Framingham or Reynolds risk scores). This stratification has separated people who receive the lifestyle interventions (generally lower and intermediate risk) from those who receive medication (higher risk). The number and variety of risk scores available for use has multiplied, but their efficacy according to a 2016 review was unclear due to lack of external validation or impact analysis. Risk stratification models often lack sensitivity for population groups and do not account for the large number of negative events among the intermediate- and low-risk groups. As a result, future preventative screening appears to shift toward applying prevention according to randomized trial results of each intervention rather than large-scale risk assessment.
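A minimal sketch of the absolute-risk stratification described above, routing lower-risk groups to lifestyle interventions and higher-risk groups to medication. The 7.5% and 20% cut-points are hypothetical examples; actual guidelines differ on the thresholds.

def stratify(ten_year_risk: float) -> str:
    # Hypothetical cut-points for illustration; guidelines vary.
    if ten_year_risk < 0.075:
        return "low risk: lifestyle interventions"
    elif ten_year_risk < 0.20:
        return "intermediate risk: lifestyle interventions, consider medication"
    else:
        return "high risk: lifestyle interventions plus medication"

print(stratify(0.12))  # intermediate risk: lifestyle interventions, consider medication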
Prevention
Up to 90% of cardiovascular disease may be preventable if established risk factors are avoided. Currently practised measures to prevent cardiovascular disease include:
Maintaining a healthy diet, such as the Mediterranean diet, a vegetarian, vegan or another plant-based diet.
Replacing saturated fat with healthier choices: Clinical trials show that replacing saturated fat with polyunsaturated vegetable oil reduced CVD by 30%. Prospective observational studies show that in many populations lower intake of saturated fat coupled with higher intake of polyunsaturated and monounsaturated fat is associated with lower rates of CVD.
Decrease body fat if overweight or obese. The effect of weight loss is often difficult to distinguish from dietary change, and evidence on weight reducing diets is limited. In observational studies of people with severe obesity, weight loss following bariatric surgery is associated with a 46% reduction in cardiovascular risk.
Limit alcohol consumption to the recommended daily limits. People who moderately consume alcoholic drinks have a 25–30% lower risk of cardiovascular disease. However, people who are genetically predisposed to consume less alcohol have lower rates of cardiovascular disease suggesting that alcohol itself may not be protective. Excessive alcohol intake increases the risk of cardiovascular disease and consumption of alcohol is associated with increased risk of a cardiovascular event in the day following consumption.
Decrease non-HDL cholesterol. Statin treatment reduces cardiovascular mortality by about 31%.
Stopping smoking and avoidance of second-hand smoke. Stopping smoking reduces risk by about 35%.
At least 150 minutes (2 hours and 30 minutes) of moderate exercise per week.
Lower blood pressure, if elevated. A 10 mmHg reduction in blood pressure reduces risk by about 20%. Lowering blood pressure appears to be effective even at normal blood pressure ranges.
Decrease psychosocial stress. This measure may be complicated by imprecise definitions of what constitutes a psychosocial intervention. Mental stress–induced myocardial ischemia is associated with an increased risk of heart problems in those with previous heart disease. Severe emotional and physical stress leads to a form of heart dysfunction known as Takotsubo syndrome in some people. Stress, however, plays a relatively minor role in hypertension. Specific relaxation therapies are of unclear benefit.
Not enough sleep also raises the risk of high blood pressure. Adults need about 7–9 hours of sleep. Sleep apnea is also a major risk, as it causes breathing to stop briefly, which can put stress on the body and raise the risk of heart disease. Most guidelines recommend combining preventive strategies. There is some evidence that interventions aiming to reduce more than one cardiovascular risk factor may have beneficial effects on blood pressure, body mass index and waist circumference; however, evidence was limited and the authors were unable to draw firm conclusions on the effects on cardiovascular events and mortality. There is additional evidence to suggest that providing people with a cardiovascular disease risk score may reduce risk factors by a small amount compared to usual care. However, there was some uncertainty as to whether providing these scores had any effect on cardiovascular disease events. It is unclear whether or not dental care in those with periodontitis affects their risk of cardiovascular disease.
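As a rough illustration of why combining preventive strategies matters, the sketch below multiplies residual relative risks using figures quoted in the list above (statins ~31%, stopping smoking ~35%, a 10 mmHg blood pressure reduction ~20%). Treating the interventions as independent and multiplicative is a simplifying assumption, not an established result.

def residual_relative_risk(risk_reductions):
    # Multiplies residual risks, assuming independent effects
    # (an illustrative simplification; real interventions interact).
    rr = 1.0
    for rrr in risk_reductions:
        rr *= 1.0 - rrr
    return rr

rr = residual_relative_risk([0.31, 0.35, 0.20])
print(f"{rr:.2f}")  # ~0.36, i.e. roughly a 64% lower relative risk combined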
Diet
A diet high in fruits and vegetables decreases the risk of cardiovascular disease and death. A 2021 review found that plant-based diets can provide a risk reduction for CVD if a healthy plant-based diet is consumed. Unhealthy plant-based diets do not provide benefits over diets including meat. A similar meta-analysis and systematic review also looked into dietary patterns and found "that diets lower in animal foods and unhealthy plant foods, and higher in healthy plant foods are beneficial for CVD prevention". A 2018 meta-analysis of observational studies concluded that "In most countries, a vegan diet is associated with a more favourable cardio-metabolic profile compared to an omnivorous diet." Evidence suggests that the Mediterranean diet may improve cardiovascular outcomes. There is also evidence that a Mediterranean diet may be more effective than a low-fat diet in bringing about long-term changes to cardiovascular risk factors (e.g., lower cholesterol level and blood pressure). The DASH diet (high in nuts, fish, fruits and vegetables, and low in sweets, red meat and fat) has been shown to reduce blood pressure, lower total and low-density lipoprotein cholesterol and improve metabolic syndrome, but the long-term benefits have been questioned. A high-fiber diet is associated with lower risks of cardiovascular disease. Worldwide, dietary guidelines recommend a reduction in saturated fat, and although the role of dietary fat in cardiovascular disease is complex and controversial, there is a long-standing consensus that replacing saturated fat with unsaturated fat in the diet is sound medical advice. Total fat intake has not been found to be associated with cardiovascular risk. A 2020 systematic review found moderate-quality evidence that reducing saturated fat intake for at least 2 years caused a reduction in cardiovascular events. A 2015 meta-analysis of observational studies, however, did not find a convincing association between saturated fat intake and cardiovascular disease. Variation in what is used as a substitute for saturated fat may explain some differences in findings. The benefit from replacement with polyunsaturated fats appears greatest, while replacement of saturated fats with carbohydrates does not appear to have a beneficial effect. A diet high in trans fatty acids is associated with higher rates of cardiovascular disease, and in 2015 the Food and Drug Administration (FDA) determined that there was no longer a consensus among qualified experts that partially hydrogenated oils (PHOs), which are the primary dietary source of industrially produced trans fatty acids (IP-TFA), are generally recognized as safe (GRAS) for any use in human food. There is conflicting evidence concerning whether dietary supplements of omega-3 fatty acids (a type of polyunsaturated fat in oily fish) added to the diet improve cardiovascular risk. The benefits of recommending a low-salt diet in people with high or normal blood pressure are not clear. In those with heart failure, after one study was left out, the rest of the trials show a trend to benefit. Another review of dietary salt concluded that there is strong evidence that high dietary salt intake increases blood pressure and worsens hypertension, and that it increases the number of cardiovascular disease events; both as a result of the increased blood pressure and probably through other mechanisms.
Moderate evidence was found that high salt intake increases cardiovascular mortality; and some evidence was found for an increase in overall mortality, strokes, and left ventricular hypertrophy.
Intermittent fasting
Overall, the current body of scientific evidence is uncertain on whether intermittent fasting could prevent cardiovascular disease. Intermittent fasting may help people lose more weight than regular eating patterns, but was not different from energy-restriction diets.
Medication
Blood pressure medication reduces cardiovascular disease in people at risk, irrespective of age, the baseline level of cardiovascular risk, or baseline blood pressure. The commonly-used drug regimens have similar efficacy in reducing the risk of all major cardiovascular events, although there may be differences between drugs in their ability to prevent specific outcomes. Larger reductions in blood pressure produce larger reductions in risk, and most people with high blood pressure require more than one drug to achieve adequate reduction in blood pressure. Adherence to medications is often poor, and while mobile phone text messaging has been tried to improve adherence, there is insufficient evidence that it alters secondary prevention of cardiovascular disease. Statins are effective in preventing further cardiovascular disease in people with a history of cardiovascular disease. As the event rate is higher in men than in women, the decrease in events is more easily seen in men than women. In those at risk, but without a history of cardiovascular disease (primary prevention), statins decrease the risk of death and combined fatal and non-fatal cardiovascular disease. The benefit, however, is small. A United States guideline recommends statins in those who have a 12% or greater risk of cardiovascular disease over the next ten years. Niacin, fibrates and CETP inhibitors, while they may increase HDL cholesterol, do not affect the risk of cardiovascular disease in those who are already on statins. Fibrates lower the risk of cardiovascular and coronary events, but there is no evidence to suggest that they reduce all-cause mortality. Anti-diabetic medication may reduce cardiovascular risk in people with Type 2 diabetes, although evidence is not conclusive. A meta-analysis in 2009 including 27,049 participants and 2,370 major vascular events showed a 15% relative risk reduction in cardiovascular disease with more-intensive glucose lowering over an average follow-up period of 4.4 years, but an increased risk of major hypoglycemia. Aspirin has been found to be of only modest benefit in those at low risk of heart disease, as the risk of serious bleeding is almost equal to the protection against cardiovascular problems. In those at very low risk, including those over the age of 70, it is not recommended. The United States Preventive Services Task Force recommends against use of aspirin for prevention in women less than 55 and men less than 45 years old; however, it is recommended for some older people. The use of vasoactive agents for people with pulmonary hypertension with left heart disease or hypoxemic lung diseases may cause harm and unnecessary expense.
Antibiotics for secondary prevention of coronary heart disease
It has been suggested that antibiotics might help patients with coronary disease to reduce the risk of heart attacks and strokes. However, evidence from 2021 suggests that antibiotics for secondary prevention of coronary heart disease are harmful, with increased mortality and occurrence of stroke; the use of antibiotics is not supported for preventing secondary coronary heart disease.
Physical activity
Exercise-based cardiac rehabilitation following a heart attack reduces the risk of death from cardiovascular disease and leads to fewer hospitalizations. There have been few high-quality studies of the benefits of exercise training in people with increased cardiovascular risk but no history of cardiovascular disease. A systematic review estimated that inactivity is responsible for 6% of the burden of disease from coronary heart disease worldwide. The authors estimated that 121,000 deaths from coronary heart disease could have been averted in Europe in 2008 if people had not been physically inactive. Low-quality evidence from a limited number of studies suggests that yoga has beneficial effects on blood pressure and cholesterol. Tentative evidence suggests that home-based exercise programs may be more efficient at improving exercise adherence.
Dietary supplements
While a healthy diet is beneficial, antioxidant supplementation (vitamin E, vitamin C, etc.) and vitamin supplements have not been shown to protect against cardiovascular disease and in some cases may possibly result in harm. Mineral supplements have also not been found to be useful. Niacin, a type of vitamin B3, may be an exception, with a modest decrease in the risk of cardiovascular events in those at high risk. Magnesium supplementation lowers high blood pressure in a dose-dependent manner. Magnesium therapy is recommended for people with ventricular arrhythmia associated with torsades de pointes who present with long QT syndrome, and for the treatment of people with digoxin intoxication-induced arrhythmias. There is no evidence that omega-3 fatty acid supplementation is beneficial.
Management
Cardiovascular disease is treatable, with initial treatment primarily focused on diet and lifestyle interventions. Influenza may make heart attacks and strokes more likely, and therefore influenza vaccination may decrease the chance of cardiovascular events and death in people with heart disease. Proper CVD management necessitates a focus on MI and stroke cases due to their combined high mortality rate, keeping in mind the cost-effectiveness of any intervention, especially in developing countries with low or middle income levels. Regarding MI, strategies using aspirin, atenolol, streptokinase or tissue plasminogen activator have been compared for quality-adjusted life-years (QALY) in regions of low and middle income. The costs for a single QALY for aspirin and atenolol were less than US$25, streptokinase was about $680, and t-PA was $16,000. Aspirin, ACE inhibitors, beta-blockers, and statins used together for secondary CVD prevention in the same regions showed single-QALY costs of $350. There are also surgical or procedural interventions that can save someone's life or prolong it. For heart valve problems, a person could have surgery to replace the valve. For arrhythmias, a pacemaker can be put in place to help reduce abnormal heart rhythms, and for a heart attack there are multiple options; two of these are coronary angioplasty and coronary artery bypass surgery. There is probably no additional benefit in terms of mortality and serious adverse events when blood pressure targets are lowered to ≤ 135/85 mmHg from ≤ 140 to 160/90 to 100 mmHg.
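Cost per QALY, used in the comparisons above, is simply total intervention cost divided by quality-adjusted life-years gained. A minimal sketch with hypothetical inputs follows; the per-QALY figures quoted above are reported study results, not derived from these made-up numbers.

def cost_per_qaly(total_cost_usd: float, qalys_gained: float) -> float:
    # Cost-effectiveness ratio: dollars spent per quality-adjusted
    # life-year gained by the intervention.
    return total_cost_usd / qalys_gained

# Hypothetical example: $3,400 total cost buying 5 QALYs -> $680 per QALY.
print(cost_per_qaly(3400.0, 5.0))  # 680.0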
Epidemiology
Cardiovascular diseases are the leading cause of death worldwide and in all regions except Africa. In 2008, 30% of all global deaths were attributed to cardiovascular diseases. Death rates from cardiovascular diseases are also higher in low- and middle-income countries, as over 80% of all global deaths caused by cardiovascular diseases occurred in those countries. It is also estimated that by 2030, over 23 million people will die from cardiovascular diseases each year.
It is estimated that 60% of the world's cardiovascular disease burden will occur in the South Asian subcontinent, despite it only accounting for 20% of the world's population. This may be secondary to a combination of genetic predisposition and environmental factors. Organizations such as the Indian Heart Association are working with the World Heart Federation to raise awareness about this issue.
Research
There is evidence that cardiovascular disease existed in pre-history, and research into cardiovascular disease dates from at least the 18th century. The causes, prevention, and/or treatment of all forms of cardiovascular disease remain active fields of biomedical research, with hundreds of scientific studies being published on a weekly basis.
Recent areas of research include the link between inflammation and atherosclerosis, the potential for novel therapeutic interventions, and the genetics of coronary heart disease.
References
External links
2021 ESC Guidelines on cardiovascular disease prevention in clinical practice
Cardiovascular disease at Curlie
Heart Disease (MedicineNet): slides, photos, and descriptions
Risk calculator |
Anterior spinal artery syndrome | Anterior spinal artery syndrome (also known as "anterior spinal cord syndrome") is a syndrome caused by ischemia of the anterior spinal artery, resulting in loss of function of the anterior two-thirds of the spinal cord. The region affected includes the descending corticospinal tract, ascending spinothalamic tract, and autonomic fibers. It is characterized by a corresponding loss of motor function, loss of pain and temperature sensation, and hypotension.
Anterior spinal artery syndrome is the most common form of spinal cord infarction. The anterior spinal cord is at increased risk for infarction because it is supplied by the single anterior spinal artery and has little collateral circulation, unlike the posterior spinal cord which is supplied by two posterior spinal arteries.
Signs and symptoms
Complete motor paralysis below the level of the lesion due to interruption of the corticospinal tract
Loss of pain and temperature sensation at and below the level of the lesion due to interruption of the spinothalamic tract
Retained proprioception and vibratory sensation due to intact dorsal columns
Autonomic dysfunction may be present and can manifest as hypotension (either orthostatic or frank hypotension), sexual dysfunction, and/or bowel and bladder dysfunction
Areflexia, flaccid internal and external anal sphincters, urinary retention and intestinal obstruction may also be present in individuals with anterior cord syndrome. Symptoms usually occur very quickly and are often experienced within one hour of the initial damage. MRI can detect the magnitude and location of the damage 10–15 hours after the initiation of symptoms. Diffusion-weighted imaging may be used, as it is able to identify the damage within a few minutes of symptomatic onset. Clinical features include paraparesis or quadriparesis (depending on the level of the injury) and impaired pain and temperature sensation: complete motor paralysis below the level of the lesion due to interruption of the corticospinal tract, and loss of pain and temperature sensation at and below the level of the lesion. Proprioception and vibratory sensation are preserved, as these are carried in the dorsal columns of the spinal cord.
Causes
Due to the branches of the aorta that supply the anterior spinal artery, the most common causes are insufficiencies within the aorta. These include aortic aneurysms, dissections, direct trauma to the aorta, surgeries, and atherosclerosis. Acute disc herniation, cervical spondylosis, kyphoscoliosis, damage to the spinal column and neoplasia could all result in ischemia from anterior spinal artery occlusion, leading to anterior cord syndrome. Other causes include vasculitis, polycythemia, sickle cell disease, decompression sickness, and collagen and elastin disorders. A thrombus in the artery of Adamkiewicz can lead to anterior spinal syndrome. This is the most feared, though rare, complication of bronchial artery embolization done for massive hemoptysis.
Anatomy
The anterior portion of the spinal cord is supplied by the anterior spinal artery. It begins at the foramen magnum where branches of the two vertebral arteries exit, merge, and descend along the anterior spinal cord. As the anterior spinal artery proceeds inferiorly, it receives branches originating mostly from the aorta. The largest aortic branch is the artery of Adamkiewicz.
Diagnosis
MRI is used in making the diagnosis of this condition.
Treatment
Treatment is determined by the primary cause of anterior cord syndrome. Once the diagnosis of anterior cord syndrome is made, the prognosis is poor. The mortality rate is approximately 20%, and 50% of individuals living with anterior cord syndrome see very little or no change in their symptoms.
Eponym
It is also known as "Beck's syndrome".
See also
Spinal cord injury
References
External links
Vogt–Koyanagi–Harada disease | Vogt–Koyanagi–Harada disease (VKH) is a multisystem disease of presumed autoimmune cause that affects melanin-pigmented tissues. The most significant manifestation is bilateral, diffuse uveitis, which affects the eyes. VKH may variably also involve the inner ear, with effects on hearing, the skin, and the meninges of the central nervous system.
Signs and symptoms
Overview
The disease is characterised by bilateral diffuse uveitis, with pain, redness and blurring of vision. The eye symptoms may be accompanied by a varying constellation of systemic symptoms, such as auditory (tinnitus, vertigo, and hypoacusis), neurological (meningismus, with malaise, fever, headache, nausea, abdominal pain, stiffness of the neck and back, or a combination of these factors; meningitis, CSF pleocytosis, cranial nerve palsies, hemiparesis, transverse myelitis and ciliary ganglionitis), and cutaneous manifestations, including poliosis, vitiligo, and alopecia. The vitiligo is often found in the sacral region.
Phases
The sequence of clinical events in VKH is divided into four phases: prodromal, acute uveitic, convalescent, and chronic recurrent. The prodromal phase may have no symptoms, or may mimic a nonspecific viral infection, marked by flu-like symptoms that typically last for a few days. Fever, headache, nausea, meningismus, dysacusia (discomfort caused by loud noises or a distortion in the quality of the sounds being heard), tinnitus, and/or vertigo may occur. Eye symptoms can include orbital pain, photophobia, and tearing. The skin and hair may be sensitive to touch. Cranial nerve palsies and optic neuritis are uncommon. The acute uveitic phase occurs a few days later and typically lasts for several weeks. This phase is heralded by bilateral panuveitis causing blurring of vision. In 70% of VKH cases, the onset of visual blurring is bilaterally contemporaneous; if initially unilateral, the other eye is involved within several days. The process can include bilateral granulomatous anterior uveitis, variable degree of vitritis, thickening of the posterior choroid with elevation of the peripapillary retinal choroidal layer, optic nerve hyperemia and papillitis, and multiple exudative bullous serous retinal detachments. The convalescent phase is characterized by gradual tissue depigmentation of skin with vitiligo and poliosis, sometimes with nummular depigmented scars, as well as alopecia and diffuse fundus depigmentation resulting in a classic orange-red discoloration ("sunset glow fundus") and retinal pigment epithelium clumping and/or migration. The chronic recurrent phase may be marked by repeated bouts of uveitis, but is more commonly a chronic, low-grade, often subclinical, uveitis that may lead to granulomatous anterior inflammation, cataracts, glaucoma, and ocular hypertension. Full-blown recurrences, though, are rare after the acute stage is over. Dysacusia may occur in this phase.
Cause
Although sometimes a viral infection, or skin or eye trauma, precedes an outbreak, the exact underlying initiator of VKH disease remains unknown. VKH is attributed, however, to an aberrant T-cell-mediated immune response directed against self-antigens found on melanocytes. Stimulated by interleukin 23 (IL-23), T helper 17 cells and cytokines, such as interleukin 17, appear to target proteins in the melanocytes.
Risk factors
Affected individuals are typically 20 to 50 years old. The female-to-male ratio is 2:1. By definition, affected people have no history of either surgical or accidental ocular trauma. VKH is more common in Asians, Latinos, Middle Easterners, American Indians, and Mexican Mestizos; it is much less common in Caucasians and in Blacks from sub-Saharan Africa. VKH is associated with a variety of genetic polymorphisms that relate to immune function. For example, it has been associated with human leukocyte antigens (HLA) HLA-DR4 and DRB1/DQA1, copy-number variations of complement component 4, a variant IL-23R locus, and with various other non-HLA genes. HLA-DRB1*0405 in particular appears to play an important susceptibility role.
Diagnosis
If tested in the prodromal phase, cerebrospinal fluid pleocytosis is found in more than 80% of cases, with mainly lymphocytes. This pleocytosis resolves in about 8 weeks even if chronic uveitis persists. Functional tests may include electroretinogram and visual field testing. Diagnostic confirmation and an estimation of disease severity may involve imaging tests such as retinography, fluorescein or indocyanine green angiography, optical coherence tomography and ultrasound. For example, indocyanine green angiography may detect continuing choroidal inflammation in the eyes without clinical symptoms or signs. Ocular MRI may be helpful, and auditory symptoms should undergo audiologic testing. Histopathology findings from eye and skin are discussed by Walton. The diagnosis of VKH is based on the clinical presentation; the diagnostic differential is extensive, and includes sympathetic ophthalmia, sarcoidosis, primary intraocular B-cell lymphoma, posterior scleritis, uveal effusion syndrome, tuberculosis, syphilis, and multifocal choroidopathy syndromes.
Types
Based on the presence of extraocular findings, such as neurological, auditory, and integumentary manifestations, the "revised diagnostic criteria" of 2001 classify the disease as complete (eyes along with both neurological and skin), incomplete (eyes along with either neurological or skin), or probable (eyes without either neurological or skin). By definition, for research homogeneity purposes, the two exclusion criteria are previous ocular penetrating trauma or surgery, and other concomitant ocular disease similar to VKH disease.
Management
The acute uveitic phase of VKH is usually responsive to high-dose oral corticosteroids; parenteral administration is usually not required. However, ocular complications may require a subtenon or intravitreous injection of corticosteroids or bevacizumab. In refractory situations, other immunosuppressives such as cyclosporine or tacrolimus, antimetabolites (azathioprine, mycophenolate mofetil or methotrexate), or biological agents such as intravenous immunoglobulins (IVIG) or infliximab may be needed.
Outcomes
Visual prognosis is generally good with prompt diagnosis and aggressive immunomodulatory treatment. Inner ear symptoms usually respond to corticosteroid therapy within weeks to months; hearing usually recovers completely. Chronic eye effects such as cataracts, glaucoma, and optic atrophy can occur. Skin changes usually persist despite therapy.
Eponym
VKH syndrome is named for ophthalmologists Alfred Vogt from Switzerland and Yoshizo Koyanagi and Einosuke Harada from Japan. Several authors, including the Arabic doctor Mohammad-al-Ghâfiqî in the 12th century, as well as Jacobi, Nettleship, and Tay in the 19th century, had described poliosis, neuralgias, and hearing disorders. This constellation was probably often due to sympathetic ophthalmia, but likely included examples of VKH. Koyanagi's first description of the disease was in 1914, but was preceded by Jujiro Komoto, professor of ophthalmology at the University of Tokyo, in 1911. A much later article, published in 1929, definitively associated Koyanagi with the disease. Harada's 1926 paper is recognized for its comprehensive description of what is now known as Vogt–Koyanagi–Harada disease.
References
External links
American Academy of Ophthalmology: Identify and Treat Vogt-Koyanagi-Harada Syndrome |
Heart failure | Heart failure (HF), also known as congestive heart failure (CHF), is a syndrome, a group of signs and symptoms caused by an impairment of the heart's blood-pumping function. Symptoms typically include shortness of breath, excessive fatigue, and leg swelling. The shortness of breath may occur with exertion or while lying down, and may wake people up during the night. Chest pain, including angina, is not usually caused by heart failure, but may occur if the heart failure was caused by a heart attack. The severity of the heart failure is measured by the severity of symptoms during exercise. Other conditions that may have symptoms similar to heart failure include obesity, kidney failure, liver disease, anemia, and thyroid disease. Common causes of heart failure include coronary artery disease, heart attack, high blood pressure, atrial fibrillation, valvular heart disease, excessive alcohol consumption, infection, and cardiomyopathy. These cause heart failure by altering the structure or the function of the heart, or in some cases both. There are different types of heart failure: right-sided heart failure, which affects the right heart, left-sided heart failure, which affects the left heart, and biventricular heart failure, which affects both sides of the heart. Left-sided heart failure may be present with a reduced ejection fraction or with a preserved ejection fraction. Heart failure is not the same as cardiac arrest, in which blood flow stops completely due to the failure of the heart to pump effectively. Diagnosis is based on symptoms, physical findings, and echocardiography. Blood tests and a chest x-ray may be useful to determine the underlying cause. Treatment depends on severity and case. For people with chronic, stable, mild heart failure, treatment usually consists of lifestyle changes, such as not smoking, physical exercise, and dietary changes, as well as medications. In heart failure due to left ventricular dysfunction, angiotensin-converting-enzyme inhibitors, angiotensin receptor blockers, or valsartan/sacubitril, along with beta blockers, are recommended. In severe disease, aldosterone antagonists or hydralazine with a nitrate can be used. Diuretics may also be prescribed to prevent fluid retention and the resulting shortness of breath. Depending on the case, an implanted device such as a pacemaker or implantable cardiac defibrillator may sometimes be recommended. In some moderate or more severe cases, cardiac resynchronization therapy (CRT) or cardiac contractility modulation may be beneficial. In severe disease that persists despite all other measures, a cardiac assist device such as a ventricular assist device (for the left, right, or both heart chambers) or, occasionally, heart transplantation may be recommended. Heart failure is a common, costly, and potentially fatal condition, and is the leading cause of hospitalization and readmission in older adults. Heart failure often leads to more drastic health impairments than failure of other, similarly complex organs such as the kidneys or liver. In 2015, it affected about 40 million people worldwide. Overall, heart failure affects about 2% of adults, and as many as 6–10% of those over the age of 65. Rates are predicted to increase. The risk of death in the first year after diagnosis is about 35%, while the risk of death in the second year is less than 10% in those still alive. The risk of death is comparable to that of some cancers. In the United Kingdom, the disease is the reason for 5% of emergency hospital admissions.
Heart failure has been known since ancient times; it is mentioned in the Ebers Papyrus around 1550 BCE.
Definition
Heart failure is not a disease but a syndrome - a combination of signs and symptoms caused by the failure of the heart to pump blood to support the circulatory system at rest or during activity. It develops when the heart fails to fill properly with blood during diastole, raising intracardiac pressures, or fails to eject blood adequately during systole, reducing cardiac output to the rest of the body. The filling failure and high intracardiac pressure can lead to fluid accumulation in the veins and tissues. This manifests as water retention and swelling due to fluid accumulation (edema), called congestion. Impaired ejection can lead to inadequate blood flow to the body tissues, resulting in ischemia.
Signs and symptoms
Congestive heart failure is a pathophysiological condition in which the heart's output is insufficient to meet the needs of the body and lungs. The term "congestive heart failure" is often used because one of the most common symptoms is congestion, or fluid accumulation, in the tissues and veins of the lungs or other parts of a person's body. Congestion manifests itself particularly in the form of fluid accumulation and swelling (edema), both in the form of peripheral edema (causing swollen limbs and feet) and pulmonary edema (causing difficulty breathing), as well as ascites (swollen abdomen). Symptoms of heart failure are traditionally divided into left-sided and right-sided because the left and right ventricles supply different parts of the circulation, but sufferers often have both types of signs and symptoms. In biventricular heart failure, both sides of the heart are affected. Left-sided heart failure is the more common form.
Left-sided failure
The left side of the heart takes oxygen-rich blood from the lungs and pumps it to the rest of the circulatory system in the body, except for the pulmonary circulation. Failure of the left side of the heart causes blood to back up into the lungs, causing breathing difficulties and fatigue due to an insufficient supply of oxygenated blood. Common respiratory signs include increased respiratory rate and labored breathing (nonspecific signs of shortness of breath). Rales or crackles, heard initially in the lung bases and, when severe, in all lung fields, indicate the development of pulmonary edema (fluid in the alveoli). Cyanosis, which indicates deficiency of oxygen in the blood, is a late sign of extremely severe pulmonary edema. Other signs of left ventricular failure include a laterally displaced apex beat (which occurs when the heart is enlarged) and a gallop rhythm (additional heart sounds), which may be heard as a sign of increased blood flow or increased intracardiac pressure. Heart murmurs may indicate the presence of valvular heart disease, either as a cause (e.g., aortic stenosis) or as a consequence (e.g., mitral regurgitation) of heart failure. Backward failure of the left ventricle causes congestion in the blood vessels of the lungs, so symptoms are predominantly respiratory. Backward failure can be divided into the failure of the left atrium, the left ventricle, or both within the left circuit. Patients will experience shortness of breath (dyspnea) on exertion and, in severe cases, dyspnea at rest. Increasing breathlessness while lying down, called orthopnea, also occurs. It can be measured by the number of pillows required to lie comfortably, with extreme cases of orthopnea forcing the patient to sleep sitting up. Another symptom of heart failure is paroxysmal nocturnal dyspnea: a sudden nocturnal attack of severe shortness of breath, usually occurring several hours after falling asleep. There may be "cardiac asthma" or wheezing. Impaired left ventricular forward function can lead to symptoms of poor systemic perfusion such as dizziness, confusion, and cool extremities at rest.
Right-sided failure
Right-sided heart failure is often caused by pulmonary heart disease (cor pulmonale), which is typically caused by issues with pulmonary circulation such as pulmonary hypertension or pulmonic stenosis. Physical examination may reveal pitting peripheral edema, ascites, liver enlargement, and spleen enlargement. Jugular venous pressure is frequently assessed as a marker of fluid status, which can be accentuated by testing the hepatojugular reflux. If the right ventricular pressure is increased, a parasternal heave, reflecting the compensatory increase in contraction strength, may be present. Backward failure of the right ventricle leads to congestion of systemic capillaries. This generates excess fluid accumulation in the body. This causes swelling under the skin (peripheral edema or anasarca) and usually affects the dependent parts of the body first, causing foot and ankle swelling in people who are standing up and sacral edema in people who are predominantly lying down. Nocturia (frequent night-time urination) may occur when fluid from the legs is returned to the bloodstream while lying down at night. In progressively severe cases, ascites (fluid accumulation in the abdominal cavity causing swelling) and liver enlargement may develop. Significant liver congestion may result in impaired liver function (congestive hepatopathy), jaundice, and coagulopathy (problems of decreased or increased blood clotting).
Biventricular failure
Dullness of the lung fields when percussed and reduced breath sounds at the base of the lungs may suggest the development of a pleural effusion (fluid collection between the lung and the chest wall). Though it can occur in isolated left- or right-sided heart failure, it is more common in biventricular failure because pleural veins drain into both the systemic and pulmonary venous systems. When unilateral, effusions are often right-sided. If a person with a failure of one ventricle lives long enough, it will tend to progress to failure of both ventricles. For example, left ventricular failure allows pulmonary edema and pulmonary hypertension to occur, which increase stress on the right ventricle. Though still harmful, right ventricular failure is not as deleterious to the left side.
Causes
Since heart failure is a syndrome and not a disease, establishing the underlying cause is vital to diagnosis and treatment. Heart failure is the potential end stage of all heart diseases. Common causes of heart failure include coronary artery disease, including a previous myocardial infarction (heart attack), high blood pressure, atrial fibrillation, valvular heart disease, excess alcohol use, infection, and cardiomyopathy of an unknown cause. In addition, viral infections of the heart can lead to inflammation of the muscular layer of the heart and subsequently contribute to the development of heart failure. Genetic predisposition plays an important role. If more than one cause is present, progression is more likely and prognosis is worse. Heart damage can predispose a person to develop heart failure later in life and has many causes, including systemic viral infections (e.g., HIV), chemotherapeutic agents such as daunorubicin, cyclophosphamide, and trastuzumab, and substance use disorders involving substances such as alcohol, cocaine, and methamphetamine. An uncommon cause is exposure to certain toxins such as lead and cobalt. Additionally, infiltrative disorders such as amyloidosis and connective tissue diseases such as systemic lupus erythematosus have similar consequences. Obstructive sleep apnea (a condition of sleep wherein disordered breathing overlaps with obesity, hypertension, and/or diabetes) is regarded as an independent cause of heart failure. Recent reports from clinical trials have also linked variation in blood pressure to heart failure and cardiac changes that may give rise to heart failure.
High-output heart failure
High-output heart failure happens when the amount of blood pumped out is more than typical and the heart is unable to keep up. This can occur in overload situations such as blood or serum infusions, kidney diseases, chronic severe anemia, beriberi (vitamin B1/thiamine deficiency), hyperthyroidism, cirrhosis, Paget's disease, multiple myeloma, arteriovenous fistulae, or arteriovenous malformations.
Acute decompensation
Chronic stable heart failure may easily decompensate. This most commonly results from a concurrent illness (such as myocardial infarction (a heart attack) or pneumonia), abnormal heart rhythms, uncontrolled hypertension, or a person's failure to maintain a fluid restriction, diet, or medication. Other factors that may worsen CHF include: anemia, hyperthyroidism, excessive fluid or salt intake, and medication such as NSAIDs and thiazolidinediones. NSAIDs increase the risk twofold.
Medications
A number of medications may cause or worsen the disease. This includes NSAIDs, COX-2 inhibitors, a number of anesthetic agents such as ketamine, thiazolidinediones, some cancer medications, several antiarrhythmic medications, pregabalin, alpha-2 adrenergic receptor agonists, minoxidil, itraconazole, cilostazol, anagrelide, stimulants (e.g., methylphenidate), tricyclic antidepressants, lithium, antipsychotics, dopamine agonists, TNF inhibitors, calcium channel blockers (especially verapamil and diltiazem), salbutamol, and tamsulosin. By inhibiting the formation of prostaglandins, NSAIDs may exacerbate heart failure through several mechanisms, including promotion of fluid retention, increasing blood pressure, and decreasing a person's response to diuretic medications. Similarly, the ACC/AHA recommends against the use of COX-2 inhibitor medications in people with heart failure. Thiazolidinediones have been strongly linked to new cases of heart failure and worsening of pre-existing congestive heart failure due to their association with weight gain and fluid retention. Certain calcium channel blockers, such as diltiazem and verapamil, are known to decrease the force with which the heart ejects blood, and thus are not recommended in people with heart failure with a reduced ejection fraction.
Supplements
Certain alternative medicines carry a risk of exacerbating existing heart failure, and are not recommended. This includes aconite, ginseng, gossypol, gynura, licorice, lily of the valley, tetrandrine, and yohimbine. Aconite can cause abnormally slow heart rates and abnormal heart rhythms such as ventricular tachycardia. Ginseng can cause abnormally low or high blood pressure, and may interfere with the effects of diuretic medications. Gossypol can increase the effects of diuretics, leading to toxicity. Gynura can cause low blood pressure. Licorice can worsen heart failure by increasing blood pressure and promoting fluid retention. Lily of the valley can cause abnormally slow heart rates with mechanisms similar to those of digoxin. Tetrandrine can lead to low blood pressure through inhibition of L-type calcium channels. Yohimbine can exacerbate heart failure by increasing blood pressure through alpha-2 adrenergic receptor antagonism.
Pathophysiology
Heart failure is caused by any condition that reduces the efficiency of the heart muscle, through damage or overloading. Over time, these increases in workload, which are mediated by long-term activation of neurohormonal systems such as the renin–angiotensin system and the sympathoadrenal system, lead to fibrosis, dilation, and structural changes in the shape of the left ventricle from elliptical to spherical.The heart of a person with heart failure may have a reduced force of contraction due to overloading of the ventricle. In a normal heart, increased filling of the ventricle results in increased contraction force by the Frank–Starling law of the heart, and thus a rise in cardiac output. In heart failure, this mechanism fails, as the ventricle is loaded with blood to the point where heart muscle contraction becomes less efficient. This is due to reduced ability to cross-link actin and myosin myofilaments in over-stretched heart muscle.
Diagnosis
No diagnostic criteria have been agreed on as the gold standard for heart failure. In the UK the National Institute for Health and Care Excellence recommends measuring brain natriuretic peptide 32 (BNP) followed by an ultrasound of the heart if positive. This is recommended in those with shortness of breath. In those with worsening heart failure, both a measure of BNP and of troponin are recommended to help determine likely outcomes.
Classification
One historical method of categorizing heart failure is by the side of the heart involved (left heart failure versus right heart failure). Right heart failure was thought to compromise blood flow to the lungs compared to left heart failure compromising blood flow to the aorta and consequently to the brain and the remainder of the body's systemic circulation. However, mixed presentations are common and left heart failure is a common cause of right heart failure. More accurate classification of heart failure type is made by measuring ejection fraction, or the proportion of blood pumped out of the heart during a single contraction. Ejection fraction is given as a percentage, with the normal range being between 50 and 75%. The two types are:
1) Heart failure due to reduced ejection fraction (HFrEF): Synonyms no longer recommended are "heart failure due to left ventricular systolic dysfunction" and "systolic heart failure". HFrEF is associated with an ejection fraction less than 40%.
2) Heart failure with preserved ejection fraction (HFpEF): Synonyms no longer recommended include "diastolic heart failure" and "heart failure with normal ejection fraction". HFpEF occurs when the left ventricle contracts normally during systole, but the ventricle is stiff and does not relax normally during diastole, which impairs filling.
Heart failure may also be classified as acute or chronic. Chronic heart failure is a long-term condition, usually kept stable by the treatment of symptoms. Acute decompensated heart failure is a worsening of chronic heart failure symptoms, which can result in acute respiratory distress. High-output heart failure can occur when there is increased cardiac demand that results in increased left ventricular diastolic pressure, which can develop into pulmonary congestion (pulmonary edema). Several terms are closely related to heart failure and may be the cause of heart failure, but should not be confused with it. Cardiac arrest and asystole refer to situations in which no cardiac output occurs at all. Without urgent treatment, these events result in sudden death. Myocardial infarction ("heart attack") refers to heart muscle damage due to insufficient blood supply, usually as a result of a blocked coronary artery. Cardiomyopathy refers specifically to problems within the heart muscle, and these problems can result in heart failure. Ischemic cardiomyopathy implies that the cause of muscle damage is coronary artery disease. Dilated cardiomyopathy implies that the muscle damage has resulted in enlargement of the heart. Hypertrophic cardiomyopathy involves enlargement and thickening of the heart muscle.
Ultrasound
An echocardiogram (ultrasound of the heart) is commonly used to support a clinical diagnosis of heart failure. This can determine the stroke volume (SV, the amount of blood in the heart that exits the ventricles with each beat), the end-diastolic volume (EDV, the total amount of blood at the end of diastole), and the SV in proportion to the EDV, a value known as the ejection fraction (EF). In pediatrics, the shortening fraction is the preferred measure of systolic function. Normally, the EF should be between 50 and 70%; in systolic heart failure, it drops below 40%. Echocardiography can also identify valvular heart disease and assess the state of the pericardium (the connective tissue sac surrounding the heart). Echocardiography may also aid in deciding specific treatments, such as medication, insertion of an implantable cardioverter-defibrillator, or cardiac resynchronization therapy. Echocardiography can also help determine if acute myocardial ischemia is the precipitating cause, and may manifest as regional wall motion abnormalities on echo.
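As a rough illustration of how these echocardiographic quantities relate, the following sketch (illustrative Python with made-up volumes; the ejection-fraction cutoffs are those quoted in this article) computes the ejection fraction from end-diastolic and end-systolic volumes.

def ejection_fraction(edv_ml, esv_ml):
    """Ejection fraction = stroke volume / end-diastolic volume."""
    sv = edv_ml - esv_ml           # stroke volume (mL)
    return 100.0 * sv / edv_ml     # expressed as a percentage

# Hypothetical example: EDV 120 mL, ESV 80 mL
ef = ejection_fraction(120, 80)    # -> 33.3%
if ef < 40:
    category = "reduced ejection fraction (HFrEF range)"
elif ef >= 50:
    category = "within the normal 50-70% range"
else:
    category = "borderline"
print(f"EF = {ef:.1f}% - {category}")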
Chest X-ray
Chest X-rays are frequently used to aid in the diagnosis of CHF. In a person who is compensated, this may show cardiomegaly (visible enlargement of the heart), quantified as the cardiothoracic ratio (proportion of the heart size to the chest). In left ventricular failure, evidence may exist of vascular redistribution (upper lobe blood diversion or cephalization), Kerley lines, cuffing of the areas around the bronchi, and interstitial edema. Ultrasound of the lung may also be able to detect Kerley lines.
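The cardiothoracic ratio mentioned above is a simple quotient. A minimal sketch follows, assuming the widely used cutoff of 0.5 for cardiomegaly on a posteroanterior film; that cutoff is a general radiology convention rather than a figure stated in this article, and the measurements are hypothetical.

def cardiothoracic_ratio(cardiac_width_cm, thoracic_width_cm):
    """Maximal transverse cardiac width divided by maximal internal thoracic width."""
    return cardiac_width_cm / thoracic_width_cm

# Hypothetical measurements from a PA chest film
ctr = cardiothoracic_ratio(16.0, 30.0)   # -> 0.53
print(f"CTR = {ctr:.2f}",
      "- suggests cardiomegaly" if ctr > 0.5 else "- within normal limits")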
Electrophysiology
An electrocardiogram (ECG/EKG) may be used to identify arrhythmias, ischemic heart disease, right and left ventricular hypertrophy, and presence of conduction delay or abnormalities (e.g. left bundle branch block). Although these findings are not specific to the diagnosis of heart failure, a normal ECG virtually excludes left ventricular systolic dysfunction.
Blood tests
Blood tests routinely performed include electrolytes (sodium, potassium), measures of kidney function, liver function tests, thyroid function tests, a complete blood count, and often C-reactive protein if infection is suspected. An elevated brain natriuretic peptide 32 (BNP) is a specific test indicative of heart failure. Additionally, BNP can be used to differentiate dyspnea due to heart failure from dyspnea due to other causes. If myocardial infarction is suspected, various cardiac markers may be used.
BNP is a better indicator than N-terminal pro-BNP for the diagnosis of symptomatic heart failure and left ventricular systolic dysfunction. In symptomatic people, BNP had a sensitivity of 85% and specificity of 84% in detecting heart failure; performance declined with increasing age.

Hyponatremia (low serum sodium concentration) is common in heart failure. Vasopressin levels are usually increased, along with renin, angiotensin II, and catecholamines, to compensate for reduced circulating volume due to inadequate cardiac output. This leads to increased fluid and sodium retention in the body; because fluid is retained at a higher rate than sodium, the result is hypervolemic hyponatremia (low sodium concentration due to high body fluid retention). This phenomenon is more common in older women with low body mass. Severe hyponatremia can result in accumulation of fluid in the brain, causing cerebral edema and intracranial hemorrhage.
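To make the sensitivity and specificity quoted above for BNP concrete, here is a minimal worked sketch applying Bayes' rule. The 30% pretest prevalence is an arbitrary assumption for illustration only, not a figure from the cited study.

sens, spec = 0.85, 0.84   # BNP performance in symptomatic people (quoted above)
prevalence = 0.30         # assumed pretest probability of heart failure (illustrative)

tp = sens * prevalence                  # true positives
fp = (1 - spec) * (1 - prevalence)      # false positives
fn = (1 - sens) * prevalence            # false negatives
tn = spec * (1 - prevalence)            # true negatives

ppv = tp / (tp + fp)   # probability of heart failure given a positive BNP
npv = tn / (tn + fn)   # probability of no heart failure given a negative BNP
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")   # -> roughly 0.69 and 0.93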
Angiography
Angiography is the X-ray imaging of blood vessels, which is done by injecting contrast agents into the bloodstream through a thin plastic tube (catheter), which is placed directly in the blood vessel. X-ray images are called angiograms. Heart failure may be the result of coronary artery disease, and its prognosis depends in part on the ability of the coronary arteries to supply blood to the myocardium (heart muscle). As a result, coronary catheterization may be used to identify possibilities for revascularisation through percutaneous coronary intervention or bypass surgery.
Algorithms
Various algorithms are used for the diagnosis of heart failure. For example, the algorithm used by the Framingham Heart Study adds together criteria mainly from physical examination. In contrast, the more extensive algorithm by the European Society of Cardiology weights the difference between supporting and opposing parameters from the medical history, physical examination, further medical tests, and response to therapy.
Framingham criteria
By the Framingham criteria, diagnosis of congestive heart failure (heart failure with impaired pumping capability) requires the simultaneous presence of at least two of the following major criteria, or of one major criterion in conjunction with two of the minor criteria; a sketch implementing this decision rule follows the list.
Major criteria include:
an enlarged heart on a chest X-ray,
an S3 gallop (a third heart sound),
acute pulmonary edema,
episodes of waking up from sleep gasping for air,
crackles on lung auscultation,
central venous pressure more than 16 cm H2O at the right atrium,
jugular vein distension,
positive abdominojugular test, and
weight loss of more than 4.5 kg in 5 days in response to treatment (sometimes classified as a minor criterion).

Minor criteria include:
an abnormally fast heart rate of more than 120 beats per minute,
nocturnal cough,
difficulty breathing with physical activity,
pleural effusion,
a decrease in the vital capacity by one-third from maximum recorded,
liver enlargement, and
bilateral ankle edema.

Minor criteria are acceptable only if they cannot be attributed to another medical condition such as pulmonary hypertension, chronic lung disease, cirrhosis, ascites, or the nephrotic syndrome. The Framingham Heart Study criteria are 100% sensitive and 78% specific for identifying persons with definite congestive heart failure.
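A minimal sketch of the Framingham decision rule as stated above — at least two major criteria, or one major plus two minor. It assumes the minor criteria passed in have already been screened for attribution to other conditions; the counts in the example are hypothetical.

def framingham_chf(n_major, n_minor):
    """Framingham rule: >= 2 major criteria, or 1 major + >= 2 minor criteria.

    Minor criteria should be counted only if not attributable to another
    condition (e.g. pulmonary hypertension, cirrhosis, nephrotic syndrome).
    """
    return n_major >= 2 or (n_major >= 1 and n_minor >= 2)

# Hypothetical patient: S3 gallop (major) + nocturnal cough and ankle edema (minor)
print(framingham_chf(n_major=1, n_minor=2))   # -> True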
ESC algorithm
The ESC algorithm weights supporting and opposing parameters from the medical history, physical examination, and further medical tests in establishing the diagnosis of heart failure.
Staging
Heart failure is commonly stratified by the degree of functional impairment conferred by the severity of the heart failure, as reflected in the New York Heart Association (NYHA) Functional Classification. The NYHA functional classes (I–IV) begin with class I, which is defined as a person who experiences no limitation in any activities and has no symptoms from ordinary activities. People with NYHA class II heart failure have slight, mild limitations with everyday activities; the person is comfortable at rest or with mild exertion. With NYHA class III heart failure, a marked limitation occurs with any activity; the person is comfortable only at rest. A person with NYHA class IV heart failure is symptomatic at rest and becomes quite uncomfortable with any physical activity. This score documents the severity of symptoms and can be used to assess response to treatment. While its use is widespread, the NYHA score is not very reproducible and does not reliably predict walking distance or exercise tolerance on formal testing.

In its 2001 guidelines, the American College of Cardiology/American Heart Association working group introduced four stages of heart failure:
Stage A: People at high risk for developing HF in the future, but no functional or structural heart disorder
Stage B: A structural heart disorder, but no symptoms at any stage
Stage C: Previous or current symptoms of heart failure in the context of an underlying structural heart problem, but managed with medical treatment
Stage D: Advanced disease requiring hospital-based support, a heart transplant, or palliative care.

The ACC staging system is useful since stage A encompasses "pre-heart failure" – a stage where intervention with treatment can presumably prevent progression to overt symptoms. ACC stage A does not have a corresponding NYHA class. ACC stage B would correspond to NYHA class I. ACC stage C corresponds to NYHA classes II and III, while ACC stage D overlaps with NYHA class IV.
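The rough correspondence between ACC stages and NYHA classes described above can be captured as a small lookup table; the sketch below mirrors only the mapping given in the preceding paragraph.

# ACC/AHA stage -> corresponding NYHA functional class(es), per the text above
ACC_TO_NYHA = {
    "A": [],            # pre-heart failure; no corresponding NYHA class
    "B": ["I"],
    "C": ["II", "III"],
    "D": ["IV"],        # overlaps with NYHA class IV
}
print(ACC_TO_NYHA["C"])   # -> ['II', 'III']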
Heart failure may also be categorized along other dimensions, such as:
The degree of coexisting illness: i.e. heart failure/systemic hypertension, heart failure/pulmonary hypertension, heart failure/diabetes, heart failure/kidney failure, etc.
Whether the problem is primarily increased venous back pressure (preload), or failure to supply adequate arterial perfusion (afterload)
Whether the abnormality is due to low cardiac output with high systemic vascular resistance or high cardiac output with low vascular resistance (low-output heart failure vs. high-output heart failure)
Histopathology
Histopathology can support a diagnosis of heart failure at autopsy. The presence of siderophages indicates chronic left-sided heart failure, but is not specific for it. Congestion of the pulmonary circulation is another indicator.
Prevention
A person's risk of developing heart failure is inversely related to their level of physical activity. Those who achieved at least 500 MET-minutes/week (the recommended minimum by U.S. guidelines) had a lower heart failure risk than individuals who did not report exercising during their free time; the reduction in heart failure risk was even greater in those who engaged in higher levels of physical activity than the recommended minimum.
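MET-minutes are simply activity intensity (in METs) multiplied by duration in minutes, summed over the week. The sketch below assumes roughly 3.3 METs for brisk walking, a common reference figure that is not taken from this article.

def met_minutes(activities):
    """Sum of (MET intensity x minutes) across a week's activities."""
    return sum(met * minutes for met, minutes in activities)

# Hypothetical week: 150 minutes of brisk walking (~3.3 METs)
weekly = met_minutes([(3.3, 150)])   # -> 495 MET-minutes
print(weekly, ">= 500 minimum met" if weekly >= 500 else "< 500 recommended minimum")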
Heart failure can also be prevented by lowering high blood pressure and high blood cholesterol and by controlling diabetes. Maintaining a healthy weight and decreasing sodium, alcohol, and sugar intake may help. Additionally, avoiding tobacco use has been shown to lower the risk of heart failure. According to Johns Hopkins and the American Heart Association, there are a few ways to help prevent a cardiac event. Johns Hopkins notes that stopping tobacco use, reducing high blood pressure, increasing physical activity, and improving diet can drastically affect the chances of developing heart disease. High blood pressure accounts for most cardiovascular deaths. It can be lowered into the normal range by dietary decisions such as consuming less salt; exercise also helps to bring blood pressure back down. One of the best ways to help avoid heart failure is to promote healthier eating habits, such as eating more vegetables, fruits, grains, and lean protein.

Diabetes is a major risk factor for heart failure. For women with coronary heart disease (CHD), diabetes was the strongest risk factor for heart failure. Diabetic women with depressed creatinine clearance or elevated BMI were at the highest risk of heart failure. While the annual incidence rate of heart failure for non-diabetic women with no risk factors is 0.4%, the annual incidence rates for diabetic women with elevated body mass index (BMI) or depressed creatinine clearance were 7% and 13%, respectively.
Management
Treatment focuses on improving the symptoms and preventing the progression of the disease. Reversible causes of heart failure also need to be addressed (e.g. infection, alcohol ingestion, anemia, thyrotoxicosis, arrhythmia, and hypertension). Treatments include lifestyle and pharmacological modalities, and occasionally various forms of device therapy. Rarely, cardiac transplantation is used as an effective treatment when heart failure has reached the end stage.
Acute decompensation
In acute decompensated heart failure, the immediate goal is to re-establish adequate perfusion and oxygen delivery to end organs. This entails ensuring that airway, breathing, and circulation are adequate. Immediate treatments usually involve some combination of vasodilators such as nitroglycerin, diuretics such as furosemide, and possibly noninvasive positive pressure ventilation. Supplemental oxygen is indicated in those with oxygen saturation levels below 90%, but is not recommended in those with normal oxygen levels breathing room air.
Chronic management
The goals of the treatment for people with chronic heart failure are the prolongation of life, prevention of acute decompensation, and reduction of symptoms, allowing for greater activity.
Heart failure can result from a variety of conditions. In considering therapeutic options, excluding reversible causes is of primary importance, including thyroid disease, anemia, chronic tachycardia, alcohol use disorder, hypertension, and dysfunction of one or more heart valves. Treatment of the underlying cause is usually the first approach to treating heart failure. In the majority of cases, though, either no primary cause is found or treatment of the primary cause does not restore normal heart function. In these cases, behavioral, medical and device treatment strategies exist that can provide a significant improvement in outcomes, including the relief of symptoms, exercise tolerance, and a decrease in the likelihood of hospitalization or death. Breathlessness rehabilitation for chronic obstructive pulmonary disease and heart failure has been proposed with exercise training as a core component. Rehabilitation should also include other interventions to address shortness of breath including psychological and educational needs of people and needs of caregivers. Iron supplementation appears useful in those with iron deficiency anemia and heart failure.
Advance care planning
The latest evidence indicates that advance care planning (ACP) may help to increase documentation by medical staff regarding discussions with participants and may reduce an individual's depression. This involves discussing an individual's future care plan in consideration of the individual's preferences and values. The findings are, however, based on low-quality evidence.
Monitoring
The various measures often used to assess the progress of people being treated for heart failure include fluid balance (calculation of fluid intake and excretion) and monitoring of body weight (which in the shorter term reflects fluid shifts). Remote monitoring can be effective in reducing complications for people with heart failure.
Lifestyle
Behavior modification is a primary consideration in chronic heart failure management programs, with dietary guidelines regarding fluid and salt intake. Fluid restriction is important to reduce fluid retention in the body and to correct hyponatremia. The evidence of benefit from reducing salt, however, was poor as of 2018.
Exercise and physical activity
Exercise should be encouraged and tailored to suit individuals' capabilities. A meta-analysis found that centre-based group interventions delivered by a physiotherapist are helpful in promoting physical activity in HF. There is a need for additional training for physiotherapists in delivering behaviour change interventions alongside an exercise programme. An intervention is expected to be more efficacious in encouraging physical activity than usual care if it includes prompts and cues to walk or exercise, like a phone call or a text message. It is extremely helpful if a trusted clinician provides explicit advice to engage in physical activity (credible source). Another highly effective strategy is to place objects that will serve as a cue to engage in physical activity in the everyday environment of the patient (adding an object to the environment; e.g., an exercise step or treadmill). Encouragement to walk or exercise in various settings beyond cardiac rehabilitation (e.g., home, neighbourhood, parks) is also promising (generalisation of target behaviour). Additional promising strategies are graded tasks (e.g., gradual increase in intensity and duration of exercise training), self-monitoring, monitoring of physical activity by others without feedback, action planning, and goal-setting. The inclusion of regular physical conditioning as part of a cardiac rehabilitation program can significantly improve quality of life and reduce the risk of hospital admission for worsening symptoms, but no evidence shows a reduction in mortality rates as a result of exercise. Despite cardiac rehabilitation being a recommended treatment for patients with heart failure with reduced ejection fraction (HFrEF), it remains underused. The reasons for this are complex and heterogeneous, and encompass healthcare system-, referring physician-, program-, and patient-level barriers. According to Alexandre et al., the main reasons for HFrEF patients not being enrolled in cardiac rehabilitation were no medical referral (31%), concomitant medical problems (28%), patient refusal (11%), and geographical distance to the hospital (9%). Furthermore, whether this evidence can be extended to people with HFpEF, or to those whose exercise regimen takes place entirely at home, is unclear.

Home visits and regular monitoring at heart-failure clinics reduce the need for hospitalization and improve life expectancy.
Medication
Quadruple medical therapy using a combination of angiotensin receptor-neprilysin inhibitors (ARNI), beta blockers, mineralocorticoid receptor antagonists (MRA), and sodium-glucose cotransporter-2 inhibitors (SGLT2 inhibitors) is the standard of care as of 2021.
First line medications
First-line therapy for people with heart failure due to reduced systolic function should include angiotensin-converting enzyme (ACE) inhibitors (ACE-I), or angiotensin receptor blockers (ARBs) if the person develops a long-term cough as a side effect of the ACE-I. Use of medicines from these classes is associated with improved survival, fewer hospitalizations for heart failure exacerbations, and improved quality of life in people with heart failure.

Beta-adrenergic blocking agents (beta blockers) also form part of the first line of treatment, adding to the improvement in symptoms and mortality provided by ACE-I/ARB. The mortality benefit of beta blockers in people with systolic dysfunction who also have atrial fibrillation is more limited than in those who do not have it. If the ejection fraction is not diminished (HFpEF), the benefits of beta blockers are more modest; a decrease in mortality has been observed, but a reduction in hospital admissions for uncontrolled symptoms has not.

In people who are intolerant of ACE-I and ARBs, or who have significant kidney dysfunction, the use of combined hydralazine and a long-acting nitrate, such as isosorbide dinitrate, is an effective alternative strategy. This regimen has been shown to reduce mortality in people with moderate heart failure. It is especially beneficial in the black population.

In people with symptomatic heart failure with markedly reduced ejection fraction (anyone with an ejection fraction of 35% or lower, or less than 40% following a heart attack), the use of a mineralocorticoid antagonist, such as spironolactone or eplerenone, in addition to beta blockers and ACE-I (once titrated to the target dose or maximum tolerated dose), can improve symptoms and reduce mortality.

The ARNI sacubitril/valsartan should be used in those who still have symptoms while on an ACE-I or ARB, a beta blocker, and a mineralocorticoid receptor antagonist, as it reduces the risks of cardiovascular mortality and hospitalisation for heart failure by a further 4.7% (absolute risk reduction). However, the use of this combination agent requires the cessation of ACE-I or ARB therapy 48 hours before its initiation.

SGLT2 inhibitors are the newest class of medications for heart failure.
Other medications
Second-line medications for CHF do not confer a mortality benefit. Digoxin is one such medication. Its narrow therapeutic window, a high degree of toxicity, and the failure of multiple trials to show a mortality benefit have reduced its role in clinical practice. It is now used in only a small number of people with refractory symptoms, who are in atrial fibrillation, and/or who have chronic hypotension.

Diuretics have been a mainstay of treatment against symptoms of fluid accumulation and include diuretic classes such as loop diuretics (such as furosemide), thiazide-like diuretics, and potassium-sparing diuretics. Although widely used, evidence on their efficacy and safety is limited, with the exception of mineralocorticoid antagonists such as spironolactone. Mineralocorticoid antagonists in those under 75 years old appear to decrease the risk of death.

Anemia is an independent factor in mortality in people with chronic heart failure. Treatment of anemia significantly improves quality of life for those with heart failure, often with a reduction in severity of the NYHA classification, and also improves mortality rates. The 2016 European Society of Cardiology guideline recommends screening for iron-deficiency anemia and treating with intravenous iron if deficiency is found.

The decision to anticoagulate people with HF, typically those with left ventricular ejection fractions <35%, is debated; in general, anticoagulation is reserved for people with coexisting atrial fibrillation, a prior embolic event, or conditions that increase the risk of an embolic event, such as amyloidosis, left ventricular noncompaction, familial dilated cardiomyopathy, or a thromboembolic event in a first-degree relative.

Vasopressin receptor antagonists can also be used to treat heart failure. Conivaptan is the first medication approved by the US Food and Drug Administration for the treatment of euvolemic hyponatremia in those with heart failure. In rare cases, hypertonic 3% saline together with diuretics may be used to correct hyponatremia.

Ivabradine is recommended for people with symptomatic heart failure with reduced left ventricular ejection fraction who are receiving optimized guideline-directed therapy (as above), including the maximum tolerated dose of beta blocker, who have a normal heart rhythm and continue to have a resting heart rate above 70 beats per minute. Ivabradine has been found to reduce the risk of hospitalization for heart failure exacerbations in this subgroup of people with heart failure.
Implanted devices
In people with severe cardiomyopathy (left ventricular ejection fraction below 35%), or in those with recurrent ventricular tachycardia or malignant arrhythmias, treatment with an automatic implantable cardioverter-defibrillator (AICD) is indicated to reduce the risk of severe life-threatening arrhythmias. The AICD does not improve symptoms or reduce the incidence of malignant arrhythmias, but does reduce mortality from those arrhythmias, often in conjunction with antiarrhythmic medications. In people with a left ventricular ejection fraction (LVEF) below 35%, the incidence of ventricular tachycardia or sudden cardiac death is high enough to warrant AICD placement. Its use is therefore recommended in AHA/ACC guidelines.

Cardiac contractility modulation (CCM) is a treatment for people with moderate to severe left ventricular systolic heart failure (NYHA class II–IV), which enhances both the strength of ventricular contraction and the heart's pumping capacity. The CCM mechanism is based on stimulation of the cardiac muscle by nonexcitatory electrical signals, which are delivered by a pacemaker-like device. CCM is particularly suitable for the treatment of heart failure with normal QRS complex duration (120 ms or less) and has been demonstrated to improve symptoms, quality of life, and exercise tolerance. CCM is approved for use in Europe and was approved by the Food and Drug Administration for use in the United States in 2019.

About one-third of people with LVEF below 35% have markedly altered conduction to the ventricles, resulting in dyssynchronous depolarization of the right and left ventricles. This is especially problematic in people with left bundle branch block (blockage of one of the two primary conducting fiber bundles that originate at the base of the heart and carry depolarizing impulses to the left ventricle). Using a special pacing algorithm, biventricular cardiac resynchronization therapy (CRT) can initiate a normal sequence of ventricular depolarization. In people with LVEF below 35% and prolonged QRS duration on ECG (left bundle branch block or QRS of 150 ms or more), an improvement in symptoms and mortality occurs when CRT is added to standard medical therapy. However, in the two-thirds of people without prolonged QRS duration, CRT may actually be harmful.
Surgical therapies
People with the most severe heart failure may be candidates for ventricular assist devices, which have commonly been used as a bridge to heart transplantation but have been used more recently as a destination treatment for advanced heart failure.

In select cases, heart transplantation can be considered. While this may resolve the problems associated with heart failure, the person must generally remain on an immunosuppressive regimen to prevent rejection, which has its own significant downsides. A major limitation of this treatment option is the scarcity of hearts available for transplantation.
Palliative care
People with heart failure often have significant symptoms, such as shortness of breath and chest pain. Palliative care should be initiated early in the HF trajectory and should not be an option of last resort. Palliative care can not only provide symptom management, but also assist with advance care planning, goals of care in the case of a significant decline, and making sure the person has a medical power of attorney and has discussed his or her wishes with this individual. Reviews from 2016 and 2017 found that palliative care is associated with improved outcomes, such as quality of life, symptom burden, and satisfaction with care.

Without transplantation, heart failure may not be reversible, and heart function typically deteriorates with time. The growing number of people with stage IV heart failure (intractable symptoms of fatigue, shortness of breath, or chest pain at rest despite optimal medical therapy) should be considered for palliative care or hospice, according to American College of Cardiology/American Heart Association guidelines.
Prognosis
Prognosis in heart failure can be assessed in multiple ways, including clinical prediction rules and cardiopulmonary exercise testing. Clinical prediction rules use a composite of clinical factors, such as laboratory tests and blood pressure, to estimate prognosis. Among several clinical prediction rules for prognosticating acute heart failure, the EFFECT rule slightly outperformed other rules in stratifying people and identifying those at low risk of death during hospitalization or within 30 days. Easy methods for identifying people who are at low risk are (see the sketch below):
ADHERE Tree rule indicates that people with blood urea nitrogen < 43 mg/dL and systolic blood pressure at least 115 mm Hg have less than 10% chance of inpatient death or complications.
BWH rule indicates that people with systolic blood pressure over 90 mm Hg, a respiratory rate of 30 or fewer breaths per minute, serum sodium over 135 mmol/L, and no new ST–T wave changes have less than a 10% chance of inpatient death or complications.

A very important method for assessing prognosis in people with advanced heart failure is cardiopulmonary exercise testing (CPX testing). CPX testing is usually required prior to heart transplantation as an indicator of prognosis. CPX testing involves measurement of exhaled oxygen and carbon dioxide during exercise. The peak oxygen consumption (VO2 max) is used as an indicator of prognosis. As a general rule, a VO2 max less than 12–14 cc/kg/min indicates poor survival and suggests that the person may be a candidate for a heart transplant. People with a VO2 max <10 cc/kg/min have a clearly poorer prognosis. The most recent International Society for Heart and Lung Transplantation guidelines also suggest two other parameters that can be used for evaluation of prognosis in advanced heart failure: the heart failure survival score, and the use of a criterion of VE/VCO2 slope > 35 from the CPX test. The heart failure survival score is calculated using a combination of clinical predictors and the VO2 max from the CPX test.
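A minimal sketch combining the two low-risk screens and the VO2 max thresholds quoted above; the function and parameter names are hypothetical, and a real implementation would validate inputs against the source studies.

def adhere_low_risk(bun_mg_dl, sbp_mm_hg):
    """ADHERE tree: BUN < 43 mg/dL and systolic BP >= 115 mmHg -> <10% inpatient risk."""
    return bun_mg_dl < 43 and sbp_mm_hg >= 115

def bwh_low_risk(sbp_mm_hg, resp_rate, sodium_mmol_l, new_st_t_changes):
    """BWH rule: SBP > 90, RR <= 30, Na > 135, and no new ST-T wave changes."""
    return (sbp_mm_hg > 90 and resp_rate <= 30
            and sodium_mmol_l > 135 and not new_st_t_changes)

def vo2_prognosis(vo2_max_cc_kg_min):
    """Rough CPX banding from the thresholds quoted above."""
    if vo2_max_cc_kg_min < 10:
        return "clearly poorer prognosis"
    if vo2_max_cc_kg_min < 14:
        return "poor survival; possible transplant candidate"
    return "better prognosis"

# Hypothetical patient values
print(adhere_low_risk(38, 120), bwh_low_risk(105, 22, 138, False), vo2_prognosis(11))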
Heart failure is associated with significantly reduced physical and mental health, resulting in a markedly decreased quality of life. With the exception of heart failure caused by reversible conditions, the condition usually worsens with time. Although some people survive many years, progressive disease is associated with an overall annual mortality rate of 10%.

Around 18 of every 1000 persons will experience an ischemic stroke during the first year after diagnosis of HF. As the duration of follow-up increases, the stroke rate rises to nearly 50 strokes per 1000 cases of HF by 5 years.
Epidemiology
In 2022, heart failure affected about 64 million people globally. Overall, around 2% of adults have heart failure; in those over the age of 65, this increases to 6–10%, and above 75 years old, rates are greater than 10%.

Rates are predicted to increase, mostly because of increasing lifespan, but also because of increased risk factors (hypertension, diabetes, dyslipidemia, and obesity) and improved survival rates from other types of cardiovascular disease (myocardial infarction, valvular disease, and arrhythmias). Heart failure is the leading cause of hospitalization in people older than 65.
United States
In the United States, heart failure affects 5.8 million people, and each year 550,000 new cases are diagnosed. In 2011, heart failure was the most common reason for hospitalization for adults aged 85 years and older, and the second-most common for adults aged 65–84 years. An estimated one in five adults at age 40 will develop heart failure during their remaining lifetime, and about half of people who develop heart failure die within 5 years of diagnosis. Rates of heart failure are much higher in African Americans, Hispanics, Native Americans, and recent immigrants from Eastern Bloc countries such as Russia. The high prevalence in these ethnic minority populations has been linked to a high incidence of diabetes and hypertension. In many new immigrants to the U.S., the high prevalence of heart failure has largely been attributed to a lack of preventive health care or substandard treatment. Nearly one in every four people (24.7%) hospitalized in the U.S. with congestive heart failure is readmitted within 30 days. Additionally, more than 50% of people are readmitted within 6 months after treatment, and the average duration of hospital stay is 6 days.
Heart failure is a leading cause of hospital readmissions in the U.S. People aged 65 and older were readmitted at a rate of 24.5 per 100 admissions in 2011. In the same year, people under Medicaid were readmitted at a rate of 30.4 per 100 admissions, and uninsured people were readmitted at a rate of 16.8 per 100 admissions. These are the highest readmission rates for both categories. Notably, heart failure was not among the top-10 conditions with the most 30-day readmissions among the privately insured.
United Kingdom
In the UK, despite moderate improvements in prevention, heart failure rates have increased due to population growth and ageing. Overall, heart failure rates are similar to those of the four most common causes of cancer (breast, lung, prostate, and colon) combined. People from deprived backgrounds are more likely to be diagnosed with heart failure, and at a younger age.
Developing world
In tropical countries, the most common cause of HF is valvular heart disease or some type of cardiomyopathy. As underdeveloped countries have become more affluent, the incidences of diabetes, hypertension, and obesity have increased, which have in turn raised the incidence of heart failure.
Sex
Men have a higher incidence of heart failure, but the overall prevalence rate is similar in both sexes since women survive longer after the onset of heart failure. Women tend to be older when diagnosed with heart failure (after menopause), they are more likely than men to have diastolic dysfunction, and seem to experience a lower overall quality of life than men after diagnosis.
Ethnicity
Some sources state that people of Asian descent are at a higher risk of heart failure than other ethnic groups. Other sources, however, have found that rates of heart failure are similar to those in other ethnic groups.
History
For centuries, the disease entity which would include many cases of what today would be called heart failure was dropsy; the term denotes generalized edema, a major manifestation of a failing heart, though also caused by other diseases. Writings of ancient civilizations include evidence of their acquaintance with dropsy and heart failure: Egyptians were the first to use bloodletting to relieve fluid accumulation and shortage of breath, and provided what may have been the first documented observations on heart failure in the Ebers papyrus (around 1500 BCE); Greeks described cases of dyspnea, fluid retention, and fatigue compatible with heart failure; Romans used the flowering plant Drimia maritima (sea squill), which contains cardiac glycosides, for the treatment of dropsy; descriptions pertaining to heart failure are also known in the civilizations of ancient India and China. However, the manifestations of failing heart were understood in the context of these peoples' medical theories – including ancient Egyptian religion, the Hippocratic theory of humours, and ancient Indian and Chinese medicine – and the current concept of heart failure had not yet developed. Although shortage of breath had been connected to heart disease by Avicenna around 1000 CE, decisive for the modern understanding of the nature of the condition were the description of pulmonary circulation by Ibn al-Nafis in the 13th century and of systemic circulation by William Harvey in 1628. The role of the heart in fluid retention began to be better appreciated as dropsy of the chest (fluid accumulation in and around the lungs causing shortage of breath) became more familiar, and the current concept of heart failure, which brings together swelling and shortage of breath due to fluid retention, began to be accepted in the 17th and especially in the 18th century: Richard Lower linked dyspnea and foot swelling in 1679, and Giovanni Maria Lancisi connected jugular vein distention with right ventricular failure in 1728. Dropsy attributable to other causes, e.g. kidney failure, was differentiated in the 19th century. The stethoscope, invented by René Laennec in 1819, X-rays, discovered by Wilhelm Röntgen in 1895, and electrocardiography, described by Willem Einthoven in 1903, facilitated the investigation of heart failure. The 19th century also saw experimental and conceptual advances in the physiology of heart contraction, which led to the formulation of the Frank–Starling law of the heart (named after physiologists Otto Frank and Ernest Starling), a remarkable advance in understanding the mechanisms of heart failure.

One of the earliest treatments of heart failure, relief of swelling by bloodletting with various methods, including leeches, continued through the centuries. Along with bloodletting, Jean-Baptiste de Sénac in 1749 recommended opiates for acute shortage of breath due to heart failure. In 1785, William Withering described the therapeutic uses of the foxglove genus of plants in the treatment of edema; their extract contains cardiac glycosides, including digoxin, still used today in the treatment of heart failure. The diuretic effects of inorganic mercury salts, which were used to treat syphilis, had already been noted in the 16th century by Paracelsus; in the 19th century they were used by noted physicians like John Blackall and William Stokes. In the meantime, cannulae (tubes) invented by English physician Reginald Southey in 1877 provided another method of removing excess fluid, by direct insertion into swollen limbs.
Use of organic mercury compounds as diuretics, beyond their role in syphilis treatment, started in 1920, though it was limited by their parenteral route of administration and their side-effects. Oral mercurial diuretics were introduced in the 1950s; so were thiazide diuretics, which caused less toxicity, and are still used today. Around the same time, invention of echocardiography by Inge Edler and Hellmuth Hertz in 1954 marked a new era in the evaluation of heart failure. In the 1960s, loop diuretics were added to available treatments of fluid retention, while a patient with heart failure received the first heart transplant by Christiaan Barnard. Over the following decades, new drug classes found their place in heart failure therapeutics, including vasodilators like hydralazine; renin-angiotensin system inhibitors; and beta-blockers.
Economics
In 2011, nonhypertensive heart failure was one of the 10 most expensive conditions seen during inpatient hospitalizations in the U.S., with aggregate inpatient hospital costs of more than $10.5 billion.

Heart failure is associated with high health expenditure, mostly because of the cost of hospitalizations; costs have been estimated to amount to 2% of the total budget of the National Health Service in the United Kingdom, and more than $35 billion in the United States.
Research directions
Some research indicates that stem cell therapy may help, although other research does not indicate a benefit. There is tentative evidence of longer life expectancy and improved left ventricular ejection fraction in persons treated with bone marrow-derived stem cells.
Notes
References
External links
Heart failure, American Heart Association – information and resources for treating and living with heart failure
Heart Failure Matters – patient information website of the Heart Failure Association of the European Society of Cardiology
Heart failure in children by Great Ormond Street Hospital, London, UK
"Heart Failure". MedlinePlus. U.S. National Library of Medicine.2022 AHA/ACC/HFSA Guideline for the Management of Heart Failure - Guideline Hub at American College of Cardiology, jointly with the American Heart Association and the Heart Failure Society of America. JACC article link, quick references, slides, perspectives, education, apps and tools, and patient resources. Apr 01, 2022
2021 ESC Guidelines for the diagnosis and treatment of acute and chronic heart failure - European Society of Cardiology resource webpage with links to Full Text and Related Materials, Scientific Presentation at ESC Congress 2021, news article, TV interview, slide set, and ESC Pocket Guidelines; plus previous versions. App. 27 Aug 2021 |
Ichthyosis acquisita | Ichthyosis acquisita is a disorder clinically and histologically similar to ichthyosis vulgaris.
Presentation
Associated conditions
The development of ichthyosis in adulthood can be a manifestation of systemic disease, and it has been described in association with malignancies, drugs, endocrine and metabolic disease, HIV, infection, and autoimmune conditions. It is usually associated with Hodgkin's disease, but it also occurs in people with mycosis fungoides, other malignant sarcomas, Kaposi's sarcoma, and visceral carcinomas. It can occur in people with leprosy, AIDS, tuberculosis, and typhoid fever.
See also
Ichthyosis
Confluent and reticulated papillomatosis of Gougerot and Carteaud
List of cutaneous conditions
References
== External links == |
Pervasive developmental disorder | The diagnostic category pervasive developmental disorders (PDD), as opposed to specific developmental disorders (SDD), is a group of disorders characterized by delays in the development of multiple basic functions including socialization and communication. The pervasive developmental disorders include autism, Asperger syndrome, pervasive developmental disorder not otherwise specified (PDD-NOS, i.e., all autism spectrum disorders [ASD]), childhood disintegrative disorder (CDD), overactive disorder associated with mental retardation and stereotyped movements, and Rett syndrome. The first four of these disorders are commonly called the autism spectrum disorders; the last disorder is much rarer, and is sometimes placed in the autism spectrum and sometimes not.

The terminology PDD and ASD is often used interchangeably and varies depending on location. The two have overlapping definitions but are defined differently by the Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-V), and the International Classification of Diseases, 10th edition (ICD-10). DSM-V removed PDD as a diagnosis and replaced it with ASD and the relative severity of the condition. ICD-10, on the other hand, labels ASD as a pervasive developmental disorder with the subtypes previously mentioned.

The onset of pervasive developmental disorders occurs during infancy, but the condition is usually not identified until the child is around three years old. Parents may begin to question the health of their child when developmental milestones are not met, including age-appropriate motor movement and speech production.

There is a division among doctors on the use of the term PDD. Many use the term PDD as a short way of saying PDD-NOS (pervasive developmental disorder not otherwise specified). Others use the general category label of PDD because they are hesitant to diagnose very young children with a specific type of PDD, such as autism. Both approaches contribute to confusion about the term, because the term PDD actually refers to a category of disorders and is not a diagnostic label.
Signs and symptoms
Symptoms of PDD may include behavioral and communication problems such as:
Difficulty using and understanding language
Difficulty relating to people, objects, and events; for example, lack of eye contact, pointing behavior, and lack of facial responses
Unusual play with toys and other objects.
Paranoia, a characteristic form of social anxiety, derealization, transient psychosis, and unconventional beliefs if the environment or routine is changed without notice
Repetitive body movements or behavior patterns, such as hand flapping, hair twirling, foot tapping, or more complex movements
Difficulty regulating behaviors and emotions, which may result in temper tantrums, anxiety, and aggression
Emotional breakdowns
Delusional or unconventional perception of the world
Maladaptive daydreaming
Mirrored-self misidentification – the delusion that one's reflection in the mirror is a different individual, for example perceiving the reflection as a child even though one is an older teen or an adult
Degrees
Children with PDD vary widely in abilities, intelligence, and behaviors. Some children do not speak at all, others speak in limited phrases or conversations, and some have relatively normal language development. Repetitive play skills and limited social skills are generally evident. Unusual responses to sensory information – loud noises, lights – are common.
Diagnosis
Diagnosis is usually made during early childhood. With the release of the Diagnostic and Statistical Manual of Mental Disorders–5th Edition (DSM-V) in May 2013, the diagnosis of PDD was removed and replaced with autism spectrum disorders. Distinction between the past disorders is implicated by a series of severity levels. Individuals who received diagnoses based on the DSM-IV maintain their diagnosis under the autism spectrum disorders. However, an editorial published in the October 2012 issue of the American Journal of Psychiatry notes that, while some doctors argue that there is insufficient evidence to support the diagnostic distinction between ASD and PDD, multiple literature reviews found that studies showing significant differences between the two disorders significantly outnumbered those that found no difference.

Unlike the DSM-V, the World Health Organization's International Classification of Diseases, 10th edition (ICD-10), categorizes PDD into four distinct subtypes, each with its own diagnostic criteria. The four disorders (childhood autism, atypical autism, Rett syndrome, and other childhood disintegrative disorder) are characterized by abnormalities in social interactions and communication. The disorders are primarily diagnosed based on behavioral features; although the presence of any medical conditions is important, medical conditions are not taken into account when making a diagnosis.

Before the release of the DSM-V, some clinicians used PDD-NOS as a "temporary" diagnosis for children under the age of five when, for whatever reason, they were reluctant to diagnose autism. There are several justifications for this. Very young children have limited social interaction and communication skills to begin with, so it can be tricky to diagnose milder cases of autism in toddlers. The unspoken assumption is that by the age of five, unusual behaviors will either resolve or develop into diagnosable autism. However, some parents view the PDD label as no more than a euphemism for autism spectrum disorders, problematic because this label makes it more difficult to receive aid for early childhood intervention.
Classification
The pervasive developmental disorders were:
Pervasive developmental disorder not otherwise specified (PDD-NOS), which includes atypical autism, and is the most common (47% of autism diagnoses);
Typical autism, the best-known;
Asperger syndrome (9% of autism diagnoses);
Rett syndrome; and
Childhood disintegrative disorder (CDD).

The first three of these disorders are commonly called the autism spectrum disorders; the last two disorders are much rarer, and are sometimes placed in the autism spectrum and sometimes not.

In May 2013, the Diagnostic and Statistical Manual–5th Edition (DSM-V) was released, updating the classification for pervasive developmental disorders. The grouping of disorders, including PDD-NOS, autism, Asperger syndrome, Rett syndrome, and CDD, has been removed and replaced with the general term of autism spectrum disorders. The American Psychiatric Association has concluded that using the general diagnosis of ASD supports more accurate diagnoses. The combination of these disorders was also fueled by the standpoint that autism is characterized by common symptoms and should therefore bear a single diagnostic term. In order to distinguish between the different disorders, the DSM-V employs severity levels. The severity levels take into account required support, restricted interests and repetitive behaviors, and deficits in social communication.
PDD and PDD-NOS
There is a division among doctors on the use of the term PDD. Many use the term PDD as a short way of saying PDD-NOS. Others use the general category because the term PDD actually refers to a category of disorders and is not a diagnostic label.

PDD is not itself a diagnosis, while PDD-NOS is a diagnosis. To further complicate the issue, PDD-NOS can also be referred to as "atypical personality development", "atypical PDD", or "atypical autism".
Behavior
An association between high-functioning autism (HFA) and criminal behavior is not completely characterized. Several studies have shown that the features associated with HFA may increase the probability of engaging in criminal behavior. While a great deal of research still needs to be done in this area, recent studies on the correlation between HFA and criminal actions suggest that there is a need to understand the attributes of HFA that may lead to violent behavior. Several case studies have linked the lack of empathy and social naïveté associated with HFA to criminal actions.

More research is still needed on the link between HFA and crime, because most other studies point out that people with ASD are ten times more likely to be victims of crime and five times less likely to commit crimes than the general population. However, there are also small subgroups of people with low-functioning autism who commit crimes because of a lack of understanding of the law.
Treatment
Medications are used to address certain behavioral problems; therapy for children with PDD should be specialized according to the child's specific needs.

Some children with PDD benefit from specialized classrooms in which the class size is small and instruction is given on a one-to-one basis. Others function well in standard special education classes or regular classes with support. Early intervention, including appropriate and specialized educational programs and support services, plays a critical role in improving the outcome of individuals with PDD.
See also
Infantile neuroaxonal dystrophy
Multiple complex developmental disorder
Multisystem developmental disorder
Overactive disorder associated with mental retardation and stereotyped movements
References
External links
CDCs "Learn the Signs. Act Early." campaign - Information for parents on early childhood development and developmental disabilities
NINDS Pervasive Developmental Disorders Information Page
Polymyositis | Polymyositis (PM) is a type of chronic inflammation of the muscles (inflammatory myopathy) related to dermatomyositis and inclusion body myositis. Its name means "inflammation of many muscles" (poly- + myos- + -itis). The inflammation of polymyositis is mainly found in the endomysial layer of skeletal muscle, whereas dermatomyositis is characterized primarily by inflammation of the perimysial layer of skeletal muscles.
Signs and symptoms
The hallmark of polymyositis is weakness and/or loss of muscle mass in the proximal musculature, as well as weakness of neck and torso flexion. These symptoms can be associated with marked pain in these areas as well. The hip extensors are often severely affected, leading to particular difficulty in climbing stairs and rising from a seated position. The skin involvement of dermatomyositis is absent in polymyositis. Dysphagia (difficulty swallowing) or other problems with esophageal motility occur in as many as one-third of patients. Low-grade fever and enlarged lymph nodes may be present. Foot drop in one or both feet can be a symptom of advanced polymyositis and inclusion body myositis. The systemic involvement of polymyositis includes interstitial lung disease (ILD) and heart disease, such as heart failure and conduction abnormalities.

Polymyositis tends to become evident in adulthood, presenting with bilateral proximal muscle weakness often noted in the upper legs due to early fatigue while walking. Sometimes the weakness presents itself as an inability to rise from a seated position without help or an inability to raise one's arms above one's head. The weakness is generally progressive, accompanied by lymphocytic inflammation (mainly cytotoxic T cells).
Associated illnesses
Polymyositis and the associated inflammatory myopathies carry an increased risk of cancer. Features associated with an increased risk of cancer include older age (greater than 45), male sex, difficulty swallowing, death of skin cells, cutaneous vasculitis, rapid onset of myositis (<4 weeks), elevated creatine kinase, higher erythrocyte sedimentation rate, and higher C-reactive protein levels. Several factors were associated with lower-than-average risk, including the presence of interstitial lung disease, joint inflammation/joint pain, Raynaud's syndrome, or anti-Jo-1 antibody. The associated malignancies include nasopharyngeal cancer, lung cancer, non-Hodgkin lymphoma, and bladder cancer, amongst others.

Cardiac involvement manifests itself typically as heart failure and is present in up to 77% of patients.
Interstitial lung disease is found in up to 65% of patients with polymyositis, as defined by HRCT or restrictive ventilatory defects compatible with interstitial lung disease.
Causes
Polymyositis is an inflammatory myopathy mediated by cytotoxic T cells with an as yet unknown autoantigen, while dermatomyositis is a humorally mediated angiopathy resulting in myositis and a typical dermatitis.

The cause of polymyositis is unknown and may involve viruses and autoimmune factors. Cancer may trigger polymyositis and dermatomyositis, possibly through an immune reaction against cancer that also attacks a component of muscles. There is tentative evidence of an association with celiac disease.
Diagnosis
Diagnosis is fourfold: history and physical examination, elevation of creatine kinase, electromyograph (EMG) alteration, and a positive muscle biopsy.

The hallmark clinical feature of polymyositis is proximal muscle weakness, with less important findings being muscle pain and dysphagia. Cardiac and pulmonary findings are present in approximately 25% of cases.

Sporadic inclusion body myositis (sIBM) is often misdiagnosed as polymyositis or dermatomyositis, but the two can be differentiated: myositis that does not respond to treatment is likely IBM. sIBM comes on over months to years; polymyositis comes on over weeks to months. Polymyositis tends to respond well to treatment, at least initially; IBM does not.
Treatment
The first line treatment for polymyositis is corticosteroids. Specialized exercise therapy may supplement treatment to enhance quality of life.
Epidemiology
Polymyositis strikes females with greater frequency than males.
Polymyositis as a distinct diagnosis
The discovery of several myositis-specific autoantibodies during the past decades has enabled the description of other discrete diagnostic subsets; in particular, the description of antisynthetase syndrome has reduced the number of diagnoses of polymyositis.
Society and culture
Notable cases
Dan Christensen, painter of abstract art. Died due to heart failure caused by polymyositis.
Robert Erickson, American composer and teacher who was a leading modernist exponent of "12-tone" composition. Died from the effects of polymyositis.
David Lean, film director.
Eric Samuelsen, playwright.
Victor Manuel Resendiz Ruiz, wrestler.
Cardinal John Wright
See also
Limb girdle syndrome
References
== External links == |
Trichuriasis | Trichuriasis, also known as whipworm infection, is an infection by the parasitic worm Trichuris trichiura (whipworm). If infection is only with a few worms, there are often no symptoms. In those who are infected with many worms, there may be abdominal pain, fatigue, and diarrhea. The diarrhea sometimes contains blood. Infections in children may cause poor intellectual and physical development. Low red blood cell levels may occur due to loss of blood.

The disease is usually spread when people eat food or drink water that contains the eggs of these worms. This may occur when contaminated vegetables are not fully cleaned or cooked. Often these eggs are in the soil in areas where people defecate outside and where untreated human feces is used as fertilizer. These eggs originate from the feces of infected people. Young children playing in such soil and putting their hands in their mouths also become infected easily. The worms live in the large bowel and are about four centimetres in length. Whipworm is diagnosed by seeing the eggs when examining the stool with a microscope. Eggs are barrel-shaped. Trichuriasis belongs to the group of soil-transmitted helminthiases.

Prevention is by properly cooking food and hand washing before cooking. Other measures include improving access to sanitation, such as ensuring use of functional and clean toilets, and access to clean water. In areas of the world where the infections are common, often entire groups of people will be treated all at once and on a regular basis. Treatment is with three days of medication: albendazole, mebendazole, or ivermectin. People often become infected again after treatment.

Whipworm infection affected about 464 million people in 2015. It is most common in tropical countries. Those infected with whipworm often also have hookworm and ascariasis infections. These diseases have a large effect on the economy of many countries. Work is ongoing to develop a vaccine against the disease. Trichuriasis is classified as a neglected tropical disease.
Signs and symptoms
Light infestations (<100 worms) frequently have no symptoms. Heavier infestations, especially in small children, can present gastrointestinal problems, including abdominal pain and distension, bloody or mucus-filled diarrhea, and tenesmus (a feeling of incomplete defecation, generally accompanied by involuntary straining). Mechanical damage to the intestinal mucosa may occur, as well as toxic or inflammatory damage to the intestines of the host. While appendicitis may be brought on by damage and edema of the adjacent tissue, if there are large numbers of worms or larvae present, it has been suggested that the embedding of the worms into the ileocecal region may also make the host susceptible to bacterial infection. A severe infection with high numbers of embedded worms in the rectum leads to edema, which can cause rectal prolapse, although this is typically only seen in small children. The prolapsed, inflamed, and edematous rectal tissue may even show visible worms.

Physical growth delay, weight loss, nutritional deficiencies, and anemia (due to long-standing blood loss) are also characteristic of infection, and these symptoms are more prevalent and severe in children. The infection does not commonly cause eosinophilia.

Coinfection of T. trichiura with other parasites is common and with larger worm burdens can cause both exacerbation of dangerous trichuriasis symptoms such as massive gastrointestinal bleeding (shown to be especially dramatic with coinfection with Salmonella typhi) and exacerbation of symptoms and pathogenesis of the other parasitic infection (as is typical with coinfection with Schistosoma mansoni, in which higher worm burden and liver egg burden is common). Parasitic coinfection with HIV/AIDS, tuberculosis, and malaria is also common, especially in sub-Saharan Africa, and helminth coinfection adversely affects the natural history and progression of HIV/AIDS, tuberculosis, and malaria and can increase clinical malaria severity. In a study performed in Senegal, infections with soil-transmitted helminths like T. trichiura (as well as schistosome infections independently) showed enhanced risk and increased the incidence of malaria.

Heavy infestations may cause bloody diarrhea. Long-standing blood loss may lead to iron-deficiency anemia. Vitamin A deficiency may also result from infection.
Cause
Trichuriasis is caused by a parasitic worm (a helminth) called Trichuris trichiura. It belongs to the genus Trichuris, formerly known as Trichocephalus, meaning "hair head", which would be a more accurate name; however, the accepted generic name is now Trichuris, which means "hair tail" (implying that the posterior end of the worm is the attenuated section). Infection by parasitic worms is known as helminthiasis.
Reservoir
Humans are the main, but not the only, reservoir for T. trichiura. Recent research using molecular techniques (PCR) has verified that dogs are a reservoir for T. trichiura, as well as for T. vulpis.
Vector
Non-biting cyclorrhaphan flies (Musca domestica, M. sorbens, Chrysomya rufifacies, C. bezziana, Lucilia cuprina, Calliphora vicina and Wohlfahrtia magnifica) have been found to carry Trichuris trichiura. A study in two localized areas in Ethiopia found cockroaches to be carriers of several human intestinal parasites, including T. trichiura.
Transmission
Humans can become infected with the parasite by ingesting infective eggs, through mouth contact with hands or food contaminated with egg-carrying soil. However, there have also been rare reported cases of transmission of T. trichiura by sexual contact. Some major outbreaks have been traced to contaminated vegetables (due to presumed soil contamination).
Life cycle
Unembryonated (unsegmented) eggs are passed in the feces of a previous host to the soil. In the soil, these eggs develop into a 2-cell stage (segmented egg) and then into an advanced cleavage stage. Once at this stage, the eggs embryonate and become infective, a process that takes about 15 to 30 days. Next, the infective eggs are ingested by way of soil-contaminated hands or food and hatch inside the small intestine, releasing larvae into the gastrointestinal tract. These larvae burrow into a villus and develop into adults (over 2–3 days). They then migrate to the cecum and ascending colon, where they thread their anterior portion (the whip-like end) into the tissue mucosa and reside permanently for their year-long lifespan. About 60 to 70 days after infection, female adults begin to release unembryonated eggs (oviposit) into the cecum at a rate of 3,000 to 20,000 eggs per day, returning the life cycle to its start.
Incubation period
The exact incubation period of T. trichiura is unknown; however, immature eggs in soil under favorable conditions take about three weeks to mature (15–30 days, with a minimum of 10 days) before they are infective to the human host. Favorable conditions for maturation of eggs are warm to temperate climates with adequate humidity or precipitation, as ova are resistant to cold but not to drying.
Once ingested, the larva will remain dug into a villus in the small intestine for about 2–3 days until it is fully developed for migration to the ileocecal section of the gastrointestinal tract.
The average total life span of T. trichiura is one year, although longer cases have been reported, lasting as long as five years; inadequate treatment and re-infection are likely to play a role in such cases.
Morphology
Adult worms are usually 3–5 centimetres (1.2–2.0 in) long, with females being larger than males, as is typical of nematodes. The thin, clear majority of the body (the anterior, whip-like end) is the esophagus, and it is the end that the worm threads into the mucosa of the colon. The widened, pinkish-gray region of the body is the posterior, and it is the end that contains the parasite's intestines and reproductive organs.
T. trichiura eggs are prolate spheroids, the shape of the balls used in rugby and gridiron football. They are about 50–54 μm (0.0020–0.0021 in) long and have polar plugs (also known as refractile prominences) at each end.
Diagnosis
A stool ova and parasites exam reveals the presence of typical whipworm eggs, which appear barrel-shaped and unembryonated, with bipolar plugs and a smooth shell. Typically, the Kato-Katz thick-smear technique is used to identify Trichuris trichiura eggs in the stool sample; the eggs often appear larger and more swollen on Kato-Katz preparation than with other techniques. Although colonoscopy is not typically used for diagnosis, as the adult worms can be overlooked, especially when preparation of the colon is imperfect, there have been reported cases in which colonoscopy revealed adult worms. Colonoscopy can directly diagnose trichuriasis by identifying the threadlike form of worms with an attenuated, whip-like end, and it has been shown to be a useful diagnostic tool, especially in patients infected with only a few male worms and with no eggs present in the stool sample. Rectal prolapse can be diagnosed easily using a defecating proctogram, one of several methods for imaging the parasitic infection. Sigmoidoscopy may show the characteristic white bodies of adult worms hanging from inflamed mucosa ("coconut cake rectum").
Prevention
Deworming
Limited access to essential medicines poses a challenge to the eradication of trichuriasis worldwide. It is also a public health concern that rates of post-treatment re-infection be determined and addressed to diminish the incidence of untreated re-infection. Lastly, as mass drug administration strategies, improved diagnosis, and prompt treatment become widespread, the emergence of anthelmintic drug resistance should be watched for. Mass drug administration (preventive chemotherapy) has had a positive effect on the disease burden of trichuriasis in East and West Africa, especially among children, who are at highest risk for infection.
Sanitation
Infection can be avoided by proper disposal of human feces, avoiding fecal contamination of food, not eating soil, and avoiding crops fertilized with untreated human feces. Simple and effective hygiene, such as washing hands and food, is recommended for control. Handwashing before food handling and thorough washing of food that may have been contaminated with egg-containing soil are other preventive measures; in addition to washing, it is also advisable to peel and/or cook fruits and vegetables. Improvement of sanitation systems and improved facilities for feces disposal have helped to limit defecation onto soil, contain potentially infectious feces away from bodily contact, and decrease the incidence of whipworm. A study in a Brazilian urban centre demonstrated a significant reduction in the prevalence and incidence of soil-transmitted helminthiasis, including trichuriasis, following implementation of a citywide sanitation program: a 33% reduction in the prevalence of trichuriasis and a 26% reduction in its incidence were found in a study of 890 children aged 7–14 years within 24 sentinel areas chosen to represent the varied environmental conditions throughout the city of Salvador, Bahia, Brazil. Control of soil fertilizers has helped eliminate the potential for contact between human fecal matter and fertilizer in the soil.
Treatment
Trichuriasis is treated with benzimidazole anthelmintic agents such as albendazole or mebendazole, sometimes in conjunction with other medications.
Mebendazole is 90% effective with the first dose. Higher clearance rates can be obtained by combining mebendazole or albendazole with ivermectin. The safety of ivermectin in children weighing under 15 kg (33 lb) and in pregnant women has not yet been established.
In people with diarrhea, loperamide may be added to increase the contact time between anthelmintic agents and the parasites. Oral iron supplementation may be useful in treating the iron-deficiency anemia which often accompanies trichuriasis.
Epidemiology
Regions
Infection of T. trichiura is most frequent in areas with tropical weather and poor sanitation practices. Trichuriasis occurs frequently in areas in which untreated human feces is used as fertilizer or where open defecation takes place. Trichuriasis infection prevalence is 50 to 80 percent in some regions of Asia (noted especially in China and Korea) and also occurs in rural areas of the southeastern United States.
Infection estimates
T. trichiura is the third most common nematode (roundworm) infecting humans. Infection is most prevalent among children, and in North America, infection occurs frequently in immigrants from tropical or sub-tropical regions. It is estimated that 600–800 million people are infected worldwide, with 3.2 billion individuals at risk because they live in regions where this intestinal worm is common.
History
The first written record of T. trichiura was made by the Italian anatomist Giovanni Battista Morgagni, who identified the presence of the parasite in a case of worms residing in the colon in 1740. An exact morphological description and accurate drawings were first recorded in 1761 by Johann Georg Roederer, a German physician. Soon after, the name Trichuris trichiura was given to this species.
Synonyms
Human whipworm, trichocephaliasis, and tricuriasis are all synonyms for trichuriasis, human infection with the T. trichiura intestinal nematode. In Spanish, trichuriasis is called tricuriasis; it is known as trichuriose in French and Peitschenwurmbefall in German.
Research
Development of subunit vaccines requires the identification of protective antigens and their formulation with a suitable adjuvant to stimulate the immune response appropriately. Trichuris muris is an antigenically similar laboratory model for T. trichiura: subcutaneous vaccination with adult excretory–secretory (ES) products protects susceptible mouse strains from T. muris. Larval stages may contain novel and more relevant antigens which, when incorporated in a vaccine, induce worm expulsion earlier in infection than the adult worm products. Nematode vaccines marketed to date have been of the irradiated larval type and used exclusively for the treatment of animals; these vaccines are not stable and require annual production, involving the yearly production and sacrifice of donor animals for passage. There has been much interest in the production of subunit vaccines against human and agricultural parasites since the early 1980s.
References
== External links == |
Somnolence | Somnolence (alternatively sleepiness or drowsiness) is a state of strong desire for sleep, or sleeping for unusually long periods (compare hypersomnia). It has distinct meanings and causes. It can refer to the usual state preceding falling asleep, the condition of being in a drowsy state due to circadian rhythm disorders, or a symptom of other health problems. It can be accompanied by lethargy, weakness and lack of mental agility. Somnolence is often viewed as a symptom rather than a disorder by itself. However, the concept of somnolence recurring at certain times for certain reasons constitutes various disorders, such as excessive daytime sleepiness, shift work sleep disorder, and others; and there are medical codes for somnolence as viewed as a disorder.
Sleepiness can be dangerous when performing tasks that require constant concentration, such as driving a vehicle. When a person is sufficiently fatigued, microsleeps may be experienced. In individuals deprived of sleep, somnolence may spontaneously dissipate for short periods of time; this phenomenon is known as the second wind, and results from the normal cycling of the circadian rhythm interfering with the processes the body carries out to prepare itself to rest.
The word "somnolence" is derived from the Latin "somnus" meaning "sleep".
Causes
Circadian rhythm disorders
Circadian rhythm ("biological clock") disorders are a common cause of drowsiness, as are a number of other conditions such as sleep apnea, insomnia and narcolepsy. Body clock disorders are classified as extrinsic (externally caused) or intrinsic. The former type includes, for example, shift work sleep disorder, which affects people who work nights or rotating shifts. The intrinsic types include:
Advanced sleep phase disorder (ASPD) – A condition in which patients feel very sleepy and go to bed early in the evening and wake up very early in the morning
Delayed sleep phase disorder (DSPD) – Faulty timing of sleep, peak period of alertness, core body temperature rhythm, and hormonal and other daily cycles, such that they occur a number of hours late compared to the norm; often misdiagnosed as insomnia
Non-24-hour sleep–wake disorder – A faulty body clock and sleep–wake cycle that is usually longer than (rarely shorter than) the normal 24-hour period, causing complaints of insomnia and excessive sleepiness
Irregular sleep–wake rhythm – Numerous naps throughout the 24-hour period, no main nighttime sleep episode and irregularity from day to day
Physical illness
Sleepiness can also be a response to infection. Such somnolence is one of several sickness behaviors or reactions to infection that some theorize evolved to promote recovery by conserving energy while the body fights the infection using fever and other means. Other causes include:
Anxiety
Brain tumor
Chronic pain
Concussion – a mild traumatic brain injury
Diabetes
Fibromyalgia
Head injury
Hypercalcemia – too much calcium in the blood
Hypermagnesemia
Hyponatremia – low blood sodium
Hypothyroidism – the body doesn't produce enough hormones that control how cells use energy
Meningitis
Mood disorders – depression
Multiple sclerosis
Narcolepsy – disorder of the nervous system
Skull fractures
Sleeping sickness – caused by a specific parasite
Stress
Medicine
Analgesics – mostly prescribed or illicit opiates such as OxyContin or heroin
Anticonvulsants / antiepileptics – such as phenytoin (Dilantin), carbamazepine (Tegretol), pregabalin (Lyrica) and gabapentin
Antidepressants – for instance, sedating tricyclic antidepressants and mirtazapine. Somnolence is less common with SSRIs and SNRIs as well as MAOIs.
Antihistamines – for instance, diphenhydramine (Benadryl, Nytol) and doxylamine (Unisom-2)
Antipsychotics – for example, lurasidone (Latuda), thioridazine, quetiapine (Seroquel), olanzapine (Zyprexa), risperidone and ziprasidone (Geodon) but not haloperidol
Dopamine agonists used in the treatment of Parkinson's disease – e.g. pergolide, ropinirole and pramipexole.
HIV medications – such as efavirenz
Hypertension medications – such as amlodipine
Hypnotics, or soporific drugs, commonly known as sleeping pills.
Tranquilizers – such as zopiclone (Zimovane), or the benzodiazepines such as diazepam (Valium) or nitrazepam (Mogadon) and the barbiturates, such as amobarbital (Amytal) or secobarbital (Seconal)
Other agents impacting the central nervous system in sufficient or toxic doses
Assessment
Quantifying sleepiness requires a careful assessment. The diagnosis depends on two factors, namely chronicity and reversibility. Chronicity signifies that the patient, unlike healthy people, experiences persistent sleepiness that does not pass. Reversibility refers to the fact that even if the individual goes to sleep, the sleepiness may not be completely gone after waking up. The problem with assessment is that patients may report only the consequences of sleepiness: loss of energy, fatigue, weariness, difficulty remembering or concentrating, etc. It is therefore crucial to aim for objective measures to quantify the sleepiness. A good measurement tool is the multiple sleep latency test (MSLT), which assesses sleep onset latency during the course of one day, often from 8:00 to 16:00. An average sleep onset latency of less than 5 minutes is an indication of pathological sleepiness.
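To make the MSLT criterion concrete, here is a minimal Python sketch; the nap latencies are hypothetical values invented for illustration, and the 5-minute cutoff is the one stated above.

```python
def mean_sleep_latency(latencies_min):
    """Average sleep onset latency across MSLT nap opportunities, in minutes."""
    return sum(latencies_min) / len(latencies_min)

# Hypothetical latencies from four daytime nap opportunities:
naps = [3.0, 5.5, 2.0, 4.5]
msl = mean_sleep_latency(naps)
print(msl)       # -> 3.75
print(msl < 5)   # -> True: indicates pathological sleepiness per the text
```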
Severity
A number of diagnostic tests, including the Epworth Sleepiness Scale, are available to help ascertain the seriousness and likely causes of abnormal somnolence.
Treatment
Somnolence is a symptom, so the treatment will depend on its cause. If the cause is the patient's behavior and life choices (such as working long hours, smoking, or mental state), it may help to get plenty of rest and remove distractions. It is also important to investigate what is causing the problem, such as stress or anxiety, and to take steps to reduce it.
See also
References
== External links == |
Unicornuate uterus | A unicornuate uterus represents a uterine malformation in which the uterus is formed from only one of the paired Müllerian ducts, while the other Müllerian duct does not develop, or develops only in a rudimentary fashion. The result, sometimes called a hemi-uterus, has a single horn linked to the ipsilateral fallopian tube that faces its ovary.
Signs and symptoms
Women with the condition may be asymptomatic and unaware of having a unicornuate uterus; normal pregnancy may occur. In a review of the literature, Reichman et al. analyzed the data on pregnancy outcome of 290 women with a unicornuate uterus. 175 women had conceived, for a total of 468 pregnancies. They found that about 50% of patients delivered a live baby. The rate of ectopic pregnancy was 2.7%, of miscarriage 34%, and of preterm delivery 20%, while the intrauterine demise rate was 10%. Thus patients with a unicornuate uterus are at higher risk for pregnancy loss and obstetrical complications.
Cause
The uterus is normally formed during embryogenesis by the fusion of the two Müllerian ducts. If one of the ducts does not develop, only the other Müllerian duct contributes to uterine development. This uterus may or may not be connected to a Müllerian structure on the opposite side if the Müllerian duct on that side undergoes some development. A unicornuate uterus has a single cervix and vagina.
Associated defects may affect the renal system and, less commonly, the skeleton. The condition is much less common than other uterine malformations such as arcuate uterus, septate uterus, and bicornuate uterus: while uterus didelphys is estimated to occur in 1 in 3,000 women, the unicornuate uterus appears to be even less frequent, with an estimated occurrence of about 1 in 4,000.
Diagnosis
A pelvic examination will typically reveal a single vagina and a single cervix. Investigations are usually prompted on the basis of reproductive problems. Helpful techniques to investigate the uterine structure are transvaginal ultrasonography and sonohysterography, hysterosalpingography, MRI, and hysteroscopy. More recently, 3-D ultrasonography has been advocated as an excellent non-invasive method to evaluate uterine malformations.
Rudimentary horn
A unicornuate uterus may be associated with a rudimentary horn on the opposite side. This horn may communicate with the uterus and be linked to the ipsilateral tube. Occasionally a pregnancy implants in such a horn, setting up a dangerous situation, as such a pregnancy can lead to potentially fatal uterine rupture. Surgical resection of the horn is indicated.
Management
Patients with a unicornuate uterus may need special attention during pregnancy as miscarriage, fetal demise, premature birth, and malpresentation are more common. It is unproven that cerclage procedures are helpful.
A pregnancy in a rudimentary horn cannot be saved and needs to be removed, together with the horn, to prevent a potentially fatal rupture of the horn and uterus. Although it is unclear whether interventions before conception or early in pregnancy, such as resection of the rudimentary horn and prophylactic cervical cerclage, decidedly improve obstetrical outcomes, current practice suggests that such interventions may be helpful.
References
== External links == |
Intermittent hydrarthrosis | Intermittent hydrarthrosis (IH), also known as periodic synoviosis, periodic benign synovitis, or periodic hydrarthritis, is a chronic condition of unknown cause characterized by recurring, temporary episodes of fluid accumulation (effusion) in the knee. While the knee is mainly involved, other joints such as the elbow or ankle can occasionally also be affected. Fluid accumulation in the joint can be extensive, causing discomfort and impairing movement, although affected joints are not usually very painful. While the condition is chronic, it does not appear to progress to more destructive damage of the joint. It seems to affect slightly more women than men.
Episodes of swelling last several days or longer, can occur with regular or semi-regular frequency, typically one or two episodes per month. Between periods of effusion, knee swelling reduces dramatically providing largely symptomless intervals. Unlike some other rheumatological conditions such as rheumatoid arthritis, laboratory findings are usually within normal ranges or limits.
Clear treatment options have yet to be established. NSAIDs and COX-2 inhibitors are generally not effective. Where this condition has been correctly diagnosed, various anti-rheumatic drugs as well as colchicine may be trialled to find the most effective option. More aggressive intra-articular treatment such as chemical or radioactive synovectomy can also be helpful, although benefits beyond 1 year have not been reported in the literature.
Signs and symptoms
Repeated, periodic joint effusions of the knee. Usually one knee is affected, but sometimes both knees. Other joints may also be involved along with the knee. Effusions are large, restricting range of motion, but significant pain is not a feature. There is usually stiffness. Tenderness of the joint may or may not be present. Aspirated synovial fluid is usually sterile but will sometimes show an elevated cell count (>100 cells/mL), with 50% being polymorphonuclear leukocytes. Onset of effusions is sudden, with no particular trigger or stimulus. Each episode lasts for a few days to about a week and recurs in cycles of 7 to 11 days, with extremes of 3 days to 30 days also reported. Sometimes the joint may begin to swell again as soon as the fluid has subsided. Where both knees are affected concurrently, as one joint ceases to swell the other may become involved. The cycles of joint swelling have been reported as being very regular, even predictable; this has been a characteristic feature of IH in many case reports. However, over the longer term especially, these cycles of effusion and recovery may not be as constant as first reported. In women, many cases seem to begin at puberty, and episodes of knee swelling may coincide with the menstrual cycle. In nearly all case reports, pregnancy seems to suppress the condition, but after birth, during lactation, it returns. In the main, patients are mostly free of other symptoms. Fever is rare. There are no signs of local inflammation or lymphatic involvement. Laboratory tests are generally normal or within reference limits.
Cause
The cause is unknown, but allergic and auto-inflammatory mechanisms have been proposed. In a 1957 review of IH, Mattingly did not find evidence that the condition is inherited, unlike Reimann, who in 1974 described the condition as “heritable, non-inflammatory, and afebrile”. More recently, a specific association with the Mediterranean fever gene, MEFV, has been proposed. In some individuals carrying gene mutations (MEFV and also TRAPS-related genes), the innate immune system seems to play a role in the development of IH, i.e. there is an auto-inflammatory component to the condition.
Pathophysiology
Involvement of mast cells has been reported, reflecting a possible immunoallergic aspect to IH.
Mattingly suggests that IH may be an unusual variant of rheumatoid arthritis, and some patients may go on to develop RA. Joint damage, however, does not generally occur, and only the synovial membrane is affected by a ‘non-inflammatory oedema’. With regard to the periodic nature of effusions, Reimann theorises that:
“…either an inherent rhythm or a feedback mechanism (Morley, 1970) excites bioclocks in the hypothalamus or in the synovial membrane (Richter, 1960). These Zeitgebers provoke sudden accumulation of plasma in the lining and spaces of joints, tendon and ligament sheaths.”
Diagnosis
There is no specific test for this condition. Diagnosis is based on signs and symptoms, and exclusion of other conditions.
Differential diagnoses
Rheumatoid arthritis. Confusion with rheumatoid arthritis may be common even though IH is a non-inflammatory condition without the many signs and symptoms associated with RA. So far, an association with HLA-B27 gene in IH has not been reported. With RA, small joints are mostly affected in an inflammatory, destructive manner. This is not observed in IH. Rheumatoid factor, cyclic citrullinated peptide antibodies and antinuclear antibodies are usually negative in IH.
Palindromic rheumatism. Like IH, palindromic rheumatism (PR) is characterised by relapsing, short episodes of sudden-onset arthritis with no recognisable trigger. However, unlike PR, IH affects the knee almost exclusively and shows a predictable, periodic regularity of attacks, with laboratory tests during attacks being generally unremarkable. PR is also more likely to be associated with development of rheumatoid arthritis.
Familial Mediterranean Fever. For some patients with FMF, episodes of knee or joint inflammation may be the only presenting symptom during an acute inflammatory episode. The MEFV gene mutations associated with FMF have been implicated in the pathogenesis of both palindromic rheumatism and IH.
Other conditions for consideration (or exclusion) are other periodic arthropathies, crystal arthropathy, prepatellar bursitis (housemaid's knee), pigmented villonodular synovitis, trauma and infectious causes.
Treatment
No treatment has been found to be routinely effective. NSAIDs and COX-2 inhibitors are not generally helpful other than for general pain relief; they do not seem to help reduce effusions or prevent their occurrence. Low-dose colchicine (and some other ‘anti-rheumatic’ therapies, e.g. hydroxychloroquine) have been used with some success. (Use of methotrexate and intramuscular gold has not been reported in the literature.) More aggressive treatments such as synovectomy, achieved using intra-articular agents (chemical or radioactive), can provide good results, with efficacy reported for at least 1 year. Reducing acute joint swelling:
Arthrocentesis (drainage of the joint) may be useful to relieve joint swelling and improve range of motion. Local steroid injections can also reduce fluid accumulation short-term, but do not prevent onset of episodes. These treatments provide temporary relief only. Bed rest, ice packs, splints and exercise are ineffective. A single case report of a patient with treatment-refractory IH describes the use of anakinra, an interleukin-1 receptor antagonist: at the first sign of any attack, a single 100 mg dose was given, and with this dosing at onset each episode of effusion was successfully terminated. Reducing frequency and severity of IH episodes:
Case reports indicate some success using long-term, low-dose colchicine (e.g. 0.5 mg to 1 mg daily). A recent single case report has shown hydroxychloroquine (300 mg daily) to be effective too. Small clinical trials have shown positive results with (1) chemical and (2) radioactive synovectomy. (1) Setti et al. treated 53 patients with rifamycin SV (600 mg intra-articular injections weekly for approximately 6 weeks) with good results at 1-year follow-up. (2) Top and Cross used single doses of intra-articular radioactive gold in 18 patients with persistent effusions of mixed causes, including 3 with IH; all 3 patients with IH responded well to treatment at one-year follow-up.
Prognosis
Once established, periods of remissions and relapse can persist indefinitely.
While IH may remit spontaneously, for most people the condition is long-lasting. Treatments as described above can be effective in reducing the frequency and degree of effusions. Deformative changes to joints are not a common feature of this mostly non-inflammatory condition.
Epidemiology
Intermittent hydrarthrosis is uncommon and its prevalence is not known (by 1974 more than 200 cases had been reported in the published literature). It affects men and women equally, although some publications suggest the condition is slightly more prevalent in females. Case reports indicate that only white people are affected. First onset of IH is most common between the ages of 20 and 50 years, and in females onset can often coincide with puberty. Usually the condition begins spontaneously, or following trauma to the joint, in otherwise healthy individuals.
History
Perrin (France) is reported to have first recorded this condition in 1845. The periodic nature of effusions was noted by CH Moore (Middlesex Hospital, UK) in 1852.
When the condition was first being reported in scientific journals, IH was classified as either ‘symptomatic’ or ‘idiopathic’ (of unknown cause). The symptomatic state was associated with existing disease such as rheumatoid arthritis, ankylosing spondylitis, other arthritis, or infection, e.g. brucellosis. With the idiopathic variant, an allergic component was believed to be involved since, in some patients at least, allergic phenomena (including cases of angioedema) were associated with episodes of inflammation; rheumatoid disease did not develop in this latter variant. Today, a primarily auto-immune cause predominates in the literature, with speculation that IH may be an inherited condition. On the basis that IH is periodic in its presentation, early researchers proposed links with malaria, where symptoms are also cyclical, even though the two have different cycle durations. Treatment with quinine (and arsenic) compounds was trialled with little benefit. Links to other infectious diseases, including Brucella, gonorrhoea, and syphilis, have also been posited over the years. Adrenaline injections, mercury, various hormone treatments (ovarian extracts, growth hormone, stilboestrol), and ergotamine tartrate are among other treatments at some time used without significant or long-term benefit. Physiotherapy, surgery, and exclusion diets (following allergen testing) have similarly shown no particular success in early reports of IH.
References
== External links == |
Nocturia | Nocturia is defined by the International Continence Society (ICS) as “the complaint that the individual has to wake at night one or more times for voiding (i.e. to urinate).” The term is derived from Latin nox, night, and Greek [τα] ούρα, urine. Causes are varied and can be difficult to discern. Although not every patient needs treatment, most people seek treatment for severe nocturia, waking up to void more than 2–3 times per night.
Prevalence
Studies show that 5–15% of people aged 20–50 years, 20–30% of people aged 50–70 years, and 10–50% of people aged 70+ years urinate at least twice a night. Nocturia becomes more common with age: in many communities more than 50 percent of men and women over the age of 60 have been found to have nocturia, and even more over the age of 80 experience symptoms nightly. Nocturia symptoms also often worsen with age. Although nocturia rates are about the same for both sexes, data show a higher prevalence in younger women than younger men, and in older men than older women.
Impact
Research suggests that more than 60% of people with nocturia are negatively affected by it. The resulting insomnia and sleep deprivation can cause exhaustion, changes in mood, sleepiness, impaired productivity, fatigue, increased risk of accidents, and cognitive dysfunction. 25% of the falls that older individuals experience happen during the night, of which 25% occur while waking up to void. A quality-of-life test for people who experience nocturia was published in 2004; the pilot study was conducted only on men.
Diagnosis
Nocturia diagnosis requires knowing the patient's nocturnal urine volume (NUV). The ICS defines NUV as “the total volume of urine passed between the time the individual goes to bed with the intention of sleeping and the time of waking with the intention of rising.” Thus, NUV excludes the last void before going to bed, but includes the first morning void if the urge to urinate woke the patient. The amount of sleep a patient gets, and the amount they intend to get, are also considered in a diagnosis. As with any patient, a detailed history of the problem is required to establish what is normal for that patient. The principal diagnostic tool for nocturia is the voiding bladder diary. Based on information recorded in the diary, a physician can classify the patient as having global polyuria, nocturnal polyuria, or a bladder storage problem. A voiding bladder diary should record:
number of voids
timing of voids
volume voided
volume and time of fluid intake
Patients should include the first morning void in the NUV; however, the first morning void is not counted among the number of nightly voids.
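The NUV bookkeeping above (exclude the last void before bed, include the first morning void) is easy to get wrong, so here is a minimal Python sketch of one way to encode it. The function name, the data layout and the rule that the first void at or after final waking counts as "the first morning void" are illustrative assumptions, not part of the ICS definition.

```python
from datetime import datetime

def nocturnal_urine_volume(voids, bedtime, final_waking):
    """Compute NUV from a voiding diary.

    `voids` is a time-sorted list of (datetime, volume_ml) tuples.
    Per the ICS definition quoted above: voids at or before bedtime
    (including the last void before going to bed) are excluded, and
    the first morning void is included.
    """
    total = 0.0
    for time, volume_ml in voids:
        if time <= bedtime:
            continue              # exclude the last void before bed
        total += volume_ml
        if time >= final_waking:  # first morning void: include, then stop
            break
    return total

# Hypothetical diary: bedtime 23:00, final waking 07:00.
diary = [
    (datetime(2024, 1, 1, 22, 50), 250),  # last void before bed: excluded
    (datetime(2024, 1, 2, 2, 30), 300),   # nightly void: included
    (datetime(2024, 1, 2, 5, 0), 280),    # nightly void: included
    (datetime(2024, 1, 2, 7, 5), 320),    # first morning void: included
]
print(nocturnal_urine_volume(diary,
                             datetime(2024, 1, 1, 23, 0),
                             datetime(2024, 1, 2, 7, 0)))  # -> 900.0
```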
Causes
Polyuria
Polyuria is the excessive or abnormally large production or passage of urine. Increased production and passage of urine may also be termed diuresis. Polyuria is usually viewed as a symptom or sign of another disorder (not a disease by itself), but it can be classed as a disorder, at least when its underlying causes are not clear.
Global polyuria
Global polyuria is continuous overproduction of urine that is not limited to sleep hours. It occurs in response to increased fluid intake and is defined as a urine output of greater than 40 mL/kg per 24 hours. The common causes of global polyuria are primary thirst disorders such as diabetes mellitus and diabetes insipidus (DI). The resulting fluid imbalance may lead to polydipsia, or excessive thirst, to prevent circulatory collapse. Central diabetes insipidus is caused by low levels of vasopressin (also called antidiuretic hormone (ADH), arginine vasopressin (AVP) or argipressin). ADH is produced in the hypothalamus and stored in and released from the posterior pituitary gland. It increases water reabsorption in the collecting duct systems of kidney nephrons, subsequently decreasing urine production, and thereby helps regulate the body's hydration level. In nephrogenic DI, the kidneys do not respond properly to normal amounts of ADH. Diagnosis of DI can be made with an overnight water deprivation test, which requires the patient to eliminate fluid intake for a fixed period of time, usually around 8–12 hours. If the first morning void is not highly concentrated, the patient is diagnosed with DI. Central DI usually can be treated with a synthetic replacement of ADH, called desmopressin, which is taken to control thirst and frequent urination. There is no ADH replacement for nephrogenic DI, but it may be managed with careful regulation of fluid intake.
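As a rough worked illustration of this threshold (the 70 kg body weight is hypothetical, chosen only for the arithmetic):

$$70\ \text{kg} \times 40\ \frac{\text{mL}}{\text{kg} \cdot 24\,\text{h}} = 2800\ \text{mL per 24 hours},$$

so a measured 24-hour urine output above about 2.8 L would meet this definition of global polyuria for that person.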
Nocturnal polyuria
Nocturnal polyuria is defined as an increase in urine production during the night with a proportional decrease in daytime urine production, so that the total 24-hour urine volume remains normal. With 24-hour urine production within normal limits, nocturnal polyuria corresponds to a nocturnal polyuria index (NPi) greater than 35%. The NPi is calculated simply by dividing the NUV by the 24-hour urine volume (a worked example follows the list below). As with the inability to control urination, a disruption of arginine vasopressin (ADH) levels has been proposed as a cause of nocturia: compared with normal patients, nocturia patients show a nocturnal decrease in ADH level. Other causes of nocturnal polyuria include diseases such as
congestive heart failure
nephritic syndrome
liver failure
lifestyle patterns such as excessive nighttime drinking
sleep apnea, which increases obstructive airway resistance. People with obstructive sleep apnea have been shown to have increases in renal sodium and water excretion that are mediated by elevated plasma atrial natriuretic hormone (ANH) levels. ANH is released by cardiac muscle cells in response to high blood volume; when activated, it promotes water excretion, subsequently increasing urine production.
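As noted above, the NPi calculation can be illustrated with hypothetical diary values (a 900 mL NUV against a 2,000 mL 24-hour volume; both numbers are invented for the arithmetic):

$$\mathrm{NPi} = \frac{\mathrm{NUV}}{V_{24\,\mathrm{h}}} = \frac{900\ \text{mL}}{2000\ \text{mL}} = 0.45 = 45\%,$$

which exceeds the 35% cutoff given above and would therefore indicate nocturnal polyuria.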
Bladder storage
Normal human bladder storage capacity varies from person to person and is generally considered to be 400–600 mL. A bladder storage disorder is any factor that increases the frequency of small-volume voids. These factors are usually related to lower urinary tract symptoms that affect the capacity of the bladder. Some patients with nocturia have neither global nor nocturnal polyuria according to the above criteria; such patients most likely have a bladder storage disorder that affects their nighttime voiding, or a sleep disorder. Nocturnal bladder capacity (NBC) is defined as the largest voided volume during the sleep period. Decreased NBC can be traced to a decreased maximum voided volume or decreased bladder storage, and can be related to other disorders such as:
benign prostatic hyperplasia (BPH), also known as prostate enlargement
neurogenic bladder dysfunction
learned voiding dysfunction
anxiety disorders
urinary tract infection
certain pharmacological agents.
Mixed cause
A significant number of nocturia cases occur from a combination of causes. Mixed nocturia is more common than many realise and is a combination of nocturnal polyuria and decreased nocturnal bladder capacity. In a study of 194 nocturia patients:
7% were determined to solely have nocturnal polyuria
57% solely had decreased NBC
36% had a mixed cause of the two
Nocturia with multifactorial causes is often unrelated to an underlying urological condition. Mixed nocturia is diagnosed through the maintenance and analysis of the patient's bladder diaries; the contribution of each cause is assessed using formulas.
Management
Lifestyle changes
Although there is no cure for nocturia, many actions can manage the symptoms.
Prohibiting caffeine and alcohol intake. Both are diuretics.
Beverage consumption regulation. In regard to nocturia, this specifically means avoiding fluids for three or more hours before bedtime, giving the bladder less fluid to store overnight. This especially helps people with urgency incontinence. However, one study of geriatric patients showed that it reduced voiding at night by only a small amount, and it is suboptimal for managing nocturia in older people. Fluid restriction does not help people who have nocturia due to gravity-induced third spacing of fluid, because that fluid is mobilized when they lie in a reclining position.
Compression stockings may be worn through the day to prevent fluid from accumulating in the legs, unless heart failure or another contraindication is present.
Drugs that increase the passing of urine can help decrease the third spacing of fluid, but they could also increase nocturia.
Medications
ADH replacements such as desmopressin and vasopressin
Selective alpha-1 blockers are the most commonly used medicines to treat BPH and are first-line treatment for its symptoms in men. Doxazosin, terazosin, alfuzosin and tamsulosin are all well established for reducing the lower urinary tract symptoms (LUTS) caused by benign prostatic hyperplasia, and all are believed to be similarly effective for this purpose. First-generation alpha-1 blockers, like prazosin, are not recommended for treating lower urinary tract symptoms because of their blood-pressure-lowering effect; later-generation drugs in this class are used instead. In some cases alpha-1 blockers have been combined with 5-alpha reductase inhibitors: dutasteride and tamsulosin are marketed as a combined therapy, and results have shown that the combination improves symptoms significantly versus monotherapy.
If urinary tract infection is causative, it can be treated with urinary antimicrobials.
Antimuscarinic agents such as oxybutynin, tolterodine and solifenacin are used especially in patients whose nocturia is due to an overactive bladder and urgency incontinence, because they reduce bladder contractility.
Surgery
If the cause of nocturia is related to benign prostatic hyperplasia or an overactive bladder, surgical actions may be sought out.
Surgery for benign prostatic hyperplasia includes increasingly popular and minimally invasive laser surgery.
Surgical correction of pelvic organ prolapse
Sacral nerve stimulation
Bladder augmentation
Detrusor muscle myectomy
See also
Polyuria
Enuresis
References
External links
http://nocturia.elsevierresource.com/
The "Nocturia Resource Centre", linked to the journal European Urology, provides a continuous update on nocturia, its causes, consequences and clinical approaches. |
Anemia | Anemia or anaemia (British English) is a blood disorder in which the blood has a reduced ability to carry oxygen due to a lower than normal number of red blood cells, or a reduction in the amount of hemoglobin. When anemia comes on slowly, the symptoms are often vague, such as tiredness, weakness, shortness of breath, headaches, and a reduced ability to exercise. When anemia is acute, symptoms may include confusion, feeling like one is going to pass out, loss of consciousness, and increased thirst. Anemia must be significant before a person becomes noticeably pale. Symptoms of anemia depend on how quickly hemoglobin decreases. Additional symptoms may occur depending on the underlying cause. Preoperative anemia can increase the risk of needing a blood transfusion following surgery. Anemia can be temporary or long term and can range from mild to severe. Anemia can be caused by blood loss, decreased red blood cell production, and increased red blood cell breakdown. Causes of bleeding include trauma and gastrointestinal bleeding. Causes of decreased production include iron deficiency, vitamin B12 deficiency, thalassemia and a number of bone marrow tumors. Causes of increased breakdown include genetic disorders such as sickle cell anemia, infections such as malaria, and certain autoimmune diseases. Anemia can also be classified based on the size of the red blood cells and the amount of hemoglobin in each cell. If the cells are small, it is called microcytic anemia; if they are large, it is called macrocytic anemia; and if they are normal sized, it is called normocytic anemia. The diagnosis of anemia in men is based on a hemoglobin of less than 130 to 140 g/L (13 to 14 g/dL); in women, it is less than 120 to 130 g/L (12 to 13 g/dL). Further testing is then required to determine the cause. A large number of patients diagnosed with anemia of chronic disease present with no active inflammation or dietary issues. These include many people with reduced limb loading, such as spinal cord injured patients, astronauts, elderly people with limited mobility, and bed-bound and experimental bed-rest subjects. Certain groups of individuals, such as pregnant women, benefit from the use of iron pills for prevention. Dietary supplementation, without determining the specific cause, is not recommended. The use of blood transfusions is typically based on a person's signs and symptoms. In those without symptoms, transfusions are not recommended unless hemoglobin levels are less than 60 to 80 g/L (6 to 8 g/dL). These recommendations may also apply to some people with acute bleeding. Erythropoiesis-stimulating agents are only recommended in those with severe anemia. Anemia is the most common blood disorder, affecting about a third of the global population. Iron-deficiency anemia affects nearly 1 billion people. In 2013, anemia due to iron deficiency resulted in about 183,000 deaths – down from 213,000 deaths in 1990. The condition is most prevalent in children, with an above-average prevalence also in the elderly and in women of reproductive age (especially during pregnancy). The name is derived from Ancient Greek: ἀναιμία anaimia, meaning "lack of blood", from ἀν- an-, "not" and αἷμα haima, "blood". Anemia is one of the six WHO global nutrition targets for 2025 and for diet-related global targets endorsed by the World Health Assembly in 2012 and 2013.
Efforts to reach global targets contribute to reaching Sustainable Development Goals (SDGs), with anemia as one of the targets in SDG 2 for achieving zero world hunger.
Signs and symptoms
Anemia is considered to be the most common blood disorder. Depending on the underlying cause, a person with anemia may have no noticeable symptoms while the anemia is mild; symptoms then become worse as the anemia worsens. A patient with anemia may report feeling tired or weak, a decreased ability to concentrate, and sometimes shortness of breath on exertion. Symptoms of anemia can come on quickly or slowly. Early on there may be few or no symptoms. If the anemia develops slowly (chronic anemia), the body may adapt and compensate for the change; in this case, no symptoms may appear until the anemia becomes more severe. Symptoms can include feeling tired or weak, dizziness, headaches, reduced capacity for physical exertion, shortness of breath, difficulty concentrating, irregular or rapid heartbeat, cold hands and feet, cold intolerance, pale or yellow skin, poor appetite, easy bruising and bleeding, and muscle weakness. Anemia that develops quickly often has more severe symptoms, including feeling faint, chest pain, sweating, increased thirst, and confusion. There may also be additional symptoms depending on the underlying cause. In more severe anemia, the body may compensate for the lack of oxygen-carrying capability of the blood by increasing cardiac output. The person may have symptoms related to this, such as palpitations, angina (if pre-existing heart disease is present), intermittent claudication of the legs, and symptoms of heart failure. On examination, the signs exhibited may include pallor (pale skin, mucosa, conjunctiva and nail beds), but this is not a reliable sign. A blue coloration of the sclera may be noticed in some cases of iron-deficiency anemia. There may be signs of specific causes of anemia, e.g. koilonychia (in iron deficiency), jaundice (when anemia results from abnormal breakdown of red blood cells, as in hemolytic anemia), nerve cell damage (vitamin B12 deficiency), bone deformities (found in thalassemia major) or leg ulcers (seen in sickle-cell disease). In severe anemia, there may be signs of a hyperdynamic circulation: tachycardia (a fast heart rate), bounding pulse, flow murmurs, and cardiac ventricular hypertrophy (enlargement). There may be signs of heart failure.
Pica, the consumption of non-food items such as ice, paper, wax, grass, hair or dirt, may be a symptom of iron deficiency, although it also occurs often in those who have normal levels of hemoglobin.
Chronic anemia may result in behavioral disturbances in children as a direct result of impaired neurological development in infants, and reduced academic performance in children of school age. Restless legs syndrome is more common in people with iron-deficiency anemia than in the general population.
Causes
The causes of anemia may be classified as impaired red blood cell (RBC) production, increased RBC destruction (hemolytic anemia), blood loss, and fluid overload (hypervolemia). Several of these may interact to cause anemia. The most common cause of anemia is blood loss, but this usually does not cause any lasting symptoms unless relatively impaired RBC production develops as well, most commonly through iron deficiency.
Impaired production
Disturbance of proliferation and differentiation of stem cells
Pure red cell aplasia
Aplastic anemia affects all kinds of blood cells. Fanconi anemia is a hereditary disorder or defect featuring aplastic anemia and various other abnormalities.
Anemia of kidney failure due to insufficient production of the hormone erythropoietin
Anemia of endocrine disease
Disturbance of proliferation and maturation of erythroblasts
Pernicious anemia is a form of megaloblastic anemia due to vitamin B12 deficiency dependent on impaired absorption of vitamin B12. Lack of dietary B12 causes non-pernicious megaloblastic anemia.
Anemia of folate deficiency, as with vitamin B12, causes megaloblastic anemia
Anemia of prematurity, by diminished erythropoietin response to declining hematocrit levels, combined with blood loss from laboratory testing, generally occurs in premature infants at two to six weeks of age.
Iron deficiency anemia, resulting in deficient heme synthesis
Thalassemias, causing deficient globin synthesis
Congenital dyserythropoietic anemias, causing ineffective erythropoiesis
Anemia of kidney failure (also causing stem cell dysfunction)
Other mechanisms of impaired RBC production
Myelophthisic anemia or myelophthisis is a severe type of anemia resulting from the replacement of bone marrow by other materials, such as malignant tumors, fibrosis, or granulomas.
Myelodysplastic syndrome
anemia of chronic inflammation
Leukoerythroblastic anemia is caused by space-occupying lesions in the bone marrow that prevent normal production of blood cells.
Increased destruction
Anemias of increased red blood cell destruction are generally classified as hemolytic anemias. These types generally feature jaundice and elevated levels of lactate dehydrogenase.
Intrinsic (intracorpuscular) abnormalities cause premature destruction. All of these, except paroxysmal nocturnal hemoglobinuria, are hereditary genetic disorders.
Hereditary spherocytosis is a hereditary defect that results in defects in the RBC cell membrane, causing the erythrocytes to be sequestered and destroyed by the spleen.
Hereditary elliptocytosis is another defect in membrane skeleton proteins.
Abetalipoproteinemia, causing defects in membrane lipids
Enzyme deficiencies
Pyruvate kinase and hexokinase deficiencies, causing defect glycolysis
Glucose-6-phosphate dehydrogenase deficiency and glutathione synthetase deficiency, causing increased oxidative stress
Hemoglobinopathies
Sickle cell anemia
Hemoglobinopathies causing unstable hemoglobins
Paroxysmal nocturnal hemoglobinuria
Extrinsic (extracorpuscular) abnormalities
Antibody-mediated
Warm autoimmune hemolytic anemia is caused by autoimmune attack against red blood cells, primarily by IgG. It is the most common of the autoimmune hemolytic diseases. It can be idiopathic, that is, without any known cause, drug-associated or secondary to another disease such as systemic lupus erythematosus, or a malignancy, such as chronic lymphocytic leukemia.
Cold agglutinin hemolytic anemia is primarily mediated by IgM. It can be idiopathic or result from an underlying condition.
Rh disease, one of the causes of hemolytic disease of the newborn
Transfusion reaction to blood transfusions
Mechanical trauma to red blood cells
Microangiopathic hemolytic anemias, including thrombotic thrombocytopenic purpura and disseminated intravascular coagulation
Infections, including malaria
Heart surgery
Haemodialysis
Parasitic
Trypanosoma congolense alters the surfaces of its host's RBCs, which may explain T. congolense-induced anemia
Blood loss
Anemia of prematurity, from frequent blood sampling for laboratory testing, combined with insufficient RBC production
Trauma or surgery, causing acute blood loss
Gastrointestinal tract lesions, causing either acute bleeds (e.g. variceal lesions, peptic ulcers) or chronic blood loss (e.g. angiodysplasia)
Gynecologic disturbances, also generally causing chronic blood loss
From menstruation, mostly among young women or older women who have fibroids
Many type of cancers, including colorectal cancer and cancer of the urinary bladder, may cause acute or chronic blood loss, especially at advanced stages
Infection by intestinal nematodes feeding on blood, such as hookworms and the whipworm Trichuris trichiura
Iatrogenic anemia, blood loss from repeated blood draws and medical procedures.
The roots of the words anemia and ischemia both refer to the basic idea of "lack of blood", but anemia and ischemia are not the same thing in modern medical terminology. The word anemia used alone implies widespread effects from blood that either is too scarce (e.g., blood loss) or is dysfunctional in its oxygen-supplying ability (due to whatever type of hemoglobin or erythrocyte problem). In contrast, the word ischemia refers solely to the lack of blood (poor perfusion). Thus ischemia in a body part can cause localized anemic effects within those tissues.
Fluid overload
Fluid overload (hypervolemia) causes decreased hemoglobin concentration and apparent anemia:
General causes of hypervolemia include excessive sodium or fluid intake, sodium or water retention and fluid shift into the intravascular space.
From the 6th week of pregnancy, hormonal changes cause an increase in the mother's blood volume due to an increase in plasma.
Intestinal inflammation
Certain gastrointestinal disorders can cause anemia. The mechanisms involved are multifactorial and not limited to malabsorption but mainly related to chronic intestinal inflammation, which causes dysregulation of hepcidin that leads to decreased access of iron to the circulation.
Helicobacter pylori infection.
Gluten-related disorders: untreated celiac disease and non-celiac gluten sensitivity. Anemia can be the only manifestation of celiac disease, in absence of gastrointestinal or any other symptoms.
Inflammatory bowel disease.
Diagnosis
Definitions
There are a number of definitions of anemia; reviews provide comparison and contrast of them. A strict definition is an absolute decrease in red blood cell mass; a broader definition is a lowered ability of the blood to carry oxygen. An operational definition is a decrease in whole-blood hemoglobin concentration of more than 2 standard deviations below the mean of an age- and sex-matched reference range. It is difficult to measure RBC mass directly, so the hematocrit (the proportion of blood volume occupied by RBCs) or the hemoglobin (Hb) concentration in the blood is often used instead to estimate the value indirectly. Hematocrit, however, is concentration-dependent and is therefore not completely accurate. For example, during pregnancy a woman's RBC mass is normal, but because of an increase in blood volume the hemoglobin and hematocrit are diluted and thus decreased. Another example is bleeding, where the RBC mass decreases but the concentrations of hemoglobin and hematocrit initially remain normal until fluids shift from other areas of the body to the intravascular space. In adults, anemia is also classified by severity into mild (110 g/L to normal), moderate (80 g/L to 110 g/L), and severe (less than 80 g/L). Different values are used in pregnancy and children.
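The adult severity bands above reduce to a simple threshold check. The following minimal Python sketch encodes them; the function name is invented for illustration, and, as noted in the text, different cutoffs apply in pregnancy and in children, which this sketch does not handle.

```python
def grade_anemia_severity(hemoglobin_g_per_l: float) -> str:
    """Grade adult anemia severity by hemoglobin concentration (g/L).

    Bands per the text: severe < 80 g/L, moderate 80-110 g/L,
    mild from 110 g/L up to the lower limit of normal. Whether a
    value above 110 g/L is anemic at all depends on the age- and
    sex-matched reference range, so that case is reported separately.
    """
    if hemoglobin_g_per_l < 80:
        return "severe"
    if hemoglobin_g_per_l < 110:
        return "moderate"
    return "mild (or normal, depending on the reference range)"

print(grade_anemia_severity(95))  # -> "moderate"
```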
Testing
Anemia is typically diagnosed on a complete blood count. Apart from reporting the number of red blood cells and the hemoglobin level, the automatic counters also measure the size of the red blood cells by flow cytometry, which is an important tool in distinguishing between the causes of anemia. Examination of a stained blood smear using a microscope can also be helpful, and it is sometimes a necessity in regions of the world where automated analysis is less accessible.
A blood test will provide counts of white blood cells, red blood cells and platelets. If anemia appears, further tests may determine what type it is and whether it has a serious cause; alongside these tests, the patient's genetic (family) history and a physical examination can also inform the diagnosis. These tests may include:
complete blood count (CBC); a CBC is used to count the number of blood cells in a sample of blood. For anemia, the values of principal interest are the proportion of red blood cells in the blood (hematocrit), the hemoglobin level, and the mean corpuscular volume.
determination of the size and shape of red blood cells; red blood cells may also be examined for unusual size, shape and color.
serum ferritin; this protein helps store iron in the body, and a low level of ferritin usually indicates a low level of stored iron.
serum vitamin B12; low levels can lead to anemia, as vitamin B12 is needed to make red blood cells, which carry oxygen to all parts of the body.
blood tests to detect rare causes, such as an immune attack on red blood cells, red blood cell fragility, and defects of enzymes, hemoglobin, and clotting.
a bone marrow sample; when the cause is unclear, a bone marrow test may be performed, most often when a blood cell defect is suspected.
Reticulocyte counts, and the "kinetic" approach to anemia, have become more common than in the past in the large medical centers of the United States and some other wealthy nations, in part because some automatic counters now have the capacity to include reticulocyte counts. A reticulocyte count is a quantitative measure of the bone marrow's production of new red blood cells. The reticulocyte production index is a calculation of the ratio between the level of anemia and the extent to which the reticulocyte count has risen in response. If the degree of anemia is significant, even a "normal" reticulocyte count actually may reflect an inadequate response.
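For orientation, one commonly taught form of the reticulocyte production index (RPI) calculation is sketched below; the maturation-time values and the worked numbers are illustrative teaching approximations rather than figures from this article:

$$\text{corrected reticulocyte \%} = \text{reticulocyte \%} \times \frac{\text{patient hematocrit}}{45}, \qquad \mathrm{RPI} = \frac{\text{corrected reticulocyte \%}}{\text{maturation time (days)}},$$

where maturation time is taken as roughly 1.0 day at a hematocrit of 45%, 1.5 at 35%, 2.0 at 25% and 2.5 at 15%. For example, a reticulocyte count of 6% at a hematocrit of 25% gives a corrected count of about 3.3% and an RPI of about 1.7; an RPI below about 2 in a significantly anemic patient suggests an inadequate marrow response.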
If an automated count is not available, a reticulocyte count can be done manually following special staining of the blood film. In manual examination, activity of the bone marrow can also be gauged qualitatively by subtle changes in the numbers and the morphology of young RBCs by examination under a microscope. Newly formed RBCs are usually slightly larger than older RBCs and show polychromasia. Even where the source of blood loss is obvious, evaluation of erythropoiesis can help assess whether the bone marrow will be able to compensate for the loss and at what rate.
When the cause is not obvious, clinicians use other tests, such as ESR, serum iron, transferrin, RBC folate level, hemoglobin electrophoresis and renal function tests (e.g. serum creatinine), although the tests ordered will depend on the clinical hypothesis being investigated.
When the diagnosis remains difficult, a bone marrow examination allows direct examination of the precursors to red cells, although it is rarely used, as it is painful and invasive, and is hence reserved for cases where severe pathology needs to be determined or excluded.
Red blood cell size
In the morphological approach, anemia is classified by the size of red blood cells; this is either done automatically or on microscopic examination of a peripheral blood smear. The size is reflected in the mean corpuscular volume (MCV). If the cells are smaller than normal (under 80 fl), the anemia is said to be microcytic; if they are normal size (80–100 fl), normocytic; and if they are larger than normal (over 100 fl), the anemia is classified as macrocytic. This scheme quickly exposes some of the most common causes of anemia; for instance, a microcytic anemia is often the result of iron deficiency. In clinical workup, the MCV will be one of the first pieces of information available, so even among clinicians who consider the "kinetic" approach more useful philosophically, morphology will remain an important element of classification and diagnosis.
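As a sketch, the cut-offs above translate directly into code (Python, with the MCV in femtolitres as in the text):

    def classify_by_mcv(mcv_fl):
        # Morphological classification by mean corpuscular volume,
        # using the cut-offs quoted in the text.
        if mcv_fl < 80:
            return "microcytic"
        if mcv_fl <= 100:
            return "normocytic"
        return "macrocytic"

    print(classify_by_mcv(72))   # microcytic, e.g. iron deficiency
    print(classify_by_mcv(105))  # macrocytic, e.g. B12 or folate deficiency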
Limitations of MCV include cases where the underlying cause is due to a combination of factors – such as iron deficiency (a cause of microcytosis) and vitamin B12 deficiency (a cause of macrocytosis) where the net result can be normocytic cells.
Production vs. destruction or loss
The "kinetic" approach to anemia yields arguably the most clinically relevant classification of anemia. This classification depends on evaluation of several hematological parameters, particularly the blood reticulocyte (precursor of mature RBCs) count. This then yields the classification of defects by decreased RBC production versus increased RBC destruction or loss. Clinical signs of loss or destruction include abnormal peripheral blood smear with signs of hemolysis; elevated LDH suggesting cell destruction; or clinical signs of bleeding, such as guaiac-positive stool, radiographic findings, or frank bleeding.
The following is a simplified schematic of this approach:
* For instance, sickle cell anemia with superimposed iron deficiency; chronic gastric bleeding with B12 and folate deficiency; and other instances of anemia with more than one cause.
** Confirm by repeating the reticulocyte count: an ongoing combination of low reticulocyte production index, normal MCV and hemolysis or loss may be seen in bone marrow failure or anemia of chronic disease, with superimposed or related hemolysis or blood loss.
Here is a schematic representation of how to consider anemia with MCV as the starting point:
Other characteristics visible on the peripheral smear may provide valuable clues about a more specific diagnosis; for example, abnormal white blood cells may point to a cause in the bone marrow.
Microcytic
Microcytic anemia is primarily a result of hemoglobin synthesis failure/insufficiency, which could be caused by several etiologies:
Iron deficiency anemia is the most common type of anemia overall and it has many causes. RBCs often appear hypochromic (paler than usual) and microcytic (smaller than usual) when viewed with a microscope.
Iron deficiency anemia is due to insufficient dietary intake or absorption of iron to meet the body's needs. Infants, toddlers, and pregnant women have higher than average needs. Increased iron intake is also needed to offset blood losses due to digestive tract issues, frequent blood donations, or heavy menstrual periods. Iron is an essential part of hemoglobin, and low iron levels result in decreased incorporation of hemoglobin into red blood cells. In the United States, 12% of all women of childbearing age have iron deficiency, compared with only 2% of adult men. The incidence is as high as 20% among African American and Mexican American women. Studies have shown iron deficiency without anemia causes poor school performance and lower IQ in teenage girls, although this may be due to socioeconomic factors. Iron deficiency is the most prevalent deficiency state on a worldwide basis. It is sometimes the cause of abnormal fissuring of the angular (corner) sections of the lips (angular stomatitis).
In the United States, the most common cause of iron deficiency is bleeding or blood loss, usually from the gastrointestinal tract. Fecal occult blood testing, upper endoscopy and lower endoscopy should be performed to identify bleeding lesions. In older men and women, the chances are higher that bleeding from the gastrointestinal tract could be due to colon polyps or colorectal cancer.
Worldwide, the most common cause of iron deficiency anemia is parasitic infestation (hookworms, amebiasis, schistosomiasis and whipworms).

The Mentzer index (mean cell volume divided by the RBC count) predicts whether microcytic anemia may be due to iron deficiency or thalassemia, although it requires confirmation.
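As an illustration, the index is a one-line calculation (Python); the commonly quoted cutoff of about 13 is a screening convention, and either result requires confirmation by tests such as iron studies or hemoglobin electrophoresis.

    def mentzer_index(mcv_fl, rbc_millions_per_ul):
        # MCV in fL divided by the red cell count in millions per microlitre.
        return mcv_fl / rbc_millions_per_ul

    # In thalassemia trait the RBC count stays high despite small cells,
    # pushing the index below ~13; in iron deficiency the count falls too,
    # pushing it above ~13.
    print(round(mentzer_index(65, 5.5), 1))  # 11.8 -> suggests thalassemia
    print(round(mentzer_index(65, 3.5), 1))  # 18.6 -> suggests iron deficiency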
Macrocytic
Megaloblastic anemia, the most common cause of macrocytic anemia, is due to a deficiency of either vitamin B12, folic acid, or both. Deficiency in folate or vitamin B12 can be due either to inadequate intake or insufficient absorption. Folate deficiency normally does not produce neurological symptoms, while B12 deficiency does.
Pernicious anemia is caused by a lack of intrinsic factor, which is required to absorb vitamin B12 from food. A lack of intrinsic factor may arise from an autoimmune condition targeting the parietal cells that produce intrinsic factor (atrophic gastritis) or from autoantibodies against intrinsic factor itself. These lead to poor absorption of vitamin B12.
Macrocytic anemia can also be caused by the removal of the functional portion of the stomach, such as during gastric bypass surgery, leading to reduced vitamin B12/folate absorption. Therefore, one must always be aware of anemia following this procedure.
Hypothyroidism
Alcoholism commonly causes a macrocytosis, although not specifically anemia. Other types of liver disease can also cause macrocytosis.
Drugs such as methotrexate and zidovudine, and other substances that inhibit DNA replication, such as heavy metals.

Macrocytic anemia can be further divided into "megaloblastic anemia" or "nonmegaloblastic macrocytic anemia". The cause of megaloblastic anemia is primarily a failure of DNA synthesis with preserved RNA synthesis, which results in restricted cell division of the progenitor cells. The megaloblastic anemias often present with neutrophil hypersegmentation (six to ten lobes). The nonmegaloblastic macrocytic anemias have different etiologies (i.e., DNA synthesis is unimpaired), which occur, for example, in alcoholism.
In addition to the nonspecific symptoms of anemia, specific features of vitamin B12 deficiency include peripheral neuropathy and subacute combined degeneration of the cord with resulting balance difficulties from posterior column spinal cord pathology. Other features may include a smooth, red tongue and glossitis.
The treatment for vitamin B12-deficient anemia was first devised by George Whipple, who bled dogs to make them anemic and then fed them various substances to see what (if anything) would make them healthy again. He discovered that ingesting large amounts of liver seemed to cure the disease. George Minot and William Murphy then set about to isolate the curative substance chemically, work that ultimately led to the identification of vitamin B12 in liver. All three shared the 1934 Nobel Prize in Physiology or Medicine.
Normocytic
Normocytic anemia occurs when the overall hemoglobin levels are decreased, but the red blood cell size (mean corpuscular volume) remains normal. Causes include:
acute blood loss
anemia of chronic disease
aplastic anemia (bone marrow failure)
hemolytic anemia
Dimorphic
A dimorphic appearance on a peripheral blood smear occurs when there are two simultaneous populations of red blood cells, typically of different size and hemoglobin content (this last feature affecting the color of the red blood cell on a stained peripheral blood smear). For example, a person recently transfused for iron deficiency would have small, pale, iron-deficient red blood cells (RBCs) and the donor RBCs of normal size and color. Similarly, a person transfused for severe folate or vitamin B12 deficiency would have two cell populations, but, in this case, the patient's RBCs would be larger and paler than the donor's RBCs. A person with sideroblastic anemia (a defect in heme synthesis, commonly caused by alcoholism, but also drugs/toxins, nutritional deficiencies, a few acquired and rare congenital diseases) can have a dimorphic smear from the sideroblastic anemia alone. Evidence for multiple causes appears with an elevated RBC distribution width (RDW), indicating a wider-than-normal range of red cell sizes, also seen in common nutritional anemia.
Heinz body anemia
Heinz bodies form in the cytoplasm of RBCs and appear as small dark dots under the microscope. In animals, Heinz body anemia has many causes. It may be drug-induced, for example in cats and dogs by acetaminophen (paracetamol), or may be caused by eating various plants or other substances:
In cats and dogs after eating either raw or cooked plants from the genus Allium, for example, onions or garlic.
In dogs after ingestion of zinc, for example, after eating U.S. pennies minted after 1982.
In horses which eat dry or wilted red maple leaves.
Hyperanemia
Hyperanemia is a severe form of anemia, in which the hematocrit is below 10%.
Refractory anemia
Refractory anemia, an anemia which does not respond to treatment, is often seen secondary to myelodysplastic syndromes. Iron deficiency anemia may also be refractory as a manifestation of gastrointestinal problems which disrupt iron absorption or cause occult bleeding.
Transfusion dependent
Transfusion dependent anemia is a form of anemia where ongoing blood transfusions are required. Most people with myelodysplastic syndrome develop this state at some point. Beta thalassemia may also result in transfusion dependence. A concern with repeated blood transfusions is iron overload, which may require chelation therapy.
Treatment
Treatment for anemia depends on cause and severity. Vitamin supplements given orally (folic acid or vitamin B12) or intramuscularly (vitamin B12) will replace specific deficiencies.
Oral iron
Nutritional iron deficiency is common in developing nations. An estimated two-thirds of children and of women of childbearing age in most developing nations have iron deficiency without anemia; one-third have iron deficiency with anemia. Iron deficiency due to inadequate dietary iron intake is rare in men and postmenopausal women. The diagnosis of iron deficiency mandates a search for potential sources of blood loss, such as gastrointestinal bleeding from ulcers or colon cancer.

Mild to moderate iron-deficiency anemia is treated by oral iron supplementation with ferrous sulfate, ferrous fumarate, or ferrous gluconate. Daily iron supplements have been shown to be effective in reducing anemia in women of childbearing age. When taking iron supplements, stomach upset or darkening of the feces is commonly experienced. The stomach upset can be alleviated by taking the iron with food; however, this decreases the amount of iron absorbed. Vitamin C aids the body's ability to absorb iron, so taking oral iron supplements with orange juice is of benefit.

In the anemia of chronic kidney disease, recombinant erythropoietin or epoetin alfa is recommended to stimulate RBC production, and if iron deficiency and inflammation are also present, concurrent parenteral iron is also recommended.
Injectable iron
In cases where oral iron has either proven ineffective, would be too slow (for example, pre-operatively), or where absorption is impeded (for example in cases of inflammation), parenteral iron preparations can be used. Parenteral iron can improve iron stores rapidly and is also effective for treating people with postpartum haemorrhage, inflammatory bowel disease, and chronic heart failure. The body can absorb up to 6 mg iron daily from the gastrointestinal tract. In many cases, the patient has a deficit of over 1,000 mg of iron which would require several months to replace. This can be given concurrently with erythropoietin to ensure sufficient iron for increased rates of erythropoiesis.
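The arithmetic behind that statement can be spelled out with the figures quoted above (a Python back-of-envelope, not a dosing calculation):

    # Replacing a 1,000 mg iron deficit at the maximal gastrointestinal
    # absorption rate of about 6 mg/day takes several months, which is
    # why parenteral iron is preferred when repletion must be rapid.
    deficit_mg = 1_000
    max_oral_absorption_mg_per_day = 6
    days_needed = deficit_mg / max_oral_absorption_mg_per_day
    print(f"{days_needed:.0f} days, about {days_needed / 30:.1f} months")
    # -> 167 days, about 5.6 months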
Blood transfusions
Blood transfusions in those without symptoms are not recommended until the hemoglobin is below 60 to 80 g/L (6 to 8 g/dL). In those with coronary artery disease who are not actively bleeding, transfusions are only recommended when the hemoglobin is below 70 to 80 g/L (7 to 8 g/dL). Transfusing earlier does not improve survival. Transfusions otherwise should only be undertaken in cases of cardiovascular instability.

A 2012 review concluded that, when considering blood transfusions for anaemia in people with advanced cancer who have fatigue and breathlessness (not related to cancer treatment or haemorrhage), consideration should be given to whether alternative strategies can be tried before a blood transfusion.
Vitamin B12 intramuscular injections
Vitamin B12 is given by intramuscular injection in severe cases or in cases of malabsorption of dietary B12. Pernicious anemia caused by loss of intrinsic factor cannot be prevented. If there are other, reversible causes of low vitamin B12 levels, the cause must be treated.

Vitamin B12 deficiency anemia is usually easily treated by providing the necessary level of vitamin B12 supplementation. The injections are quick-acting, and symptoms usually go away within one to two weeks. As the condition improves, the dosing interval is extended to weekly and then monthly. Intramuscular therapy leads to more rapid improvement and should be considered in patients with severe deficiency or severe neurologic symptoms. Treatment should begin rapidly when there are severe neurological symptoms, as some changes can become permanent. In some individuals lifelong treatment may be needed.
Erythropoiesis-stimulating agents
The objective of administering an erythropoiesis-stimulating agent (ESA) is to maintain hemoglobin at the lowest level that both minimizes transfusions and meets the individual person's needs. ESAs should not be used for mild or moderate anemia, and are not recommended in people with chronic kidney disease unless hemoglobin levels are less than 10 g/dL or they have symptoms of anemia. Their use should be accompanied by parenteral iron. A 2020 Cochrane review of erythropoietin plus iron versus control treatment (placebo or iron alone) in preoperatively anaemic adults undergoing non-cardiac surgery found that patients receiving erythropoietin and iron were much less likely to require red cell transfusion, while in those transfused, the volumes were unchanged (mean difference −0.09, 95% CI −0.23 to 0.05). Preoperative hemoglobin concentration was increased in those receiving high-dose erythropoietin, but not low-dose.
Hyperbaric oxygen
Treatment of exceptional blood loss (anemia) is recognized as an indication for hyperbaric oxygen (HBO) by the Undersea and Hyperbaric Medical Society. The use of HBO is indicated when oxygen delivery to tissue is not sufficient in patients who cannot be given blood transfusions for medical or religious reasons. HBO may be used for medical reasons when the threat of blood product incompatibility or concern about transmissible disease is a factor. The beliefs of some religions (for example, Jehovah's Witnesses) may rule out transfusion and thus call for the HBO method. A 2005 review of the use of HBO in severe anemia found that all publications reported positive results.
Preoperative anemia
An estimated 30% of adults who require non-cardiac surgery have anemia. In order to choose an appropriate preoperative treatment, it is suggested that the cause of the anemia first be determined. There is moderate-level medical evidence supporting a combination of iron supplementation and erythropoietin treatment to help reduce the requirement for red blood cell transfusions after surgery in those who have preoperative anemia.
Epidemiology
Anemia affects 27% of the world's population, with iron-deficiency anemia accounting for more than 60% of cases. A moderate degree of iron-deficiency anemia affected approximately 610 million people worldwide, or 8.8% of the population. It is somewhat more common in females (9.9%) than males (7.8%). Mild iron-deficiency anemia affects another 375 million. Severe anaemia is prevalent globally, especially in sub-Saharan Africa, where it is associated with infections including malaria and invasive bacterial infections.
History
Signs of severe anemia in human bones from 4000 years ago have been uncovered in Thailand.
References
External links
Anemia, U.S. National Library of Medicine
Adenovirus infection | Adenovirus infection is a contagious viral disease, caused by adenoviruses, commonly resulting in a respiratory tract infection. Typical symptoms range from those of a common cold, such as nasal congestion, coryza and cough, to difficulty breathing as in pneumonia. Other general symptoms include fever, fatigue, muscle aches, headache, abdominal pain and swollen neck glands. Onset is usually two to fourteen days after exposure to the virus. A mild eye infection may occur on its own, combined with a sore throat and fever, or as a more severe adenoviral keratoconjunctivitis with a painful red eye, intolerance to light and discharge. Very young children may just have an earache. Adenovirus infection can present as a gastroenteritis with vomiting, diarrhoea and abdominal pain, with or without respiratory symptoms. However, some people have no symptoms.

Adenovirus infections in humans are generally caused by adenovirus types B, C, E and F. Spread occurs mainly when an infected person is in close contact with another person. This may occur by either the fecal–oral route, airborne transmission or small droplets containing the virus. Less commonly, the virus may spread via contaminated surfaces. Other respiratory complications include acute bronchitis, bronchiolitis and acute respiratory distress syndrome. It may cause myocarditis, meningoencephalitis or hepatitis in people with weak immune systems.

Diagnosis is by signs and symptoms, and a laboratory test is not usually required. In some circumstances, a PCR test on blood or respiratory secretions may detect adenovirus DNA. Other conditions that appear similar include whooping cough, influenza, parainfluenza, and respiratory syncytial virus. Adenovirus gastroenteritis appears similar to diarrhoeal diseases caused by other infections. Infection by adenovirus may be prevented by washing hands, avoiding touching one's eyes, mouth and nose with unwashed hands, and avoiding being near sick people. A live vaccine to protect against types 4 and 7 adenoviruses has been used successfully in some military personnel. Management is generally symptomatic and supportive. Most adenovirus infections get better without any treatment. Medicines to ease pain and reduce fever can be bought over the counter.

Adenovirus infections affect all ages. They occur sporadically throughout the year, and outbreaks can occur particularly in winter and spring, when they may spread more quickly in closed populations such as in hospitals, nurseries, long-term care facilities, schools, and swimming pools. Severe disease is rare in people who are otherwise healthy. Adenovirus infection accounts for up to 10% of respiratory infections in children. Most cases are mild and by the age of 10, most children have had at least one adenovirus infection. 75% of conjunctivitis cases are due to adenovirus infection. In 2016, the Global Burden of Disease Study estimated that globally, around 75 million episodes of diarrhea among children under the age of five years were attributable to adenovirus infection. The first adenoviral strains were isolated in 1953 by Rowe et al.
Signs and symptoms
Symptoms are variable, ranging from mild symptoms to severe illness. They depend on the type of adenovirus, where it enters the body, and on the age and well-being of the person. Recognised patterns of clinical features include respiratory, eye, gastrointestinal, genitourinary and central nervous system involvement. There is also a widespread type that occurs in immunocompromised people. Typical symptoms are those of a mild cold or flu-like illness: fever, nasal congestion, coryza, cough, and pinky-red eyes. Infants may also have symptoms of an ear infection. Onset is usually two to fourteen days after exposure to the virus. There may be tiredness, chills, muscle aches, or headache. However, some people have no symptoms. Generally, a day or two after developing a sore throat with large tonsils, glands can be felt in the neck. Illness is more likely to be severe in people with weakened immune systems, particularly children who have had a hematopoietic stem cell transplantation. Sometimes there is a skin rash.
Respiratory tract
Preschool children with adenovirus colds tend to present with nasal congestion, runny nose and abdominal pain. There may be a harsh barking cough. It is frequently associated with a fever and a sore throat. Up to one in five infants with bronchiolitis will have adenovirus infection, which can be severe. Bronchiolitis obliterans is uncommon, but can occur if adenovirus causes pneumonia with prolonged fever, and can result in difficulty breathing. It presents with a hyperinflated chest, expiratory wheeze and low oxygen. Severe pneumonia is most common in very young children aged three to 18 months and presents with sudden illness, ongoing cough, high fever, shortness of breath and a fast rate of breathing. There are frequently wheezes and crackles on breathing in and out.
Eyes
Adenovirus eye infection may present as a pinky-red eye. Six to nine days following exposure to adenovirus, one or both eyes, typically in children, may be affected in association with fever, pharyngitis and lymphadenopathy (pharyngoconjunctival fever (PCF)). The onset is usually sudden, and there is often rhinitis. Adenovirus infection can also cause adenoviral keratoconjunctivitis. Typically one eye is affected after an incubation period of up to a week. The eye becomes itchy, painful, burning and reddish, and lymphadenopathy may be felt by the ear nearest the affected eye. The symptoms may last around 10 days to three weeks. It may be associated with blurred vision, photophobia and swelling of the conjunctiva. A sore throat and nasal congestion may or may not be present. This tends to occur in epidemics, affecting predominantly adults. In very young children, it may be associated with high fever, sore throat, otitis media, diarrhoea, and vomiting.
Gastrointestinal tract
Adenovirus infection can cause a gastroenteritis, which may present with diarrhoea, vomiting and abdominal pain, with or without respiratory or general symptoms. Children under the age of one year appear particularly vulnerable. However, it usually resolves within three days. It appears similar to diarrhoeal diseases caused by other infections.
Other organs
Uncommonly, the bladder may be affected, presenting with a sudden onset of burning on passing urine and increased frequency of passing urine, followed by visible blood in the urine a day or two later. Meningism may occur in adenovirus-associated meningoencephalitis, which may occur in people with weakened immune systems such as with AIDS or lymphoma. Adenovirus infection may result in symptoms of myocarditis, dilated cardiomyopathy, and pericarditis. Other signs and symptoms depend on complications, such as dark urine, itching and jaundice in hepatitis, generally in people who have a weakened immune system. Adenovirus is a rare cause of urethritis in men, when it may present with burning on passing urine associated with red eyes and feeling unwell.
Cause and mechanism
Adenovirus infections in humans are generally caused by adenovirus types B, C, E and F.

Although epidemiologic characteristics of the adenoviruses vary by type, all are transmitted by direct contact, fecal-oral transmission, and occasionally waterborne transmission. Some types are capable of establishing persistent asymptomatic infections in tonsils, adenoids, and intestines of infected hosts, and shedding can occur for months or years. Some adenoviruses (e.g., serotypes 1, 2, 5, and 6) have been shown to be endemic in parts of the world where they have been studied, and infection is usually acquired during childhood. Other types cause sporadic infection and occasional outbreaks; for example, epidemic keratoconjunctivitis is associated with adenovirus serotypes 8, 19, and 37. Epidemics of febrile disease with conjunctivitis are associated with waterborne transmission of some adenovirus types, often centering on inadequately chlorinated swimming pools and small lakes. Acute respiratory disease (ARD) is most often associated with adenovirus types 4 and 7 in the United States. Enteric adenoviruses 40 and 41 cause gastroenteritis, usually in children. For some adenovirus serotypes, the clinical spectrum of disease associated with infection varies depending on the site of infection; for example, infection with adenovirus 7 acquired by inhalation is associated with severe lower respiratory tract disease, whereas oral transmission of the virus typically causes no or mild disease. Outbreaks of adenovirus-associated respiratory disease have been more common in the late winter, spring, and early summer; however, adenovirus infections can occur throughout the year.

Several adenoviruses, including Ad5, Ad9, Ad31, Ad36, Ad37, and SMAM1, have at least some evidence of causation of obesity in animals, adipogenesis in cells, and/or association with human obesity.
Diagnosis
Diagnosis is by signs and symptoms, and a laboratory test is not usually required. In some circumstances such as severe disease, when a diagnosis needs to be confirmed, a PCR test on blood or respiratory secretions may detect adenovirus DNA. Adenovirus can be isolated by growing in cell cultures in a laboratory. Other conditions that appear similar include whooping cough, influenza, parainfluenza, and respiratory syncytial virus. Since adenovirus can be excreted for prolonged periods, the presence of virus does not necessarily mean it is associated with disease.
Prevention
Infection by adenovirus may be prevented by washing hands, avoiding touching one's eyes, mouth and nose before washing hands, and avoiding being near sick people. Strict attention to good infection-control practices is effective for stopping transmission in hospitals of adenovirus-associated disease, such as epidemic keratoconjunctivitis. Maintaining adequate levels of chlorination is necessary for preventing swimming pool-associated outbreaks of adenovirus conjunctivitis. A live adenovirus vaccine to protect against types 4 and 7 adenoviruses has been used in some military personnel. Rates of adenovirus disease fell among military recruits following the introduction of a live oral vaccine against types 4 and 7. Stocks of the vaccine ran out in 1999 and rates of disease increased until 2011, when the vaccine was re-introduced.
Treatment
Treatment is generally symptomatic and supportive. Medicines to ease pain and reduce fever can be bought over the counter. For adenoviral conjunctivitis, a cold compress and lubricants may provide some relief of discomfort. Steroid eye drops may be required if the cornea is involved. Most adenovirus infections get better without any treatment.
Prognosis
After recovery from adenovirus infection, the virus can be carried for weeks or months.

Adenovirus can cause severe necrotizing pneumonia in which all or part of a lung has increased translucency radiographically, which is called Swyer-James Syndrome. Severe adenovirus pneumonia also may result in bronchiolitis obliterans, a subacute inflammatory process in which the small airways are replaced by scar tissue, resulting in a reduction in lung volume and lung compliance.
Epidemiology
Adenovirus infections occur sporadically throughout the year, and outbreaks can occur particularly in winter and spring. Epidemics may spread more quickly in closed populations such as hospitals, nurseries, long-term care facilities, boarding schools, orphanages and swimming pools. Severe disease is rare in people who are otherwise healthy. Around 10% of respiratory infections in children are caused by adenoviruses. Most are mild, and by the age of 10, most children have had at least one adenovirus infection.

Adenoviruses are the most common viruses causing an inflamed throat, and 75% of conjunctivitis cases are due to adenovirus infection. Children under two years old are particularly susceptible to adenovirus gastroenteritis caused by types 40 and 41, with type 41 being more common than type 40. Some large studies have identified type 40/41 adenovirus as the second most common cause of diarrhoea in children in low- and middle-income countries, the most common being rotavirus. In 2016, the Global Burden of Disease Study estimated that globally, around 75 million episodes of diarrhea among children under the age of five years were attributable to adenovirus infection, with a mortality near 12%.

Research in adenovirus infection has generally been limited relative to other respiratory disease viruses. The impact of type-40/41 adenovirus diarrhoea is possibly underestimated.
History
The first adenoviral strains were isolated from adenoids in 1953 by Rowe et al. Later, during studies on rotavirus diarrhoea, the wider use of electron microscopy resulted in the detection of previously unrecognized adenovirus types 40 and 41, subsequently found to be important in causing gastrointestinal illness in children.

The illness made headlines in Texas in September 2007, when a so-called "boot camp flu" sickened hundreds at Lackland Air Force Base in San Antonio. In 2018, outbreaks occurred in an adult nursing home in New Jersey and on a college campus in Maryland. In 2020, as a result of infection control measures during the COVID-19 pandemic, rates of adenovirus diarrhoea declined significantly in China.
Other animals
Dogs can be affected by adenovirus infection. Severe liver damage from infectious canine hepatitis is classically seen in unvaccinated dogs.
References
== External links == |
Braxton Hicks contractions | Braxton Hicks contractions, also known as practice contractions or false labor, are sporadic uterine contractions that may start around six weeks into a pregnancy. However, they are usually felt in the second or third trimester of pregnancy.
Associated conditions
Braxton Hicks contractions are often confused with labor. Braxton Hicks contractions allow the pregnant woman's body to prepare for labor. However, the presence of Braxton Hicks contractions does not mean a woman is in labor or even that labor is about to commence. Another common cause of pain in pregnancy is round ligament pain.
Table 1. Braxton Hicks contractions vs. True Labor
Pathophysiology
Although the exact causes of Braxton Hicks contractions are not fully understood, there are known triggers that cause Braxton Hicks contractions, such as when a pregnant woman:
is dehydrated
has a full bladder
has just had sexual intercourse
has been exercising (running, lifting heavy objects)
is under excessive stress
has had her stomach touched

There are two thoughts as to why these intermittent uterine muscle contractions may occur. The first is that these early "practice contractions" could help prepare the body for true labor by strengthening the uterine muscle. The second is that these contractions may occur when the fetus is in a state of physiological stress, in order to help provide more oxygenated blood to the fetal circulation.
Signs and symptoms
The determination of Braxton Hicks contractions is dependent on the history and physical assessment of the pregnant woman's abdomen, as there are no specific imaging tests for diagnosis. The key is to differentiate Braxton Hicks contractions from true labor contractions (see Table 1 above).
Most commonly, Braxton Hicks contractions are weak and feel like mild cramping that occurs in a localized area in the front of the abdomen at an infrequent and irregular rhythm (usually every 10-20 minutes), with each contraction lasting up to 2 minutes. They may be associated with certain triggers and can disappear and reappear; they do not get more frequent, longer, or stronger over the course of the contractions. However, as the end of a pregnancy approaches, Braxton Hicks contractions tend to become more frequent and more intense.

On a physical exam, some uterine muscle tightening may be palpable, but there should be no palpable contraction in the uterine fundus and no cervical changes or cervical dilation. Braxton Hicks contractions do not lead to birth.

More concerning symptoms that may require assessment by a healthcare professional include:
Any bleeding or fluid leakage from the vagina
Contractions that are strong, frequent (every 5 minutes), and persist for an hour
Changes or significant decreases in fetal movement
Management
Although there is no specific medical treatment for Braxton Hicks contractions, some alleviating factors include:
Adequate hydration
Drinking warm milk, herbal tea, or having a small meal
Urination to empty a full bladder
Rhythmic breathing
Lying down on the left side
A mild change in movement or activity level
Relaxing and de-stressing (e.g., a massage, nap, or warm bath)
Trying other pain management techniques (e.g., practices from childbirth preparation class)
History
Braxton Hicks contractions are named after John Braxton Hicks, the English physician who first wrote about them in Western medicine. In 1872, he investigated the later stages of pregnancy and noted that many pregnant women felt contractions without being near birth. He examined the prevalence of uterine contractions throughout pregnancy and determined that contractions that do not lead to labor are a normal part of pregnancy.
== References == |
Time | Time is the continued sequence of existence and events that occurs in an apparently irreversible succession from the past, through the present, into the future. It is a component quantity of various measurements used to sequence events, to compare the duration of events or the intervals between them, and to quantify rates of change of quantities in material reality or in the conscious experience. Time is often referred to as a fourth dimension, along with three spatial dimensions.

Time has long been an important subject of study in religion, philosophy, and science, but defining it in a manner applicable to all fields without circularity has consistently eluded scholars.
Nevertheless, diverse fields such as business, industry, sports, the sciences, and the performing arts all incorporate some notion of time into their respective measuring systems.

Time in physics is operationally defined as "what a clock reads". The physical nature of time is addressed by general relativity with respect to events in spacetime. Examples of events are the collision of two particles, the explosion of a supernova, or the arrival of a rocket ship. Every event can be assigned four numbers representing its time and position (the event's coordinates). However, the numerical values are different for different observers. In general relativity, the question of what time it is now only has meaning relative to a particular observer. Distance and time are intimately related, and the time required for light to travel a specific distance is the same for all observers, as first publicly demonstrated by Michelson and Morley. General relativity does not address the nature of time for extremely small intervals where quantum mechanics holds. At this time, there is no generally accepted theory of quantum general relativity.

Time is one of the seven fundamental physical quantities in both the International System of Units (SI) and the International System of Quantities. The SI base unit of time is the second, which is defined by measuring the electronic transition frequency of caesium atoms. Time is used to define other quantities, such as velocity, so defining time in terms of such quantities would result in circularity of definition. An operational definition of time, wherein one says that observing a certain number of repetitions of one or another standard cyclical event (such as the passage of a free-swinging pendulum) constitutes one standard unit such as the second, is highly useful in the conduct of both advanced experiments and everyday affairs of life. To describe observations of an event, a location (position in space) and time are typically noted.
The operational definition of time does not address what the fundamental nature of it is. It does not address why events can happen forward and backward in space, whereas events only happen in the forward progress of time. Investigations into the relationship between space and time led physicists to define the spacetime continuum. General relativity is the primary framework for understanding how spacetime works. Through advances in both theoretical and experimental investigations of spacetime, it has been shown that time can be distorted and dilated, particularly at the edges of black holes.
Temporal measurement has occupied scientists and technologists and was a prime motivation in navigation and astronomy. Periodic events and periodic motion have long served as standards for units of time. Examples include the apparent motion of the sun across the sky, the phases of the moon, and the swing of a pendulum. Time is also of significant social importance, having economic value ("time is money") as well as personal value, due to an awareness of the limited time in each day and in human life spans.
There are many systems for determining what time it is, including the Global Positioning System, other satellite systems, Coordinated Universal Time and mean solar time. In general, the numbers obtained from different time systems differ from one another.
Measurement
Generally speaking, methods of temporal measurement, or chronometry, take two distinct forms: the calendar, a mathematical tool for organising intervals of time,
and the clock, a physical mechanism that counts the passage of time. In day-to-day life, the clock is consulted for periods less than a day, whereas the calendar is consulted for periods longer than a day. Increasingly, personal electronic devices display both calendars and clocks simultaneously. The number (as on a clock dial or calendar) that marks the occurrence of a specified event as to hour or date is obtained by counting from a fiducial epoch – a central reference point.
History of the calendar
Artifacts from the Paleolithic suggest that the moon was used to reckon time as early as 6,000 years ago. Lunar calendars were among the first to appear, with years of either 12 or 13 lunar months (either 354 or 384 days). Without intercalation to add days or months to some years, seasons quickly drift in a calendar based solely on twelve lunar months. Lunisolar calendars have a thirteenth month added to some years to make up for the difference between a full year (now known to be about 365.24 days) and a year of just twelve lunar months. The numbers twelve and thirteen came to feature prominently in many cultures, at least partly due to this relationship of months to years. Other early forms of calendars originated in Mesoamerica, particularly in ancient Mayan civilization. These calendars were religiously and astronomically based, with 18 months in a year and 20 days in a month, plus five epagomenal days at the end of the year.

The reforms of Julius Caesar in 45 BC put the Roman world on a solar calendar. This Julian calendar was faulty in that its intercalation still allowed the astronomical solstices and equinoxes to advance against it by about 11 minutes per year. Pope Gregory XIII introduced a correction in 1582; the Gregorian calendar was only slowly adopted by different nations over a period of centuries, but it is now by far the most commonly used calendar around the world.
During the French Revolution, a new clock and calendar were invented in an attempt to de-Christianize time and create a more rational system to replace the Gregorian calendar. The French Republican Calendar's days consisted of ten hours of a hundred minutes of a hundred seconds, which marked a deviation from the base-12 (duodecimal) system used in many other devices by many cultures. The system was abolished in 1806.
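For illustration, conversion between the two systems is a matter of proportion: a decimal day has 10 × 100 × 100 = 100,000 decimal seconds against 86,400 standard seconds, so one decimal second equals 0.864 standard seconds. A short Python sketch:

    def to_decimal_time(hours, minutes, seconds):
        # Express a standard clock reading as a fraction of the day, then
        # re-divide that fraction into decimal hours, minutes and seconds.
        standard_seconds = hours * 3600 + minutes * 60 + seconds
        decimal_seconds = round(standard_seconds / 86_400 * 100_000)
        dh, rem = divmod(decimal_seconds, 10_000)
        dm, ds = divmod(rem, 100)
        return dh, dm, ds

    print(to_decimal_time(12, 0, 0))  # noon   -> (5, 0, 0)
    print(to_decimal_time(18, 0, 0))  # 6 p.m. -> (7, 50, 0)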
History of other devices
A large variety of devices have been invented to measure time. The study of these devices is called horology.

An Egyptian device that dates to c. 1500 BC, similar in shape to a bent T-square, measured the passage of time from the shadow cast by its crossbar on a nonlinear rule. The T was oriented eastward in the mornings. At noon, the device was turned around so that it could cast its shadow in the evening direction.

A sundial uses a gnomon to cast a shadow on a set of markings calibrated to the hour. The position of the shadow marks the hour in local time. The idea to separate the day into smaller parts is credited to Egyptians because of their sundials, which operated on a duodecimal system. The importance of the number 12 is due to the number of lunar cycles in a year and the number of stars used to count the passage of night.

The most precise timekeeping device of the ancient world was the water clock, or clepsydra, one of which was found in the tomb of Egyptian pharaoh Amenhotep I. They could be used to measure the hours even at night but required manual upkeep to replenish the flow of water. The ancient Greeks and the people from Chaldea (southeastern Mesopotamia) regularly maintained timekeeping records as an essential part of their astronomical observations. Arab inventors and engineers, in particular, made improvements on the use of water clocks up to the Middle Ages. In the 11th century, Chinese inventors and engineers invented the first mechanical clocks driven by an escapement mechanism.
The hourglass uses the flow of sand to measure the flow of time. They were used in navigation. Ferdinand Magellan used 18 glasses on each ship for his circumnavigation of the globe (1522).

Incense sticks and candles were, and are, commonly used to measure time in temples and churches across the globe. Waterclocks, and later, mechanical clocks, were used to mark the events of the abbeys and monasteries of the Middle Ages. Richard of Wallingford (1292–1336), abbot of St. Albans abbey, famously built a mechanical clock as an astronomical orrery about 1330.

Great advances in accurate time-keeping were made by Galileo Galilei and especially Christiaan Huygens with the invention of pendulum-driven clocks, along with the invention of the minute hand by Jost Burgi.

The English word clock probably comes from the Middle Dutch word klocke which, in turn, derives from the medieval Latin word clocca, which ultimately derives from Celtic and is cognate with French, Latin, and German words that mean bell. The passage of the hours at sea was marked by bells and denoted the time (see ship's bell). The hours were marked by bells in abbeys as well as at sea.
Clocks can range from watches to more exotic varieties such as the Clock of the Long Now. They can be driven by a variety of means, including gravity, springs, and various forms of electrical power, and regulated by a variety of means such as a pendulum.
Alarm clocks first appeared in ancient Greece around 250 BC with a water clock that would set off a whistle. This idea was later mechanized by Levi Hutchins and Seth E. Thomas.

A chronometer is a portable timekeeper that meets certain precision standards. Initially, the term was used to refer to the marine chronometer, a timepiece used to determine longitude by means of celestial navigation, a precision first achieved by John Harrison. More recently, the term has also been applied to the chronometer watch, a watch that meets precision standards set by the Swiss agency COSC.
The most accurate timekeeping devices are atomic clocks, which are accurate to seconds in many millions of years, and are used to calibrate other clocks and timekeeping instruments.
Atomic clocks use the frequency of electronic transitions in certain atoms to measure the second. One of the atoms used is caesium; most modern atomic clocks probe caesium with microwaves to determine the frequency of these electron vibrations. Since 1967, the International System of Units has based its unit of time, the second, on the properties of caesium atoms. The SI defines the second as 9,192,631,770 cycles of the radiation that corresponds to the transition between two hyperfine energy levels of the ground state of the 133Cs atom.
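Put as arithmetic, the definition fixes how many caesium cycles a clock must count out per second, so the duration of a single cycle follows directly (a trivial Python check):

    # The SI second, counted out in caesium cycles.
    CAESIUM_TRANSITION_HZ = 9_192_631_770
    seconds_per_cycle = 1 / CAESIUM_TRANSITION_HZ
    print(seconds_per_cycle)  # ~1.0878e-10 seconds per cycle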
Today, the Global Positioning System in coordination with the Network Time Protocol can be used to synchronize timekeeping systems across the globe.
In medieval philosophical writings, the atom was a unit of time referred to as the smallest possible division of time. The earliest known occurrence in English is in Byrhtferth's Enchiridion (a science text) of 1010–1012, where it was defined as 1/564 of a momentum (1½ minutes), and thus equal to 15/94 of a second. It was used in the computus, the process of calculating the date of Easter.
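The stated equivalence checks out exactly, as a short verification with Python's exact-fraction arithmetic shows:

    from fractions import Fraction

    # One momentum is 1.5 minutes = 90 seconds; an atom is 1/564 of that.
    atom = Fraction(90, 564)
    print(atom)                      # 15/94
    print(atom == Fraction(15, 94))  # True
    print(float(atom))               # ~0.1596 seconds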
As of May 2010, the smallest time interval uncertainty in direct measurements is on the order of 12 attoseconds (1.2 × 10−17 seconds), about 3.7 × 1026 Planck times.
Units
The second (s) is the SI base unit. A minute (min) is 60 seconds in length, and an hour is 60 minutes or 3,600 seconds in length. A day is usually 24 hours or 86,400 seconds in length; however, the duration of a calendar day can vary due to daylight saving time and leap seconds.
Definitions and standards
A time standard is a specification for measuring time: assigning a number or calendar date to an instant (point in time), quantifying the duration of a time interval, and establishing a chronology (ordering of events). In modern times, several time specifications have been officially recognized as standards, where formerly they were matters of custom and practice. The invention in 1955 of the caesium atomic clock has led to the replacement of older and purely astronomical time standards such as sidereal time and ephemeris time, for most practical purposes, by newer time standards based wholly or partly on atomic time using the SI second.
International Atomic Time (TAI) is the primary international time standard from which other time standards are calculated. Universal Time (UT1) is mean solar time at 0° longitude, computed from astronomical observations. It varies from TAI because of the irregularities in Earths rotation. Coordinated Universal Time (UTC) is an atomic time scale designed to approximate Universal Time. UTC differs from TAI by an integral number of seconds. UTC is kept within 0.9 second of UT1 by the introduction of one-second steps to UTC, the "leap second". The Global Positioning System broadcasts a very precise time signal based on UTC time.
The surface of the Earth is split up into a number of time zones. Standard time or civil time in a time zone deviates a fixed, round amount, usually a whole number of hours, from some form of Universal Time, usually UTC. Most time zones are exactly one hour apart, and by convention compute their local time as an offset from UTC. For example, time zones at sea are based on UTC. In many locations (but not at sea) these offsets vary twice yearly due to daylight saving time transitions.
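As a concrete illustration of zone offsets and the twice-yearly shift, here is a Python sketch using the standard-library zoneinfo module (Python 3.9 or later; it assumes the IANA time zone database is available, and America/New_York is just an example zone):

    from datetime import datetime
    from zoneinfo import ZoneInfo

    # The same UTC instant rendered as civil time in a zone that
    # observes daylight saving time.
    summer = datetime(2024, 7, 1, 12, 0, tzinfo=ZoneInfo("UTC"))
    winter = datetime(2024, 1, 1, 12, 0, tzinfo=ZoneInfo("UTC"))
    ny = ZoneInfo("America/New_York")
    print(summer.astimezone(ny).isoformat())  # 2024-07-01T08:00:00-04:00
    print(winter.astimezone(ny).isoformat())  # 2024-01-01T07:00:00-05:00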
Some other time standards are used mainly for scientific work. Terrestrial Time is a theoretical ideal scale realized by TAI. Geocentric Coordinate Time and Barycentric Coordinate Time are scales defined as coordinate times in the context of the general theory of relativity. Barycentric Dynamical Time is an older relativistic scale that is still in use.
Philosophy
Religion
Linear and cyclical
Ancient cultures such as Incan, Mayan, Hopi, and other Native American Tribes – plus the Babylonians, ancient Greeks, Hinduism, Buddhism, Jainism, and others – have a concept of a wheel of time: they regard time as cyclical and quantic, consisting of repeating ages that happen to every being of the Universe between birth and extinction.

In general, the Islamic and Judeo-Christian world-view regards time as linear
and directional,
beginning with the act of creation by God. The traditional Christian view sees time ending, teleologically,
with the eschatological end of the present order of things, the "end time".
In the Old Testament book Ecclesiastes, traditionally ascribed to Solomon (970–928 BC), time (as rendered by the Hebrew words עידן iddan, "age", as in "Ice age", and זמן zĕman, "time") was traditionally regarded as a medium for the passage of predestined events. (Another word, زمان zamān, meant time fit for an event, and is used as the modern Arabic, Persian, and Hebrew equivalent to the English word "time".)
Time in Greek mythology
The Greek language denotes two distinct principles, Chronos and Kairos. The former refers to numeric, or chronological, time. The latter, literally "the right or opportune moment", relates specifically to metaphysical or Divine time. In theology, Kairos is qualitative, as opposed to quantitative.

In Greek mythology, Chronos (ancient Greek: Χρόνος) is identified as the Personification of Time. His name in Greek means "time" and is alternatively spelled Chronus (Latin spelling) or Khronos. Chronos is usually portrayed as an old, wise man with a long, gray beard, such as "Father Time". Some English words whose etymological root is khronos/chronos include chronology, chronometer, chronic, anachronism, synchronise, and chronicle.
Time in Kabbalah
According to Kabbalists, "time" is a paradox and an illusion. Both the future and the past are recognised to be combined and simultaneously present.
In Western philosophy
Two contrasting viewpoints on time divide prominent philosophers. One view is that time is part of the fundamental structure of the universe – a dimension independent of events, in which events occur in sequence. Isaac Newton subscribed to this realist view, and hence it is sometimes referred to as Newtonian time.
The opposing view is that time does not refer to any kind of "container" that events and objects "move through", nor to any entity that "flows", but that it is instead part of a fundamental intellectual structure (together with space and number) within which humans sequence and compare events. This second view, in the tradition of Gottfried Leibniz and Immanuel Kant, holds that time is neither an event nor a thing, and thus is not itself measurable nor can it be travelled.
Furthermore, it may be that there is a subjective component to time, but whether or not time itself is "felt", as a sensation, or is a judgment, is a matter of debate.

In philosophy, time has been questioned throughout the centuries: what time is, and whether it is real or not. Ancient Greek philosophers asked if time was linear or cyclical and if time was endless or finite. Philosophers had different ways of explaining time; for instance, ancient Indian philosophers had something called the Wheel of Time, the belief that there were repeating ages over the lifespan of the universe. This led to beliefs like cycles of rebirth and reincarnation. The Greek philosophers believed that the universe was infinite, and an illusion to humans. Plato believed that time was made by the Creator at the same instant as the heavens, and that time is a period of motion of the heavenly bodies. Aristotle believed that time correlated to movement, that time did not exist on its own but was relative to the motion of objects. He also believed that time was related to the motion of celestial bodies; the reason humans can tell time is because of orbital periods, and therefore there is a duration to time.

The Vedas, the earliest texts on Indian philosophy and Hindu philosophy, dating back to the late 2nd millennium BC, describe ancient Hindu cosmology, in which the universe goes through repeated cycles of creation, destruction and rebirth, with each cycle lasting 4,320 million years. Ancient Greek philosophers, including Parmenides and Heraclitus, wrote essays on the nature of time. Plato, in the Timaeus, identified time with the period of motion of the heavenly bodies. Aristotle, in Book IV of his Physica, defined time as the "number of movement in respect of the before and after".

In Book 11 of his Confessions, St. Augustine of Hippo ruminates on the nature of time, asking, "What then is time? If no one asks me, I know: if I wish to explain it to one that asketh, I know not." He begins to define time by what it is not rather than what it is,
an approach similar to that taken in other negative definitions. However, Augustine ends up calling time a "distention" of the mind (Confessions 11.26) by which we simultaneously grasp the past in memory, the present by attention, and the future by expectation.
Isaac Newton believed in absolute space and absolute time; Leibniz believed that time and space are relational.
The differences between Leibnizs and Newtons interpretations came to a head in the famous Leibniz–Clarke correspondence.
Philosophers in the 17th and 18th centuries questioned whether time was real and absolute, or an intellectual concept that humans use to understand and sequence events. These questions led to realism versus anti-realism: the realists believed that time is a fundamental part of the universe, perceived through events happening in a sequence within a dimension. Isaac Newton said that we are merely occupying time; he also said that humans can only understand relative time, which is a measurement of objects in motion. The anti-realists believed that time is merely a convenient intellectual concept for humans to understand events, and that time is meaningless unless there are objects that it can interact with; this was called relational time. René Descartes, John Locke, and David Hume said that one's mind needs to acknowledge time in order to understand what time is. Immanuel Kant believed that we cannot know what something is unless we experience it first-hand.
Immanuel Kant, in the Critique of Pure Reason, described time as an a priori intuition that allows us (together with the other a priori intuition, space) to comprehend sense experience.
With Kant, neither space nor time are conceived as substances, but rather both are elements of a systematic mental framework that necessarily structures the experiences of any rational agent, or observing subject. Kant thought of time as a fundamental part of an abstract conceptual framework, together with space and number, within which we sequence events, quantify their duration, and compare the motions of objects. In this view, time does not refer to any kind of entity that "flows," that objects "move through," or that is a "container" for events. Spatial measurements are used to quantify the extent of and distances between objects, and temporal measurements are used to quantify the durations of and between events. Time was designated by Kant as the purest possible schema of a pure concept or category.
Henri Bergson believed that time was neither a real homogeneous medium nor a mental construct, but possesses what he referred to as Duration. Duration, in Bergson's view, was creativity and memory as an essential component of reality.

According to Martin Heidegger, we do not exist inside time; we are time. Hence, the relationship to the past is a present awareness of having been, which allows the past to exist in the present. The relationship to the future is the state of anticipating a potential possibility, task, or engagement. It is related to the human propensity for caring and being concerned, which causes "being ahead of oneself" when thinking of a pending occurrence. Therefore, this concern for a potential occurrence also allows the future to exist in the present. The present becomes an experience, which is qualitative instead of quantitative. Heidegger seems to think this is the way that a linear relationship with time, or temporal existence, is broken or transcended.
We are not stuck in sequential time. We are able to remember the past and project into the future – we have a kind of random access to our representation of temporal existence; we can, in our thoughts, step out of (ecstasis) sequential time.

Modern-era philosophers have asked: is time real or unreal, is time happening all at once or is it a duration, is time tensed or tenseless, and is there a future to be? The tenseless, or B-theory, holds that any tensed terminology can be replaced with tenseless terminology; for example, "we will win the game" can be replaced with "we do win the game", removing the future tense. The tense, or A-theory, holds that our language has tense verbs for a reason and that the future cannot be determined. There is also the notion of imaginary time, due to Stephen Hawking, who says that space and imaginary time are finite but have no boundaries; imaginary time is neither real nor unreal, but something that is hard to visualize. Philosophers broadly agree that physical time exists outside of the human mind and is objective, while psychological time is mind-dependent and subjective.
Unreality
In 5th century BC Greece, Antiphon the Sophist, in a fragment preserved from his chief work On Truth, held that: "Time is not a reality (hypostasis), but a concept (noêma) or a measure (metron)." Parmenides went further, maintaining that time, motion, and change were illusions, leading to the paradoxes of his follower Zeno. Time as an illusion is also a common theme in Buddhist thought.

J. M. E. McTaggart's 1908 The Unreality of Time argues that, since every event has the characteristic of being both present and not present (i.e., future or past), time is a self-contradictory idea (see also The flow of time).
These arguments often center on what it means for something to be unreal. Modern physicists generally believe that time is as real as space – though others, such as Julian Barbour in his book The End of Time, argue that quantum equations of the universe take their true form when expressed in the timeless realm containing every possible now or momentary configuration of the universe, called "platonia" by Barbour.

A modern philosophical theory called presentism views the past and the future as human-mind interpretations of movement instead of real parts of time (or "dimensions") which coexist with the present. This theory rejects the existence of all direct interaction with the past or the future, holding only the present as tangible. This is one of the philosophical arguments against time travel. This contrasts with eternalism (all time: present, past and future, is real) and the growing block theory (the present and the past are real, but the future is not).
Physical definition
Until Einstein's reinterpretation of the physical concepts associated with time and space in 1907, time was considered to be the same everywhere in the universe, with all observers measuring the same time interval for any event.
Non-relativistic classical mechanics is based on this Newtonian idea of time.
Einstein, in his special theory of relativity,
postulated the constancy and finiteness of the speed of light for all observers. He showed that this postulate, together with a reasonable definition for what it means for two events to be simultaneous, requires that distances appear compressed and time intervals appear lengthened for events associated with objects in motion relative to an inertial observer.
The theory of special relativity finds a convenient formulation in Minkowski spacetime, a mathematical structure that combines three dimensions of space with a single dimension of time. In this formalism, distances in space can be measured by how long light takes to travel that distance, e.g., a light-year is a measure of distance, and a meter is now defined in terms of how far light travels in a certain amount of time. Two events in Minkowski spacetime are separated by an invariant interval, which can be either space-like, light-like, or time-like. Events that have a time-like separation cannot be simultaneous in any frame of reference; there must be a temporal component (and possibly a spatial one) to their separation. Events that have a space-like separation will be simultaneous in some frame of reference, and there is no frame of reference in which they do not have a spatial separation. Different observers may calculate different distances and different time intervals between two events, but the invariant interval between the events is independent of the observer (and his or her velocity).
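As a worked illustration (using the −+++ sign convention, one of several in common use), the invariant interval between two events with separations Δt, Δx, Δy, Δz is

    s^2 = -(c\,\Delta t)^2 + (\Delta x)^2 + (\Delta y)^2 + (\Delta z)^2

Time-like separation corresponds to s^2 < 0, light-like to s^2 = 0, and space-like to s^2 > 0; observers may disagree about the individual Δt and Δx, but all compute the same s^2.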
Classical mechanics
In non-relativistic classical mechanics, Newton's concept of "relative, apparent, and common time" can be used in the formulation of a prescription for the synchronization of clocks. Events seen by two different observers in motion relative to each other produce a mathematical concept of time that works sufficiently well for describing the everyday phenomena of most people's experience. In the late nineteenth century, physicists encountered problems with the classical understanding of time, in connection with the behavior of electricity and magnetism. Einstein resolved these problems by invoking a method of synchronizing clocks using the constant, finite speed of light as the maximum signal velocity. This led directly to the conclusion that observers in motion relative to one another measure different elapsed times for the same event.
Spacetime
Time has historically been closely related with space, the two together merging into spacetime in Einstein's special relativity and general relativity. According to these theories, the concept of time depends on the spatial reference frame of the observer, and human perception, as well as the measurement by instruments such as clocks, are different for observers in relative motion. For example, if a spaceship carrying a clock flies through space at (very nearly) the speed of light, its crew does not notice a change in the speed of time on board their vessel because everything traveling at the same speed slows down at the same rate (including the clock, the crew's thought processes, and the functions of their bodies). However, to a stationary observer watching the spaceship fly by, the spaceship appears flattened in the direction it is traveling and the clock on board the spaceship appears to move very slowly.
On the other hand, the crew on board the spaceship also perceives the observer as slowed down and flattened along the spaceship's direction of travel, because both are moving at very nearly the speed of light relative to each other. Because the outside universe appears flattened to the spaceship, the crew perceives themselves as quickly traveling between regions of space that (to the stationary observer) are many light years apart. This is reconciled by the fact that the crew's perception of time is different from the stationary observer's; what seems like seconds to the crew might be hundreds of years to the stationary observer. In either case, however, causality remains unchanged: the past is the set of events that can send light signals to an entity and the future is the set of events to which an entity can send light signals.
Dilation
Einstein showed in his thought experiments that people travelling at different speeds, while agreeing on cause and effect, measure different time separations between events, and can even observe different chronological orderings between non-causally related events. Though these effects are typically minute in the human experience, the effect becomes much more pronounced for objects moving at speeds approaching the speed of light. Subatomic particles exist for a well-known average fraction of a second in a lab that is relatively at rest, but when travelling close to the speed of light they are measured to travel farther and exist for much longer than when at rest. According to the special theory of relativity, in the high-speed particle's frame of reference, it exists, on average, for a standard amount of time known as its mean lifetime, and the distance it travels in that time is zero, because its velocity is zero. Relative to a frame of reference at rest, time seems to "slow down" for the particle. Relative to the high-speed particle, distances seem to shorten. Einstein showed how both temporal and spatial dimensions can be altered (or "warped") by high-speed motion.
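A standard textbook illustration of this effect (the numbers here are a worked example, not taken from this article): the dilation factor is

    \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad \Delta t_{\text{lab}} = \gamma\,\Delta\tau

For a muon with mean (proper) lifetime Δτ ≈ 2.2 μs travelling at v = 0.995c, γ ≈ 10, so the lab-frame lifetime is roughly 22 μs – long enough for many cosmic-ray muons to reach the ground despite their short proper lifetime.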
Einstein (The Meaning of Relativity): "Two events taking place at the points A and B of a system K are simultaneous if they appear at the same instant when observed from the middle point, M, of the interval AB. Time is then defined as the ensemble of the indications of similar clocks, at rest relative to K, which register the same simultaneously."
Einstein wrote in his book, Relativity, that simultaneity is also relative, i.e., two events that appear simultaneous to an observer in a particular inertial reference frame need not be judged as simultaneous by a second observer in a different inertial frame of reference.
Relativistic versus Newtonian
The animations visualise the different treatments of time in the Newtonian and the relativistic descriptions. At the heart of these differences are the Galilean and Lorentz transformations applicable in the Newtonian and relativistic theories, respectively.
In the figures, the vertical direction indicates time. The horizontal direction indicates distance (only one spatial dimension is taken into account), and the thick dashed curve is the spacetime trajectory ("world line") of the observer. The small dots indicate specific (past and future) events in spacetime.
The slope of the world line (deviation from being vertical) gives the relative velocity to the observer. Note how in both pictures the view of spacetime changes when the observer accelerates.
In the Newtonian description these changes are such that time is absolute: the movements of the observer do not influence whether an event occurs in the now (i.e., whether an event passes the horizontal line through the observer).
However, in the relativistic description the observability of events is absolute: the movements of the observer do not influence whether an event passes the "light cone" of the observer. Notice that with the change from a Newtonian to a relativistic description, the concept of absolute time is no longer applicable: events move up and down in the figure depending on the acceleration of the observer.
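For reference, the two transformations behind these pictures can be stated directly, for a frame moving at velocity v along the x-axis (a standard formulation, not specific to the figures described above):

    x' = x - vt, \quad t' = t \qquad \text{(Galilean)}

    x' = \gamma (x - vt), \quad t' = \gamma \left(t - \frac{vx}{c^2}\right), \quad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}} \qquad \text{(Lorentz)}

The Galilean form leaves t untouched (absolute time), while the Lorentz form mixes x and t, which is why simultaneity becomes observer-dependent in the relativistic description.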
Arrow
Time appears to have a direction – the past lies behind, fixed and immutable, while the future lies ahead and is not necessarily fixed. Yet for the most part, the laws of physics do not specify an arrow of time, and allow any process to proceed both forward and in reverse. This is generally a consequence of time being modelled by a parameter in the system being analysed, where there is no "proper time": the direction of the arrow of time is sometimes arbitrary. Examples of this include the cosmological arrow of time, which points away from the Big Bang, CPT symmetry, and the radiative arrow of time, caused by light only travelling forwards in time (see light cone). In particle physics, the violation of CP symmetry implies that there should be a small counterbalancing time asymmetry to preserve CPT symmetry as stated above. The standard description of measurement in quantum mechanics is also time asymmetric (see Measurement in quantum mechanics). The second law of thermodynamics states that entropy must increase over time (see Entropy). This can be in either direction – Brian Greene theorizes that, according to the equations, the change in entropy occurs symmetrically whether going forward or backward in time. So entropy tends to increase in either direction, and our current low-entropy universe is a statistical aberration, in a similar manner as tossing a coin often enough that eventually heads will result ten times in a row. However, this theory is not supported empirically in local experiment.
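To make the coin analogy concrete (an illustrative calculation, not taken from the source):

    P(\text{ten heads in a row}) = \left(\tfrac{1}{2}\right)^{10} = \tfrac{1}{1024} \approx 0.1\%

Such a run is unlikely in any single set of ten tosses, yet nearly certain to occur somewhere in a long enough sequence – the sense in which a low-entropy configuration can be a statistical aberration.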
Quantization
Time quantization is a hypothetical concept. In the modern established physical theories (the Standard Model of Particles and Interactions and General Relativity) time is not quantized.
Planck time (~ 5.4 × 10−44 seconds) is the unit of time in the system of natural units known as Planck units. Current established physical theories are believed to fail at this time scale, and many physicists expect that the Planck time might be the smallest unit of time that could ever be measured, even in principle. Tentative physical theories that describe this time scale exist; see for instance loop quantum gravity.
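The value quoted above follows from combining the fundamental constants:

    t_P = \sqrt{\frac{\hbar G}{c^5}} \approx 5.39 \times 10^{-44}\ \text{s}

where ħ is the reduced Planck constant, G the gravitational constant, and c the speed of light.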
Travel
Time travel is the concept of moving backwards or forwards to different points in time, in a manner analogous to moving through space, and different from the normal "flow" of time to an earthbound observer. In this view, all points in time (including future times) "persist" in some way. Time travel has been a plot device in fiction since the 19th century. Travelling backwards or forwards in time has never been verified as a process, and doing so presents many theoretical problems and logical contradictions which to date have not been overcome. Any technological device, whether fictional or hypothetical, that is used to achieve time travel is known as a time machine.
A central problem with time travel to the past is the violation of causality; should an effect precede its cause, it would give rise to the possibility of a temporal paradox. Some interpretations of time travel resolve this by accepting the possibility of travel between branch points, parallel realities, or universes.
Another solution to the problem of causality-based temporal paradoxes is that such paradoxes cannot arise simply because they have not arisen. As illustrated in numerous works of fiction, free will either ceases to exist in the past or the outcomes of such decisions are predetermined. As such, it would not be possible to enact the grandfather paradox because it is a historical fact that one's grandfather was not killed before his child (one's parent) was conceived. This view does not simply hold that history is an unchangeable constant, but that any change made by a hypothetical future time traveller would already have happened in his or her past, resulting in the reality that the traveller moves from. More elaboration on this view can be found in the Novikov self-consistency principle.
Perception
The specious present refers to the time duration wherein one's perceptions are considered to be in the present. The experienced present is said to be specious in that, unlike the objective present, it is an interval and not a durationless instant. The term specious present was first introduced by the psychologist E.R. Clay, and later developed by William James.
Biopsychology
The brain's judgment of time is known to be a highly distributed system, including at least the cerebral cortex, cerebellum and basal ganglia as its components. One particular component, the suprachiasmatic nuclei, is responsible for the circadian (or daily) rhythm, while other cell clusters appear capable of shorter-range (ultradian) timekeeping.
Psychoactive drugs can impair the judgment of time. Stimulants can lead both humans and rats to overestimate time intervals, while depressants can have the opposite effect. The level of activity in the brain of neurotransmitters such as dopamine and norepinephrine may be the reason for this. Such chemicals will either excite or inhibit the firing of neurons in the brain, with a greater firing rate allowing the brain to register the occurrence of more events within a given interval (speed up time) and a decreased firing rate reducing the brain's capacity to distinguish events occurring within a given interval (slow down time).

Mental chronometry is the use of response time in perceptual-motor tasks to infer the content, duration, and temporal sequencing of cognitive operations.
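Real chronometry experiments use dedicated hardware, but the core measurement is simple. Below is a minimal Python sketch of a simple reaction-time trial; the task design and function name are invented for illustration only:

import random
import time

def reaction_time_trial():
    # Unpredictable foreperiod so the response cannot be anticipated.
    time.sleep(random.uniform(1.0, 3.0))
    start = time.perf_counter()
    input("Press Enter now!")  # the perceptual-motor response
    return time.perf_counter() - start

if __name__ == "__main__":
    trials = [reaction_time_trial() for _ in range(5)]
    print(f"mean response time: {sum(trials) / len(trials):.3f} s")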
Early childhood education
Children's expanding cognitive abilities allow them to understand time more clearly. Two- and three-year-olds' understanding of time is mainly limited to "now and not now". Five- and six-year-olds can grasp the ideas of past, present, and future. Seven- to ten-year-olds can use clocks and calendars.
Alterations
In addition to psychoactive drugs, judgments of time can be altered by temporal illusions (like the kappa effect), age, and hypnosis. The sense of time is impaired in some people with neurological diseases such as Parkinson's disease and attention deficit disorder.
Psychologists assert that time seems to go faster with age, but the literature on this age-related perception of time remains controversial. Those who support this notion argue that young people, having more excitatory neurotransmitters, are able to cope with faster external events.
Spatial conceptualization
Although time is regarded as an abstract concept, there is increasing evidence that time is conceptualized in the mind in terms of space. That is, instead of thinking about time in a general, abstract way, humans think about time in a spatial way and mentally organize it as such. Using space to think about time allows humans to mentally organize temporal events in a specific way.
This spatial representation of time is often represented in the mind as a Mental Time Line (MTL). Using space to think about time allows humans to mentally organize temporal order. These origins are shaped by many environmental factors – for example, literacy appears to play a large role in the different types of MTLs, as reading/writing direction provides an everyday temporal orientation that differs from culture to culture. In western cultures, the MTL may unfold rightward (with the past on the left and the future on the right) since people read and write from left to right. Western calendars also continue this trend by placing the past on the left with the future progressing toward the right. Conversely, Arabic, Farsi, Urdu and Israeli-Hebrew speakers read from right to left, and their MTLs unfold leftward (past on the right with future on the left), and evidence suggests these speakers organize time events in their minds like this as well.

This linguistic evidence that abstract concepts are based in spatial concepts also reveals that the way humans mentally organize time events varies across cultures – that is, a certain specific mental organization system is not universal. So, although Western cultures typically associate past events with the left and future events with the right according to a certain MTL, this kind of horizontal, egocentric MTL is not the spatial organization of all cultures. Although most developed nations use an egocentric spatial system, there is recent evidence that some cultures use an allocentric spatialization, often based on environmental features.

A recent study of the indigenous Yupno people of Papua New Guinea focused on the directional gestures used when individuals used time-related words. When speaking of the past (such as "last year" or "past times"), individuals gestured downhill, where the river of the valley flowed into the ocean. When speaking of the future, they gestured uphill, toward the source of the river. This was common regardless of which direction the person faced, revealing that the Yupno people may use an allocentric MTL, in which time flows uphill.

A similar study of the Pormpuraawans, an aboriginal group in Australia, revealed a similar distinction: when asked to organize photos of a man aging "in order," individuals consistently placed the youngest photos to the east and the oldest photos to the west, regardless of which direction they faced. This directly clashed with an American group that consistently organized the photos from left to right. Therefore, this group also appears to have an allocentric MTL, but based on the cardinal directions instead of geographical features.

The wide array of distinctions in the way different groups think about time leads to the broader question that different groups may also think about other abstract concepts in different ways, such as causality and number.
Use
In sociology and anthropology, time discipline is the general name given to social and economic rules, conventions, customs, and expectations governing the measurement of time, the social currency and awareness of time measurements, and people's expectations concerning the observance of these customs by others. Arlie Russell Hochschild and Norbert Elias have written on the use of time from a sociological perspective.
The use of time is an important issue in understanding human behavior, education, and travel behavior. Time-use research is a developing field of study. The question concerns how time is allocated across a number of activities (such as time spent at home, at work, shopping, etc.). Time use changes with technology, as the television or the Internet created new opportunities to use time in different ways. However, some aspects of time use are relatively stable over long periods of time, such as the amount of time spent traveling to work, which, despite major changes in transport, has been observed to be about 20–30 minutes one-way for a large number of cities over a long period.
Time management is the organization of tasks or events by first estimating how much time a task requires and when it must be completed, and adjusting events that would interfere with its completion so it is done in the appropriate amount of time. Calendars and day planners are common examples of time management tools.
Sequence of events
A sequence of events, or series of events, is a sequence of items, facts, events, actions, changes, or procedural steps, arranged in time order (chronological order), often with causality relationships among the items.
Because of causality, cause precedes effect, or cause and effect may appear together in a single item, but effect never precedes cause. A sequence of events can be presented in text, tables, charts, or timelines. The description of the items or events may include a timestamp. A sequence of events that includes the time along with place or location information to describe a sequential path may be referred to as a world line.
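As a rough illustration of these properties in code (the event names, timestamps, and class design below are invented for this sketch; this is not a standard library for event sequences):

from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Event:
    timestamp: datetime
    description: str
    cause: Optional["Event"] = None  # optional causal predecessor

cause_event = Event(datetime(2030, 1, 1, 12, 0), "hypothetical cause event")
effect_event = Event(datetime(2030, 1, 1, 12, 5), "hypothetical effect event",
                     cause=cause_event)
events = [effect_event, cause_event]

# Present the sequence in chronological (time) order.
for e in sorted(events, key=lambda e: e.timestamp):
    print(e.timestamp.isoformat(), e.description)

# Causality constraint: an effect never precedes its cause.
for e in events:
    assert e.cause is None or e.timestamp >= e.cause.timestamp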
Uses of a sequence of events include stories,
historical events (chronology), directions and steps in procedures,
and timetables for scheduling activities. A sequence of events may also be used to help describe processes in science, technology, and medicine. A sequence of events may be focused on past events (e.g., stories, history, chronology), on future events that must be in a predetermined order (e.g., plans, schedules, procedures, timetables), or focused on the observation of past events with the expectation that the events will occur in the future (e.g., processes, projections). The use of a sequence of events occurs in fields as diverse as machines (cam timer), documentaries (Seconds From Disaster), law (choice of law), finance (directional-change intrinsic time), computer simulation (discrete event simulation), and electric power transmission
(sequence of events recorder). A specific example of a sequence of events is the timeline of the Fukushima Daiichi nuclear disaster.
See also
List of UTC timing centers
Time metrology
Organizations
Antiquarian Horological Society – AHS (United Kingdom)
Chronometrophilia (Switzerland)
Deutsche Gesellschaft für Chronometrie – DGC (Germany)
National Association of Watch and Clock Collectors – NAWCC (United States)
References
Further reading
External links
Different systems of measuring time
Time on In Our Time at the BBC
Time in the Internet Encyclopedia of Philosophy, by Bradley Dowden.
Le Poidevin, Robin (Winter 2004). "The Experience and Perception of Time". In Edward N. Zalta (ed.). The Stanford Encyclopedia of Philosophy. Retrieved 9 April 2011. |
Tinnitus | Tinnitus is the perception of sound when no corresponding external sound is present. Nearly everyone will experience a faint "normal tinnitus" in a completely quiet room, but it is only of concern if it is bothersome, interferes with normal hearing, or is correlated with other problems. While often described as a ringing, it may also sound like a clicking, buzzing, hiss, or roaring. The sound may be soft or loud, low or high pitched, and often appears to be coming from one or both ears or from the head itself. In some people, the sound may interfere with concentration, and in some cases it is associated with anxiety and depression. Tinnitus is usually associated with a degree of hearing loss and with decreased comprehension of speech in noisy environments. It is common, affecting about 10–15% of people. Most, however, tolerate it well, and it is a significant problem in only 1–2% of all people. It can trigger a fight-or-flight response, as the brain may perceive it as dangerous and important. The word tinnitus comes from the Latin tinnire, which means "to ring".

Rather than a disease, tinnitus is a symptom that may result from various underlying causes and may be generated at any level of the auditory system and structures beyond that system. The most common causes are hearing damage, noise-induced hearing loss or age-related hearing loss, known as presbycusis. Other causes include ear infections, disease of the heart or blood vessels, Ménière's disease, brain tumors, acoustic neuromas (tumors on the auditory nerves of the ear), migraines, temporomandibular joint disorders, exposure to certain medications, a previous head injury, and earwax; tinnitus can also suddenly emerge during a period of emotional stress. It is more common in those with depression.

The diagnosis of tinnitus is usually based on the person's description. It is commonly supported by an audiogram and an otolaryngological and neurological examination. The degree of interference with a person's life may be quantified with questionnaires. If certain problems are found, medical imaging, such as magnetic resonance imaging (MRI), may be performed. Other tests are suitable when tinnitus occurs with the same rhythm as the heartbeat. Rarely, the sound may be heard by someone else using a stethoscope, in which case it is known as objective tinnitus. Occasionally, spontaneous otoacoustic emissions, sounds produced normally by the inner ear, may result in tinnitus.

Prevention involves avoiding exposure to loud noise for long periods or chronically. If there is an underlying cause, treating it may lead to improvements. Otherwise, typically, management involves psychoeducation or counseling, such as talk therapy. Sound generators or hearing aids may help. No medication directly targets tinnitus.
Signs and symptoms
Tinnitus may be perceived in one or both ears, or more centrally in the head. The noise commonly occurs inside a person's head or ear(s) in the absence of auditory stimulation, similar to ringing, although in some people, it is a high-pitched whining or electric buzzing, among numerous other sounds. Tinnitus may be intermittent or continuous. In some individuals, the intensity may be changed by shoulder, neck, head, tongue, jaw, or eye movements.

The specific type of tinnitus called objective tinnitus is characterized by hearing the sounds of one's own muscle contractions or pulse, which is typically a result of sounds that have been created by the movement of jaw muscles or sounds related to blood flow in the neck or face.
Course
Due to variations in study designs, data on the course of tinnitus showed few consistent results. Generally, the prevalence increased with age in adults, whereas the ratings of annoyance decreased with duration.
Psychological effects
Although an annoying condition to which most people adapt, persistent tinnitus may cause anxiety and depression in some people. Tinnitus annoyance is more strongly associated with the psychological condition of the person than the loudness or frequency range. Psychological problems such as depression, anxiety, sleep disturbances, and concentration difficulties are common in those with strongly annoying tinnitus. 45% of people with tinnitus have an anxiety disorder at some time in their life.

Psychological research has focused on the tinnitus distress reaction to account for differences in tinnitus severity. The research indicates that conditioning at the initial perception of tinnitus linked it with negative emotions, such as fear and anxiety.
Types
A common tinnitus classification is into "subjective and objective tinnitus". Tinnitus is usually subjective, meaning that the sounds the person hears are not detectable by means currently available to physicians and hearing technicians. Subjective tinnitus has also been called "tinnitus aurium", "non-auditory" or "non-vibratory" tinnitus. In rare cases, tinnitus can be heard by someone else using a stethoscope. Even more rarely, in some cases it can be measured as a spontaneous otoacoustic emission (SOAE) in the ear canal. This is classified as objective tinnitus, also called "pseudo-tinnitus" or "vibratory" tinnitus.
Subjective tinnitus
Subjective tinnitus is the most frequent type of tinnitus. It may have many possible causes, but most commonly it results from hearing loss. When the tinnitus is caused by disorders of the inner ear or auditory nerve it can be called otic (from the Greek word for ear). These otological or neurological conditions include those triggered by infections, drugs, or trauma. A frequent cause is traumatic noise exposure that damages hair cells in the inner ear.

When there does not seem to be a connection with a disorder of the inner ear or auditory nerve, the tinnitus can be called non-otic (i.e. not otic). In some 30% of tinnitus cases, the tinnitus is influenced by the somatosensory system; for instance, people can increase or decrease their tinnitus by moving their face, head, or neck. This type is called somatic or craniocervical tinnitus, since it is only head or neck movements that have an effect.

There is a growing body of evidence suggesting that some tinnitus is a consequence of neuroplastic alterations in the central auditory pathway. These alterations are assumed to result from a disturbed sensory input, caused by hearing loss. Hearing loss could indeed cause a homeostatic response of neurons in the central auditory system, and therefore cause tinnitus.
Hearing loss
The most common cause of tinnitus is hearing loss. Hearing loss may have many different causes, but among those with tinnitus, the major cause is cochlear injury.

Ototoxic drugs also may cause subjective tinnitus, as they may cause hearing loss, or increase the damage done by exposure to loud noise. Those damages may occur even at doses that are not considered ototoxic. More than 260 medications have been reported to cause tinnitus as a side effect. Tinnitus can also occur due to the discontinuation of therapeutic doses of benzodiazepines. It can sometimes be a protracted symptom of benzodiazepine withdrawal and may persist for many months. Medications such as bupropion may also result in tinnitus. In many cases, however, no underlying cause can be identified.
Associated factors
Factors associated with tinnitus include:
ear problems and hearing loss:
conductive hearing loss
acoustic shock
loud noise or music
middle ear effusion
otitis
otosclerosis
Eustachian tube dysfunction
sensorineural hearing loss
excessive or loud noise; e.g. acoustic trauma
presbycusis (age-associated hearing loss)
Ménières disease
endolymphatic hydrops
superior canal dehiscence
acoustic neuroma
mercury or lead poisoning
ototoxic medications
neurologic disorders:
Arnold–Chiari malformation
multiple sclerosis
head injury
giant cell arteritis
temporomandibular joint dysfunction
metabolic disorders:
vitamin B12 deficiency
iron deficiency anemia
psychiatric disorders
depression
anxiety disorders
other factors:
vasculitis
Some psychedelic drugs can produce temporary tinnitus-like symptoms as a side effect
5-MeO-DET
diisopropyltryptamine (DiPT)
benzodiazepine withdrawal
intracranial hypertension or hypotension caused by, for example, encephalitis or a cerebrospinal fluid leak
Objective tinnitus
Objective tinnitus can be detected by other people and is sometimes caused by an involuntary twitching of a muscle or a group of muscles (myoclonus) or by a vascular condition. In some cases, tinnitus is generated by muscle spasms around the middle ear.

Spontaneous otoacoustic emissions (SOAEs), which are faint high-frequency tones that are produced in the inner ear and can be measured in the ear canal with a sensitive microphone, may also cause tinnitus. About 8% of those with SOAEs and tinnitus have SOAE-linked tinnitus, while the percentage of all cases of tinnitus caused by SOAEs is estimated at 4%.
Pediatric tinnitus
Children may be subject to pulsatile or continuous tinnitus. Pulsatile tinnitus typically involves anomalies and variants of vascular structures, while continuous tinnitus typically involves the middle or inner ear structures. CT scans are able to check the integrity of these structures, and MR scans can evaluate the nerves and potential masses or malformations. Early diagnosis can prevent long-term impairments to development; imaging and categorizing the tinnitus as pulsatile or nonpulsatile helps create an efficient diagnosis.
Pulsatile tinnitus
Some people experience a sound that beats in time with their pulse, known as pulsatile tinnitus or vascular tinnitus. Pulsatile tinnitus is usually objective in nature, resulting from altered blood flow, increased blood turbulence near the ear, such as from atherosclerosis or venous hum, but it can also arise as a subjective phenomenon from an increased awareness of blood flow in the ear. Rarely, pulsatile tinnitus may be a symptom of potentially life-threatening conditions such as carotid artery aneurysm or carotid artery dissection. Pulsatile tinnitus may also indicate vasculitis, or more specifically, giant cell arteritis. Pulsatile tinnitus may also be an indication of idiopathic intracranial hypertension. Pulsatile tinnitus can be a symptom of intracranial vascular abnormalities and should be evaluated for irregular noises of blood flow (bruits).
Pathophysiology
It may be caused by increased neural activity in the auditory brainstem, where the brain processes sounds, causing some auditory nerve cells to become over-excited. The basis of this theory is that many with tinnitus also have hearing loss.

Three 2016 reviews emphasized the large range and possible combinations of pathologies involved in tinnitus, which in turn result in a great variety of symptoms demanding specifically adapted therapies.
Diagnosis
The diagnostic approach is based on a history of the condition and an examination of the head, neck, and neurological system. Typically an audiogram is done, and occasionally medical imaging or electronystagmography. Treatable conditions may include middle ear infection, acoustic neuroma, concussion, and otosclerosis.

Evaluation of tinnitus can include a hearing test (audiogram), measurement of acoustic parameters of the tinnitus like pitch and loudness, and psychological assessment of comorbid conditions like depression, anxiety, and stress that are associated with severity of the tinnitus.

One definition of tinnitus, as compared to normal ear noise experience, is lasting five minutes at least twice a week. However, people with tinnitus often experience the noise more frequently than this. Tinnitus can be present constantly or intermittently. Some people with constant tinnitus might not be aware of it all the time, but only, for example, during the night when there is less environmental noise to mask it. Chronic tinnitus can be defined as tinnitus with a duration of six months or more.
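A minimal sketch turning those two definitional thresholds into code (the thresholds come from the definitions above; the function itself is invented for illustration and is not a diagnostic tool):

def classify_ear_noise(minutes_per_episode, episodes_per_week, months_present):
    # >= 5 minutes, at least twice a week -> tinnitus rather than
    # normal ear noise; six months or more -> chronic.
    if minutes_per_episode >= 5 and episodes_per_week >= 2:
        return "chronic tinnitus" if months_present >= 6 else "tinnitus"
    return "normal ear noise experience"

print(classify_ear_noise(10, 3, 8))  # -> chronic tinnitus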
Audiology
Since most persons with tinnitus also have hearing loss, a pure tone hearing test resulting in an audiogram may help diagnose a cause, though some persons with tinnitus do not have hearing loss. An audiogram may also facilitate fitting of a hearing aid in those cases where hearing loss is significant. The pitch of tinnitus is often in the range of the hearing loss.
Psychoacoustics
Acoustic qualification of tinnitus will include measurement of several acoustic parameters like frequency in cases of monotone tinnitus or frequency range and bandwidth in cases of narrow band noise tinnitus, loudness in dB above hearing threshold at the indicated frequency, mixing-point, and minimum masking level. In most cases, tinnitus pitch or frequency range is between 5 kHz and 10 kHz, and loudness between 5 and 15 dB above the hearing threshold.

Another relevant parameter of tinnitus is residual inhibition, the temporary suppression or disappearance of tinnitus following a period of masking. The degree of residual inhibition may indicate how effective tinnitus maskers would be as a treatment modality.

An assessment of hyperacusis, a frequent accompaniment of tinnitus, may also be made. Hyperacusis is related to negative reactions to sound and can take many forms. One associated parameter that can be measured is Loudness Discomfort Level (LDL) in dB, the subjective level of acute discomfort at specified frequencies over the frequency range of hearing. This defines a dynamic range between the hearing threshold at that frequency and the loudness discomfort level. A compressed dynamic range over a particular frequency range can be associated with hyperacusis. Normal hearing threshold is generally defined as 0–20 decibels (dB). Normal loudness discomfort levels are 85–90+ dB, with some authorities citing 100 dB. A dynamic range of 55 dB or less is indicative of hyperacusis.
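The dynamic-range arithmetic above is simple enough to express directly; a sketch under the figures quoted in this section (function names are invented, and this is illustrative only, not a clinical tool):

def dynamic_range_db(ldl_db, threshold_db):
    # Dynamic range at one frequency: loudness discomfort level
    # minus hearing threshold, both in dB.
    return ldl_db - threshold_db

def suggests_hyperacusis(ldl_db, threshold_db):
    # Per the figures above, 55 dB or less indicates hyperacusis.
    return dynamic_range_db(ldl_db, threshold_db) <= 55

print(suggests_hyperacusis(ldl_db=85, threshold_db=15))  # False: 70 dB range
print(suggests_hyperacusis(ldl_db=60, threshold_db=10))  # True: 50 dB range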
Severity
The condition is often rated on a scale from "slight" to "severe" according to the effects it has, such as interference with sleep, quiet activities and normal daily activities.

Assessment of psychological processes related to tinnitus involves measurement of tinnitus severity and distress (i.e., nature and extent of tinnitus-related problems), measured subjectively by validated self-report tinnitus questionnaires. These questionnaires measure the degree of psychological distress and handicap associated with tinnitus, including effects on hearing, lifestyle, health and emotional functioning. A broader assessment of general functioning, such as levels of anxiety, depression, stress, life stressors and sleep difficulties, is also important in the assessment of tinnitus due to higher risk of negative well-being across these areas, which may be affected by or exacerbate the tinnitus symptoms for the individual. Overall, current assessment measures are aimed to identify individual levels of distress and interference, coping responses and perceptions of tinnitus to inform treatment and monitor progress. However, wide variability, inconsistencies and lack of consensus regarding assessment methodology are evidenced in the literature, limiting comparison of treatment effectiveness. Developed to guide diagnosis or classify severity, most tinnitus questionnaires have been shown to be treatment-sensitive outcome measures.
Pulsatile tinnitus
If the examination reveals a bruit (sound due to turbulent blood flow), imaging studies such as transcranial doppler (TCD) or magnetic resonance angiography (MRA) should be performed.
Differential diagnosis
Other potential sources of the sounds normally associated with tinnitus should be ruled out. For instance, two recognized sources of high-pitched sounds might be electromagnetic fields common in modern wiring and various sound signal transmissions. A common and often misdiagnosed condition that mimics tinnitus is radio frequency (RF) hearing, in which subjects have been tested and found to hear high-pitched transmission frequencies that sound similar to tinnitus.
Prevention
Prolonged exposure to loud sound or noise levels can lead to tinnitus. Custom-made ear plugs or other measures can help with prevention. Employers may use hearing loss prevention programs to help educate and prevent dangerous levels of exposure to noise. Government organizations set regulations to ensure that employees, if following the protocol, have minimal risk of permanent damage to their hearing.

Certain groups are advised to wear ear plugs when working or riding to avoid the risk of tinnitus caused by overexposure to loud noises, such as wind noise for motorcycle riders. Occupationally this includes musicians, DJs, and agricultural and construction workers, as they are at greater risk compared to the general population.
Several medicines have ototoxic effects, and can have a cumulative effect that can increase the damage done by noise. If ototoxic medications must be administered, close attention by the physician to prescription details, such as dose and dosage interval, can reduce the damage done.
Management
If a specific underlying cause is determined, treating it may lead to improvements. Otherwise, the primary treatment for tinnitus is talk therapy, sound therapy, or hearing aids. There are no effective drugs that treat tinnitus.
Psychological
The best supported treatment for tinnitus is a type of counseling called cognitive behavioral therapy (CBT) which can be delivered via the internet or in person. It decreases the amount of stress those with tinnitus feel. These benefits appear to be independent of any effect on depression or anxiety in an individual. Acceptance and commitment therapy (ACT) also shows promise in the treatment of tinnitus. Relaxation techniques may also be useful. A clinical protocol called Progressive Tinnitus Management for treatment of tinnitus has been developed by the United States Department of Veterans Affairs.
Sound-based interventions
The use of sound therapy by either hearing aids or tinnitus maskers may help the brain ignore the specific tinnitus frequency. Whilst these methods are poorly supported by evidence, there are no negative effects. There are several approaches to tinnitus sound therapy. The first is sound modification to compensate for the individual's hearing loss. The second is signal spectrum notching to eliminate energy close to the tinnitus frequency. There is some tentative evidence supporting tinnitus retraining therapy, which is aimed at reducing tinnitus-related neuronal activity. There are preliminary data on alternative tinnitus treatments using mobile applications, including various methods: masking, sound therapy, relaxing exercises and others. These applications can work as a separate device or as a hearing aid control system.
Medications
As of 2018 there were no medications effective for idiopathic tinnitus. There is not enough evidence to determine if antidepressants or acamprosate are useful. There is no high-quality evidence to support the use of benzodiazepines for tinnitus. Usefulness of melatonin, as of 2015, is unclear. It is unclear if anticonvulsants are useful for treating tinnitus. Steroid injections into the middle ear also do not seem to be effective. There is no evidence to suggest that the use of betahistine to treat tinnitus is effective.

Botulinum toxin injection has been tried with some success in some of the rare cases of objective tinnitus from a palatal tremor.

Caroverine is used in a few countries to treat tinnitus. The evidence for its usefulness is very weak.
Neuromodulation
In 2020, information from recent clinical trials indicated that bimodal neuromodulation may be a promising treatment for reducing the symptoms of tinnitus. It is a noninvasive technique that involves applying an electrical stimulus to the tongue while also administering sounds. Equipment associated with the treatments is available through physicians. Studies with it and similar devices continue in several research centers.

There is some evidence supporting neuromodulation techniques such as transcranial magnetic stimulation, transcranial direct current stimulation, and neurofeedback. However, the effects in terms of tinnitus relief are still under debate.
Alternative medicine
Ginkgo biloba does not appear to be effective. The American Academy of Otolaryngology recommends against taking melatonin or zinc supplements to relieve symptoms of tinnitus, and reported that evidence for the efficacy of many dietary supplements—lipoflavonoids, garlic, traditional Chinese/Korean herbal medicine, honeybee larvae and various other vitamins and minerals, as well as homeopathic preparations—did not exist. A 2016 Cochrane Review also concluded that evidence was not sufficient to support taking zinc supplements to reduce symptoms associated with tinnitus.
Prognosis
While there is no cure, most people with tinnitus get used to it over time; for a minority, it remains a significant problem.
Epidemiology
Adults
Tinnitus affects 10–15% of people. About a third of North Americans over 55 experience tinnitus. Tinnitus affects one third of adults at some time in their lives, whereas ten to fifteen percent are disturbed enough to seek medical evaluation.
70 million people in Europe are estimated to have tinnitus.
Children
Tinnitus is commonly thought of as a symptom of adulthood, and is often overlooked in children. Children with hearing loss have a high incidence of pediatric tinnitus, even though they do not express the condition or its effect on their lives. Children do not generally report tinnitus spontaneously and their complaints may not be taken seriously. Among those children who do complain of tinnitus, there is an increased likelihood of associated otological or neurological pathology such as migraine, juvenile Ménière's disease or chronic suppurative otitis media. Its reported prevalence varies from 12% to 36% in children with normal hearing thresholds and up to 66% in children with a hearing loss, and approximately 3–10% of children have been reported to be troubled by tinnitus.
See also
References
External links
Tinnitus at Curlie
Baguley, David; Andersson, Gerhard; McFerran, Don; McKenna, Laurence (2013) [2004]. Tinnitus: A Multidisciplinary Approach (2nd ed.). Indianapolis, IN: Wiley-Blackwell. ISBN 978-1-4051-9989-6. LCCN 2012032714. OCLC 712915603.
Langguth, B; Hajak, G; Kleinjung, T; Cacace, A; Møller, AR, eds. (2007). Tinnitus: pathophysiology and treatment. Progress in brain research no. 166 (1st ed.). Amsterdam; Boston: Elsevier. ISBN 978-0-444-53167-4. LCCN 2012471552. OCLC 648331153. Retrieved 5 November 2012. Alt URL
Møller, Aage R; Langguth, Berthold; Ridder, Dirk; et al., eds. (2011). Textbook of Tinnitus. New York: Springer. doi:10.1007/978-1-60761-145-5. ISBN 978-1-60761-144-8. LCCN 2010934377. OCLC 695388693, 771366370, 724696022. (subscription required) |
Streptococcus | Streptococcus is a genus of gram-positive coccus (plural cocci) or spherical bacteria that belongs to the family Streptococcaceae, within the order Lactobacillales (lactic acid bacteria), in the phylum Bacillota. Cell division in streptococci occurs along a single axis, so as they grow, they tend to form pairs or chains that may appear bent or twisted. This differs from staphylococci, which divide along multiple axes, thereby generating irregular, grape-like clusters of cells. Most streptococci are oxidase-negative and catalase-negative, and many are facultative anaerobes (capable of growth both aerobically and anaerobically).
The term was coined in 1877 by Viennese surgeon Albert Theodor Billroth (1829–1894), by combining the prefix "strepto-" (from Ancient Greek: στρεπτός, romanized: streptós, lit. easily twisted, pliant), together with the suffix "-coccus" (from Modern Latin: coccus, from Ancient Greek: κόκκος, romanized: kókkos, lit. grain, seed, berry.) In 1984, many bacteria formerly grouped in the genus Streptococcus were separated out into the genera Enterococcus and Lactococcus. Currently, over 50 species are recognised in this genus. This genus has been found to be part of the salivary microbiome.
Pathogenesis and classification
In addition to streptococcal pharyngitis (strep throat), certain Streptococcus species are responsible for many cases of pink eye, meningitis, bacterial pneumonia, endocarditis, erysipelas, and necrotizing fasciitis (the flesh-eating bacterial infections). However, many streptococcal species are not pathogenic, and form part of the commensal human microbiota of the mouth, skin, intestine, and upper respiratory tract. Streptococci are also a necessary ingredient in producing Emmentaler ("Swiss") cheese.

Species of Streptococcus are classified based on their hemolytic properties. Alpha-hemolytic species cause oxidation of iron in hemoglobin molecules within red blood cells, giving it a greenish color on blood agar. Beta-hemolytic species cause complete rupture of red blood cells. On blood agar, this appears as wide areas clear of blood cells surrounding bacterial colonies. Gamma-hemolytic species cause no hemolysis.

Beta-hemolytic streptococci are further classified by Lancefield grouping, a serotype classification (that is, describing specific carbohydrates present on the bacterial cell wall). The 21 described serotypes are named Lancefield groups A to W (excluding I and J). This system of classification was developed by Rebecca Lancefield, a scientist at Rockefeller University.

In the medical setting, the most important groups are the alpha-hemolytic streptococci S. pneumoniae and Streptococcus viridans group, and the beta-hemolytic streptococci of Lancefield groups A and B (also known as “group A strep” and “group B strep”).
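The hemolysis scheme above reduces to a small lookup table; a sketch summarizing the classifications described in this section (the data structure is invented for illustration, the facts are from the text, and the examples are drawn from the sections that follow):

HEMOLYSIS_TYPES = {
    "alpha": ("partial lysis", "greenish agar",
              ["S. pneumoniae", "viridans group"]),
    "beta": ("complete lysis", "clear zone on agar",
             ["S. pyogenes", "S. agalactiae"]),
    "gamma": ("no hemolysis", "agar unchanged", []),
}

for name, (lysis, appearance, examples) in HEMOLYSIS_TYPES.items():
    ex = ", ".join(examples) or "n/a"
    print(f"{name}-hemolytic: {lysis}; {appearance}; e.g. {ex}")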
Table: Medically relevant streptococci (not all are alpha-hemolytic)
Alpha-hemolytic
When alpha-hemolysis (α-hemolysis) is present, the agar under the colony will appear dark and greenish due to the conversion of hemoglobin to green biliverdin. Streptococcus pneumoniae and a group of oral streptococci (Streptococcus viridans or viridans streptococci) display alpha-hemolysis.
Alpha-hemolysis is also termed incomplete hemolysis or partial hemolysis because the cell membranes of the red blood cells are left intact. This is also sometimes called green hemolysis because of the color change in the agar.
Pneumococci
S. pneumoniae (sometimes called pneumococcus) is a leading cause of bacterial pneumonia and an occasional cause of otitis media, sinusitis, meningitis, and peritonitis. Inflammation is thought to be the major mechanism by which pneumococci cause disease, hence the tendency of diagnoses associated with them to involve inflammation. They possess no Lancefield antigens.
The viridans group: alpha-hemolytic
The viridans streptococci are a large group of commensal bacteria that are either alpha-hemolytic, producing a green coloration on blood agar plates (hence the name "viridans", from Latin vĭrĭdis, green), or nonhemolytic. They possess no Lancefield antigens.
Beta-hemolytic
Beta-hemolysis (β-hemolysis), sometimes called complete hemolysis, is a complete lysis of red cells in the media around and under the colonies: the area appears lightened (yellow) and transparent. Streptolysin, an exotoxin, is the enzyme produced by the bacteria which causes the complete lysis of red blood cells. There are two types of streptolysin: Streptolysin O (SLO) and streptolysin S (SLS). Streptolysin O is an oxygen-sensitive cytotoxin, secreted by most group A Streptococcus (GAS), and interacts with cholesterol in the membrane of eukaryotic cells (mainly red and white blood cells, macrophages, and platelets), and usually results in beta-hemolysis under the surface of blood agar. Streptolysin S is an oxygen-stable cytotoxin also produced by most GAS strains which results in clearing on the surface of blood agar. SLS affects immune cells, including polymorphonuclear leukocytes and lymphocytes, and is thought to prevent the host immune system from clearing infection. Streptococcus pyogenes, or GAS, displays beta hemolysis.
Some weakly beta-hemolytic species cause intense hemolysis when grown together with a strain of Staphylococcus. This is called the CAMP test. Streptococcus agalactiae displays this property. Clostridium perfringens can be identified presumptively with this test. Listeria monocytogenes is also positive on sheep's blood agar.
Group A
Group A S. pyogenes is the causative agent in a wide range of group A streptococcal infections (GAS). These infections may be noninvasive or invasive. The noninvasive infections tend to be more common and less severe. The most common of these infections include streptococcal pharyngitis (strep throat) and impetigo. Scarlet fever is also a noninvasive infection, but has not been as common in recent years.
The invasive infections caused by group A beta-hemolytic streptococci tend to be more severe and less common. This occurs when the bacterium is able to infect areas where it is not usually found, such as the blood and the organs. The diseases that may be caused include streptococcal toxic shock syndrome, necrotizing fasciitis, pneumonia, and bacteremia. Globally, GAS has been estimated to cause more than 500,000 deaths every year, making it one of the world's leading pathogens.

Additional complications may be caused by GAS, namely acute rheumatic fever and acute glomerulonephritis. Rheumatic fever, a disease that affects the joints, kidneys, and heart valves, is a consequence of untreated strep A infection, caused not by the bacterium itself but by the antibodies the immune system creates to fight off the infection, which cross-react with other proteins in the body. This "cross-reaction" causes the body to essentially attack itself and leads to the damage above. A similar autoimmune mechanism initiated by Group A beta-hemolytic streptococcal (GABHS) infection is hypothesized to cause pediatric autoimmune neuropsychiatric disorders associated with streptococcal infections (PANDAS), wherein autoimmune antibodies affect the basal ganglia, causing rapid onset of psychiatric, motor, sleep, and other symptoms in pediatric patients.
GAS infection is generally diagnosed with a rapid strep test or by culture.
Group B
S. agalactiae, or group B streptococcus (GBS), causes pneumonia and meningitis in newborns and the elderly, with occasional systemic bacteremia. Importantly, Streptococcus agalactiae is the most common cause of meningitis in infants from one month to three months old. They can also colonize the intestines and the female reproductive tract, increasing the risk for premature rupture of membranes during pregnancy, and transmission of the organism to the infant. The American College of Obstetricians and Gynecologists, American Academy of Pediatrics, and the Centers for Disease Control recommend all pregnant women between 35 and 37 weeks gestation to be tested for GBS. Women who test positive should be given prophylactic antibiotics during labor, which will usually prevent transmission to the infant.

The United Kingdom has chosen to adopt a risk factor-based protocol, rather than the culture-based protocol followed in the US. Current guidelines state that if one or more of the following risk factors is present, then the woman should be treated with intrapartum antibiotics (a sketch of this decision rule follows the list below):
GBS bacteriuria during this pregnancy
History of GBS disease in a previous infant
Intrapartum fever (≥38 °C)
Preterm labour (<37 weeks)
Prolonged rupture of membranes (>18 hours)

This protocol results in the administration of intrapartum antibiotics to 15–20% of pregnant women and prevention of 65–70% of cases of early onset GBS sepsis.
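A minimal sketch of that any-one-risk-factor rule in Python (the function and parameter names are invented for illustration; this is not clinical guidance):

def needs_intrapartum_antibiotics(gbs_bacteriuria, previous_infant_gbs,
                                  intrapartum_temp_c, gestation_weeks,
                                  rupture_hours):
    # Treat if one or more of the listed risk factors is present.
    return (gbs_bacteriuria
            or previous_infant_gbs
            or intrapartum_temp_c >= 38.0
            or gestation_weeks < 37
            or rupture_hours > 18)

print(needs_intrapartum_antibiotics(False, False, 37.2, 39, 20))  # True: >18 h rupture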
Group C
This group includes S. equi, which causes strangles in horses, and S. zooepidemicus—S. equi is a clonal descendant or biovar of the ancestral S. zooepidemicus—which causes infections in several species of mammals, including cattle and horses. S. dysgalactiae subsp. dysgalactiae is also a member of group C, beta-haemolytic streptococci that can cause pharyngitis and other pyogenic infections similar to group A streptococci.
Group D (enterococci)
Many former group D streptococci have been reclassified and placed in the genus Enterococcus (including E. faecalis, E. faecium, E. durans, and E. avium). For example, Streptococcus faecalis is now Enterococcus faecalis. E. faecalis is sometimes alpha-hemolytic and E. faecium is sometimes beta-hemolytic.

The remaining nonenterococcal group D strains include Streptococcus gallolyticus, Streptococcus bovis, Streptococcus equinus and Streptococcus suis.
Nonhemolytic streptococci rarely cause illness. However, weakly hemolytic group D beta-hemolytic streptococci and Listeria monocytogenes (which is actually a gram-positive bacillus) should not be confused with nonhemolytic streptococci.
Group F streptococci
Group F streptococci were first described in 1934 by Long and Bliss amongst the "minute haemolytic streptococci". They are also known as Streptococcus anginosus (according to the Lancefield classification system) or as members of the S. milleri group (according to the European system).
Group G streptococci
These streptococci are usually, but not exclusively, beta-hemolytic. Streptococcus dysgalactiae subsp. canis is the predominant subspecies encountered. It is a particularly common GGS in humans, although it is typically found on animals. S. phocae is a GGS subspecies that has been found in marine mammals and marine fish species. In marine mammals it has been mainly associated with meningoencephalitis, sepsis, and endocarditis, but it is also associated with many other pathologies. Its environmental reservoir and means of transmission in marine mammals are not well characterized.
Group H streptococci
Group H streptococci cause infections in medium-sized canines. They rarely cause human illness unless a human has direct contact with the mouth of a canine. One of the most common ways this can be spread is human-to-canine, mouth-to-mouth contact. However, infection can also be spread when a canine licks a human's hand.
Molecular taxonomy and phylogenetics
Streptococci have been divided into six groups on the basis of their 16S rDNA sequences: S. anginosus, S. gallolyticus, S. mitis, S. mutans, S. pyogenes and S. salivarius. The 16S groups have been confirmed by whole genome sequencing (see figure). The important pathogens S. pneumoniae and S. pyogenes belong to the S. mitis and S. pyogenes groups, respectively, while the causative agent of dental caries, Streptococcus mutans, is basal to the Streptococcus group.
Recent technological advances have resulted in an increase of available genome sequences for Streptococcus species, allowing more robust and reliable phylogenetic and comparative genomic analyses to be conducted. In 2018, the evolutionary relationships within Streptococcus were re-examined by Patel and Gupta through the analysis of comprehensive phylogenetic trees constructed from four different datasets of proteins and the identification of 134 highly specific molecular signatures (in the form of conserved signature indels) that are exclusively shared by the entire genus or its distinct subclades.
The results revealed the presence of two main clades at the highest level within Streptococcus, termed the "Mitis-Suis" and "Pyogenes-Equinus-Mutans" clades. The "Mitis-Suis" main clade comprises the Suis subclade and the Mitis clade, which encompasses the Anginosus, Pneumoniae, Gordonii and Parasanguinis subclades. The second main clade, "Pyogenes-Equinus-Mutans", includes the Pyogenes, Mutans, Salivarius, Equinus, Sobrinus, Halotolerans, Porci, Entericus and Orisratti subclades. In total, 14 distinct subclades have been identified within the genus Streptococcus, each supported by reliable branching patterns in phylogenetic trees and by the presence of multiple conserved signature indels in different proteins that are distinctive characteristics of the members of these 14 clades. A summary diagram showing the overall relationships among the Streptococcus based on these studies is depicted in a figure on this page.
Genomics
The genomes of hundreds of species have been sequenced. Most Streptococcus genomes are 1.8 to 2.3 Mb in size and encode 1,700 to 2,300 proteins. Some important genomes are listed in the table. The four species shown in the table (S. pyogenes, S. agalactiae, S. pneumoniae, and S. mutans) have an average pairwise protein sequence identity of about 70%.
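To illustrate what "average pairwise protein sequence identity" means, here is a minimal Python sketch using made-up aligned fragments; the sequences below are purely illustrative and are not drawn from the genomes discussed:

```python
from itertools import combinations

def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percent identity between two pre-aligned, equal-length sequences;
    positions where either sequence has a gap ('-') are skipped."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned to equal length"
    compared = matches = 0
    for a, b in zip(seq_a, seq_b):
        if a == '-' or b == '-':
            continue
        compared += 1
        if a == b:
            matches += 1
    return 100.0 * matches / compared if compared else 0.0

# Toy aligned protein fragments, one per species (hypothetical, for illustration)
fragments = {
    "S. pyogenes":   "MKKLV-TGA",
    "S. agalactiae": "MKRLVATGA",
    "S. mutans":     "MQKLV-SGA",
}
identities = [percent_identity(a, b)
              for a, b in combinations(fragments.values(), 2)]
print(f"average pairwise identity: {sum(identities) / len(identities):.1f}%")
```

In a real analysis this average would be taken over whole proteomes of aligned orthologous proteins, but the arithmetic is the same.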
Bacteriophage
Bacteriophages have been described for many species of Streptococcus. Eighteen prophages have been described in S. pneumoniae, ranging from 38 to 41 kb in size and encoding 42 to 66 genes each. Some of the first Streptococcus phages discovered were Dp-1 and ω1 (alias ω-1).
In 1981 the Cp (Complutense phage 1, officially Streptococcus virus Cp1, Picovirinae) family was discovered with Cp-1 as its first member. Dp-1 and Cp-1 infect both S. pneumoniae and S. mitis. However, the host ranges of most Streptococcus phages have not been investigated systematically.
Natural genetic transformation
Natural genetic transformation involves the transfer of DNA from one bacterium to another through the surrounding medium. Transformation is a complex process dependent on the expression of numerous genes. To be capable of transformation, a bacterium must enter a special physiologic state referred to as competence. S. pneumoniae, S. mitis and S. oralis can become competent, and as a result actively acquire homologous DNA for transformation by a predatory fratricidal mechanism. This fratricidal mechanism mainly exploits non-competent siblings present in the same niche. Among highly competent isolates of S. pneumoniae, Li et al. showed that nasal colonization fitness and virulence (lung infectivity) depend on an intact competence system. Competence may allow the streptococcal pathogen to use external homologous DNA for recombinational repair of DNA damage caused by the host's oxidative attack.
See also
Cia-dependent small RNAs
Quellung reaction
Streptococcal infection in poultry
Streptococcal pharyngitis
Streptokinase
References
External links
Centers for Disease Control Prevention (CDC) (March 2000). "Adoption of perinatal group B streptococcal disease prevention recommendations by prenatal-care providers—Connecticut and Minnesota, 1998". MMWR Morb. Mortal. Wkly. Rep. 49 (11): 228–32. PMID 10763673.
Nature-Inspired CRISPR Enzyme Discoveries Vastly Expand Genome Editing. On: SciTechDaily. June 16, 2020. Source: Media Lab, Massachusetts Institute of Technology.
Streptococcus genomes and related information at PATRIC, a Bioinformatics Resource Center funded by NIAID
The Canadian Strep B Foundation Archived 2013-05-02 at the Wayback Machine
The UK Group B Strep Support charity |
Fibrosis | Fibrosis, also known as fibrotic scarring, is a pathological wound healing process in which connective tissue replaces normal parenchymal tissue to the extent that it goes unchecked, leading to considerable tissue remodelling and the formation of permanent scar tissue. Tissues subject to repeated injury, chronic inflammation and repair are susceptible to fibrosis, in which an excessive accumulation of extracellular matrix components, such as collagen, is produced by fibroblasts, leading to the formation of a permanent fibrotic scar. In response to injury this is called scarring, and if fibrosis arises from a single cell line it is called a fibroma. Physiologically, fibrosis acts to deposit connective tissue, which can interfere with or totally inhibit the normal architecture and function of the underlying organ or tissue. Fibrosis can be used to describe the pathological state of excess deposition of fibrous tissue, as well as the process of connective tissue deposition in healing. Defined by the pathological accumulation of extracellular matrix (ECM) proteins, fibrosis results in scarring and thickening of the affected tissue; it is, in essence, an exaggerated wound healing response which interferes with normal organ function.
Physiology
Fibrosis is similar to the process of scarring, in that both involve stimulated fibroblasts laying down connective tissue, including collagen and glycosaminoglycans. The process is initiated when immune cells such as macrophages release soluble factors that stimulate fibroblasts. The most well-characterized pro-fibrotic mediator is TGF beta, which is released by macrophages as well as by any damaged tissue in the spaces between tissues, called the interstitium. Other soluble mediators of fibrosis include CTGF, platelet-derived growth factor (PDGF), and interleukin 10 (IL-10). These initiate signal transduction pathways such as the AKT/mTOR and SMAD pathways that ultimately lead to the proliferation and activation of fibroblasts, which deposit extracellular matrix into the surrounding connective tissue. This process of tissue repair is a complex one, with tight regulation of extracellular matrix (ECM) synthesis and degradation ensuring maintenance of normal tissue architecture. However, the entire process, although necessary, can lead to a progressive irreversible fibrotic response if tissue injury is severe or repetitive, or if the wound healing response itself becomes deregulated.
Anatomical location
Fibrosis can occur in many tissues within the body, typically as a result of inflammation or damage, and examples include:
Lungs
Fibrothorax
Pulmonary fibrosis
Cystic fibrosis
Idiopathic pulmonary fibrosis (idiopathic meaning the cause is unknown)
Radiation-induced lung injury (following treatment for cancer)
Liver
Bridging fibrosis: an advanced stage of liver fibrosis seen in the progressive form of chronic liver diseases. The term "bridging" refers to bands of mature, thick fibrous tissue that obliterate the space from portal area to central vein, leading to the formation of pseudolobules. Long-term exposure to hepatotoxins (e.g. thioacetamide, carbon tetrachloride, diethylnitrosamine) results in bridging fibrosis in experimental animal models.
Senescence of hepatic stellate cells could prevent progression of liver fibrosis, although this has not been implemented as a therapy, and would carry the risk of hepatic dysfunction.
Cirrhosis
Kidney
CYR61 induction of cellular senescence in the kidney is a potential therapy to limit fibrosis.
Brain
Glial scar
Heart
Myocardial fibrosis has mainly two forms:
Interstitial fibrosis, which has been described in congestive heart failure, hypertension, and normal aging.
Replacement fibrosis, which indicates an older myocardial infarction.
Other
Arterial stiffness
Arthrofibrosis (knee, shoulder, other joints)
Chronic kidney disease
Crohn's disease (intestine)
Dupuytren's contracture (hands, fingers)
Keloid (skin)
Mediastinal fibrosis (soft tissue of the mediastinum)
Myelofibrosis (bone marrow)
Peyronie's disease (penis)
Nephrogenic systemic fibrosis (skin)
Progressive massive fibrosis (lungs); a complication of coal workers' pneumoconiosis
Retroperitoneal fibrosis (soft tissue of the retroperitoneum)
Scleroderma/systemic sclerosis (skin, lungs)
Some forms of adhesive capsulitis (shoulder)
References
External links
Media related to Fibrosis at Wikimedia Commons |
Cardiogenic shock | Cardiogenic shock (CS) is a medical emergency resulting from inadequate blood flow due to the dysfunction of the ventricles of the heart. Signs of inadequate blood flow include low urine production (<30 mL/hour), cool arms and legs, and altered level of consciousness. People may also have a severely low blood pressure and heart rate.
Causes of cardiogenic shock include cardiomyopathic, arrhythmic, and mechanical causes. CS is most commonly precipitated by acute myocardial infarction. People can have combined types of shock.
Treatment of cardiogenic shock depends on the cause, with the initial goals being to improve blood flow to the body. This can be done in a number of ways: fluid resuscitation, blood transfusions, vasopressors, and inotropes. If cardiogenic shock is due to a heart attack, attempts to open the heart's arteries may help. An intra-aortic balloon pump or left ventricular assist device may improve matters until this can be done. Medications that improve the heart's ability to contract (positive inotropes) may help; however, it is unclear which is best, and at present there is no convincing evidence supporting inotropic or vasodilating therapy to reduce mortality in hemodynamically unstable patients. Norepinephrine may be better if the blood pressure is very low, whereas dopamine or dobutamine may be more useful if it is only slightly low. Cardiogenic shock is a condition that is difficult to fully reverse even with an early diagnosis. That said, early initiation of mechanical circulatory support, early percutaneous coronary intervention, inotropes, and heart transplantation may improve outcomes. Care is directed to the dysfunctional organs (dialysis for the kidneys, mechanical ventilation for lung dysfunction).
Mortality rates have been decreasing in the United States, likely due to the rapid identification and treatment of CS. Some studies have suggested that this is possibly related to the increased use of coronary reperfusion strategies, like heart stents. Nonetheless, mortality rates remain high. Multi-organ failure is associated with higher rates of mortality.
Signs and symptoms
The presentation is as follows:
Anxiety, restlessness, altered mental state due to decreased blood flow to the brain and subsequent hypoxia.
Low blood pressure due to decrease in cardiac output.
A rapid, weak, thready pulse due to decreased circulation combined with tachycardia.
Cool, clammy, and mottled skin (cutis marmorata) due to vasoconstriction and subsequent hypoperfusion of the skin.
Distended jugular veins due to increased jugular venous pressure.
Oliguria (low urine output) due to inadequate blood flow to the kidneys if the condition persists.
Rapid and deeper respirations (hyperventilation) due to sympathetic nervous system stimulation and acidosis.
Fatigue due to hyperventilation and hypoxia.
Absent pulse in fast and abnormal heart rhythms.
Pulmonary edema, involving fluid back-up in the lungs due to insufficient pumping of the heart.
Causes
Cardiogenic shock is caused by the failure of the heart to pump effectively. It is due to damage to the heart muscle, most often from a heart attack or myocardial contusion. Other causes include abnormal heart rhythms, cardiomyopathy, heart valve problems, ventricular outflow obstruction (i.e. systolic anterior motion (SAM) in hypertrophic cardiomyopathy), or ventriculoseptal defects. It can also be caused by sudden decompression (e.g. in an aircraft), where air bubbles are released into the bloodstream (Henry's law), causing heart failure.
Diagnosis
Electrocardiogram
An electrocardiogram helps to establish the exact diagnosis and guides treatment; it may reveal:
Abnormal heart rhythms, such as bradycardia (slowed heart rate)
myocardial infarction (ST-elevation MI, STEMI, is usually more dangerous than non-STEMIs; MIs that affect the ventricles are usually more dangerous than those that affect the atria; those affecting the left side of the heart, especially the left ventricle, are usually more dangerous than those affecting the right side, unless that side is severely compromised)
Signs of cardiomyopathy
Echocardiography
Echocardiography may show poor ventricular function, signs of PED, rupture of the interventricular septum, an obstructed outflow tract or cardiomyopathy.
Swan-Ganz catheter
The Swan–Ganz catheter or pulmonary artery catheter may assist in the diagnosis by providing information on the hemodynamics.
Biopsy
When cardiomyopathy is suspected as the cause of cardiogenic shock, a biopsy of heart muscle may be needed to make a definite diagnosis.
Cardiac index
If the cardiac index falls acutely below 2.2 L/min/m2, the person may be in cardiogenic shock.
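For reference, the cardiac index normalises cardiac output (CO) to body surface area (BSA), with cardiac output itself the product of heart rate (HR) and stroke volume (SV); the worked numbers below are illustrative, not from a specific patient:

\[
\mathrm{CI} = \frac{\mathrm{CO}}{\mathrm{BSA}} = \frac{\mathrm{HR} \times \mathrm{SV}}{\mathrm{BSA}}
\]

For instance, with CO = 4.0 L/min and BSA = 1.9 m², CI = 4.0/1.9 ≈ 2.1 L/min/m², which falls below the 2.2 L/min/m² threshold mentioned above.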
Treatment
Depending on the type of cardiogenic shock, treatment involves infusion of fluids or, in shock refractory to fluids, inotropic medications. In case of an abnormal heart rhythm, immediate synchronized cardioversion or anti-arrhythmic agents may be administered, e.g. adenosine.
Positive inotropic agents (such as dobutamine or milrinone), which enhance the heart's pumping capabilities, are used to improve contractility and correct the low blood pressure. Should that not suffice, an intra-aortic balloon pump (which reduces workload for the heart and improves perfusion of the coronary arteries) or a left ventricular assist device (which augments the pump function of the heart) can be considered. Mechanical ventilation or ECMO may be used to help stabilize people with severe or refractory cardiogenic shock until they can be given some type of definitive treatment, such as a ventricular assist device. Finally, as a last resort, if the person is stable enough and otherwise qualifies, heart transplantation or, if not eligible, an artificial heart can be placed. These invasive measures are important tools: more than 50% of patients who do not die immediately due to cardiac arrest from a lethal abnormal heart rhythm, and who live to reach the hospital (having usually experienced a severe acute myocardial infarction, which in itself still has a relatively high mortality rate), die within the first 24 hours. The mortality rate for those still living at time of admission who develop complications (among others, cardiac arrest or further abnormal heart rhythms, heart failure, cardiac tamponade, a ruptured or dissecting aneurysm, or another heart attack) from cardiogenic shock is even worse, at around 85%, especially without drastic measures such as ventricular assist devices or transplantation.
Cardiogenic shock may be treated with intravenous dobutamine, which acts on β1 receptors of the heart, leading to increased contractility and heart rate.
References
External links
Cardiogenic Shock by eMedicine |
Dysarthria | Dysarthria is a speech sound disorder resulting from neurological injury of the motor component of the motor–speech system and is characterized by poor articulation of phonemes. In other words, it is a condition in which the muscles that help produce speech do not work effectively, often making it very difficult to pronounce words. It is unrelated to problems with understanding language (that is, dysphasia or aphasia), although a person can have both. Any of the speech subsystems (respiration, phonation, resonance, prosody, and articulation) can be affected, leading to impairments in intelligibility, audibility, naturalness, and efficiency of vocal communication. Dysarthria that has progressed to a total loss of speech is referred to as anarthria. The term dysarthria is from New Latin, dys- "dysfunctional, impaired" and arthr- "joint, vocal articulation".
Neurological injury due to damage in the central or peripheral nervous system may result in weakness, paralysis, or a lack of coordination of the motor–speech system, producing dysarthria. These effects in turn hinder control over the tongue, throat, lips or lungs; for example, swallowing problems (dysphagia) are also often present in those with dysarthria. Cranial nerves that control the muscles relevant to dysarthria include the trigeminal nerve's motor branch (V), the facial nerve (VII), the glossopharyngeal nerve (IX), the vagus nerve (X), and the hypoglossal nerve (XII).
Dysarthria does not include speech disorders from structural abnormalities, such as cleft palate, and must not be confused with apraxia of speech, which refers to problems in the planning and programming aspect of the motor–speech system. Just as the term "articulation" can mean either "speech" or "joint movement", so is the combining form of arthr- the same in the terms "dysarthria", "dysarthrosis", and "arthropathy"; the term "dysarthria" is conventionally reserved for the speech problem and is not used to refer to arthropathy, whereas "dysarthrosis" has both senses but usually refers to arthropathy.
Causes
There are many potential causes of dysarthria. They include toxic and metabolic conditions, degenerative diseases, traumatic brain injury, and thrombotic or embolic stroke.
Degenerative diseases include parkinsonism, amyotrophic lateral sclerosis (ALS), multiple sclerosis, Huntington's disease, Niemann-Pick disease, and Friedreich's ataxia. Toxic and metabolic conditions include Wilson's disease, hypoxic encephalopathy such as in drowning, and central pontine myelinolysis.
These result in lesions to key areas of the brain involved in planning, executing, or regulating motor operations in skeletal muscles (i.e. muscles of the limbs), including muscles of the head and neck (dysfunction of which characterises dysarthria). These can result in dysfunction, or failure of: the motor or somatosensory cortex of the brain, corticobulbar pathways, the cerebellum, basal nuclei (consisting of the putamen, globus pallidus, caudate nucleus, substantia nigra etc.), brainstem (from which the cranial nerves originate), or the neuromuscular junction (in diseases such as myasthenia gravis), which block the nervous system's ability to activate motor units and effect correct range and strength of movements.
Causes:
Brain tumor
Cerebral palsy
Guillain–Barré syndrome
Hypothermia
Idiopathic intracranial hypertension (formerly known as pseudotumor cerebri)
Lyme disease
Stroke
Tay–Sachs disease, and late-onset Tay–Sachs disease (LOTS)
Transient ischemic attack, a mini stroke
Diagnosis
Classification
Dysarthrias are classified in multiple ways based on the presentation of symptoms. Specific dysarthrias include spastic (resulting from bilateral damage to the upper motor neuron), flaccid (resulting from bilateral or unilateral damage to the lower motor neuron), ataxic (resulting from damage to the cerebellum), unilateral upper motor neuron (presenting milder symptoms than bilateral UMN damage), hyperkinetic and hypokinetic (resulting from damage to parts of the basal ganglia, such as in Huntington's disease or parkinsonism), and the mixed dysarthrias (where symptoms of more than one type of dysarthria are present). The majority of dysarthric patients are diagnosed as having mixed dysarthria, as neural damage resulting in dysarthria is rarely contained to one part of the nervous system — for example, multiple strokes, traumatic brain injury, and some kinds of degenerative illnesses (such as amyotrophic lateral sclerosis) usually damage many different sectors of the nervous system.
Ataxic dysarthria is an acquired neurological and sensorimotor speech deficit. It is a common diagnosis among the clinical spectrum of ataxic disorders. Since regulation of skilled movements is a primary function of the cerebellum, damage to the superior cerebellum and the superior cerebellar peduncle is believed to produce this form of dysarthria in ataxic patients. Growing evidence supports the likelihood of cerebellar involvement specifically affecting speech motor programming and execution pathways, producing the characteristic features associated with ataxic dysarthria. This link to speech motor control can explain the abnormalities in articulation and prosody, which are hallmarks of this disorder. Some of the most consistent abnormalities observed in patients with ataxic dysarthria are alterations of the normal timing pattern, with prolongation of certain segments and a tendency to equalize the duration of syllables when speaking. As the severity of the dysarthria increases, the patient may also lengthen more segments as well as increase the degree of lengthening of each individual segment. Common clinical features of ataxic dysarthria include abnormalities in speech modulation, rate of speech, explosive or scanning speech, slurred speech, irregular stress patterns, and vocalic and consonantal misarticulations. Ataxic dysarthria is associated with damage to the left cerebellar hemisphere in right-handed patients.
Dysarthria may affect a single system; however, it is more commonly reflected in multiple motor–speech systems. The etiology, degree of neuropathy, existence of co-morbidities, and the individual's response all play a role in the effect the disorder has on the individual's quality of life. Severity ranges from occasional articulation difficulties to verbal speech that is completely unintelligible.
Individuals with dysarthria may experience challenges in the following:
Timing
Vocal quality
Pitch
Volume
Breath control
Speed
Strength
Steadiness
Range
Tone
Examples of specific observations include a continuous breathy voice, irregular breakdown of articulation, monopitch, distorted vowels, word flow without pauses, and hypernasality.
Treatment
Articulation problems resulting from dysarthria are treated by speech-language pathologists, using a variety of techniques. Techniques used depend on the effect the dysarthria has on control of the articulators. Traditional treatments target the correction of deficits in rate (of articulation), prosody (appropriate emphasis and inflection, affected e.g. by apraxia of speech, right hemisphere brain damage, etc.), intensity (loudness of the voice, affected e.g. in hypokinetic dysarthrias such as in Parkinson's), resonance (ability to alter the vocal tract and resonating spaces for correct speech sounds) and phonation (control of the vocal folds for appropriate voice quality and valving of the airway). These treatments have usually involved exercises to increase strength and control over articulator muscles (which may be flaccid and weak, or overly tight and difficult to move), and using alternate speaking techniques to increase speaker intelligibility (how well someone's speech is understood by peers). With the speech-language pathologist, there are several skills that are important to learn: safe chewing and swallowing techniques, avoiding conversations when feeling tired, repeating words and syllables over and over in order to learn the proper mouth movements, and techniques to deal with frustration while speaking. Depending on the severity of the dysarthria, another possibility includes learning how to use a computer or flip cards in order to communicate more effectively.
More recent techniques are based on the principles of motor learning (PML), such as Lee Silverman voice treatment (LSVT), which may improve voice and speech function in Parkinson's disease. These approaches aim to retrain speech skills through building new generalised motor programs, and attach great importance to regular practice, through peer/partner support and self-management. Regularity of practice, and when to practice, are the main issues in PML treatments, as they may determine the likelihood of generalization of new motor skills, and therefore how effective a treatment is.
Augmentative and alternative communication (AAC) devices that make coping with a dysarthria easier include speech synthesis and text-based telephones. These allow people who are unintelligible, or may be in the later stages of a progressive illness, to continue to be able to communicate without the need for fully intelligible speech.
See also
Lists of language disorders
References
Further reading
External links
Online Speech and Voice Disorder Support (VoiceMatters.net)
American Speech-Language-Hearing Association
News About Dysarthria |
Lymphocytic colitis | Lymphocytic colitis is a subtype of microscopic colitis, a condition characterized by chronic non-bloody watery diarrhea.
Presentation
Causes
No definite cause has been determined. The peak incidence of lymphocytic colitis is in persons over age 50; the disease affects women and men equally. Some reports have implicated long-term usage of NSAIDs, proton pump inhibitors, and selective serotonin reuptake inhibitors, among other drugs. Associations with other autoimmune disorders suggest that overactive immune responses occur.
Diagnosis
Colonoscopy is normal, but histology of the mucosal biopsy reveals an accumulation of lymphocytes in the colonic epithelium and connective tissue (lamina propria). Collagenous colitis shares this feature but additionally shows a distinctive thickening of the subepithelial collagen table.
Treatment
Budesonide, in colonic release preparations, has been shown in randomized controlled trials to be effective in treating this disorder. It helps control the diarrheal symptoms, and treatment is usually given for several weeks. Sometimes it is used to prevent frequent relapses.
Over-the-counter antidiarrheal drugs may be effective for some people with lymphocytic colitis. Anti-inflammatory drugs, such as salicylates, mesalazine, and systemic corticosteroids, may be prescribed for people who do not respond to other drug treatment. The long-term prognosis for this disease is good, with a proportion of people suffering relapses which respond to treatment.
History
Lymphocytic colitis was first described in 1989.
See also
Colitis
References
External links
NIH
eMedicine.com |
Metatarsalgia | Metatarsalgia, literally metatarsal pain and colloquially known as a stone bruise, is any painful foot condition affecting the metatarsal region of the foot. This is a common problem that can affect the joints and bones of the metatarsals.
Metatarsalgia is most often localized to the first metatarsal head – the ball of the foot just behind the big toe. There are two small sesamoid bones under the first metatarsal head. The next most frequent site of metatarsal head pain is under the second metatarsal. This can be due to either too short a first metatarsal bone or to "hypermobility of the first ray" – metatarsal bone and medial cuneiform bone behind it – both of which result in excess pressure being transmitted into the second metatarsal head.
Signs and symptoms
Metatarsalgia is characterized by a sharp pain in the ball of the foot.
Causes
One cause of metatarsalgia is Morton's neuroma. When toes are squeezed together too often and for too long, the nerve that runs between the toes can swell and get thicker. This swelling can make it painful to walk on that foot. High-heeled, tight, or narrow shoes can make the pain worse. The condition is common in runners, particularly long-distance runners. The ball of the foot takes a lot of weight over the years, and running on pavement or in ill-fitting running shoes increases the odds of developing Morton's neuroma. Changing to shoes that give the toes more room can help.
Diagnosis
Diagnosis is often made by patient self-report. If a patient feels pain in the ball of the foot, a podiatrist is the best source for a diagnosis. A podiatrist is a trained expert who can offer treatment options.
Management
The most common treatments are:
Rest
Ice
NSAID
Properly fitted shoes
Metatarsal pads
Arch supports
Removing excess calluses may be helpful. In extreme cases, injection or surgery may be indicated.
See also
Morton's neuroma
References
External links
Cleveland Clinic
Healthline
Mayo Clinic
WebMD |
Attention deficit hyperactivity disorder | Attention deficit hyperactivity disorder (ADHD) is a neurodevelopmental disorder characterised by excessive amounts of inattention, hyperactivity, and impulsivity that are pervasive, impairing in multiple contexts, and otherwise age-inappropriate. ADHD symptoms arise from executive dysfunction, and emotional dysregulation is often considered a core symptom. In children, problems paying attention may result in poor school performance. ADHD is associated with other neurodevelopmental and mental disorders as well as some non-psychiatric disorders, which can cause additional impairment, especially in modern society. Although people with ADHD struggle to focus on tasks they are not particularly interested in completing, they are often able to maintain an unusually prolonged and intense level of attention for tasks they do find interesting or rewarding; this is known as hyperfocus.
The precise causes of ADHD are unknown in the majority of cases. Genetic factors play an important role; ADHD tends to run in families and has a heritability rate of 74%. Toxins and infections during pregnancy and brain damage may be environmental risks.
It affects about 5–7% of children when diagnosed via the DSM-IV criteria, and 1–2% when diagnosed via the ICD-10 criteria. Rates are similar between countries, and differences in rates depend mostly on how it is diagnosed. ADHD is diagnosed approximately twice as often in boys as in girls, and 1.6 times more often in men than in women, although the disorder is overlooked in girls or diagnosed in later life because their symptoms sometimes differ from diagnostic criteria. About 30–50% of people diagnosed in childhood continue to have ADHD in adulthood, with 2.58% of adults estimated to have ADHD which began in childhood. In adults, hyperactivity is usually replaced by inner restlessness, and adults often develop coping skills to compensate for their impairments. The condition can be difficult to tell apart from other conditions, as well as from high levels of activity within the range of normal behavior. ADHD has a negative impact on patients' health-related quality of life, and this may be further exacerbated by, or may increase the risk of, other psychiatric conditions such as anxiety and depression.
ADHD management recommendations vary and usually involve some combination of medications, counseling, and lifestyle changes. The British guideline emphasises environmental modifications and education for individuals and carers about ADHD as the first response. If symptoms persist, parent-training, medication, or psychotherapy (especially cognitive behavioral therapy) can be recommended based on age. Canadian and American guidelines recommend medications and behavioral therapy together, except in preschool-aged children for whom the first-line treatment is behavioral therapy alone. Stimulant medications are the most effective pharmaceutical treatment, although there may be side effects and any improvements will be reverted if medication is ceased.
ADHD, its diagnosis, and its treatment have been considered controversial since the 1970s. These controversies have involved doctors, teachers, policymakers, parents, and the media. Topics have included causes of ADHD and the use of stimulant medications in its treatment. ADHD is now a well-validated clinical diagnosis in children and adults, and the debate in the scientific community mainly centers on how it is diagnosed and treated. ADHD was officially known as attention deficit disorder (ADD) from 1980 to 1987; prior to the 1980s, it was known as hyperkinetic reaction of childhood. Symptoms similar to those of ADHD have been described in medical literature dating back to the 18th century.
Signs and symptoms
Inattention, hyperactivity (restlessness in adults), disruptive behavior, and impulsivity are common in ADHD. Academic difficulties are frequent, as are problems with relationships. The symptoms can be difficult to define, as it is hard to draw a line at where normal levels of inattention, hyperactivity, and impulsivity end and significant levels requiring interventions begin.
According to the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) and its text revision (DSM-5-TR), symptoms must be present for six months or more to a degree that is much greater than others of the same age. This requires at least six symptoms of either inattention or hyperactivity/impulsivity for those under 17 and at least five symptoms for those 17 years or older. The symptoms must be present in at least two settings (e.g., social, school, work, or home), and must directly interfere with or reduce quality of functioning. Additionally, several symptoms must have been present before age twelve.
Subtypes
ADHD is divided into three primary presentations:
predominantly inattentive (ADHD-PI or ADHD-I)
predominantly hyperactive-impulsive (ADHD-PH or ADHD-HI)
combined type (ADHD-C).
The table "Symptoms" lists the symptoms for ADHD-I and ADHD-HI from two major classification systems. Symptoms which can be better explained by another psychiatric or medical condition which an individual has are not considered to be a symptom of ADHD for that person.
Girls and women with ADHD tend to display fewer hyperactivity and impulsivity symptoms but more symptoms of inattention and distractibility. Symptoms are expressed differently and more subtly as the individual ages. Hyperactivity tends to become less overt with age and turns into inner restlessness, difficulty relaxing or remaining still, talkativeness or constant mental activity in teens and adults with ADHD. Impulsivity in adulthood may appear as thoughtless behaviour, impatience, irresponsible spending and sensation-seeking behaviours, while inattention may appear as becoming easily bored, difficulty with organization, remaining on task and making decisions, and sensitivity to stress. Although not listed as an official symptom for this condition, emotional dysregulation or mood lability is generally understood to be a common symptom of ADHD.
People with ADHD of all ages are more likely to have problems with social skills, such as social interaction and forming and maintaining friendships. This is true for all presentations. About half of children and adolescents with ADHD experience social rejection by their peers, compared to 10–15% of non-ADHD children and adolescents. People with attention deficits are prone to having difficulty processing verbal and nonverbal language, which can negatively affect social interaction. They also may drift off during conversations, miss social cues, and have trouble learning social skills.
Difficulties managing anger are more common in children with ADHD, as are delays in speech, language and motor development. Poor handwriting is more common in children with ADHD; in many situations it can be a side effect of ADHD itself due to decreased attentiveness, but when it is a constant problem it may also be due in part to the fact that dyslexic and dysgraphic individuals have higher rates of ADHD than the general population, with 3 in 10 people who have dyslexia also having ADHD. Although it causes significant difficulty, many children with ADHD have an attention span equal to or greater than that of other children for tasks and subjects they find interesting.
Comorbidities
Psychiatric
In children, ADHD occurs with other disorders about two-thirds of the time.
Other neurodevelopmental conditions are common comorbidities. Autism spectrum disorder (ASD), co-occurring at a rate of 21% in those with ADHD, affects social skills, ability to communicate, behaviour, and interests. Both ADHD and ASD can be diagnosed in the same person. Learning disabilities have been found to occur in about 20–30% of children with ADHD. Learning disabilities can include developmental speech and language disorders, and academic skills disorders. ADHD, however, is not considered a learning disability, but it very frequently causes academic difficulties. Intellectual disabilities and Tourette's syndrome are also common.
ADHD is often comorbid with disruptive, impulse control, and conduct disorders. Oppositional defiant disorder (ODD) occurs in about 25% of children with an inattentive presentation and 50% of those with a combined presentation. It is characterised by angry or irritable mood, argumentative or defiant behavior and vindictiveness which are age-inappropriate. Conduct disorder (CD) occurs in about 25% of adolescents with ADHD. It is characterised by aggression, destruction of property, deceitfulness, theft and violations of rules. Adolescents with ADHD who also have CD are more likely to develop antisocial personality disorder in adulthood. Brain imaging supports the view that CD and ADHD are separate conditions: conduct disorder was shown to reduce the size of one's temporal lobe and limbic system, and increase the size of one's orbitofrontal cortex, whereas ADHD was shown to reduce connections in the cerebellum and prefrontal cortex more broadly. Conduct disorder involves more impairment in motivation control than ADHD. Intermittent explosive disorder is characterised by sudden and disproportionate outbursts of anger and co-occurs in individuals with ADHD more frequently than in the general population.
Anxiety and mood disorders are frequent comorbidities. Anxiety disorders have been found to occur more commonly in the ADHD population, as have mood disorders (especially bipolar disorder and major depressive disorder). Boys diagnosed with the combined ADHD subtype are more likely to have a mood disorder. Adults and children with ADHD sometimes also have bipolar disorder, which requires careful assessment to accurately diagnose and treat both conditions.
Sleep disorders and ADHD commonly co-exist. They can also occur as a side effect of medications used to treat ADHD. In children with ADHD, insomnia is the most common sleep disorder, with behavioral therapy the preferred treatment. Problems with sleep initiation are common among individuals with ADHD, but often they will be deep sleepers and have significant difficulty getting up in the morning. Melatonin is sometimes used in children who have sleep onset insomnia. Specifically, the sleep disorder restless legs syndrome has been found to be more common in those with ADHD and is often due to iron deficiency anemia. However, restless legs can simply be a part of ADHD and requires careful assessment to differentiate between the two disorders. Delayed sleep phase disorder is also a common comorbidity of those with ADHD.
There are other psychiatric conditions which are often comorbid with ADHD, such as substance use disorders. Individuals with ADHD are at increased risk of substance abuse, most commonly with alcohol or cannabis. The reason for this may be an altered reward pathway in the brains of ADHD individuals, self-treatment, and increased psychosocial risk factors. This makes the evaluation and treatment of ADHD more difficult, with serious substance misuse problems usually treated first due to their greater risks. Other psychiatric conditions include reactive attachment disorder, characterised by a severe inability to appropriately relate socially, and sluggish cognitive tempo, a cluster of symptoms that potentially comprises another attention disorder and may occur in 30–50% of ADHD cases, regardless of the subtype. Individuals with ADHD are four times more likely to develop and be diagnosed with an eating disorder (anorexia, bulimia, binge eating disorder, ARFID) compared to those without ADHD.
Individuals with diagnosed eating disorders are 2.6 times more likely to have ADHD than those without eating disorders, though these numbers are likely much lower than actual rates, due to limitations with screening and diagnosis in marginalized populations.
Trauma
ADHD, trauma, and Adverse Childhood Experiences are also comorbid, which could in part be explained by the similarity in presentation between different diagnoses. The symptoms of ADHD and PTSD can overlap significantly in behaviour; in particular, motor restlessness, difficulty concentrating, distractibility, irritability/anger, emotional constriction or dysregulation, poor impulse control, and forgetfulness are common in both. This could result in trauma-related disorders or ADHD being misidentified as the other. Additionally, traumatic events in childhood are a risk factor for ADHD; they can lead to structural brain changes and the development of ADHD behaviors. Finally, the behavioral consequences of ADHD symptoms increase the chance that the individual will experience trauma, and therefore that ADHD will be followed by the diagnosis of a trauma-related disorder.
Non-psychiatric
Some non-psychiatric conditions are also comorbidities of ADHD. This includes epilepsy, a neurological condition characterised by recurrent seizures. There are well-established associations between ADHD and obesity, asthma and sleep disorders, and an association with celiac disease. Children with ADHD have a higher risk for migraine headaches, but have no increased risk of tension-type headaches. In addition, children with ADHD may also experience headaches as a result of medication.
A 2021 review reported that several neurometabolic disorders caused by inborn errors of metabolism converge on common neurochemical mechanisms that interfere with biological mechanisms also considered central in ADHD pathophysiology and treatment. This highlights the importance of close collaboration between health services to avoid clinical overshadowing.
Suicide risk
Systematic reviews conducted in 2017 and 2020 found strong evidence that ADHD is associated with increased suicide risk across all age groups, as well as growing evidence that an ADHD diagnosis in childhood or adolescence represents a significant future suicide risk factor. Potential causes include ADHD's association with functional impairment, negative social, educational and occupational outcomes, and financial distress. A 2019 meta-analysis indicated a significant association between ADHD and suicidal spectrum behaviors (suicidal attempts, ideations, plans, and completed suicides); across the studies examined, the prevalence of suicide attempts in individuals with ADHD was 18.9%, compared to 9.3% in individuals without ADHD, and the findings were substantially replicated among studies which adjusted for other variables. However, the relationship between ADHD and suicidal spectrum behaviors remains unclear due to mixed findings across individual studies and the complicating impact of comorbid psychiatric disorders. There is no clear data on whether there is a direct relationship between ADHD and suicidality, or whether ADHD increases suicide risk through comorbidities.
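For scale, the unadjusted prevalences above correspond to a crude ratio of roughly two; this is a back-of-the-envelope calculation from the figures quoted, not an adjusted effect estimate:

\[
\frac{18.9\%}{9.3\%} \approx 2.0
\]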
IQ test performance
Certain studies have found that people with ADHD tend to have lower scores on intelligence quotient (IQ) tests. The significance of this is controversial due to the differences between people with ADHD and the difficulty of determining the influence of symptoms, such as distractibility, on lower scores rather than intellectual capacity. In studies of ADHD, higher IQs may be over-represented because many studies exclude individuals who have lower IQs, despite those with ADHD scoring on average nine points lower on standardized intelligence measures. In individuals with high intelligence, there is an increased risk of a missed ADHD diagnosis, possibly because of compensatory strategies in highly intelligent individuals.
Studies of adults suggest that negative differences in intelligence are not meaningful and may be explained by associated health problems.
Causes
ADHD is generally claimed to be the result of neurological dysfunction in processes associated with the production or use of dopamine and norepinephrine in various brain structures, but there are no confirmed causes. It may involve interactions between genetics and the environment.
Genetics
ADHD has a high heritability of 74%, meaning that 74% of the presence of ADHD in the population is due to genetic factors. There are multiple gene variants which each slightly increase the likelihood of a person having ADHD; it is polygenic and arises through the combination of many gene variants which each have a small effect. The siblings of children with ADHD are three to four times more likely to develop the disorder than siblings of children without the disorder.
Arousal is related to dopaminergic functioning, and ADHD presents with low dopaminergic functioning. Typically, a number of genes are involved, many of which directly affect dopamine neurotransmission. Those involved with dopamine include DAT, DRD4, DRD5, TAAR1, MAOA, COMT, and DBH. Other genes associated with ADHD include SERT, HTR1B, SNAP25, GRIN2A, ADRA2A, TPH2, and BDNF. A common variant of a gene called latrophilin 3 is estimated to be responsible for about 9% of cases, and when this variant is present, people are particularly responsive to stimulant medication. The 7 repeat variant of dopamine receptor D4 (DRD4–7R) causes increased inhibitory effects induced by dopamine and is associated with ADHD. The DRD4 receptor is a G protein-coupled receptor that inhibits adenylyl cyclase. The DRD4–7R mutation results in a wide range of behavioral phenotypes, including ADHD symptoms reflecting split attention. The DRD4 gene is linked to both novelty seeking and ADHD. The genes GFOD1 and CDH13 show strong genetic associations with ADHD. CDH13's association with ASD, schizophrenia, bipolar disorder, and depression makes it an interesting candidate causative gene. Another candidate causative gene that has been identified is ADGRL3. In zebrafish, knockout of this gene causes a loss of dopaminergic function in the ventral diencephalon, and the fish display a hyperactive/impulsive phenotype.
For genetic variation to be used as a tool for diagnosis, more validating studies need to be performed. However, smaller studies have shown that genetic polymorphisms in genes related to catecholaminergic neurotransmission or the SNARE complex of the synapse can reliably predict a person's response to stimulant medication. Rare genetic variants show more relevant clinical significance, as their penetrance (the chance of developing the disorder) tends to be much higher. However, their usefulness as tools for diagnosis is limited, as no single gene predicts ADHD. ASD shows genetic overlap with ADHD at both common and rare levels of genetic variation.
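The heritability figure quoted at the start of this section is the kind of estimate classically derived from twin studies. As a minimal illustration, Falconer's formula estimates heritability from the difference between monozygotic (MZ) and dizygotic (DZ) twin correlations; the correlations below are chosen to reproduce the quoted figure, not taken from a specific study:

\[
h^2 = 2\,(r_{\mathrm{MZ}} - r_{\mathrm{DZ}}), \qquad \text{e.g. } h^2 = 2\,(0.74 - 0.37) = 0.74
\]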
Environment
In addition to genetics, some environmental factors might play a role in causing ADHD. Alcohol intake during pregnancy can cause fetal alcohol spectrum disorders, which can include ADHD or symptoms like it. Children exposed to certain toxic substances, such as lead or polychlorinated biphenyls, may develop problems which resemble ADHD. Exposure to the organophosphate insecticides chlorpyrifos and dialkyl phosphate is associated with an increased risk; however, the evidence is not conclusive. Exposure to tobacco smoke during pregnancy can cause problems with central nervous system development and can increase the risk of ADHD. Nicotine exposure during pregnancy may be an environmental risk.
Extreme premature birth, very low birth weight, and extreme neglect, abuse, or social deprivation also increase the risk, as do certain infections during pregnancy, at birth, and in early childhood. These infections include, among others, various viruses (measles, varicella zoster encephalitis, rubella, enterovirus 71). At least 30% of children with a traumatic brain injury later develop ADHD, and about 5% of cases are due to brain damage.
Some studies suggest that in a small number of children, artificial food dyes or preservatives may be associated with an increased prevalence of ADHD or ADHD-like symptoms, but the evidence is weak and may only apply to children with food sensitivities. The European Union has put in place regulatory measures based on these concerns. In a minority of children, intolerances or allergies to certain foods may worsen ADHD symptoms.
Individuals with hypokalemic sensory overstimulation are sometimes diagnosed as having attention deficit hyperactivity disorder (ADHD), raising the possibility that a subtype of ADHD has a cause that can be understood mechanistically and treated in a novel way. The sensory overload is treatable with oral potassium gluconate.
Research does not support popular beliefs that ADHD is caused by eating too much refined sugar, watching too much television, parenting, poverty or family chaos; however, they might worsen ADHD symptoms in certain people.
Society
The youngest children in a class have been found to be more likely to be diagnosed as having ADHD, possibly because they are developmentally behind their older classmates. They also appear to use ADHD medications at nearly twice the rate of their peers.
In some cases, an inappropriate diagnosis of ADHD may reflect a dysfunctional family or a poor educational system, rather than any true presence of ADHD in the individual. In other cases, it may be explained by increasing academic expectations, with a diagnosis being a method for parents in some countries to get extra financial and educational support for their child. Behaviors typical of ADHD occur more commonly in children who have experienced violence and emotional abuse.
Pathophysiology
Current models of ADHD suggest that it is associated with functional impairments in some of the brains neurotransmitter systems, particularly those involving dopamine and norepinephrine. The dopamine and norepinephrine pathways that originate in the ventral tegmental area and locus coeruleus project to diverse regions of the brain and govern a variety of cognitive processes. The dopamine pathways and norepinephrine pathways which project to the prefrontal cortex and striatum are directly responsible for modulating executive function (cognitive control of behavior), motivation, reward perception, and motor function; these pathways are known to play a central role in the pathophysiology of ADHD. Larger models of ADHD with additional pathways have been proposed.
Brain structure
In children with ADHD, there is a general reduction of volume in certain brain structures, with a proportionally greater decrease in the volume in the left-sided prefrontal cortex. The posterior parietal cortex also shows thinning in individuals with ADHD compared to controls. Other brain structures in the prefrontal-striatal-cerebellar and prefrontal-striatal-thalamic circuits have also been found to differ between people with and without ADHD.
The subcortical volumes of the accumbens, amygdala, caudate, hippocampus, and putamen appear smaller in individuals with ADHD compared with controls. Structural MRI studies have also revealed differences in white matter, with marked differences in inter-hemispheric asymmetry between ADHD and typically developing youths. Functional MRI (fMRI) studies have revealed a number of differences between ADHD and control brains. Independent component analysis performed on resting-state fMRI data has revealed that significantly more independent components are required to describe the variance of data from individuals with the inattentive type of ADHD.
Neurotransmitter pathways
Previously, it had been suggested that the elevated number of dopamine transporters in people with ADHD was part of the pathophysiology, but it appears the elevated numbers may be due to adaptation following exposure to stimulant medication. Current models involve the mesocorticolimbic dopamine pathway and the locus coeruleus-noradrenergic system. ADHD psychostimulants possess treatment efficacy because they increase neurotransmitter activity in these systems. There may additionally be abnormalities in serotonergic, glutamatergic, or cholinergic pathways.
Executive function and motivation
The symptoms of ADHD arise from a deficiency in certain executive functions (e.g., attentional control, inhibitory control, and working memory). Executive functions are a set of cognitive processes that are required to successfully select and monitor behaviors that facilitate the attainment of one's chosen goals. The executive function impairments that occur in ADHD individuals result in problems with staying organised, time keeping, excessive procrastination, maintaining concentration, paying attention, ignoring distractions, regulating emotions, and remembering details. People with ADHD appear to have unimpaired long-term memory, and deficits in long-term recall appear to be attributable to impairments in working memory. Due to the rates of brain maturation and the increasing demands for executive control as a person gets older, ADHD impairments may not fully manifest themselves until adolescence or even early adulthood.
ADHD has also been associated with motivational deficits in children. Children with ADHD often find it difficult to focus on long-term over short-term rewards, and exhibit impulsive behavior for short-term rewards.
Paradoxical reaction to neuroactive substances
Another sign of the structurally altered signal processing in the central nervous system in this group of people is the conspicuously common paradoxical reaction, seen in about 10–20% of patients. These are unexpected reactions in the opposite direction from the normal effect, or otherwise significantly different reactions. They occur in response to neuroactive substances such as local anesthetics at the dentist, sedatives, caffeine, antihistamines, weak neuroleptics, and central and peripheral painkillers. Since the causes of paradoxical reactions are at least partly genetic, it may be useful in critical situations, for example before operations, to ask whether such abnormalities also exist in family members.
Diagnosis
ADHD is diagnosed by an assessment of a person's behavioral and mental development, including ruling out the effects of drugs, medications, and other medical or psychiatric problems as explanations for the symptoms. ADHD diagnosis often takes into account feedback from parents and teachers, with most diagnoses begun after a teacher raises concerns. It may be viewed as the extreme end of one or more continuous human traits found in all people. Imaging studies of the brain do not give consistent results between individuals; thus, they are only used for research purposes and not for diagnosis.
In North America and Australia, DSM-5 criteria are used for diagnosis, while European countries usually use the ICD-10. The DSM-IV criteria are 3–4 times more likely to diagnose ADHD than are the ICD-10 criteria. ADHD is alternately classified as a neurodevelopmental disorder or a disruptive behavior disorder along with ODD, CD, and antisocial personality disorder. A diagnosis does not imply a neurological disorder.
Associated conditions that should be screened for include anxiety, depression, ODD, CD, and learning and language disorders. Other conditions that should be considered are other neurodevelopmental disorders, tics, and sleep apnea.
Self-rating scales, such as the ADHD rating scale and the Vanderbilt ADHD diagnostic rating scale, are used in the screening and evaluation of ADHD. Electroencephalography is not accurate enough to make an ADHD diagnosis.
Classification
Diagnostic and Statistical Manual
As with many other psychiatric disorders, a formal diagnosis should be made by a qualified professional based on a set number of criteria. In the United States, these criteria are defined by the American Psychiatric Association in the DSM. Based on the DSM-5 criteria published in 2013 and the DSM-5-TR criteria published in 2022, there are three presentations of ADHD:
ADHD, predominantly inattentive type, presents with symptoms including being easily distracted, forgetful, daydreaming, disorganization, poor concentration, and difficulty completing tasks.
ADHD, predominantly hyperactive-impulsive type, presents with excessive fidgeting and restlessness, hyperactivity, and difficulty waiting and remaining seated.
ADHD, combined type, is a combination of the first two presentations. This subdivision is based on the presence of at least six (in children) or five (in older teenagers and adults) out of nine long-term (lasting at least six months) symptoms of inattention, hyperactivity–impulsivity, or both. To be considered, several symptoms must have appeared by the age of six to twelve and occur in more than one environment (e.g. at home and at school or work). The symptoms must be inappropriate for a child of that age, and there must be clear evidence that they are causing social, school, or work-related problems. The DSM-5 and the DSM-5-TR also provide two diagnoses for individuals who have symptoms of ADHD but do not entirely meet the requirements. Other Specified ADHD allows the clinician to describe why the individual does not meet the criteria, whereas Unspecified ADHD is used where the clinician chooses not to describe the reason.
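The counting rule above can be made concrete with a short illustration. The following Python sketch is illustrative only and not a diagnostic tool; the function name and the age cut-off of 17 for the lower symptom threshold are assumptions made for this example.

    def dsm5_presentation(inattentive, hyperactive_impulsive, age):
        """Label a presentation from symptom counts (each out of nine).

        Illustrative only: a real DSM-5 assessment also requires six
        months' duration, onset by age six to twelve, impairment in
        more than one setting, and clinical judgement.
        """
        threshold = 6 if age < 17 else 5  # five suffices for older teens and adults
        inattentive_met = inattentive >= threshold
        hyperactive_met = hyperactive_impulsive >= threshold
        if inattentive_met and hyperactive_met:
            return "combined presentation"
        if inattentive_met:
            return "predominantly inattentive presentation"
        if hyperactive_met:
            return "predominantly hyperactive-impulsive presentation"
        return "full criteria not met"

    print(dsm5_presentation(inattentive=7, hyperactive_impulsive=3, age=10))
    # -> predominantly inattentive presentation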
International Classification of Diseases
In the eleventh revision of the International Statistical Classification of Diseases and Related Health Problems (ICD-11) by the World Health Organization, the disorder is classified as Attention deficit hyperactivity disorder (with the code 6A05). The defined subtypes are similar to those of the DSM-5: predominantly inattentive presentation (6A05.0); predominantly hyperactive-impulsive presentation (6A05.1); combined presentation (6A05.2). However, the ICD-11 includes two residual categories for individuals who do not entirely match any of the defined subtypes: other specified presentation (6A05.Y), where the clinician includes detail on the individual's presentation; and presentation unspecified (6A05.Z), where the clinician does not provide detail. In the tenth revision (ICD-10), the symptoms of hyperkinetic disorder were analogous to ADHD in the ICD-11. When a conduct disorder (as defined by ICD-10) was present, the condition was referred to as hyperkinetic conduct disorder. Otherwise, the disorder was classified as disturbance of activity and attention, other hyperkinetic disorders, or hyperkinetic disorders, unspecified. The latter was sometimes referred to as hyperkinetic syndrome.
Social construct theory
The social construct theory of ADHD suggests that, because the boundaries between normal and abnormal behavior are socially constructed (i.e. jointly created and validated by all members of society, and in particular by physicians, parents, teachers, and others), subjective valuations and judgements determine which diagnostic criteria are used and, thus, the number of people affected. Such differences in criteria explain why the DSM-IV could diagnose ADHD at rates three to four times higher than the ICD-10. Thomas Szasz, a supporter of this theory, has argued that ADHD was "invented and then given a name".
Adults
Adults with ADHD are diagnosed under the same criteria, including that their signs must have been present by the age of six to twelve. The individual is the best source of information in diagnosis; however, others may provide useful information about the individual's symptoms currently and in childhood, and a family history of ADHD also adds weight to a diagnosis. While the core symptoms of ADHD are similar in children and adults, they often present differently in adults than in children: for example, excessive physical activity seen in children may present as feelings of restlessness and constant mental activity in adults. Worldwide, it is estimated that 2.58% of adults have persistent ADHD (where the individual currently meets the criteria and there is evidence of childhood onset), and 6.76% of adults have symptomatic ADHD (meaning that they currently meet the criteria for ADHD, regardless of childhood onset). In 2020, this was 139.84 million and 366.33 million affected adults respectively. Around 15% of children with ADHD continue to meet full DSM-IV-TR criteria at 25 years of age, and 50% still experience some symptoms. As of 2010, most adults remain untreated. Many adults with ADHD without diagnosis and treatment have a disorganised life, and some use non-prescribed drugs or alcohol as a coping mechanism. Other problems may include relationship and job difficulties, and an increased risk of criminal activities. Associated mental health problems include depression, anxiety disorders, and learning disabilities. Some ADHD symptoms in adults differ from those seen in children. While children with ADHD may climb and run about excessively, adults may experience an inability to relax, or may talk excessively in social situations. Adults with ADHD may start relationships impulsively, display sensation-seeking behavior, and be short-tempered. Addictive behaviors such as substance abuse and gambling are common. As a result, some people whose presentation changed as they aged had outgrown the DSM-IV criteria. The DSM-5 criteria do specifically address adults, unlike the DSM-IV, which did not fully take into account the differences in impairments seen in adulthood compared to childhood. For diagnosis in an adult, having symptoms since childhood is required. Nevertheless, a proportion of adults who meet the criteria for ADHD in adulthood would not have been diagnosed with ADHD as children. Most cases of late-onset ADHD develop the disorder between the ages of 12 and 16 and may therefore be considered early adult or adolescent-onset ADHD.
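The prevalence figures quoted above can be cross-checked with simple arithmetic: each count-and-rate pair implies a total adult population, and the two pairs should roughly agree. A minimal Python sketch of that check:

    # Cross-check of the 2020 adult-prevalence figures quoted above.
    persistent_rate, persistent_count = 0.0258, 139.84e6    # persistent ADHD
    symptomatic_rate, symptomatic_count = 0.0676, 366.33e6  # symptomatic ADHD

    # Each (count, rate) pair implies the same underlying adult population.
    print(f"{persistent_count / persistent_rate / 1e9:.2f} billion adults")   # ~5.42
    print(f"{symptomatic_count / symptomatic_rate / 1e9:.2f} billion adults") # ~5.42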
Differential diagnosis
The DSM provides potential differential diagnoses, i.e. potential alternate explanations for specific symptoms. Assessment and investigation of the clinical history determine which is the most appropriate diagnosis. The DSM-5 suggests ODD, intermittent explosive disorder, and other neurodevelopmental disorders (such as stereotypic movement disorder and Tourette's disorder), in addition to specific learning disorder, intellectual developmental disorder, ASD, reactive attachment disorder, anxiety disorders, depressive disorders, bipolar disorder, disruptive mood dysregulation disorder, substance use disorder, personality disorders, psychotic disorders, medication-induced symptoms, and neurocognitive disorders. Many but not all of these are also common comorbidities of ADHD. The DSM-5-TR also suggests post-traumatic stress disorder. Symptoms of ADHD, such as low mood and poor self-image, mood swings, and irritability, can be confused with dysthymia, cyclothymia, or bipolar disorder, as well as with borderline personality disorder. Some symptoms that are due to anxiety disorders, personality disorders, developmental disabilities, intellectual disability, or the effects of substance abuse, such as intoxication and withdrawal, can overlap with ADHD. These disorders can also sometimes occur along with ADHD. Medical conditions which can cause ADHD-type symptoms include hyperthyroidism, seizure disorder, lead toxicity, hearing deficits, hepatic disease, sleep apnea, drug interactions, untreated celiac disease, and head injury. Primary sleep disorders may affect attention and behavior, and the symptoms of ADHD may affect sleep. It is thus recommended that children with ADHD be regularly assessed for sleep problems. Sleepiness in children may result in symptoms ranging from the classic ones of yawning and rubbing the eyes to hyperactivity and inattentiveness. Obstructive sleep apnea can also cause ADHD-type symptoms.
Management
The management of ADHD typically involves counseling or medications, either alone or in combination. While treatment may improve long-term outcomes, it does not eliminate negative outcomes entirely. Medications used include stimulants, atomoxetine, alpha-2 adrenergic receptor agonists, and sometimes antidepressants. In those who have trouble focusing on long-term rewards, a large amount of positive reinforcement improves task performance. ADHD stimulants also improve persistence and task performance in children with ADHD. "Recent evidence from observational and registry studies indicates that pharmacological treatment of ADHD is associated with increased achievement and decreased absenteeism at school, a reduced risk of trauma-related emergency hospital visits, reduced risks of suicide and attempted suicide, and decreased rates of substance abuse and criminality".
Behavioral therapies
There is good evidence for the use of behavioral therapies in ADHD. They are the recommended first-line treatment in those who have mild symptoms or who are of preschool age. Psychological therapies used include psychoeducational input, behavior therapy, cognitive behavioral therapy, interpersonal psychotherapy, family therapy, school-based interventions, social skills training, behavioral peer intervention, organization training, and parent management training. Neurofeedback has greater treatment effects than non-active controls for up to six months and possibly a year following treatment, and may have treatment effects comparable to active controls (controls proven to have a clinical effect) over that time period. Despite efficacy in research, there is insufficient regulation of neurofeedback practice, leading to ineffective applications and false claims regarding innovations. Parent training may improve a number of behavioral problems, including oppositional and non-compliant behaviors. There is little high-quality research on the effectiveness of family therapy for ADHD, but the existing evidence shows that it is similar to community care and better than placebo. ADHD-specific support groups can provide information and may help families cope with ADHD. Social skills training, behavioral modification, and medication may have some limited beneficial effects on peer relationships. Stable, high-quality friendships with non-deviant peers protect against later psychological problems.
Medication
Stimulants
Methylphenidate and amphetamine or its derivatives are first-line treatments for ADHD, as they are considered the most effective pharmaceutical treatments. About 70 percent of patients respond to the first stimulant tried, and as few as 10 percent respond to neither amphetamines nor methylphenidate. Stimulants may also reduce the risk of unintentional injuries in children with ADHD. Magnetic resonance imaging studies suggest that long-term treatment with amphetamine or methylphenidate decreases abnormalities in brain structure and function found in subjects with ADHD. A 2018 review found the greatest short-term benefit with methylphenidate in children and amphetamines in adults. The likelihood of developing insomnia for ADHD patients taking stimulants has been measured at between 11 and 45 percent for different medications, and may be a main reason for discontinuation. Other side effects, such as tics, decreased appetite and weight loss, or emotional lability, may also lead to discontinuation. Stimulant psychosis and mania are rare at therapeutic doses, appearing to occur in approximately 0.1% of individuals, within the first several weeks after starting amphetamine therapy. The safety of these medications in pregnancy is unclear. Symptom improvement is not sustained if medication is ceased. The long-term effects of ADHD medication have yet to be fully determined, although stimulants are generally beneficial and safe for up to two years for children and adolescents.
Regular monitoring has been recommended in those on long-term treatment. There are indications suggesting that stimulant therapy for children and adolescents should be stopped periodically to assess the continuing need for medication, decrease possible growth delay, and reduce tolerance. Although potentially addictive at high doses, stimulants used to treat ADHD have low potential for abuse. Treatment with stimulants is either protective against substance abuse or has no effect. The majority of studies on nicotine and other nicotinic agonists as treatments for ADHD have shown favorable results; however, no nicotinic drug has been approved for ADHD treatment. Caffeine was formerly used as a second-line treatment for ADHD. It is considered less effective than methylphenidate or amphetamine but more effective than placebo for children with ADHD. Pseudoephedrine and ephedrine do not affect ADHD symptoms. Modafinil has shown some efficacy in reducing the severity of ADHD in children and adolescents. It may be prescribed off-label to treat ADHD.
Non-stimulants
There are a number of non-stimulant medications, such as viloxazine, atomoxetine, bupropion, guanfacine, amantadine (shown effective in children and adolescents, but not yet demonstrated in adults), and clonidine, that may be used as alternatives, or added to stimulant therapy. There are no good studies comparing the various medications; however, they appear more or less equal with respect to side effects. For children, stimulants appear to improve academic performance while atomoxetine does not. Atomoxetine, due to its lack of addiction liability, may be preferred in those who are at risk of recreational or compulsive stimulant use, although evidence is lacking to support its use over stimulants for this reason. Evidence supports its ability to improve symptoms when compared to placebo. Amantadine was shown to induce similar improvements in children treated with methylphenidate, with less frequent side effects. A 2021 retrospective study showed that amantadine may serve as an effective adjunct to stimulants for ADHD-related symptoms and appears to be a safer alternative to second- or third-generation antipsychotics. There is little evidence on the effects of medication on social behaviors. Antipsychotics may also be used to treat aggression in ADHD.
Guidelines
Guidelines on when to use medications vary by country. The United Kingdom's National Institute for Health and Care Excellence recommends use for children only in severe cases, though for adults medication is a first-line treatment. Conversely, most United States guidelines recommend medications in most age groups. Medication is especially not recommended for preschool children. Underdosing of stimulants can occur and can result in a lack of response or later loss of effectiveness. This is particularly common in adolescents and adults, as approved dosing is based on school-aged children, causing some practitioners to use weight-based or benefit-based off-label dosing instead.
Exercise
Regular physical exercise, particularly aerobic exercise, is an effective add-on treatment for ADHD in children and adults, particularly when combined with stimulant medication (although the best intensity and type of aerobic exercise for improving symptoms are not currently known). The long-term effects of regular aerobic exercise in ADHD individuals include better behavior and motor abilities, improved executive functions (including attention, inhibitory control, and planning, among other cognitive domains), faster information processing speed, and better memory. Parent-teacher ratings of behavioral and socio-emotional outcomes in response to regular aerobic exercise include: better overall function, reduced ADHD symptoms, better self-esteem, reduced levels of anxiety and depression, fewer somatic complaints, better academic and classroom behavior, and improved social behavior. Exercising while on stimulant medication augments the effect of stimulant medication on executive function. It is believed that these short-term effects of exercise are mediated by an increased abundance of synaptic dopamine and norepinephrine in the brain.
Diet
Dietary modifications are not recommended as of 2019 by the American Academy of Pediatrics, the National Institute for Health and Care Excellence, or the Agency for Healthcare Research and Quality due to insufficient evidence.
A 2013 meta-analysis found that less than a third of children with ADHD see some improvement in symptoms with free fatty acid supplementation or decreased consumption of artificial food coloring. These benefits may be limited to children with food sensitivities or those who are simultaneously being treated with ADHD medications. This review also found that evidence does not support removing other foods from the diet to treat ADHD. A 2014 review found that an elimination diet results in a small overall benefit in a minority of children, such as those with allergies. A 2016 review stated that the use of a gluten-free diet as standard ADHD treatment is not advised. A 2017 review showed that a few-foods elimination diet may help children too young to be medicated or not responding to medication, but it advised against free fatty acid supplementation or decreased consumption of artificial food coloring as standard ADHD treatment. Chronic deficiencies of iron, magnesium, and iodine may have a negative impact on ADHD symptoms. There is a small amount of evidence that lower tissue zinc levels may be associated with ADHD. In the absence of a demonstrated zinc deficiency (which is rare outside of developing countries), zinc supplementation is not recommended as treatment for ADHD. However, zinc supplementation may reduce the minimum effective dose of amphetamine when it is used with amphetamine for the treatment of ADHD.
Prognosis
ADHD persists into adulthood in about 30–50% of cases. Those affected are likely to develop coping mechanisms as they mature, thus compensating to some extent for their previous symptoms. Children with ADHD have a higher risk of unintentional injuries. Effects of medication on functional impairment and quality of life (e.g. reduced risk of accidents) have been found across multiple domains. Rates of smoking among those with ADHD are higher than in the general population, at about 40%. Individuals with ADHD are significantly overrepresented in prison populations. Although there is no generally accepted estimate of ADHD prevalence among inmates, a 2015 meta-analysis estimated a prevalence of 25.5%, and a larger 2018 meta-analysis estimated the frequency to be 26.2%. ADHD is more common among longer-term inmates; a 2010 study at Norrtälje Prison, a high-security prison in Sweden, found an estimated ADHD prevalence of 40%.
Epidemiology
ADHD is estimated to affect about 6–7% of people aged 18 and under when diagnosed via the DSM-IV criteria. When diagnosed via the ICD-10 criteria, rates in this age group are estimated at around 1–2%. Children in North America appear to have a higher rate of ADHD than children in Africa and the Middle East; this is believed to be due to differing methods of diagnosis rather than a difference in underlying frequency. As of 2019, it was estimated to affect 84.7 million people globally. If the same diagnostic methods are used, the rates are similar between countries. ADHD is diagnosed approximately three times more often in boys than in girls. This may reflect either a true difference in underlying rate, or that women and girls with ADHD are less likely to be diagnosed. Rates of diagnosis and treatment have increased in both the United Kingdom and the United States since the 1970s. Prior to 1970, it was rare for children to be diagnosed with ADHD, while in the 1970s rates were about 1%. This is believed to be primarily due to changes in how the condition is diagnosed and how readily people are willing to treat it with medications, rather than a true change in how common the condition is. It was believed that changes to the diagnostic criteria in 2013 with the release of the DSM-5 would increase the percentage of people diagnosed with ADHD, especially among adults. Due to disparities in the treatment and understanding of ADHD between Caucasian and non-Caucasian populations, many non-Caucasian children go undiagnosed and unmedicated. It was found that within the US there was often a disparity between Caucasian and non-Caucasian understandings of ADHD. This led to differences in the classification of the symptoms of ADHD, and therefore to its misdiagnosis. It was also found that non-Caucasian families and teachers commonly understood the symptoms of ADHD as behavioral issues rather than as mental illness. Cross-cultural differences in the diagnosis of ADHD can also be attributed to the long-lasting effects of harmful, racially targeted medical practices. Medical pseudosciences, particularly those that targeted African American populations during the period of slavery in the US, led to a distrust of medical practices within certain communities. The combination of ADHD symptoms often being regarded as misbehavior rather than as a psychiatric condition, and the use of drugs to regulate ADHD, result in a hesitancy to trust a diagnosis of ADHD. Cases of misdiagnosis in ADHD can also occur due to stereotyping of non-Caucasian individuals. Because ADHD's symptoms are subjectively determined, medical professionals may diagnose individuals based on stereotyped behavior or misdiagnose due to differences in symptom presentation between Caucasian and non-Caucasian individuals.
History
Hyperactivity has long been part of the human condition. Sir Alexander Crichton described "mental restlessness" in his book An inquiry into the nature and origin of mental derangement, written in 1798. He made observations about children showing signs of being inattentive and having the "fidgets". The first clear description of ADHD is credited to George Still in 1902, during a series of lectures he gave to the Royal College of Physicians of London. He noted that both nature and nurture could be influencing this disorder. Alfred Tredgold proposed an association between brain damage and behavioral or learning problems, which was validated by the encephalitis lethargica epidemic of 1917 through 1928. The terminology used to describe the condition has changed over time and has included: minimal brain dysfunction in the DSM-I (1952), hyperkinetic reaction of childhood in the DSM-II (1968), and attention-deficit disorder with or without hyperactivity in the DSM-III (1980). In 1987, this was changed to ADHD in the DSM-III-R, and in 1994 the DSM-IV split the diagnosis into three subtypes: ADHD inattentive type, ADHD hyperactive-impulsive type, and ADHD combined type. These terms were kept in the DSM-5 in 2013 and in the DSM-5-TR in 2022. Prior to the DSM, terms included minimal brain damage in the 1930s. In 1934, Benzedrine became the first amphetamine medication approved for use in the United States. Methylphenidate was introduced in the 1950s, and enantiopure dextroamphetamine in the 1970s. The use of stimulants to treat ADHD was first described in 1937, when Charles Bradley gave children with behavioral disorders Benzedrine and found that it improved academic performance and behavior. Once neuroimaging studies were possible, studies conducted in the 1990s provided support for the pre-existing theory that neurological differences, particularly in the frontal lobes, were involved in ADHD. During this same period, a genetic component was identified, and ADHD was acknowledged to be a persistent, long-term disorder which lasted from childhood into adulthood. ADHD was split into the current three subtypes because of a field trial completed by Lahey and colleagues.
Controversy
ADHD, its diagnosis, and its treatment have been controversial since the 1970s. The controversies involve clinicians, teachers, policymakers, parents, and the media. Positions range from the view that ADHD is within the normal range of behavior to the hypothesis that ADHD is a genetic condition. Other areas of controversy include the use of stimulant medications in children, the method of diagnosis, and the possibility of overdiagnosis. In 2009, the National Institute for Health and Care Excellence, while acknowledging the controversy, stated that the current treatments and methods of diagnosis are based on the dominant view of the academic literature. In 2014, Keith Conners, one of the early advocates for recognition of the disorder, spoke out against overdiagnosis in a New York Times article. In contrast, a 2014 peer-reviewed medical literature review indicated that ADHD is underdiagnosed in adults. With widely differing rates of diagnosis across countries, states within countries, races, and ethnicities, some suspect that factors other than the presence of the symptoms of ADHD are playing a role in diagnosis, such as cultural norms. Some sociologists consider ADHD to be an example of the medicalization of deviant behavior, that is, the turning of the previously non-medical issue of school performance into a medical one. Most healthcare providers accept ADHD as a genuine disorder, at least in the small number of people with severe symptoms. Among healthcare providers, the debate mainly centers on diagnosis and treatment in the much greater number of people with mild symptoms. The nature and range of desirable endpoints of ADHD treatment vary among diagnostic standards for ADHD. In most studies, the efficacy of treatment is determined by reductions in ADHD symptoms. However, some studies have included subjective ratings from teachers and parents as part of their assessment of ADHD treatment efficacies. By contrast, the subjective ratings of children undergoing ADHD treatment are seldom included in studies evaluating the efficacy of ADHD treatments.
There have been notable differences in diagnosis patterns relative to birth date in school-age children. Those who are relatively younger than their classmates at school entry are more likely to be diagnosed with ADHD. In settings where the school-age cut-off was December 31, boys born in December were 30% more likely to be diagnosed and 41% more likely to be treated than those born in January. Girls born in December were 70% more likely to be diagnosed and 77% more likely to be treated than those born the following month. Children born in the last three days of a calendar year were reported to have significantly higher levels of diagnosis and treatment for ADHD than children born in the first three days of a calendar year. These studies suggest that ADHD diagnosis is prone to subjective analysis.
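A minimal sketch of how such relative-age effects are quantified as a relative risk; the cohort counts below are invented purely for illustration and chosen so that the ratio reproduces the 30% figure for boys:

    # Relative risk of diagnosis by birth month, with invented example counts.
    dec_diagnosed, dec_total = 130, 10_000  # hypothetical December-born boys
    jan_diagnosed, jan_total = 100, 10_000  # hypothetical January-born boys

    relative_risk = (dec_diagnosed / dec_total) / (jan_diagnosed / jan_total)
    print(f"relative risk = {relative_risk:.2f}")  # 1.30, i.e. "30% more likely"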
Research directions
Possible positive traits
Research into possible positive traits of ADHD is a new avenue and therefore limited. A 2020 review found that creativity may be associated with ADHD symptoms, particularly divergent thinking and quantity of creative achievements, but not with the disorder of ADHD itself; that is, it has not been found to be increased in people diagnosed with the disorder, only in people with subclinical symptoms or those that possess traits associated with the disorder. Divergent thinking is the ability to produce creative solutions which differ significantly from each other and consider the issue from multiple perspectives. Those with ADHD symptoms could be advantaged in this form of creativity as they tend to have diffuse attention, allowing rapid switching between aspects of the task under consideration; flexible associative memory, allowing them to remember and use more distantly related ideas, which is associated with creativity; and impulsivity, which causes people with ADHD symptoms to consider ideas which others may not have. However, people with ADHD may struggle with convergent thinking, which is a cognitive process through which a set of obviously relevant knowledge is utilized in a focused effort to arrive at a single perceived best solution to a problem. A 2020 article suggested that historical documentation supported Leonardo da Vinci's difficulties with procrastination and time management as characteristic of ADHD, and that he was constantly on the go, but often jumping from task to task.
Possible biomarkers for diagnosis
Reviews of ADHD biomarkers have noted that platelet monoamine oxidase expression, urinary norepinephrine, urinary MHPG, and urinary phenethylamine levels consistently differ between ADHD individuals and non-ADHD controls. These measurements could potentially serve as diagnostic biomarkers for ADHD, but more research is needed to establish their diagnostic utility. Urinary and blood plasma phenethylamine concentrations are lower in ADHD individuals relative to controls and the two most commonly prescribed drugs for ADHD, amphetamine and methylphenidate, increase phenethylamine biosynthesis in treatment-responsive individuals with ADHD. Lower urinary phenethylamine concentrations are also associated with symptoms of inattentiveness in ADHD individuals.
See also
Accident-proneness § Hypophobia
References
Further reading
External links
National Institute of Mental Health. NIMH Pages About Attention-Deficit/Hyperactivity Disorder (ADHD). National Institutes of Health (NIH), U.S. Department of Health and Human Services. Archived 4 November 2021 at the Wayback Machine
New Zealand Ministry of Health Guidelines for the Assessment and Treatment of Attention-Deficit/Hyperactivity Disorder. 2 July 2001. Archived 27 October 2014 at the Wayback Machine
"Women and girls with ADHD" (video). (17 April 2020), with Stephen P. Hinshaw and others, Knowable Magazine Attention deficit hyperactivity disorder at the Internet Archive. |
Shaken baby syndrome | Shaken baby syndrome (SBS), also known as abusive head trauma (AHT), is the leading cause of fatal head injuries in children younger than two years. Diagnosing the syndrome has proved both challenging and contentious for medical professionals, in that objective witnesses to the initial trauma are generally unavailable. This is said to be particularly problematic when the trauma is deemed non-accidental. Some medical professionals propose that SBS is the result of respiratory abnormalities leading to hypoxia and swelling of the brain. The courtroom has become a forum for conflicting theories with which generally accepted medical literature has not been reconciled. Often there are no outwardly visible signs of trauma, despite the presence of severe internal brain and eye injuries. Complications include seizures, visual impairment, cerebral palsy, cognitive impairment, and death. The cause may be blunt trauma, vigorous shaking, or a combination of both. Often this occurs as a result of a caregiver becoming frustrated due to the child crying. Diagnosis can be difficult, as symptoms may be nonspecific. A CT scan of the head is typically recommended if a concern is present. If there are concerning findings on the CT scan, a full work-up for child abuse should occur, including an eye exam and skeletal survey. Retinal hemorrhage is highly associated with AHT, occurring in 78% of cases of AHT versus 5% of cases of non-abusive head trauma. Educating new parents appears to be beneficial in decreasing rates of the condition. SBS is estimated to occur in three to four per 10,000 babies a year. It occurs most frequently in those less than five years of age. The risk of death is about 25%. Diagnostic findings include retinal bleeds, multiple fractures of the long bones, and subdural hematomas (bleeding in the brain). These signs have evolved through the years as the accepted and recognized signs of child abuse. Medical professionals strongly suspect shaking as the cause of injuries when a young child presents with retinal bleed, fractures, soft tissue injuries, or subdural hematoma that cannot be explained by accidental trauma or other medical conditions. Retinal hemorrhage (bleeding) occurs in around 85% of SBS cases, and the severity of retinal hemorrhage correlates with the severity of head injury. The types of retinal bleeds are often believed to be particularly characteristic of this condition, making the finding useful in establishing the diagnosis. Fractures of the vertebrae, long bones, and ribs may also be associated with SBS. Dr. John Caffey reported in 1972 that metaphyseal avulsions (small fragments of bone torn off where the periosteum covering the bone and the cortical bone are tightly bound together) and "bones on both the proximal and distal sides of a single joint are affected, especially at the knee". Infants may display irritability, failure to thrive, alterations in eating patterns, lethargy, vomiting, seizures, bulging or tense fontanels (the soft spots on a baby's head), increased size of the head, altered breathing, and dilated pupils.
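The two retinal-hemorrhage rates quoted above determine how strongly the finding shifts the odds toward AHT. The following Python sketch works through the implied positive likelihood ratio and a Bayesian update; the 30% pre-test probability is an assumed value used only for illustration, not a figure from the literature.

    # How a retinal-hemorrhage finding shifts the odds of AHT, using the
    # rates quoted above (78% in AHT vs 5% in non-abusive head trauma).
    p_finding_given_aht = 0.78
    p_finding_given_other = 0.05

    lr_positive = p_finding_given_aht / p_finding_given_other  # ~15.6

    pretest = 0.30  # assumed pre-test probability, for illustration only
    pretest_odds = pretest / (1 - pretest)
    posttest_odds = pretest_odds * lr_positive
    posttest = posttest_odds / (1 + posttest_odds)

    print(f"LR+ = {lr_positive:.1f}, post-test probability = {posttest:.2f}")
    # LR+ = 15.6, post-test probability = 0.87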
Risk factors
Caregivers that are at risk for becoming abusive often have unrealistic expectations of the child and may display "role reversal", expecting the child to fulfill the needs of the caregiver. Substance abuse and emotional stress, resulting for example from financial troubles, are other risk factors for aggression and impulsiveness in caregivers. Caregivers of any gender can cause SBS. Although it had been previously speculated that SBS was an isolated event, evidence of prior child abuse is a common finding. In an estimated 33–40% of cases, evidence of prior head injuries, such as old intracranial bleeds, is present.
Mechanism
Effects of SBS are thought to be diffuse axonal injury, oxygen deprivation and swelling of the brain, which can raise pressure inside the skull and damage delicate brain tissue, although witnessed shaking events have not led to such injuries.
Traumatic shaking occurs when a child is shaken in such a way that its head is flung backwards and forwards. In 1971, Guthkelch, a neurosurgeon, hypothesized that such shaking can result in a subdural hematoma, in the absence of any detectable external signs of injury to the skull. The article describes two cases in which the parents admitted that for various reasons they had shaken the child before it became ill. Moreover, one of the babies had retinal hemorrhages. The association between traumatic shaking, subdural hematoma and retinal hemorrhages was described in 1972 and referred to as whiplash shaken infant syndrome. The injuries were believed to occur because shaking the child subjected the head to acceleration–deceleration and rotational forces.
Force
There has been controversy regarding the amount of force required to produce the brain damage seen in SBS. There is broad agreement, even amongst skeptics, that shaking of a baby is dangerous and can be fatal. A biomechanical analysis published in 2005 reported that "forceful shaking can severely injure or kill an infant, this is because the cervical spine would be severely injured and not because subdural hematomas would be caused by high head rotational accelerations... an infant head subjected to the levels of rotational velocity and acceleration called for in the SBS literature, would experience forces on the infant neck far exceeding the limits for structural failure of the cervical spine. Furthermore, shaking cervical spine injury can occur at much lower levels of head velocity and acceleration than those reported for SBS." Other authors were critical of the mathematical analysis by Bandak, citing concerns about the calculations the author used and concluding: "In light of the numerical errors in Bandak's neck force estimations, we question the resolute tenor of Bandak's conclusions that neck injuries would occur in all shaking events." Still other authors, critical of the model proposed by Bandak, concluded that "the mechanical analogue proposed in the paper may not be entirely appropriate when used to model the motion of the head and neck of infants when a baby is shaken." Bandak responded to the criticism in a letter to the editor published in Forensic Science International in February 2006.
Diagnosis
Diagnosis can be difficult, as symptoms may be nonspecific. A CT scan of the head is typically recommended if a concern is present. It is unclear how useful subdural hematoma, retinal hemorrhages, and encephalopathy are on their own in making the diagnosis.
Triad
While the findings of SBS are complex and many, they are often incorrectly referred to as a "triad" for legal proceedings, distilled down to retinal hemorrhages, subdural hematomas, and encephalopathy. SBS may be misdiagnosed, underdiagnosed, and overdiagnosed, and caregivers may lie or be unaware of the mechanism of injury. Commonly, there are no externally visible signs of the condition. Examination by an experienced ophthalmologist is critical in diagnosing shaken baby syndrome, as particular forms of ocular bleeding are strongly associated with AHT. Magnetic resonance imaging may also depict retinal hemorrhaging but is much less sensitive than an eye exam. Conditions that are often excluded by clinicians include hydrocephalus, sudden infant death syndrome (SIDS), seizure disorders, and infectious or congenital diseases like meningitis and metabolic disorders. CT scanning and magnetic resonance imaging are used to diagnose the condition. Conditions that often accompany SBS/AHT include classic patterns of skeletal fracturing (rib fractures, corner fractures), injury to the cervical spine (in the neck), retinal hemorrhage, cerebral bleed or atrophy, hydrocephalus, and papilledema (swelling of the optic disc). The terms non-accidental head injury and inflicted traumatic brain injury have been used in place of "abusive head trauma" or "SBS".
Classification
The term abusive head trauma (AHT) is preferred as it better represents the broader potential causes.
The US Centers for Disease Control and Prevention identifies SBS as "an injury to the skull or intracranial contents of an infant or young child (< 5 years of age) due to inflicted blunt impact and/or violent shaking". In 2009, the American Academy of Pediatrics recommended the use of the term AHT to replace SBS, in part to differentiate injuries arising solely from shaking from injuries arising from shaking as well as trauma to the head. The Crown Prosecution Service for England and Wales recommended in 2011 that the term shaken baby syndrome be avoided and the term non-accidental head injury (NAHI) be used instead.
Differential diagnosis
Vitamin C deficiency
Some authors have suggested that certain cases of suspected shaken baby syndrome may result from vitamin C deficiency. This contested hypothesis is based upon a speculated marginal, near-scorbutic condition or lack of essential nutrient repletion and a potentially elevated histamine level. However, symptoms consistent with increased histamine levels, such as low blood pressure and allergic symptoms, are not commonly associated with scurvy or clinically significant vitamin C deficiency. A literature review of this hypothesis in the journal Pediatrics International concluded that, from the available information in the literature, there was no convincing evidence that vitamin C deficiency can be considered a cause of shaken baby syndrome. The proponents of such hypotheses often question the adequacy of nutrient tissue levels, especially of vitamin C, in children who are currently or recently ill, have bacterial infections, have higher individual requirements, face environmental challenges (e.g. allergies), or perhaps experience transient vaccination-related stresses. At the time of this writing, infantile scurvy in the United States is practically nonexistent. No cases of scurvy mimicking SBS or sudden infant death syndrome have been reported, and scurvy typically occurs later in infancy, rarely causes death or intracranial bleeding, and is accompanied by other changes of the bones and skin, and invariably by an unusually deficient dietary history. In one study, vaccination was shown not to be associated with retinal hemorrhages.
Gestational problems
Gestational problems affecting both mother and fetus, the birthing process, prematurity and nutritional deficits can accelerate skeletal and hemorrhagic pathologies that can also mimic SBS, even before birth.
Prevention
Interventions by neonatal nurses, such as giving parents information about abusive head trauma, normal infant crying, and the reasons for crying, teaching them how to calm an infant, and teaching them how to cope if the infant is inconsolable, may reduce rates of SBS.
Treatment
Treatment involves monitoring intracranial pressure (the pressure within the skull). Treatment occasionally requires surgery, such as to place a cerebral shunt to drain fluid from the cerebral ventricles, and, if an intracranial hematoma is present, to drain the blood collection.
Prognosis
Prognosis depends on severity and can range from total recovery to severe disability to death when the injury is severe.
One third of these patients die, one third survives with a major neurological condition, and only one third survives in good condition; therefore shaken baby syndrome puts children at risk of long-term disability. The most frequent neurological impairments are learning disabilities, seizure disorders, speech disabilities, hydrocephalus, cerebral palsy, and visual disorders.
Epidemiology
Small children are at particularly high risk for the abuse that causes SBS given the large difference in size between the small child and an adult. SBS usually occurs in children under the age of two but may occur in those up to age five. In the US, deaths due to SBS constitute about 10% of deaths due to child abuse.
History
In 1971, Norman Guthkelch proposed that whiplash injury caused subdural bleeding in infants by tearing the veins in the subdural space. The term "whiplash shaken infant syndrome" was introduced by Dr. John Caffey, a pediatric radiologist, in 1973, describing a set of symptoms found with little or no external evidence of head trauma, including retinal bleeds and intracranial bleeds with subdural or subarachnoid bleeding or both. Development of computed tomography and magnetic resonance imaging techniques in the 1970s and 1980s advanced the ability to diagnose the syndrome.
Legal issues
The President's Council of Advisors on Science and Technology (PCAST) noted in its September 2016 report that there are concerns regarding the scientific validity of forensic evidence of abusive head trauma that "require urgent attention". Similarly, the Maguire model, suggested in 2011 as a potential statistical model for determining the probability that a child's trauma was caused by abuse, has been questioned. A proposed clinical prediction rule with high sensitivity and low specificity, intended to rule out abusive head trauma, has been published. In July 2005, the Court of Appeal in the United Kingdom heard four appeals of SBS convictions: one case was dropped, the sentence was reduced for one, and two convictions were upheld. The court found that the classic triad of retinal bleeding, subdural hematoma, and acute encephalopathy is not 100% diagnostic of SBS and that clinical history is also important. In its ruling, the court upheld the clinical concept of SBS but dismissed one case and reduced another from murder to manslaughter. In its words: "Whilst a strong pointer to NAHI [non-accidental head injury] on its own we do not think it possible to find that it must automatically and necessarily lead to a diagnosis of NAHI. All the circumstances, including the clinical picture, must be taken into account." The court did not accept the "unified hypothesis", proposed by British physician J. F. Geddes and colleagues, as an alternative mechanism for the subdural and retinal bleeding found in suspected cases of SBS. The unified hypothesis proposed that the bleeding was not caused by shearing of subdural and retinal veins but rather by cerebral hypoxia, increased intracranial pressure, and increased pressure in the brain's blood vessels. The court reported that "the unified hypothesis [could] no longer be regarded as a credible or alternative cause of the triad of injuries": subdural haemorrhage, retinal bleeding, and encephalopathy due to hypoxemia (low blood oxygen) found in suspected SBS. On 31 January 2008, the Wisconsin Court of Appeals granted Audrey A. Edmunds a new trial based on "competing credible medical opinions in determining whether there is a reasonable doubt as to Edmunds's guilt." Specifically, the appeals court found that "Edmunds presented evidence that was not discovered until after her conviction, in the form of expert medical testimony, that a significant and legitimate debate in the medical community has developed in the past ten years over whether infants can be fatally injured through shaking alone, whether an infant may suffer head trauma and yet experience a significant lucid interval prior to death, and whether other causes may mimic the symptoms traditionally viewed as indicating shaken baby or shaken impact syndrome." In 2012, A. Norman Guthkelch, the neurosurgeon often credited with "discovering" the diagnosis of SBS, published an article "after 40 years of consideration," which is harshly critical of shaken baby prosecutions based solely on the triad of injuries. Also in 2012, Guthkelch stated in an interview, "I think we need to go back to the drawing board and make a more thorough assessment of these fatal cases, and I am going to bet ... that we are going to find in every – or at least the large majority of cases, the child had another severe illness of some sort which was missed until too late." Furthermore, in 2015, Guthkelch went so far as to say, "I was against defining this thing as a syndrome in the first instance.
To go on and say every time you see it, it's a crime... It became an easy way to go into jail." On the other hand, Teri Covington, who runs the National Center for Child Death Review Policy and Practice, worries that such caution has led to a growing number of cases of child abuse in which the abuser is not punished. In March 2016, Waney Squier, a paediatric neuropathologist who has served as an expert witness in many shaken baby trials, was struck off the medical register for misconduct. Shortly after the ruling, Squier was given the "champion of justice" award by the International Innocence Network for her efforts to free those wrongfully convicted of shaken baby syndrome. Squier denied the allegations and appealed the decision to strike her off the medical register. As her case was heard by the High Court of England and Wales in October 2016, an open letter to the British Medical Journal questioning the decision to strike off Squier was signed by 350 doctors, scientists, and attorneys. On 3 November 2016, the court published a judgment which concluded that "the determination of the MPT [Medical Practitioners Tribunal] is in many significant respects flawed". The judge found that she had committed serious professional misconduct but was not dishonest. She was reinstated to the medical register but prohibited from giving expert evidence in court for the next three years. The Louise Woodward case relied on the "shaken baby syndrome" diagnosis.
References
External links
Centers for Disease Control and Prevention – Abusive head trauma
Glucocorticoid remediable aldosteronism | Glucocorticoid remediable aldosteronism (GRA), also describable as aldosterone synthase hyperactivity, is an autosomal dominant disorder in which the increase in aldosterone secretion produced by ACTH is no longer transient.
It is a cause of primary hyperaldosteronism.
Symptoms and signs
Patients with GRA may be asymptomatic, but the following symptoms can be present:
Fatigue
Headache
High blood pressure
Hypokalemia
Intermittent or temporary paralysis
Muscle spasms
Muscle weakness
Numbness
Polyuria
Polydipsia
Tingling
Hypernatraemia
Metabolic alkalosis
Normal physiology
Aldosterone synthase is a steroid hydroxylase cytochrome P450 oxidase enzyme involved in the generation of aldosterone. It is localized to the mitochondrial inner membrane. The enzyme has steroid 18-hydroxylase activity to synthesize aldosterone and other steroids. Aldosterone synthase is found within the zona glomerulosa at the outer edge of the adrenal cortex. Aldosterone synthase normally is not ACTH sensitive, and is only activated by angiotensin II.Aldosterone causes the tubules of the kidneys to retain sodium and water. This increases the volume of fluid in the body and drives up blood pressure.Steroid hormones are synthesized from cholesterol within the adrenal cortex. Aldosterone and corticosterone share the first part of their biosynthetic pathway. The last part is either mediated by the aldosterone synthase (for aldosterone) or by the 11β-hydroxylase (for corticosterone).
Aldosterone synthesis is stimulated by several factors:
by an increase in the plasma concentration of angiotensin III.
by increased plasma angiotensin II, ACTH, or potassium levels.
The ACTH stimulation test is sometimes used to stimulate the production of aldosterone along with cortisol to determine if primary or secondary adrenal insufficiency is present.
by plasma acidosis.
by the stretch receptors located in the atria of the heart.
by adrenoglomerulotropin, a lipid factor, obtained from pineal extracts. It selectively stimulates secretion of aldosterone.
The secretion of aldosterone has a diurnal rhythm.
Control of aldosterone release from the adrenal cortex:
The role of the renin–angiotensin system: Angiotensin is involved in regulating aldosterone and is the core regulator. Angiotensin II acts synergistically with potassium.
The role of sympathetic nerves: Aldosterone production is also affected to one extent or another by nervous control, which integrates the inverse of carotid artery pressure, pain, posture, and probably emotion (anxiety, fear, and hostility), including surgical stress.
The role of baroreceptors: Pressure in the carotid artery decreases aldosterone.
The role of the juxtaglomerular apparatus
The plasma concentration of potassium: The amount of aldosterone secreted is a direct function of the serum potassium, as probably determined by sensors in the carotid artery.
The plasma concentration of sodium: Aldosterone is a function of the inverse of the sodium intake as sensed via osmotic pressure.
Miscellaneous regulation: ACTH, a pituitary peptide, also has some stimulating effect on aldosterone, probably by stimulating deoxycorticosterone formation, which is a precursor of aldosterone.
Aldosterone is increased by blood loss, pregnancy, and possibly by other circumstances such as physical exertion, endotoxin shock, and burns.
Aldosterone feedback:
Feedback by aldosterone concentration itself is of a non-morphological character (that is, other than changes in cell number or structure) and is relatively poor so that electrolyte feedback predominates in the short term.
Pathophysiology
The genes encoding aldosterone synthase and 11β-hydroxylase are 95% identical and are close together on chromosome 8. In individuals with GRA, there is unequal crossing over, so that the 5′ regulatory region of the 11β-hydroxylase gene is fused to the coding region of aldosterone synthase. The product of this hybrid gene is an aldosterone synthase that is ACTH-sensitive in the zona fasciculata of the adrenal gland. Although in normal subjects ACTH accelerates the first step of aldosterone synthesis, ACTH normally has no effect on the activity of aldosterone synthase. However, in subjects with glucocorticoid-remediable aldosteronism, ACTH increases the activity of existing aldosterone synthase, resulting in an abnormally high rate of aldosterone synthesis and hyperaldosteronism.
Diagnosis
Genetic testing is done to ascertain that the individual in question does indeed have the condition.
Treatment
In GRA, the hypersecretion of aldosterone and the accompanying hypertension are remedied when ACTH secretion is suppressed by administering glucocorticoids. Dexamethasone, spironolactone, and eplerenone have been used in treatment.
See also
Inborn errors of steroid metabolism
Hyperaldosteronism
Pseudohyperaldosteronism
Apparent mineralocorticoid excess syndrome
Aldosterone and aldosterone synthase
References
External links
Cervical fracture | A cervical fracture, commonly called a broken neck, is a fracture of any of the seven cervical vertebrae in the neck. Examples of common causes in humans are traffic collisions and diving into shallow water. Abnormal movement of neck bones or pieces of bone can cause a spinal cord injury, resulting in loss of sensation, paralysis, or even instant death.
Causes
Considerable force is needed to cause a cervical fracture. Vehicle collisions and falls are common causes. A severe, sudden twist to the neck or a severe blow to the head or neck area can cause a cervical fracture.
Although high-energy trauma is often associated with cervical fractures in the younger population, low-energy trauma is more common in the geriatric population. In a study from Norway, the most common cause was falls, and the relative incidence of cervical spine fractures increased significantly with age. Sports that involve violent physical contact carry a risk of cervical fracture, including American football, association football (especially for the goalkeeper), ice hockey, rugby, and wrestling. Spearing an opponent in football or rugby, for instance, can cause a broken neck. Cervical fractures may also be seen in some non-contact sports, such as gymnastics, skiing, diving, surfing, powerlifting, equestrianism, mountain biking, and motor racing.
Certain penetrating neck injuries can also cause cervical fracture, which can lead to internal bleeding among other complications.
Execution by hanging is intended to cause a fatal cervical fracture. The knot in the noose is placed to the left of the condemned, so that at the end of the drop, the head is jolted sharply upwards and to the right. The force breaks the neck, causing an immediate loss of consciousness and death within a few minutes.
Diagnosis
Physical examination
A medical history and physical examination can be sufficient for clearing the cervical spine. Notable clinical prediction rules to determine which patients need medical imaging are the Canadian C-spine rule and the National Emergency X-Radiography Utilization Study (NEXUS) criteria.
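As an illustration of how such a prediction rule operates, the Python sketch below encodes the five NEXUS low-risk criteria as boolean checks: imaging can be deferred only when all five are met. This shows the rule's logic only and is not clinical software.

    # Boolean logic of the NEXUS low-risk criteria: a patient is "low
    # risk" (imaging can be deferred) only if none of the findings below
    # are present.
    def nexus_low_risk(midline_tenderness, focal_neuro_deficit,
                       altered_alertness, intoxication, distracting_injury):
        return not any([midline_tenderness, focal_neuro_deficit,
                        altered_alertness, intoxication, distracting_injury])

    # An alert, sober patient with no tenderness, deficit, or distracting
    # injury can be clinically cleared under the rule.
    print(nexus_low_risk(False, False, False, False, False))  # True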
Choice of medical imaging
In children, a CT scan of the neck is indicated in more severe cases such as neurologic deficits, whereas X-ray is preferable in milder cases, by both US and UK guidelines. Swedish guidelines recommend CT rather than X-ray in all children over the age of 5.
In adults, UK guidelines are largely similar as in children. US guidelines, on the other hand, recommend CT in all cases where medical imaging is indicated, and that X-ray is only acceptable where CT is not readily available.
Radiographic detection
On CT scan or X-ray, a cervical fracture may be directly visualized. In addition, indirect signs of injury to the vertebral column include incongruities of the vertebral lines and increased thickness of the prevertebral space.
Classification
There are proper names for several types of cervical fractures, including:
Fracture of C1, including Jefferson fracture
Fracture of C2, including Hangman's fracture
Flexion teardrop fracture – a fracture of the anteroinferior aspect of a cervical vertebra. The AO Foundation has developed a descriptive system for cervical fractures, the AOSpine subaxial cervical spine fracture classification system.
Surgery indication
The indication to surgically stabilize a cervical fracture can be estimated from the Subaxial Injury Classification (SLIC). In this system, a score of 3 or less indicates that conservative management is appropriate, a score of 5 or more indicates that surgery is needed, and a score of 4 is equivocal. The score is the sum of scores from three different categories: morphology, discs and ligaments, and neurology.
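A minimal sketch of the SLIC threshold logic just described; the component scores themselves would come from the published SLIC tables and are passed in here as plain integers.

    # Threshold logic of the SLIC score: <=3 conservative, >=5 surgical,
    # 4 equivocal. Component scores come from the published SLIC tables.
    def slic_recommendation(morphology, discs_and_ligaments, neurology):
        total = morphology + discs_and_ligaments + neurology
        if total <= 3:
            return f"score {total}: conservative management"
        if total >= 5:
            return f"score {total}: surgery indicated"
        return f"score {total}: equivocal"

    print(slic_recommendation(morphology=2, discs_and_ligaments=1, neurology=1))
    # -> score 4: equivocal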
Treatment
Complete immobilization of the head and neck should be done as early as possible and before moving the patient. Immobilization should remain in place until movement of the head and neck is proven safe. In the presence of severe head trauma, cervical fracture must be presumed until ruled out. Immobilization is imperative to minimize or prevent further spinal cord injury. The only exceptions are when there is imminent danger from an external cause, such as becoming trapped in a burning building.
Non-steroidal anti-inflammatory drugs, such as aspirin or ibuprofen, are contraindicated because they interfere with bone healing. Paracetamol is a better option. Patients with cervical fractures will likely be prescribed medication for pain control.
In the long term, physical therapy will be given to build strength in the muscles of the neck to increase stability and better protect the cervical spine.
Collars, traction and surgery can be used to immobilize and stabilize the neck after a cervical fracture.
Cervical collar
Minor fractures can be immobilized with a cervical collar without need for traction or surgery. A soft collar is fairly flexible and is the least limiting but can carry a high risk of further neck damage in patients with osteoporosis. It can be used for minor injuries or after healing has allowed the neck to become more stable.
A range of manufactured rigid collars are also used, usually comprising a firm plastic bi-valved shell secured with Velcro straps and removable padded liners. The most frequently prescribed are the Aspen, Malibu, Miami J, and Philadelphia collars. All these can be used with additional chest and head extension pieces to increase stability.
Rigid braces
Rigid braces that support the head and chest are also prescribed. Examples include the Sterno-Occipital Mandibular Immobilization Device (SOMI), Lerman Minerva, and Yale types. Special patients, such as very young children or non-cooperative adults, are sometimes still immobilized in medical plaster of Paris casts, such as the Minerva cast.
Traction
Traction can be applied by free weights on a pulley or a halo type brace. The halo brace is the most rigid cervical brace, used when limiting motion to the minimum that is essential, especially with unstable cervical fractures. It can provide stability and support during the time (typically 8–12 weeks) needed for the cervical bones to heal.
Surgery
Surgery may be needed to stabilize the neck and relieve pressure on the spinal cord. A variety of surgeries are available depending on the injury. Surgery to remove a damaged intervertebral disc may be done to relieve pressure on the spinal cord. The discs are cushions between the vertebrae. After the disc is removed, the vertebrae may be fused together to provide stability. Metal plates, screws, or wires may be needed to hold vertebrae or pieces in place.
History
Arab physician and surgeon Ibn al-Quff (d. 1286 CE) described a treatment of cervical fractures through the oral route in his book Kitab al-ʿUmda fı Ṣinaʿa al-Jiraḥa (Book of Basics in the Art of Surgery).
See also
Brown-Séquard syndrome
Cervical dislocation
Internal decapitation
Spinal cord injury
References
External links
Van Waes OJ, Cheriex KC, Navsaria PH, van Riet PA, Nicol AJ, Vermeulen J (January 2012). "Management of penetrating neck injuries". The British Journal of Surgery. 99 (Suppl 1): 149–154. doi:10.1002/bjs.7733. hdl:1765/37154. PMID 22441870. S2CID 205512500. |
Hypercalcaemia | Hypercalcemia, also spelled hypercalcaemia, is a high calcium (Ca2+) level in the blood serum. The normal range is 2.1–2.6 mmol/L (8.8–10.7 mg/dL, 4.3–5.2 mEq/L), with levels greater than 2.6 mmol/L defined as hypercalcemia. Those with a mild increase that has developed slowly typically have no symptoms. In those with greater levels or rapid onset, symptoms may include abdominal pain, bone pain, confusion, depression, weakness, kidney stones, or an abnormal heart rhythm including cardiac arrest. Most outpatient cases are due to primary hyperparathyroidism, and most inpatient cases to cancer. Other causes of hypercalcemia include sarcoidosis, tuberculosis, Paget disease, multiple endocrine neoplasia (MEN), vitamin D toxicity, familial hypocalciuric hypercalcaemia, and certain medications such as lithium and hydrochlorothiazide. Diagnosis should generally include either a corrected calcium or ionized calcium level and be confirmed after a week. Specific changes, such as a shortened QT interval and prolonged PR interval, may be seen on an electrocardiogram (ECG). Treatment may include intravenous fluids, furosemide, calcitonin, and intravenous bisphosphonates, in addition to treating the underlying cause. The evidence for furosemide use, however, is poor. In those with very high levels, hospitalization may be required. Haemodialysis may be used in those who do not respond to other treatments. In those with vitamin D toxicity, steroids may be useful. Hypercalcemia is relatively common. Primary hyperparathyroidism occurs in 1–7 per 1,000 people, and hypercalcaemia occurs in about 2.7% of those with cancer.
Signs and symptoms
The neuromuscular symptoms of hypercalcaemia are caused by a negative bathmotropic effect due to the increased interaction of calcium with sodium channels. Since calcium blocks sodium channels and inhibits depolarization of nerve and muscle fibers, increased calcium raises the threshold for depolarization. This results in diminished deep tendon reflexes (hyporeflexia) and skeletal muscle weakness. Other symptoms include cardiac arrhythmias (especially in those taking digoxin), fatigue, nausea, vomiting (emesis), loss of appetite, abdominal pain, and paralytic ileus. If kidney impairment occurs as a result, manifestations can include increased urination, urination at night, and increased thirst. Psychiatric manifestations can include emotional instability, confusion, delirium, psychosis, and stupor. Calcium deposits known as limbus sign may be visible in the eyes. Symptoms are more common at high calcium blood values (12.0 mg/dL or 3 mmol/L and above). Severe hypercalcaemia (above 15–16 mg/dL or 3.75–4 mmol/L) is considered a medical emergency: at these levels, coma and cardiac arrest can result. The high levels of calcium ions decrease the neuron membrane permeability to sodium ions, thus decreasing excitability, which leads to hypotonicity of smooth and striated muscle. This explains the fatigue, muscle weakness, low tone, and sluggish reflexes in muscle groups. The sluggish nerves also explain drowsiness, confusion, hallucinations, stupor, or coma. In the gut this causes constipation. Hypocalcaemia causes the opposite by the same mechanism.
Hypercalcaemic crisis
A hypercalcaemic crisis is an emergency situation with severe hypercalcaemia, generally above approximately 14 mg/dL (or 3.5 mmol/L). The main symptoms of a hypercalcaemic crisis are oliguria or anuria, as well as somnolence or coma. After recognition, primary hyperparathyroidism should be proved or excluded. In extreme cases of primary hyperparathyroidism, removal of the parathyroid gland after surgical neck exploration is the only way to avoid death. The diagnostic program should be performed within hours, in parallel with measures to lower serum calcium. The treatment of choice for acutely lowering calcium is extensive hydration and calcitonin, as well as bisphosphonates (which take effect on calcium levels after one or two days).
Causes
Primary hyperparathyroidism and malignancy account for about 90% of cases of hypercalcaemia. Causes of hypercalcemia can be divided into those that are PTH dependent and those that are PTH independent.
Parathyroid function
Primary hyperparathyroidism
Solitary parathyroid adenoma
Primary parathyroid hyperplasia
Parathyroid carcinoma
Multiple endocrine neoplasia (MEN1 & MEN2A)
Familial isolated hyperparathyroidism
Lithium use
Familial hypocalciuric hypercalcemia/familial benign hypercalcemia
Cancer
Solid tumour with metastasis (e.g. breast cancer or classically squamous cell carcinoma, which can be PTHrP-mediated)
Solid tumour with humoral mediation of hypercalcaemia (e.g. lung cancer, most commonly non-small cell lung cancer or kidney cancer, phaeochromocytoma)
Haematologic cancers (multiple myeloma, lymphoma, leukaemia)
Ovarian small cell carcinoma of the hypercalcemic type
Vitamin-D disorders
Hypervitaminosis D (vitamin D intoxication)
Elevated 1,25(OH)2D (see calcitriol under Vitamin D) levels (e.g. sarcoidosis and other granulomatous diseases such as tuberculosis, berylliosis, histoplasmosis, Crohn's disease, and granulomatosis with polyangiitis)
Idiopathic hypercalcaemia of infancy
Rebound hypercalcaemia after rhabdomyolysis
High bone-turnover
Hyperthyroidism
Multiple myeloma
Prolonged immobilization
Paget's disease
Thiazide use
Vitamin A intoxication
Kidney failure
Tertiary hyperparathyroidism
Aluminium intoxication
Milk-alkali syndrome
Other
Acromegaly
Adrenal insufficiency
Zollinger–Ellison syndrome
Williams syndrome
Diagnosis
Diagnosis should generally include either a calculation of corrected calcium or a direct measurement of the ionized calcium level, and be confirmed after a week. This is because abnormally high or low serum albumin levels mean that total calcium does not reflect the true level of ionised calcium. There is, however, controversy around the usefulness of corrected calcium, as it may be no better than total calcium. Once calcium is confirmed to be elevated, a detailed history is taken from the subject, including a review of medications, any vitamin supplementations, herbal preparations, and previous calcium values. Chronic elevation of calcium with absent or mild symptoms often points to primary hyperparathyroidism or familial hypocalciuric hypercalcemia. In those with an underlying malignancy, the cancer may be sufficiently advanced to show up in the history and examination, pointing towards the diagnosis with few laboratory investigations. If a detailed history and examination do not narrow down the differential diagnoses, further laboratory investigations are performed. Intact PTH (iPTH, biologically active parathyroid hormone molecules) is measured with an immunoradiometric or immunochemiluminescent assay. Elevated (or high-normal) iPTH with a high urine calcium/creatinine ratio (more than 0.03) is suggestive of primary hyperparathyroidism, usually accompanied by low serum phosphate. High iPTH with a low urine calcium/creatinine ratio is suggestive of familial hypocalciuric hypercalcemia. Low iPTH should be followed up with parathyroid hormone-related protein (PTHrP) measurement (though this is not available in all labs). Elevated PTHrP is suggestive of malignancy. Normal PTHrP is suggestive of multiple myeloma, vitamin A excess, milk-alkali syndrome, thyrotoxicosis, and immobilisation. Elevated calcitriol is suggestive of lymphoma, sarcoidosis, granulomatous disorders, and excessive calcitriol intake. Elevated calcifediol is suggestive of vitamin D excess or excessive calcifediol intake. The normal range is 2.1–2.6 mmol/L (8.8–10.7 mg/dL, 4.3–5.2 mEq/L), with levels greater than 2.6 mmol/L defined as hypercalcaemia. Moderate hypercalcaemia is a level of 2.88–3.5 mmol/L (11.5–14 mg/dL), while severe hypercalcaemia is > 3.5 mmol/L (>14 mg/dL).
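As a rough illustration of the corrected-calcium calculation and the severity bands quoted above, here is a minimal Python sketch. It assumes the commonly used correction formula (total calcium in mg/dL plus 0.8 for each g/dL that albumin falls below 4.0) and an approximate conversion of 1 mmol/L ≈ 4 mg/dL; laboratory-specific formulas vary, and the patient values are hypothetical.

```python
# A minimal sketch, assuming the common albumin correction
# corrected Ca = total Ca + 0.8 * (4.0 - albumin), all in mg/dL,
# and the severity bands quoted above. Lab-specific formulas vary.

def corrected_calcium(total_ca_mg_dl: float, albumin_g_dl: float) -> float:
    """Albumin-corrected total calcium in mg/dL."""
    return total_ca_mg_dl + 0.8 * (4.0 - albumin_g_dl)

def classify(ca_mg_dl: float) -> str:
    """Severity bands as given in the text (mg/dL)."""
    if ca_mg_dl > 14.0:
        return "severe hypercalcaemia"
    if ca_mg_dl >= 11.5:
        return "moderate hypercalcaemia"
    if ca_mg_dl > 10.7:
        return "mild hypercalcaemia"
    return "not hypercalcaemic"

# Hypothetical hypoalbuminaemic patient: total Ca 11.0 mg/dL, albumin 2.5 g/dL
ca = corrected_calcium(11.0, 2.5)  # -> 12.2 mg/dL
print(f"{ca:.1f} mg/dL (~{ca / 4.0:.2f} mmol/L): {classify(ca)}")
```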
ECG
Abnormal heart rhythms can also result, and ECG findings of a short QT interval suggest hypercalcaemia. Significant hypercalcaemia can cause ECG changes mimicking an acute myocardial infarction. Hypercalcaemia has also been known to cause an ECG finding mimicking hypothermia, known as an Osborn wave.
Treatments
The goal of therapy is to treat the hypercalcaemia first; efforts are subsequently directed at treating the underlying cause.
Fluids and diuretics
Initial therapy:
hydration, increasing salt intake, and forced diuresis.
hydration is needed because many patients are dehydrated due to vomiting or kidney defects in concentrating urine.
increased salt intake also can increase body fluid volume as well as increasing urine sodium excretion, which further increases urinary calcium excretion.
after rehydration, a loop diuretic such as furosemide can be given to permit continued large-volume intravenous salt and water replacement while minimizing the risk of blood volume overload and pulmonary oedema. In addition, loop diuretics tend to depress calcium reabsorption by the kidney, thereby helping to lower blood calcium levels.
can usually decrease serum calcium by 1–3 mg/dL within 24 hours
caution must be taken to prevent potassium or magnesium depletion
Bisphosphonates and calcitonin
Additional therapy:
bisphosphonates are pyrophosphate analogues with high affinity for bone, especially areas of high bone-turnover.
they are taken up by osteoclasts and inhibit osteoclastic bone resorption
currently available drugs include, in order of potency: etidronate (first generation); tiludronate, IV pamidronate, and alendronate (second generation); and zoledronate and risedronate (third generation)
all people with cancer-associated hypercalcaemia should receive treatment with bisphosphonates, since the first-line therapy (above) cannot be continued indefinitely, nor is it without risk. Further, even if the first-line therapy has been effective, it is a virtual certainty that the hypercalcaemia will recur in a person with hypercalcaemia of malignancy. Use of bisphosphonates in such circumstances therefore becomes both therapeutic and preventative
people with kidney failure and hypercalcaemia should have a risk-benefit analysis before being given bisphosphonates, since these drugs are relatively contraindicated in kidney failure.
Denosumab is a bone anti-resorptive agent that can be used to treat hypercalcemia in patients with a contraindication to bisphosphonates such as severe kidney failure or allergy.
Calcitonin blocks bone resorption and also increases urinary calcium excretion by inhibiting calcium reabsorption by the kidney
Usually used in life-threatening hypercalcaemia along with rehydration, diuresis, and bisphosphonates
Helps prevent recurrence of hypercalcaemia
Dose is 4 international units per kilogram via the subcutaneous or intramuscular route every 12 hours; it is usually not continued indefinitely due to the rapid onset of a diminished response to calcitonin (tachyphylaxis); a worked dosing example follows below
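For illustration, the dosing arithmetic above works out as in this minimal Python sketch. The body weight is hypothetical, and actual dosing and duration are clinical decisions.

```python
# A minimal sketch of the dosing arithmetic above (4 IU/kg every
# 12 hours, i.e. twice daily). The body weight is hypothetical;
# actual dosing and duration are clinical decisions.

def calcitonin_dose_iu(weight_kg: float, iu_per_kg: float = 4.0) -> float:
    """Single calcitonin dose in international units."""
    return iu_per_kg * weight_kg

weight_kg = 70.0
per_dose = calcitonin_dose_iu(weight_kg)
print(f"{per_dose:.0f} IU per dose, {2 * per_dose:.0f} IU per 24 hours")
```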
Other therapies
rarely used, or used in special circumstances
plicamycin inhibits bone resorption (rarely used)
gallium nitrate inhibits bone resorption and changes structure of bone crystals (rarely used)
glucocorticoids increase urinary calcium excretion and decrease intestinal calcium absorption
no effect on calcium level in normal or primary hyperparathyroidism
effective in hypercalcaemia caused by osteolytic malignancies (multiple myeloma, leukaemia, Hodgkin's lymphoma, carcinoma of the breast) owing to antitumour properties
also effective in hypervitaminosis D and sarcoidosis
dialysis is usually used in severe hypercalcaemia complicated by kidney failure; supplemental phosphate should be monitored and added if necessary
phosphate therapy can correct the hypophosphataemia in the face of hypercalcaemia and lower serum calcium
Other animals
Research has led to a better understanding of hypercalcemia in non-human animals. The causes of hypercalcemia often correlate with the environment in which the animals live. Hypercalcemia in house pets is typically due to disease, but some cases result from accidental ingestion of plants or chemicals in the home. Outdoor animals commonly develop hypercalcemia through vitamin D toxicity from wild plants within their environments.
Household pets
Household pets such as dogs and cats can develop hypercalcemia. It is less common in cats, and many feline cases are idiopathic. In dogs, lymphosarcoma, Addison's disease, primary hyperparathyroidism, and chronic kidney failure are the main causes of hypercalcemia, but there are also environmental causes usually unique to indoor pets. Ingestion of even small amounts of calcipotriene, found in psoriasis cream, can be fatal to a pet. Calcipotriene causes a rapid rise in calcium ion levels, which can remain high for weeks if untreated and lead to an array of medical issues. There are also reported cases of hypercalcemia in dogs that ingested rodenticides containing a chemical similar to calcipotriene. Additionally, ingestion of household plants is a cause of hypercalcemia. Plants such as Cestrum diurnum and Solanum malacoxylon contain ergocalciferol or cholecalciferol, which cause the onset of hypercalcemia. Consuming small amounts of these plants can be fatal to pets. Observable symptoms may develop, such as polydipsia, polyuria, extreme fatigue, or constipation.
Outdoor animals
In certain outdoor environments, animals such as horses, pigs, cattle, and sheep commonly experience hypercalcemia. In southern Brazil and Mattewara, India, approximately 17 percent of sheep are affected, with 60 percent of these cases being fatal. Many cases are also documented in Argentina, Papua New Guinea, Jamaica, Hawaii, and Bavaria. These cases of hypercalcemia are usually caused by ingesting Trisetum flavescens before it has dried out; once dried, its toxicity is diminished. Other plants causing hypercalcemia are Cestrum diurnum, Nierembergia veitchii, Solanum esuriale, Solanum torvum, and Solanum malacoxylon. These plants contain calcitriol or similar substances that cause rises in calcium ion levels. Hypercalcemia is most common in grazing lands at altitudes above 1500 meters, where growth of plants like Trisetum flavescens is favorable. Even if small amounts are ingested over long periods of time, the prolonged high levels of calcium ions have large negative effects on the animals. The issues these animals experience are muscle weakness and calcification of blood vessels, heart valves, liver, kidneys, and other soft tissues, which eventually can lead to death.
See also
Calcium metabolism
Dent's disease
Electrolyte disturbance
Disorders of calcium metabolism
References
External links
Isovaleric acidemia | Isovaleric acidemia is a rare autosomal recessive metabolic disorder which disrupts or prevents normal metabolism of the branched-chain amino acid leucine. It is a classical type of organic acidemia.
Symptoms and signs
A characteristic feature of isovaleric acidemia is a distinctive odor of sweaty feet. This odor is caused by the buildup of a compound called isovaleric acid in affected individuals. In about half of cases, the signs and symptoms of this disorder become apparent within a few days after birth and include poor feeding, vomiting, seizures, and lack of energy that can progress to coma. These medical problems are typically severe and can be life-threatening. In the other half of cases, the signs and symptoms of the disorder appear during childhood and may come and go over time. They are often triggered by an infection or by eating an increased amount of protein-rich foods.
Genetics
The disorder has an autosomal recessive inheritance pattern, which means the defective gene is located on an autosome, and two copies of the gene, one from each parent, must be inherited to be affected by the disorder. The parents of a child with an autosomal recessive disorder are carriers of one copy of the defective gene, but are usually not affected by the disorder. Mutations in both copies of the IVD gene result in isovaleric acidemia.
Pathophysiology
The enzyme encoded by IVD, isovaleryl-CoA dehydrogenase (EC 1.3.99.10), plays an essential role in breaking down proteins from the diet. Specifically, the enzyme is responsible for the third step in processing leucine, an essential amino acid. If a mutation in the IVD gene reduces or eliminates the activity of this enzyme, the body is unable to break down leucine properly. As a result, isovaleric acid and related compounds build up to toxic levels, damaging the brain and nervous system.
Diagnosis
The urine of newborns can be screened for isovaleric acidemia using mass spectrometry, allowing for early diagnosis. Elevations of isovalerylglycine in urine and of isovalerylcarnitine in plasma are found.
Screening
On 9 May 2014, the UK National Screening Committee (UK NSC) announced its recommendation to screen every newborn baby in the UK for four further genetic disorders as part of its NHS Newborn Blood Spot Screening programme, including isovaleric acidemia.
Treatment
Treatment consists of dietary protein restriction, particularly of leucine. During acute episodes, glycine is sometimes given, which conjugates with isovalerate to form isovalerylglycine, or carnitine, which has a similar effect.
Elevated 3-hydroxyisovaleric acid is a clinical biomarker of biotin deficiency. Without biotin, leucine and isoleucine cannot be metabolized normally, resulting in elevated synthesis of isovaleric acid and consequently of 3-hydroxyisovaleric acid, isovalerylglycine, and other isovaleric acid metabolites. Elevated serum 3-hydroxyisovaleric acid concentrations can be caused by supplementation with 3-hydroxyisovaleric acid, genetic conditions, or dietary deficiency of biotin. Some patients with isovaleric acidemia may benefit from supplemental biotin. Biotin deficiency on its own can have severe physiological and cognitive consequences that closely resemble the symptoms of organic acidemias.
Prognosis
A 2011 review of 176 cases found that diagnoses made early in life (within a few days of birth) were associated with more severe disease and a mortality of 33%. Children diagnosed later, and who had milder symptoms, showed a lower mortality rate of ~3%.
Epidemiology
Isovaleric acidemia is estimated to affect at least 1 in 250,000 births in the United States.
See also
Maple syrup urine disease
Methylmalonic acidemia
Propionic acidemia
References
External links
Isovaleric acidemia at NLM Genetics Home Reference
GeneReviews: The Organic Acidemias |
Bartonellosis | Bartonellosis is an infectious disease produced by bacteria of the genus Bartonella. Bartonella species cause diseases such as Carrión's disease, trench fever, cat-scratch disease, bacillary angiomatosis, peliosis hepatis, chronic bacteremia, endocarditis, chronic lymphadenopathy, and neurological disorders.
Presentation
Carrión's disease
Patients can develop two clinical phases: an acute septic phase and a chronic eruptive phase associated with skin lesions. In the acute phase (also known as Oroya fever or fiebre de la Oroya), B. bacilliformis infection is a sudden, potentially life-threatening illness associated with high fever, decreased levels of circulating red blood cells (i.e., hemolytic anemia), and transient immunosuppression. B. bacilliformis is considered the most deadly species to date, with a death rate of up to 90% during the acute phase, which typically lasts two to four weeks. Peripheral blood smears show anisomacrocytosis with many bacilli adherent to red blood cells. Thrombocytopenia is also seen and can be very severe. Neurologic manifestations (neurobartonellosis) include altered mental status, agitation, or even coma, ataxia, spinal meningitis, or paralysis. They are seen in 20% of patients with acute infection, in whom the prognosis is very guarded, with a mortality of about 50%. The most feared complication is overwhelming infection, mainly by Enterobacteriaceae, particularly Salmonella (both S. typhi and non-typhi strains), as well as reactivation of toxoplasmosis and other opportunistic infections. The chronic manifestation consists of a benign skin eruption with raised, reddish-purple nodules (angiomatous tumours). The bacterium can be seen microscopically if a skin biopsy is silver stained (the Warthin–Starry method).
Cat-scratch disease
Cat-scratch disease is due to infection by B. henselae and manifests as gradual regional lymph node enlargement (axilla, groin, neck), which may last 2–3 months or longer, and a distal scratch and/or red-brown skin papule (not always seen at the time of the disease). The enlarged lymph node is painful and tender. The lymph nodes may suppurate; some patients may remain afebrile or asymptomatic. Other presentations include fever (particularly in children), Parinaud's oculoglandular syndrome, encephalopathy, and neuroretinitis. B. henselae can be associated with bacteremia, bacillary angiomatosis, and peliosis hepatis in HIV patients, and with bacteremia and endocarditis in immunocompetent and immunocompromised patients. Symptoms may include fatigue, headaches, fever, memory loss, disorientation, insomnia, and loss of coordination. The bacteria block the normal immune response by suppressing the NF-κB apoptosis pathway. Disease progression may be accelerated if the host is subsequently infected by an immune-suppressing virus such as Epstein-Barr virus.
Bacillary angiomatosis
B. henselae and B. quintana can cause bacillary angiomatosis, a vascular proliferative disease involving mainly the skin but also other organs. The disease was first described in human immunodeficiency virus (HIV) patients and organ transplant recipients. Severe, progressive, and disseminated disease may occur in HIV patients. Differential diagnoses include Kaposi's sarcoma, pyogenic granuloma, hemangioma, verruga peruana, and subcutaneous tumors. Lesions can affect bone marrow, liver, spleen, or lymph nodes.
Peliosis hepatis
B. henselae is the etiologic agent for peliosis hepatis, which is defined as a vascular proliferation of sinusoid hepatic capillaries resulting in blood-filled spaces in the liver in HIV patients and organ transplant recipients. Peliosis hepatis can be associated with peliosis of the spleen, as well as bacillary angiomatosis of the skin in HIV patients.
Trench fever
Trench fever, also known as five-day fever or quintan fever, is the initial manifestation of B. quintana infection. Clinical manifestations range from asymptomatic infection to severe illness. Classical presentations include a febrile illness of acute onset, headache, dizziness, and shin pain. Chronic infection manifestations include attacks of fever and aching in some cases and persistent bacteremia in soldiers and homeless people.
Microbiology
Members of the genus Bartonella are facultative intracellular bacteria belonging to the alpha-2 subgroup of Pseudomonadota. The genus comprises numerous species.
Pathophysiology
In mammals, each Bartonella species is highly adapted to its reservoir host as the result of intracellular parasitism and can persist in the bloodstream of the host. Intraerythrocytic parasitism is only observed in the acute phase of Carrión's disease. Bartonella species also have a tropism for endothelial cells, observed in the chronic phase of Carrión's disease (also known as verruga peruana) and in bacillary angiomatosis.
Pathological response can vary with the immune status of the host. Infection with B. henselae can result in a focal suppurative reaction (CSD in immunocompetent patients), a multifocal angioproliferative response (bacillary angiomatosis in immunocompromised patients), endocarditis, or meningitis.
Diagnosis
There are several methods used for diagnosing Bartonella infection, including microscopy, serology, and PCR. Microscopy of blood smears is used to diagnose Carrión's disease (B. bacilliformis); however, for other Bartonella species, microscopy and silver staining are insensitive, not highly specific, and cannot differentiate species. The CDC does not recommend lymph node aspiration for diagnostic purposes.
Serology and protein-based methods
IFA (immunofluorescence antibody assay) testing for the presence of antibodies in serum is used to diagnose B. henselae infection at the acute onset of cat-scratch disease symptoms, followed by PCR to confirm the infecting species. IFA can generally be used to confirm a diagnosis of Bartonella infection, but is limited by antibody cross-reactivity with other bacterial species, which can cause false positives, and by antigen variability, which can result in false negatives. Bartonella spp. often evade an immune response, so antibodies may not be detected even during an active infection, resulting in an IFA false negative rate of up to 83% in chronically infected patients when other test results (e.g. organism isolation or PCR) are positive. IFA sensitivity may range from 14 to 100%, causing discrepancies between PCR and serology test results. Positive IFA results do not distinguish between current infection and prior exposure. ELISA (enzyme-linked immunosorbent assay) is another method that has been used to detect Bartonella, but it has a low sensitivity (17–35%). Western blot for detection of Bartonella-associated proteins has also been reported, but this method does not show clear immunoreactive profiles.
PCR
The CDC states that PCR testing from a single blood draw is not sufficiently sensitive for B. henselae testing and can result in high false negative rates due to small sample volumes and bacterial levels below the limit of molecular detection. Bartonella spp. are fastidious, slow-growing bacteria that are difficult to grow using traditional solid agar plate culture methods, owing to complex nutritional requirements and a potentially low number of circulating bacteria. This conventional method of culturing Bartonella spp., with blood inocula plated directly onto solid agar plates, requires an extended incubation period of 21 days due to the slow growth rate.
Enrichment Culture
Bartonella growth rates improve when cultured with an enrichment inoculation step in a liquid insect-based medium such as Bartonella Alphaproteobacteria Growth Medium (BAPGM) or Schneider's Drosophila-based insect powder medium. Several studies have optimized the growing conditions of Bartonella spp. cultures in these liquid media, with no change in bacterial protein expression or host interactions in vitro. Insect-based liquid media support the growth and co-culturing of at least seven Bartonella species, reduce bacterial culturing time, and facilitate PCR detection and isolation of Bartonella spp. from animal and patient samples. Research shows that DNA may be detected following direct extraction from blood samples yet become negative following enrichment culture; thus, PCR is recommended after direct sample extraction and also following incubation in enrichment culture. Several studies have successfully optimized sensitivity and specificity by using PCR amplification (pre-enrichment PCR) and enrichment culturing of blood draw samples, followed by PCR (post-enrichment PCR) and DNA sequence identification.
Serial Testing
As Bartonella spp. infect at low levels and cycle between blood and tissues, multiple blood draws over time may be necessary to detect infection.
Treatment
Treatment of infections caused by Bartonella species includes antibiotics; some authorities recommend the use of azithromycin.
Epidemiology
Carrión's disease, also known as Oroya fever or Peruvian wart, is a rare infectious disease found only in Peru, Ecuador, and Colombia. It is endemic in some areas of Peru, is caused by infection with the bacterium Bartonella bacilliformis, and is transmitted by sandflies of the genus Lutzomyia.
Cat scratch disease occurs worldwide. Cats are the main reservoir of Bartonella henselae, and the bacterium is transmitted to cats by the cat flea Ctenocephalides felis. Infection in cats is very common, with a prevalence estimated between 40 and 60%; younger cats are more commonly infective. Cats usually become immune to the infection, while dogs may be very symptomatic. Humans may also acquire it through flea or tick bites from infected dogs, cats, coyotes, and foxes. Trench fever, produced by Bartonella quintana infection, is transmitted by the human body louse Pediculus humanus corporis. Humans are the only known reservoir. Thorough washing of clothing may help to interrupt the transmission of infection. A possible role for ticks in transmission of Bartonella species remains to be elucidated; in November 2011, Bartonella rochalimae, B. quintana, and B. elizabethae DNA was first reported in Rhipicephalus sanguineus and Dermacentor nitens ticks in Peru.
History
Carrión's disease
The disease was named after medical student Daniel Alcides Carrión from Cerro de Pasco, Peru. Carrión described the disease after being inoculated at his own request with the pus of a skin lesion from patient Carmen Paredes in 1885 by Doctor Evaristo M. Chávez, a close friend and coworker in Dos de Mayo National Hospital. Carrión developed the disease three weeks after the inoculation and kept a meticulous record of clinical symptoms and signs until the disease rendered him incapable of the task; he died several weeks later, on October 5, 1885, at age 28. Carrión proved that Oroya fever and verruga peruana were two stages of the same disease, not two different diseases as was thought at the time. His work did not result in an immediate cure, but his research started the process. Peru has named October 5 "Peruvian Medicine Day" in his honor. Peruvian microbiologist Alberto Barton discovered the causative bacterium in 1905, but his results were not published until 1909. Barton originally identified them as "endoglobular" structures: bacteria living inside red blood cells. Until 1993, the genus Bartonella, within the family Bartonellaceae, contained only one species; 23 are now identified.
CSD
In 1988, English et al. isolated and cultured a bacterium that was named Afipia felis in 1992, after the team at the Armed Forces Institute of Pathology that discovered it. This agent was considered the cause of cat-scratch disease (CSD), but further studies failed to support this conclusion. Serologic studies associated CSD with Bartonella henselae, reported in 1992. In 1993, Dolan isolated Rochalimaea henselae (now called Bartonella henselae) from lymph nodes of patients with CSD.
Bartonella spp. are commonly treated with antibiotics including azithromycin, based on a single small randomized clinical trial. Treatment may take up to one year to eliminate the disease.
CSD often resolves spontaneously without treatment.
Trench fever
Detailed descriptions of the disease were reported in soldiers during the First World War. It is also known as five-day fever, quintan fever, Wolhynia fever, and urban trench fever, because it occurs in homeless people and alcoholics.
References
External links
Trypanosomiasis | Trypanosomiasis or trypanosomosis is the name of several diseases in vertebrates caused by parasitic protozoan trypanosomes of the genus Trypanosoma. In humans this includes African trypanosomiasis and Chagas disease. A number of other diseases occur in other animals.
African trypanosomiasis, which is caused by either Trypanosoma brucei gambiense or Trypanosoma brucei rhodesiense, threatens some 65 million people in sub-Saharan Africa, especially in rural areas and populations disrupted by war or poverty. The number of cases has been going down due to systematic eradication efforts: in 1998 almost 40,000 cases were reported but almost 300,000 cases were suspected to have occurred; in 2009, the number dropped below 10,000; and in 2018 it dropped below 1000. Chagas disease causes 21,000 deaths per year mainly in Latin America.
Signs and symptoms
The tsetse fly bite erupts into a red chancre sore, and within a few weeks the person can experience fever, swollen lymph glands, blood in the urine, aching muscles and joints, headaches, and irritability. In the first phase, the patient has only intermittent bouts of fever with lymphadenopathy together with other non-specific signs and symptoms. The second stage of the disease is marked by involvement of the central nervous system, with extensive neurological effects such as changes in personality, alteration of the biological clock (the circadian rhythm), confusion, slurred speech, seizures, and difficulty in walking and talking. These problems can develop over many years and, if not treated, the person dies. The disease is common in sub-Saharan Africa.
Diagnosis
Cattle may show enlarged lymph nodes and internal organs. Haemolytic anaemia is a characteristic sign. Systemic disease and reproductive wastage are common, and cattle appear to waste away.
Horses with dourine show signs of ventral and genital edema and urticaria.
Infected dogs and cats may show severe systemic signs.
Diagnosis relies on recognition of the flagellate on a blood smear. Motile organisms may be visible in the buffy coat when a blood sample is spun down. Serological testing is also common.
Prevention
The use of trypanotolerant breeds for livestock farming should be considered if the disease is widespread. Fly control is another option but is difficult to implement. The main approaches to controlling African trypanosomiasis are to reduce the reservoirs of infection and the presence of the tsetse fly. Screening of people at risk helps identify patients at an early stage. Diagnosis should be made as early as possible and before the advanced stage to avoid complicated, difficult and risky treatment procedures.
Treatment
Stage I of the condition is usually treated with pentamidine or suramin, through intramuscular injection or intravenous infusion if sufficient observation is possible. Stage II of the disease is typically treated with melarsoprol or eflornithine, preferably introduced to the body intravenously. Both pentamidine and suramin have limited side effects. Melarsoprol is extremely effective but has many serious side effects, which can cause neurological damage to a patient; however, the drug is often a patient's last hope in many late-stage cases. Eflornithine is extremely expensive but has side effects that may be treated with ease. In regions of the world where the disease is common, eflornithine is provided for free by the World Health Organization.
Research
Trypanosomiasis could, in the future, be prevented by genetically altering the tsetse fly. As the tsetse fly is the main vector of transmission, making the fly immune to the disease by altering its genome could be a main component of an effort to eradicate the disease. New technologies such as CRISPR, which allow cheaper and easier genetic engineering, could enable such measures. A pilot program in Senegal, funded by the International Atomic Energy Agency, has considerably reduced the tsetse fly population by introducing male flies sterilized by exposure to gamma rays. This has allowed a change of cattle breeds from lower-producing trypanotolerant breeds to higher-producing foreign breeds, and was selected as one of the Best Sustainable Development Practices on Food Security by EXPO Milan 2015.
Other animals
Nagana, or animal African trypanosomiasis, also called Souma or Soumaya in Sudan.
Surra
Mal de cadeiras, or Quebra Bunda (of central South America, Brazil)
Murrina de caderas (of Panama; Derrengadera de caderas)
Dourine
Cachexial fevers (various)
Gambian horse sickness (of central Africa)
Baleri (of Sudan)
Kaodzera (Rhodesian trypanosomiasis)
Tahaga (a disease of camels in Algeria)
Galziekte, galzietzke (bilious fever of cattle; gall sickness of South Africa)
Peste-boba (of Venezuela; Derrengadera)
Some species of cattle, such as the African buffalo, Ndama, and Keteku, appear trypanotolerant and do not develop symptoms. Calves are more resistant than adults. Tsetse-borne species of trypanosomes have entered zoos outside the traditional tsetse zone in infected animals imported for the zoo.
References
Bibliography
Thomas, H Wolferstan (1905). Report on trypanosomes, trypanosomiasis, and sleeping sickness : being an experimental investigation into their pathology and treatment. London: University Press of Liverpool. OCLC 11692559.
Manson, Patrick (1914). Tropical diseases : a manual of diseases of warm climates (5th ed.). New York: William Wood. OCLC 812165069.
Daniels, Charles Wilberforce (1914). Tropical Medicine and Hygiene. New York. OCLC 810109334.
Maudlin, Ian; Holmes, Peter; Miles, Michael W (2004). The trypanosomiases. Wallingford, UK; Cambridge, Massachusetts: CABI Publishing. ISBN 9780851990347. OCLC 58543155.
External links
Animal Trypanosomosis reviewed and published by Wikivet.
Disease card on World Organisation for Animal Health |
Malunion | A malunion is when a fractured bone does not heal properly, leaving the bone twisted, shortened, or bent. Malunions can occur when the bones are improperly aligned during immobilization, when the cast is taken off too early, or when medical treatment is never sought after the break. Malunions are painful and commonly produce swelling around the area, possible loss of mobility, and deterioration of the bone and tissue.
Signs and symptoms
Malunions present with excessive swelling, twisting, bending, and possibly shortening of the bone. Patients may have trouble placing weight on or near the malunion.
Diagnosis
An X-ray is essential for the proper diagnosis of a malunion. The doctor will review the patient's history and the treatment process for the bone fracture. Often a CT scan, and sometimes an MRI, is also used in diagnosis. MRI is used to check for cartilage and ligament issues that have developed due to the malunion and misalignment. CT scans are used to locate normal or abnormal structures within the body and to guide the placement of instruments and/or treatments during procedures.
Treatment
Once the malunion is diagnosed and located, surgery is the most common treatment. The surgery consists of the surgeon re-breaking the bone and realigning it to the anatomically correct position. There are different types and levels of severity of malunions, which helps determine the treatment. Most often, screws, plates, or pins are used to secure the new alignment. In some cases, the bone may be trimmed to allow full alignment at the fractured spot. It is also possible that a bone graft could be used to help with healing. During follow-ups, an X-ray or a CT scan may be used to verify that the fracture is healing properly and is now in the anatomically correct position.
See also
Monteggia fracture
Duverney fracture
Clavicle fracture
References
Impulsivity | In psychology, impulsivity (or impulsiveness) is a tendency to act on a whim, displaying behavior characterized by little or no forethought, reflection, or consideration of the consequences. Impulsive actions are typically "poorly conceived, prematurely expressed, unduly risky, or inappropriate to the situation that often result in undesirable consequences," which imperil long-term goals and strategies for success. Impulsivity can be classified as a multifactorial construct. A functional variety of impulsivity has also been suggested, which involves action without much forethought in appropriate situations that can and does result in desirable consequences. "When such actions have positive outcomes, they tend not to be seen as signs of impulsivity, but as indicators of boldness, quickness, spontaneity, courageousness, or unconventionality." Thus, the construct of impulsivity includes at least two independent components: first, acting without an appropriate amount of deliberation, which may or may not be functional; and second, choosing short-term gains over long-term ones. Impulsivity is both a facet of personality and a major component of various disorders, including FASD, ADHD, substance use disorders, bipolar disorder, antisocial personality disorder, and borderline personality disorder. Abnormal patterns of impulsivity have also been noted in instances of acquired brain injury and neurodegenerative diseases. Neurobiological findings suggest that there are specific brain regions involved in impulsive behavior, although different brain networks may contribute to different manifestations of impulsivity, and that genetics may play a role. Many actions contain both impulsive and compulsive features, but impulsivity and compulsivity are functionally distinct. Impulsivity and compulsivity are interrelated in that each exhibits a tendency to act prematurely or without considered thought, and each often includes negative outcomes. They may lie on a continuum, with compulsivity at one end and impulsivity at the other, but research has been contradictory on this point. Compulsivity occurs in response to a perceived risk or threat, impulsivity in response to a perceived immediate gain or benefit; and whereas compulsivity involves repetitive actions, impulsivity involves unplanned reactions.
Impulsivity is a common feature of gambling and alcohol addiction. Research has shown that individuals with either of these addictions discount delayed money at higher rates than those without, and that the combined presence of gambling and alcohol abuse leads to additive effects on discounting.
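The "discounting rates" referred to here are commonly estimated in this research literature with a hyperbolic model. The following minimal Python sketch illustrates the idea; the k values are illustrative only, not clinical estimates.

```python
# A minimal sketch of hyperbolic delay discounting, V = A / (1 + k*D),
# the model commonly used to quantify discounting rates in this
# literature. Larger k means steeper discounting of delayed rewards;
# the k values below are illustrative, not clinical estimates.

def discounted_value(amount: float, delay_days: float, k: float) -> float:
    """Subjective present value of `amount` delayed by `delay_days`."""
    return amount / (1.0 + k * delay_days)

for label, k in [("shallow discounter (low k)", 0.01),
                 ("steep discounter (high k)", 0.10)]:
    v = discounted_value(100.0, delay_days=30.0, k=k)
    print(f"{label}: $100 in 30 days feels like ${v:.2f} now")

# A steep discounter may therefore prefer a smaller immediate reward
# (e.g. $40 today) over $100 in a month, a hallmark of impulsive choice.
```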
Impulse
An impulse is a wish or urge, particularly a sudden one. It can be considered as a normal and fundamental part of human thought processes, but also one that can become problematic, as in a condition like obsessive-compulsive disorder, borderline personality disorder, attention deficit hyperactivity disorder, or in fetal alcohol spectrum disorders.
The ability to control impulses, or more specifically to control the desire to act on them, is an important factor in personality and socialization. Deferred gratification, also known as impulse control, is an example of this, concerning impulses primarily relating to things that a person wants or desires. Delayed gratification comes when one avoids acting on initial impulses. Delayed gratification has been studied in relation to childhood obesity. Resisting the urge to act on impulses is important to teach children, because it teaches the value of delayed gratification.
Many psychological problems are characterized by a loss of control or a lack of control in specific situations. Usually, this lack of control is part of a pattern of behavior that also involves other maladaptive thoughts and actions, such as substance abuse problems or sexual disorders like the paraphilias (e.g. pedophilia and exhibitionism). When loss of control is only a component of a disorder, it usually does not have to be a part of the behavior pattern, and other symptoms must also be present for the diagnosis to be made. (Franklin)
The five traits that can lead to impulsive actions
For many years impulsivity was understood to be a single trait, but further analysis has found five traits that can lead to impulsive actions:
positive urgency,
negative urgency,
sensation seeking,
lack of planning, and
lack of perseverance.
Associated behavioral and societal problems
Attention-deficit hyperactivity disorder
Attention-deficit/hyperactivity disorder (ADHD) is a multi-component disorder involving inattention, impulsivity, and hyperactivity. The Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR) breaks ADHD into three subtypes according to the behavioral symptoms:
Attention-Deficit/Hyperactivity Disorder Predominantly Inattentive Type,
Attention-Deficit/Hyperactivity Disorder Predominantly Hyperactive-Impulsive Type, and
Attention-Deficit/Hyperactivity Disorder Combined Type.
Predominantly hyperactive-impulsive type symptoms may include
fidgeting and squirming in seats,
talking nonstop,
dashing around and touching or playing with anything in sight,
having trouble sitting still during dinner/school/story time,
being constantly in motion, and
having difficulty doing quiet tasks or activities.
Other manifestations primarily of impulsivity include
being very impatient,
having difficulty waiting for things they want or waiting their turns in games,
often interrupting conversations or others activities, or
blurting out inappropriate comments, showing their emotions without restraint, and acting without regard for consequences.
Prevalence of the disorder worldwide is estimated to be between 4% and 10%, with reports as low as 2.2% and as high as 17.8%. Variation in the rate of diagnoses may be attributed to differences between populations (i.e. culture) and differences in diagnostic methodologies. The prevalence of ADHD among females is less than half that of males, and females more commonly fall into the inattentive subtype. Despite an upward trend in diagnoses of the inattentive subtype of ADHD, impulsivity is commonly considered to be the central feature of ADHD, and the impulsive and combined subtypes are the major contributors to the societal costs associated with ADHD. The estimated cost of illness for a child with ADHD is $14,576 (in 2005 dollars) annually. The prevalence of ADHD among prison populations is significantly higher than that of the normal population. In both adults and children, ADHD has a high rate of comorbidity with other mental health disorders such as learning disability, conduct disorder, anxiety disorder, major depressive disorder, bipolar disorder, and substance use disorders.
The precise genetic and environmental factors contributing to ADHD are relatively unknown, but endophenotypes offer a potential middle ground between genes and symptoms. ADHD is commonly linked to "core" deficits involving "executive function," "delay aversion," or "activation/arousal" theories that attempt to explain ADHD through its symptomology. Endophenotypes, on the other hand, purport to identify potential behavioral markers that correlate with specific genetic etiology. There is some evidence to support deficits in response inhibition as one such marker. Problems inhibiting prepotent responses are linked with deficits in prefrontal cortex (PFC) functioning, a dysfunction common to ADHD and other impulse-control disorders. Evidence-based psychopharmacological and behavioral interventions exist for ADHD.
Substance abuse
Impulsivity appears to be linked to all stages of substance abuse. The acquisition phase of substance abuse involves the escalation from single use to regular use. Impulsivity may be related to the acquisition of substance abuse because the instant gratification provided by the substance may offset the larger future benefits of abstaining from it, and because people with impaired inhibitory control may not be able to overcome motivating environmental cues, such as peer pressure. "Similarly, individuals that discount the value of delayed reinforcers begin to abuse alcohol, marijuana, and cigarettes early in life, while also abusing a wider array of illicit drugs compared to those who discounted delayed reinforcers less." Escalation or dysregulation is the next and more severe phase of substance abuse. In this phase individuals "lose control" of their addiction, with large levels of drug consumption and binge drug use. Animal studies suggest that individuals with higher levels of impulsivity may be more prone to the escalation stage of substance abuse. Impulsivity is also related to the abstinence, relapse, and treatment stages of substance abuse. People who scored high on the Barratt Impulsivity Scale (BIS) were more likely to stop treatment for cocaine abuse, and they adhered to treatment for a shorter duration than people who scored low on impulsivity. Impulsive people also had greater cravings for drugs during withdrawal periods and were more likely to relapse. This effect was shown in a study where smokers who tested high on the BIS had increased craving in response to smoking cues and gave in to the cravings more quickly than less impulsive smokers. Taken as a whole, the current research suggests that impulsive individuals are less likely to abstain from drugs and more likely to relapse earlier than less impulsive individuals. While it is important to note the effect of impulsivity on substance abuse, the reciprocal effect, whereby substance abuse can increase impulsivity, has also been researched and documented. The promoting effect of impulsivity on substance abuse and the effect of substance abuse on increased impulsivity create a positive feedback loop that maintains substance-seeking behaviors. It also makes conclusions about the direction of causality difficult. This phenomenon has been shown to be related to several substances, but not all: for example, alcohol has been shown to increase impulsivity, while amphetamines have had mixed results. Substance use disorder treatments include prescription of medications such as acamprosate, buprenorphine, disulfiram, LAAM, methadone, and naltrexone, as well as effective psychotherapeutic treatments like
behavioral couples therapy, CBT, contingency management, motivational enhancement therapy, and relapse prevention.
Eating
Impulsive overeating spans from an episode of indulgence by an otherwise healthy person to chronic binges by a person with an eating disorder. Consumption of a tempting food by non-clinical individuals increases when self-regulatory resources have previously been depleted by another task, suggesting that it is caused by a breakdown in self-control. Impulsive eating of unhealthy snack foods appears to be regulated by individual differences in impulsivity when self-control is weak, and by attitudes towards the snack and towards healthy eating when self-control is strong. There is also evidence that greater food consumption occurs when people are in a sad mood, although it is possible that this is due more to emotional regulation than to a lack of self-control. In these cases, overeating will only take place if the food is palatable to the person, and if so, individual differences in impulsivity can predict the amount of consumption. Chronic overeating is a behavioral component of binge eating disorder, compulsive overeating, and bulimia nervosa. These disorders are more common in women and may involve eating thousands of calories at a time. Depending on which of these disorders is the underlying cause, an episode of overeating can have a variety of different motivations. Characteristics common among these three disorders include low self-esteem, depression, eating when not physically hungry, preoccupation with food, eating alone due to embarrassment, and feelings of regret or disgust after an episode. In these cases, overeating is not limited to palatable foods. Impulsivity differentially affects disorders involving the overcontrol of food intake (such as anorexia nervosa) and disorders involving the lack of control of food intake (such as bulimia nervosa). Cognitive impulsivity, such as risk-taking, is a component of many eating disorders, including restrictive ones. However, only people with disorders involving episodes of overeating have elevated levels of motoric impulsivity, such as reduced response inhibition capacity. One theory suggests that binging provides a short-term escape from feelings of sadness, anger, or boredom, although it may contribute to these negative emotions in the long term. Another theory suggests that binge eating involves reward seeking, as evidenced by decreased serotonin binding in binge-eating women compared to matched-weight controls and by the predictive value of heightened reward sensitivity/drive for dysfunctional eating. Treatments for clinical-grade overeating include cognitive behavioral therapy to teach people how to track and change their eating habits and actions, interpersonal psychotherapy to help people analyze the contribution of their friends and family to their disorder, and pharmacological therapies including antidepressants and SSRIs.
Impulse buying
Impulse buying consists of purchasing a product or service without any previous intent to make that purchase. It has been speculated to account for as much as eighty percent of all purchases in the United States. There are several theories pertaining to impulsive buying. One theory suggests that it is exposure, combined with the speed at which a reward can be obtained, that influences an individual to choose lesser immediate rewards over greater rewards that could be obtained later. For example, a person might choose to buy a candy bar because they are in the candy aisle, even though they had decided earlier that they would not buy candy while in the store.
Another theory is one of self-regulation, which suggests that the capacity to refrain from impulsive buying is a finite resource. As this capacity is depleted with repeated acts of restraint, susceptibility to purchasing other items on impulse increases. Finally, a third theory suggests an emotional and behavioral tie between the purchaser and the product, which drives both the likelihood of an impulsive purchase and the degree to which a person will retroactively be satisfied with that purchase. Some studies have shown that a large number of individuals are happy with purchases made on impulse (41% in one study), which is explained as a preexisting emotional attachment that has a positive relationship both with the likelihood of initiating the purchase and with post-purchase satisfaction. As an example, when purchasing team-related college paraphernalia, a large percentage of those purchases are made on impulse and are tied to the degree to which a person has positive ties to that team. Impulsive buying is seen both as an individual trait, in which each person has a preconditioned or hereditary allotment, and as a situational construct, mitigated by such things as emotion in the moment of purchase and the preconditioned ties an individual has with the product. Psychotherapy and pharmacological treatments have been shown to be helpful interventions for patients with impulsive-compulsive buying disorder.
Psychotherapy interventions include the use of desensitization techniques, self-help books or attending a support group.
Pharmacological interventions include the use of SSRIs, such as fluvoxamine, citalopram, escitalopram, and naltrexone.
Impulse control disorders not elsewhere classified
Impulse control disorders (ICDs) are a class of DSM diagnoses that do not fall into the other diagnostic categories of the manual (e.g. substance use disorders) and that are characterized by extreme difficulty controlling impulses or urges despite negative consequences. Individuals suffering from an impulse control disorder frequently experience five stages of symptoms: a compelling urge or desire, failure to resist the urge, a heightened sense of arousal, succumbing to the urge (which usually yields relief from tension), and potential remorse or feelings of guilt after the behavior is completed. Specific disorders included within this category are intermittent explosive disorder, kleptomania, pathological gambling, pyromania, trichotillomania (hair-pulling disorder), and impulse control disorders not otherwise specified (ICD NOS). ICD NOS includes other significant difficulties that seem to be related to impulsivity but do not meet the criteria for a specific DSM diagnosis. There has been much debate over whether the ICDs deserve a diagnostic category of their own, or whether they are in fact phenomenologically and epidemiologically related to other major psychiatric conditions such as obsessive-compulsive disorder (OCD), affective disorders, and addictive disorders. In fact, the ICD classification is likely to change with the release of the DSM-V in May 2013. In this new revision, the ICD NOS will likely be reduced or removed; proposed revisions include reclassifying trichotillomania (to be renamed hair-pulling disorder) and skin-picking disorder as obsessive-compulsive and related disorders, moving intermittent explosive disorder under the diagnostic heading of disruptive, impulse control, and conduct disorders, and including gambling disorder in addiction and related disorders. The role of impulsivity in the ICDs varies. Research on kleptomania and pyromania is lacking, though there is some evidence that greater kleptomania severity is tied to poor executive functioning. Trichotillomania and skin-picking disorder seem to be disorders that primarily involve motor impulsivity, and will likely be classified in the DSM-V within the obsessive-compulsive and related disorders category. Pathological gambling, in contrast, seems to involve many diverse aspects of impulsivity and abnormal reward circuitry (similar to substance use disorders), which has led to it being increasingly conceptualized as a non-substance or behavioral addiction. Evidence elucidating the role of impulsivity in pathological gambling is accumulating, with pathological gambling samples demonstrating greater response impulsivity, choice impulsivity, and reflection impulsivity than comparison control samples. Additionally, pathological gamblers tend to demonstrate greater response perseveration (compulsivity) and risky decision-making in laboratory gambling tasks compared to controls, though there is no strong evidence suggesting that attention and working memory are impaired in pathological gamblers. These relations between impulsivity and pathological gambling are confirmed by brain function research: pathological gamblers demonstrate less activation in the frontal cortical regions (implicated in impulsivity) compared to controls during behavioral tasks tapping response impulsivity, compulsivity, and risk/reward. Preliminary, though variable, findings also suggest that striatal activation differs between gamblers and controls, and that neurotransmitter differences (e.g.
dopamine, serotonin, opioids, glutamate, norepinephrine) may exist as well. Individuals with intermittent explosive disorder, also known as impulsive aggression, have exhibited serotonergic abnormalities and show differential activation in response to emotional stimuli and situations. Notably, intermittent explosive disorder is not associated with a higher likelihood of diagnosis with any of the other ICDs, but is highly comorbid with disruptive behavior disorders in childhood. Intermittent explosive disorder is likely to be re-classified in the DSM-V under the heading of disruptive, impulse control, and conduct disorders. These sorts of impulse control disorders are most often treated with certain types of psychopharmacological interventions (e.g. antidepressants) and behavioral treatments like cognitive behavioral therapy.
Theories of impulsivity
Ego (cognitive) depletion
According to the ego (or cognitive) depletion theory of impulsivity, self-control refers to the capacity for altering one's own responses, especially to bring them into line with standards such as ideals, values, morals, and social expectations, and to support the pursuit of long-term goals. Self-control enables a person to restrain or override one response, thereby making a different response possible.
A major tenet of the theory is that engaging in acts of self-control draws from a limited "reservoir" of self-control that, when depleted, results in reduced capacity for further self-regulation. Self-control is viewed as analogous to a muscle: Just as a muscle requires strength and energy to exert force over a period of time, acts that have high self-control demands also require strength and energy to perform. Similarly, as muscles become fatigued after a period of sustained exertion and have reduced capacity to exert further force, self-control can also become depleted when demands are made of self-control resources over a period of time. Baumeister and colleagues termed the state of diminished self-control strength ego depletion (or cognitive depletion).The strength model of self-control asserts that:
Just as exercise can make muscles stronger, there are signs that regular exertions of self-control can improve willpower strength. These improvements typically take the form of resistance to depletion, in the sense that performance at self-control tasks deteriorates at a slower rate. Targeted efforts to control behavior in one area, such as spending money or exercise, lead to improvements in unrelated areas, such as studying or household chores. And daily exercises in self-control, such as improving posture, altering verbal behavior, and using one's nondominant hand for simple tasks, gradually produce improvements in self-control as measured by laboratory tasks. The finding that these improvements carry over into tasks vastly different from the daily exercises shows that the improvements are not simply due to increasing skill or acquiring self-efficacy from practice.
Just as athletes begin to conserve their remaining strength when their muscles begin to tire, so do self-controllers when some of their self-regulatory resources have been expended. The severity of behavioral impairment during depletion depends in part on whether the person expects further challenges and demands. When people expect to have to exert self-control later, they will curtail current performance more severely than if no such demands are anticipated.
Consistent with the conservation hypothesis, people can exert self-control despite ego depletion if the stakes are high enough. Offering cash incentives or other motives for good performance counteracts the effects of ego depletion. This may seem surprising, but in fact it may be highly adaptive. Given the value and importance of the capacity for self-control, it would be dangerous for a person to lose that capacity completely, and so ego depletion effects may occur because people start conserving their remaining strength. When people do exert themselves on the second task, they deplete the resource even more, as reflected in severe impairments on a third task that they have not anticipated. Empirical tests of the ego-depletion effect typically adopt a dual-task paradigm. Participants assigned to an experimental ego-depletion group are required to engage in two consecutive tasks requiring self-control. Control participants are also required to engage in two consecutive tasks, but only the second task requires self-control. The strength model predicts that the performance of the experimental group on the second self-control task will be impaired relative to that of the control group. This is because the finite self-control resources of the experimental participants will be diminished after the initial self-control task, leaving little to draw on for the second task. The effects of ego depletion do not appear to be a product of mood or arousal. In most studies, mood and arousal have not been found to differ between participants who exerted self-control and those who did not. Likewise, mood and arousal were not related to final self-control performance. The same is true for more specific mood items, such as frustration, irritation, annoyance, boredom, or interest. Feedback about success and failure of the self-control efforts does not appear to affect performance. In short, the decline in self-control performance after exerting self-control appears to be directly related to the amount of self-control exerted and cannot be easily explained by other, well-established psychological processes.
Automatic vs. controlled processes/cognitive control
Dual process theory states that mental processes operate in two separate classes: automatic and controlled. In general, automatic processes are those that are experiential in nature, occur without involving higher levels of cognition, and are based on prior experiences or informal heuristics. Controlled decisions are effortful and largely conscious processes in which an individual weighs alternatives and makes a more deliberate decision.
Automatic Process: Automatic processes have four main features. They occur unintentionally or without a conscious decision, they carry a very low cost in mental resources, they cannot be easily stopped, and they occur without conscious thought on the part of the individual making them.
Controlled Process: Controlled processes also have four main features, which are roughly the opposite of their automatic counterparts. Controlled processes occur intentionally, they require the expenditure of cognitive resources, the individual making the decision can stop the process voluntarily, and the mental process is a conscious one. Dual process theories at one time considered any single action or thought as either automatic or controlled. Currently, however, the two are seen as operating more along a continuum, as most impulsive actions will have both controlled and automatic attributes. Automatic processes are classified according to whether they are meant to inhibit or to facilitate a thought process. For example, in one study researchers offered individuals a choice between a 1 in 10 chance of winning a prize and a 10 in 100 chance. Many participants chose one option over the other without recognizing that the odds were identical, seeing either the smaller pool of 10 total chances or the larger number of 10 chances to win as more advantageous. In effect, impulsive decisions can be made when prior information and experience suggest that one course of action is more beneficial, when in actuality careful consideration would enable the individual to make a more informed and improved decision.
Intertemporal choice
Intertemporal choice is defined as "decisions with consequences that play out over time". This is often assessed using the relative value people assign to rewards at different points in time, either by asking experimental subjects to choose between alternatives or by examining behavioral choices in a naturalistic setting. Intertemporal choice is commonly measured in the laboratory using a "delay discounting" paradigm, which measures the process of devaluing rewards and punishments that happen in the future. In this paradigm, subjects must choose between a smaller reward delivered soon and a larger reward delivered at a delay in the future. Choosing the smaller-sooner reward is considered impulsive. By repeatedly making these choices, indifference points can be estimated. For example, if someone chose $70 now over $100 in a week, but chose the $100 in a week over $60 now, it can be inferred that they are indifferent between $100 in a week and an intermediate value between $60 and $70. A delay discounting curve can be obtained for each participant by plotting their indifference points with different reward amounts and time delays. Individual differences in discounting curves are affected by personality characteristics such as self-reports of impulsivity and locus of control; personal characteristics such as age, gender, IQ, race, and culture; socioeconomic characteristics such as income and education; and many other variables; steeper discounting of delayed rewards has also been linked to drug addiction. Lesions of the nucleus accumbens core subregion or basolateral amygdala produce shifts towards choosing the smaller-sooner reward, suggesting the involvement of these brain regions in the preference for delayed reinforcers. There is also evidence that the orbitofrontal cortex is involved in delay discounting, although there is currently debate on whether lesions in this region result in more or less impulsivity. Economic theory suggests that optimal discounting involves the exponential discounting of value over time. This model assumes that people and institutions should discount the value of rewards and punishments at a constant rate according to how delayed they are in time. While economically rational, recent evidence suggests that people and animals do not discount exponentially. Many studies suggest that humans and animals discount future values according to a hyperbolic discounting curve, where the discount factor decreases with the length of the delay (for example, waiting from today to tomorrow involves more loss of value than waiting from twenty days to twenty-one days). Further evidence for non-constant delay discounting is suggested by the differential involvement of various brain regions in evaluating immediate versus delayed consequences. Specifically, the prefrontal cortex is activated when choosing between rewards at a short delay or a long delay, but regions associated with the dopamine system are additionally activated when the option of an immediate reinforcer is added.
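To make the contrast between the two accounts concrete, the models can be written out. In a minimal formulation, V is the subjective value of a reward of amount A at delay D, and k is a free discount parameter fitted per individual (the symbols are conventional and not drawn from any one cited study):

```latex
% Exponential discounting (the economically "rational" model):
% a constant fraction of value is lost per unit of delay.
V_{\mathrm{exp}}(D) = A \, e^{-kD}

% Hyperbolic discounting (the better fit to human and animal data):
% the effective discount rate falls as the delay grows.
V_{\mathrm{hyp}}(D) = \frac{A}{1 + kD}
```

Under the exponential model, each added day of delay removes the same fraction of the remaining value; under the hyperbolic model, the fraction lost per day shrinks as the delay grows, which reproduces both the today-versus-tomorrow asymmetry noted above and the preference reversals discussed below.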
Additionally, intertemporal choices differ from economic models because they involve anticipation (which may involve a neurological "reward" even if the reinforcer is delayed), self-control (and the breakdown of it when faced with temptations), and representation (how the choice is framed may influence desirability of the reinforcer), none of which are accounted for by a model that assumes economic rationality. One facet of intertemporal choice is the possibility for preference reversal, when a tempting reward becomes more highly valued than abstaining only when immediately available. For example, when sitting home alone, a person may report that they value the health benefit of not smoking a cigarette over the effect of smoking one. However, later at night when the cigarette is immediately available, their subjective value of the cigarette may rise and they may choose to smoke it. A theory called the "primrose path" is intended to explain how preference reversal can lead to addiction in the long run. As an example, a lifetime of sobriety may be more highly valued than a lifetime of alcoholism, but, at the same time, one drink now may be more highly valued than not drinking now. Because it is always "now," the drink is always chosen, and a paradoxical effect occurs whereby the more-valued long-term alternative is not achieved because the more-valued short-term alternative is always chosen. This is an example of complex ambivalence, when a choice is made not between two concrete alternatives but between one immediate and tangible alternative (i.e. having a drink) and one delayed and abstract alternative (i.e. sobriety).
Similarities between humans and non-human animals in intertemporal choice have been studied. Pigeons and rats also discount hyperbolically; tamarin monkeys do not wait more than eight seconds to triple the amount of a food reward. The question arises as to whether this is a difference of homology or analogy—that is, whether the same underlying process underlies human-animal similarities or whether different processes are manifesting in similar patterns of results.
Inhibitory control
Inhibitory control, often conceptualized as an executive function, is the ability to inhibit or hold back a prepotent response. It is theorized that impulsive behavior reflects a deficit in this ability to inhibit a response; impulsive people may find it more difficult to inhibit action whereas non-impulsive people may find it easier to do so. There is evidence that, in normal adults, commonly used behavioral measures of inhibitory control correlate with standard self-report measures of impulsivity. Inhibitory control may itself be multifaceted, evidenced by numerous distinct inhibition constructs that can be measured in different ways and that relate to specific types of psychopathology. Joel Nigg developed a useful working taxonomy of these different types of inhibition, drawing heavily from the fields of cognitive and personality psychology. Nigg's eight proposed types of inhibition include the following:
Executive Inhibition
Interference control
Suppression of a stimulus that elicits an interfering response, enabling a person to complete the primary response. Interference control can also refer to suppressing distractors. Interference control has been measured using cognitive tasks like the Stroop test, flanker tasks, dual-task interference, and priming tasks. Personality researchers have used the Rothbart effortful control measures and the conscientiousness scale of the Big Five as inventory measures of interference control. Based on imaging and neural research, it is theorized that the anterior cingulate, the dorsolateral prefrontal/premotor cortex, and the basal ganglia are related to interference control.
Cognitive inhibition
Cognitive inhibition is the suppression of unwanted or irrelevant thoughts to protect working memory and attention resources. Cognitive inhibition is most often measured through tests of directed ignoring, self-report on one's intrusive thoughts, and negative priming tasks. As with interference control, personality psychologists have measured cognitive inhibition using the Rothbart Effortful Control scale and the Big Five Conscientiousness scale. The anterior cingulate, the prefrontal regions, and the association cortex seem to be involved in cognitive inhibition.
Behavioral inhibition
Behavioral inhibition is the suppression of a prepotent response. Behavioral inhibition is usually measured using the Go/No Go task, the Stop signal task, and reports of suppression of attentional orienting. Surveys that are theoretically relevant to behavioral inhibition include the Rothbart effortful control scale and the Big Five Conscientiousness dimension. The rationale behind the use of behavioral measures like the Stop signal task is that "go" processes and "stop" processes are independent, and that, upon "go" and "stop" cues, they "race" against each other; if the go process wins the race, the prepotent response is executed, whereas if the stop process wins the race, the response is withheld. In this context, impulsivity is conceptualized as a relatively slow stop process. The brain regions involved in behavioral inhibition appear to be the lateral and orbital prefrontal regions along with premotor processes.
Oculomotor Inhibition
Oculomotor inhibition is the effortful suppression of a reflexive saccade. Oculomotor inhibition is tested using antisaccade and oculomotor tasks. Also, the Rothbart effortful control measure and the Big Five Conscientiousness dimension are thought to tap some of the effortful processes underlying the ability to suppress saccades. The frontal eye fields and the dorsolateral prefrontal cortex are involved in oculomotor inhibition.
Motivational inhibition
In response to punishment
Motivational inhibition and response in the face of punishment can be measured using tasks tapping inhibition of a primary response, modified go/no go tasks, inhibition of competing responses, and emotional Stroop tasks. Personality psychologists also use Gray's behavioral inhibition system measure, the Eysenck scale for neurotic introversion, and the Zuckerman Neuroticism-Anxiety scale. The septal-hippocampal formation, cingulate, and motor systems seem to be the brain areas most involved in response to punishment.
In response to novelty
Response to novelty has been measured using the Kagan behavioral inhibition system measure and scales of neurotic introversion. The amygdaloid system is implicated in novelty response.
Automatic inhibition of attention
Recently inspected stimuli
Suppression of recently inspected stimuli, for both attention and oculomotor saccades, is usually measured using attentional and oculomotor inhibition-of-return tests. The superior colliculus and the midbrain oculomotor pathway are involved in suppression of such stimuli.
Neglected stimuli
Information at locations that are not presently being attended to is suppressed while attending elsewhere. This involves measures of covert attentional orienting and neglect, along with personality scales on neuroticism. The posterior association cortex and subcortical pathways are implicated in this sort of inhibition.
Action/Inaction goals
Recent psychological research has also examined impulsivity in relation to people's general goals for action and inaction. These action and inaction goals may underlie people's behavioral differences in their daily lives, since they can demonstrate "patterns comparable to natural variation in overall activity levels". More specifically, people's levels of impulsivity and mania may correlate positively with favorable attitudes about, and goals of, general action, and negatively with favorable attitudes about, and goals of, general inaction.
Assessment of impulsivity
Personality tests and reports
Barratt Impulsiveness Scale
The Barratt Impulsiveness Scale (BIS) is one of the oldest and most widely used measures of impulsive personality traits. The first BIS was developed in 1959 by Dr. Ernest Barratt. It has been revised extensively to achieve two major goals: (1) to identify a set of "impulsiveness" items that was orthogonal to a set of "anxiety" items as measured by the Taylor Manifest Anxiety Scale (MAS) or the Cattell Anxiety Scale, and (2) to define impulsiveness within the structure of related personality traits like Eysenck's Extraversion dimension or Zuckerman's Sensation-Seeking dimension, especially the disinhibition subfactor. The BIS-11, with 30 items, was developed in 1995. According to Patton and colleagues, there are 3 subscales (Attentional Impulsiveness, Motor Impulsiveness, and Non-Planning Impulsiveness) with six factors:
Attention: "focusing on a task at hand".
Motor impulsiveness: "acting on the spur of the moment".
Self-control: "planning and thinking carefully".
Cognitive complexity: "enjoying challenging mental tasks".
Perseverance: "a consistent life style".
Cognitive instability: "thought insertion and racing thoughts".
Eysenck Impulsiveness Scale
The Eysenck Impulsiveness Scale (EIS) is a 54-item yes/no questionnaire designed to measure impulsiveness. Three subscales are computed from this measure: Impulsiveness, Venturesomeness, and Empathy. Impulsiveness is defined as "behaving without thinking and without realizing the risk involved in the behavior". Venturesomeness is conceptualized as "being conscious of the risk of the behavior but acting anyway". The questionnaire was constructed through factor analysis to contain items that most highly loaded on impulsiveness and venturesomeness. The EIS is a widely used and well-validated measure.
Dickman Impulsivity Inventory
The Dickman Impulsivity Inventory was first developed in 1990 by Scott J. Dickman. This scale is based on Dickman's proposal that there are two types of impulsivity that differ significantly from one another: functional impulsivity, characterized by quick decision making when it is optimal (a trait often considered a source of pride), and dysfunctional impulsivity, characterized by quick decisions when it is not optimal. The latter type is most often associated with life difficulties, including substance abuse problems and other negative outcomes. This scale includes 63 items, of which 23 relate to dysfunctional impulsivity, 17 relate to functional impulsivity, and 23 are filler questions relating to neither construct. The scale has been adapted into a version for use with children and translated into several languages. Dickman showed there is no correlation between these two tendencies across individuals, and that they also have different cognitive correlates.
UPPS Impulsive Behavior Scale
The UPPS Impulsive Behavior Scale is a 45-item self-report questionnaire that was designed to measure impulsivity across dimensions of the Five Factor Model of personality. The UPPS includes 4 sub-scales: lack of premeditation, urgency, lack of perseverance, and sensation-seeking.
UPPS-P Impulsive Behavior Scale (UPPS-P) is a revised version of the UPPS, including 59 items. It assesses an additional personality pathway to impulsive behavior, Positive Urgency, in addition to the four pathways assessed in the original version of the scale: Urgency (now Negative Urgency), (lack of) Premeditation, (lack of) Perseverance, and Sensation Seeking.
UPPS-P short version (UPPS-Ps) is a 20-item scale that evaluates five different impulsivity facets (4 items per dimension).
UPPS-R Interview is a semi-structured interview that measures the degree to which individuals exhibit the various components of impulsivity assessed by the UPPS-P.
Lifetime History of Impulsive Behaviors
Lifetime History of Impulsive Behaviors (LHIB) is a 53-item questionnaire designed to assess lifetime history of impulsive behavior (as opposed to impulsive tendencies) as well as the level of distress and impairment associated with these behaviors. The assessment battery was designed to measure the following six dimensions: (a) impulsivity, (b) sensation seeking, (c) trait anxiety, (d) state depression, (e) empathy, and (f) social desirability. The LHIB consists of scales for clinically significant impulsivity, non-clinically significant impulsivity, and impulsivity related distress/impairment.
Behavioral Inhibition System/Behavioral Activation System
Behavioral Inhibition System/Behavioral Activation System (BIS/BAS) was developed based on Gray's biopsychological theory of personality, which suggests that there are two general motivational systems that underlie behavior and affect: BIS and BAS. This 20-item self-report questionnaire is designed to assess dispositional BIS and BAS sensitivities.
Impulsive/Premeditated Aggression Scale
Impulsive/Premeditated Aggression Scale (IPAS) is a 30-item self-report questionnaire. Half of the items describe impulsive aggression and half describe premeditated aggression. Aggressive behavior has traditionally been classified into two distinct subtypes, impulsive or premeditated. Impulsive aggression is defined as a hair-trigger aggressive response to provocation with loss of behavioral control. Premeditated aggression is defined as a planned or conscious aggressive act, not spontaneous or related to an agitated state. The IPAS is designed to characterize aggressive behavior as predominantly impulsive or predominantly premeditated in nature. Subjects who clustered on the impulsive factor showed a broad range of emotional and cognitive impairments; those who clustered on the premeditated factor showed a greater inclination for aggression and antisocial behavior.
Padua Inventory
The Padua Inventory (PI) consists of 60 items describing common obsessional and compulsive behavior and allows investigation of such problems in normal and clinical subjects.
Behavioral paradigms
A wide variety of behavioral tests have been devised for the assessment of impulsivity in both clinical and experimental settings. While no single test is a perfect predictor or a sufficient replacement for an actual clinical diagnosis, when used in conjunction with parent/teacher reports, behavioral surveys, and other diagnostic criteria, the utility of behavioral paradigms lies in their ability to narrow in on specific, discrete aspects of the impulsivity umbrella. Quantifying specific deficits is of use to the clinician and the experimenter, both of whom are generally concerned with obtaining objectively measurable treatment effects.
Marshmallow test
One widely recognizable test for impulsivity is the delay of gratification paradigm commonly known as the marshmallow test. Developed in the 1960s to assess willpower and self-control in preschoolers, the marshmallow test consists of placing a single marshmallow in front of a child and informing them that they will be left alone in the room for some duration. The child is told that if the marshmallow remains uneaten when the experimenter returns, they will be awarded a second marshmallow, both of which can then be eaten. Despite its simplicity and ease of administration, evidence from longitudinal studies suggests that the number of seconds preschoolers wait to obtain the second marshmallow is predictive of higher SAT scores, better social and emotional coping in adolescence, higher educational achievement, and less cocaine/crack use.
Delay discounting
Like the marshmallow test, delay discounting is also a delay of gratification paradigm. It is designed around the principle that the subjective value of a reinforcer decreases, or is discounted, as the delay to reinforcement increases. Subjects are given varying choices between smaller, immediate rewards and larger, delayed rewards. By manipulating reward magnitude and/or reward delay over multiple trials, indifference points can be estimated at which the small, immediate reward and the large, delayed reward are chosen about equally often. Subjects are labeled impulsive when their indifference points decline more steeply as a function of delay compared to the normal population (i.e. they show a greater preference for immediate reward). Unlike the marshmallow test, delay discounting does not require verbal instruction and can be implemented on non-human animals.
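The indifference-point logic above lends itself to a brief computational illustration. The sketch below estimates an indifference point by bisection over candidate immediate amounts and then solves the hyperbolic model V = A/(1 + kD) for the discount rate k; the simulated subject, function names, and parameter values are hypothetical assumptions for illustration, not part of any published task:

```python
def indifference_point(delayed_amount, delay, choose, lo=0.0, hi=None, iters=10):
    """Estimate the immediate amount judged equal in value to a delayed reward.

    `choose(immediate, delayed_amount, delay)` returns True if the subject
    picks the immediate option; bisection homes in on the indifference point.
    """
    hi = delayed_amount if hi is None else hi
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if choose(mid, delayed_amount, delay):
            hi = mid   # immediate option chosen: indifference point is lower
        else:
            lo = mid   # delayed option chosen: indifference point is higher
    return (lo + hi) / 2.0

def hyperbolic_k(indiff, amount, delay):
    """Solve V = A / (1 + k*D) for k, given an observed indifference value V."""
    return (amount / indiff - 1.0) / delay

# Hypothetical subject whose true discount rate is k = 0.1 per day:
true_k = 0.1
subject = lambda imm, amt, d: imm > amt / (1.0 + true_k * d)
v = indifference_point(100.0, 7, subject)   # $100 at a 7-day delay
print(v, hyperbolic_k(v, 100.0, 7))         # ~58.8, fitted k close to 0.1
```

A steeper (more impulsive) discounter would simply yield a smaller indifference value and a larger fitted k.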
Go/no-go and stop-signal reaction time tasks
Two common tests of response inhibition used in humans are the go/no-go task and a slight variant known as the stop-signal reaction time (SSRT) test. During a go/no-go task, the participant is trained over multiple trials to make a particular response (e.g., a key-press) when presented with a go signal. On some trials, a stop signal is presented just prior to, or simultaneously with, the go signal, and the subject must inhibit the impending response.
The SSRT test is similar, except that the stop signal is presented after the go signal. This small modification increases the difficulty of inhibiting the go response, because the participant has typically already initiated the go response by the time the stop signal is presented. The participant is instructed to respond as fast as possible to the go signal while maintaining the highest possible inhibition accuracy (on no-go trials). During the task, the time at which the stop signal is presented (the stop signal delay or SSD) is dynamically adjusted to match the time after the go signal at which the participant is just able/unable to inhibit their go response. If the participant fails to inhibit their go response, the stop signal is moved slightly closer to the original go signal, and if the participant successfully inhibits their go response, the stop signal is moved slightly ahead in time. The SSRT is thus measured as the average go response time minus the average stop signal presentation time (SSD).
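The staircase and the final subtraction described above can be made concrete with a small simulation. In this sketch the step size, starting delay, stop-trial probability, and the simulated subject (with a fixed "true" SSRT of 0.2 s) are illustrative assumptions, not parameters from any published protocol:

```python
import random

def run_ssrt(n_trials=200, ssd=0.25, step=0.05, p_stop=0.25):
    """Simulate the adaptive stop-signal staircase and estimate SSRT.

    Hypothetical subject: go responses take ~0.5 s; a stop succeeds when
    the stop signal arrives more than one "true SSRT" (0.2 s) before the
    go response would have been emitted.
    """
    true_ssrt = 0.2
    go_rts, ssds = [], []
    for _ in range(n_trials):
        go_rt = random.gauss(0.5, 0.05)
        if random.random() < p_stop:           # stop-signal trial
            ssds.append(ssd)
            inhibited = ssd + true_ssrt < go_rt
            # Staircase: harder (later stop signal) after a successful stop,
            # easier (earlier stop signal) after a failed one.
            ssd = ssd + step if inhibited else max(0.0, ssd - step)
        else:                                   # ordinary go trial
            go_rts.append(go_rt)
    mean_go = sum(go_rts) / len(go_rts)
    mean_ssd = sum(ssds) / len(ssds)
    return mean_go - mean_ssd                   # SSRT = mean go RT - mean SSD

print(run_ssrt())  # converges near the simulated true SSRT (~0.2 s)
```

Because the staircase settles where inhibition succeeds about half the time, the mean SSD sits roughly one SSRT before the mean go response, which is what licenses the subtraction.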
Balloon Analogue Risk Task
The balloon analogue risk task (BART) was designed to assess risk-taking behavior. Subjects are presented with a computer depiction of a balloon that can be incrementally inflated by pressing a response key. As the balloon inflates, the subject accumulates rewards with each new key-press. The balloon is programmed with a constant probability of popping. If the balloon pops, all rewards for that balloon are lost; alternatively, the subject may choose to stop inflating at any point and bank the reward for that balloon. Therefore, more key-presses equate to greater reward but also a greater probability of popping and cancelling the rewards for that trial. The BART assumes that those with an affinity for risk-taking are more likely to pop the balloon, earning less reward overall than the typical population.
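The trade-off the BART poses can be expressed in expected-value terms: with a constant pop probability, the expected banked reward first rises with the number of pumps and then collapses as popping dominates. In this minimal sketch the per-pump reward and pop probability are hypothetical parameters, not values from the published task:

```python
def expected_banked_reward(pumps, reward_per_pump=0.05, p_pop=1.0 / 128):
    """Expected banked reward for a plan of a fixed number of pumps.

    The balloon must survive every pump; a single pop forfeits the trial.
    """
    p_survive_all = (1.0 - p_pop) ** pumps
    return pumps * reward_per_pump * p_survive_all

# Expected value rises with pumps at first, then falls as popping dominates:
for n in (16, 32, 64, 128, 256):
    print(n, round(expected_banked_reward(n), 3))
```

Risk-prone subjects who pump well past the expected-value peak therefore pop more balloons and, on average, bank less than more conservative responders.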
Iowa Gambling Task
The Iowa gambling task (IGT) is a test originally meant to measure decision making specifically in individuals with ventromedial prefrontal cortex damage. The concept of impulsivity as it relates to the IGT is one in which impulsive decisions are a function of an individual's inability to make rational decisions over time due to an over-amplification of emotional/somatic reward. In the IGT, individuals are provided four decks of cards to choose from. Two of these decks provide much higher rewards, but the deductions are also much higher, while the other two decks have lower rewards per card but also much lower deductions. Over time, anyone who chooses predominantly from the high-reward decks will lose money, while those who choose from the lower-reward decks will gain money.
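A minimal simulation of that deck structure shows why consistently favoring the high-reward decks loses money; the specific reward and deduction values below are illustrative assumptions, not the published payoff schedule:

```python
import random

# Hypothetical payoffs: "bad" decks pay more per card but carry larger
# deductions (net loss); "good" decks pay less but lose less (net gain).
DECKS = {
    "A": {"reward": 100, "penalty": 250, "p_penalty": 0.5},  # bad: EV -25/card
    "B": {"reward": 100, "penalty": 250, "p_penalty": 0.5},  # bad: EV -25/card
    "C": {"reward": 50,  "penalty": 50,  "p_penalty": 0.5},  # good: EV +25/card
    "D": {"reward": 50,  "penalty": 50,  "p_penalty": 0.5},  # good: EV +25/card
}

def draw(deck_name):
    """Net outcome of one card: a fixed reward minus an occasional deduction."""
    deck = DECKS[deck_name]
    loss = deck["penalty"] if random.random() < deck["p_penalty"] else 0
    return deck["reward"] - loss

def simulate(strategy, n_draws=1000):
    """Total winnings for a strategy that names a deck on each draw."""
    return sum(draw(strategy()) for _ in range(n_draws))

random.seed(0)
print(simulate(lambda: random.choice(["A", "B"])))  # high decks: loses over time
print(simulate(lambda: random.choice(["C", "D"])))  # low decks: gains over time
```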
The IGT uses hot and cold processes in its concept of decision making. Hot decision making involves emotional responses to the material presented, based on motivation related to reward and punishment. Cold processes occur when an individual uses rational cognitive determinations when making decisions. Combined, an individual should experience a positive emotional reaction when choices have beneficial consequences, and negative emotional responses tied to choices that have greater negative consequences. In general, healthy responders to the IGT will begin to drift toward the lower-gain decks as they realize that they are gaining more money than they lose, both by recognizing that those decks provide rewards more consistently and through the emotions related to winning consistently. However, those who have emotional deficits will fail to recognize that they are losing money over time and will continue to be more influenced by the exhilaration of higher-value rewards, without being influenced by the negative emotions of the losses associated with them. For more information concerning these processes, refer to the somatic marker hypothesis.
Differential Reinforcement of Low Response Rate Task
Differential reinforcement of low response rate (DRL), described by Ferster and Skinner, is used to encourage low rates of responding. It is derived from research in operant conditioning and provides an excellent opportunity to measure the hyperactive child's ability to inhibit behavioral responding. Hyperactive children were relatively unable to perform efficiently on the task, and this deficit endured regardless of age, IQ, or experimental condition. Therefore, it can be used to discriminate accurately between teacher-rated and parent-rated hyperactive and nonhyperactive children. In this procedure, responses that occur before a set time interval has passed are not reinforced and reset the time required between behaviors. In one study, a child was taken to the experimental room and told that they were going to play a game in which they had a chance to win a lot of M&Ms. Every time they lit the reward indicator by pressing a red button, they earned an M&M. However, they had to wait a while (6 seconds) before pressing the button again; if they pressed it too soon, they did not earn a point, the light did not go on, and they had to wait before they could press it to earn another point. Researchers have also observed that subjects in a time-based situation will often engage in a sequence or chain of behaviors between reinforceable responses, because this collateral behavior sequence helps the subject "wait out" the required temporal delay between responses.
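The DRL contingency itself is easy to state precisely. A minimal sketch of the scheduling rule, using the 6-second interval from the study above (the class structure and example response times are illustrative):

```python
class DRLSchedule:
    """Differential reinforcement of low rates: a response is reinforced only
    if at least `interval` seconds have elapsed since the previous response;
    a premature response earns nothing and resets the clock."""

    def __init__(self, interval=6.0):
        self.interval = interval
        self.last_response_time = None

    def respond(self, t):
        ready = (self.last_response_time is None
                 or t - self.last_response_time >= self.interval)
        self.last_response_time = t   # every response resets the timer
        return ready                  # True -> reinforce (light plus M&M)

schedule = DRLSchedule(interval=6.0)
for t in (0.0, 4.0, 11.0, 15.0, 21.5):
    print(t, schedule.respond(t))
# 0.0 True, 4.0 False (too soon), 11.0 True, 15.0 False, 21.5 True
```

The reset on every response is what makes impulsive responding costly: pressing early not only forfeits the reward but also postpones the next opportunity to earn one.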
Other
Other common impulsivity tasks include the Continuous performance task (CPT), 5-choice serial reaction time task (5-CSRTT), Stroop task, and Matching Familiar Figures Task.
Pharmacology and neurobiology
Neurobiological findings
Although the precise neural mechanisms underlying disorders of impulse control are not fully known, the prefrontal cortex (PFC) is the brain region most ubiquitously implicated in impulsivity. Damage to the prefrontal cortex has been associated with difficulties preparing to act, switching between response alternatives, and inhibiting inappropriate responses. Recent research has uncovered additional regions of interest, as well as highlighted particular subregions of the PFC, that can be tied to performance in specific behavioral tasks.
Delay discounting
Excitotoxic lesions in the nucleus accumbens core have been shown to increase preference for the smaller, immediate reward, whereas lesions to the nucleus accumbens shell have had no observable effect. Additionally, lesions of the basolateral amygdala, a region tied closely to the PFC, affect impulsive choice in a manner similar to that observed with nucleus accumbens core lesions. The dorsal striatum may also be involved in impulsive choice, in a more intricate manner.
Go/No-go and Stop-signal reaction time test
The orbitofrontal cortex is now thought to play a role in disinhibition, and injury to other brain structures, such as the right inferior frontal gyrus, a specific subregion of the PFC, has been associated with deficits in stop-signal inhibition.
5-Choice Serial Reaction Time Task (5-CSRTT) and Differential Reinforcement of Low rates (DRL)
As with delay discounting, lesion studies have implicated the core region of the nucleus accumbens in response inhibition for both DRL and 5-CSRTT. Premature responses in the 5-CSRTT may also be modulated by other systems within the ventral striatum. In the 5-CSRTT, lesions of the anterior cingulate cortex have been shown to increase impulsive responding, and lesions to the prelimbic cortex impair attentional performance.
Iowa Gambling Task
Patients with damage to the ventromedial frontal cortex exhibit poor decision-making and persist in making risky choices in the Iowa Gambling Task.
Neurochemical and pharmacological findings
The primary pharmacological treatments for ADHD are methylphenidate (Ritalin) and amphetamine. Both methylphenidate and amphetamines block re-uptake of dopamine and norepinephrine into the pre-synaptic neuron, acting to increase post-synaptic levels of dopamine and norepinephrine. Of these two monoamines, increased availability of dopamine is considered the primary cause for the ameliorative effects of ADHD medications, whereas increased levels of norepinephrine may be efficacious only to the extent that it has downstream, indirect effects on dopamine.
The effectiveness of dopamine re-uptake inhibitors in treating the symptoms of ADHD has led to the hypothesis that ADHD may arise from low tonic levels of dopamine (particularly in the fronto-limbic circuitry), but evidence in support of this theory is mixed.
Genetics
There are several difficulties in trying to identify a gene for complex traits such as impulsivity, one of which is genetic heterogeneity. Another difficulty is that the genes in question might sometimes show incomplete penetrance, "where a given gene variant does not always cause the phenotype". Much of the research on the genetics of impulsivity-related disorders, such as ADHD, is based on family or linkage studies. There are several genes of interest that have been studied in an attempt to find the major genetic contributors to impulsivity. Some of these genes are:
DAT1 is the dopamine transporter gene which is responsible for the active reuptake of dopamine from the neural synapse. DAT1 polymorphisms have been shown to be linked to hyperactivity and ADHD.
DRD4 is the dopamine D4 receptor gene and is associated with ADHD and novelty seeking behaviors. It has been proposed that novelty seeking is associated with impulsivity. Mice deficient for DRD4 have shown less behavioral responses to novelty.
5HT2A is a serotonin receptor gene. The serotonin 2A receptor gene has been associated with hyperlocomotion, ADHD, and impulsivity. Subjects with a particular polymorphism of the 5HT2A gene made more commission errors during a punishment-reward condition in a go/no-go task.
HTR2B is a serotonin receptor gene.
CTNNA2 encodes for a brain-expressed α-catenin that has been associated with Excitement-Seeking in a genome-wide association study (GWAS) of 7860 individuals.
Intervention
Interventions to impact impulsivity generally
While impulsivity can take on pathological forms (e.g. substance use disorder, ADHD), there are less severe, non-clinical forms of problematic impulsivity in many people's daily lives. Research on the different facets of impulsivity can inform small interventions to change decision making and reduce impulsive behavior. For example, changing cognitive representations of rewards (e.g. making long-term rewards seem more concrete) and/or creating situations of "precommitment" (eliminating the option of changing one's mind later) can reduce the preference for immediate reward seen in delay discounting.
Brain training
Brain training interventions include laboratory-based interventions (e.g. training using tasks like go/no go) as well as community, family, and school based interventions that are ecologically valid (e.g. teaching techniques for regulating emotions or behaviors) and can be used with individuals with non-clinical levels of impulsivity. Both sorts of interventions are aimed at improving executive functioning and self-control capacities, with different interventions specifically targeting different aspects of executive functioning like inhibitory control, working memory, or attention. Emerging evidence suggests that brain training interventions may succeed in impacting executive function, including inhibitory control. Evidence is also accumulating that inhibitory control training in particular can help individuals resist the temptation to consume high-calorie food and curb drinking behavior. Some have voiced concerns that the favorable results of studies testing working memory training should be interpreted with caution, claiming that the conclusions rest on single outcome tasks, inconsistent use of working memory tasks, no-contact control groups, and subjective measurements of change.
Treatment of specific disorders of impulsivity
Behavioral, psychosocial, and psychopharmacological treatments for disorders involving impulsivity are common.
Psychopharmacological intervention
Psychopharmacological intervention in disorders of impulsivity has shown evidence of positive effects; common pharmacological interventions include the use of stimulant medication, selective serotonin reuptake inhibitors (SSRIs), and other antidepressants. ADHD has a well-established evidence base supporting the use of stimulant medication for the reduction of ADHD symptoms. Pathological gambling has also been studied in drug trials, and there is evidence that gambling is responsive to SSRIs and other antidepressants. Evidence-based pharmacological treatment for trichotillomania is not yet available, with mixed results from studies investigating the use of SSRIs, though cognitive behavioral therapy has shown positive effects. Intermittent explosive disorder is most often treated with mood stabilizers, SSRIs, beta blockers, alpha agonists, and anti-psychotics (all of which have shown positive effects). There is evidence that some pharmacological interventions are efficacious in treating substance-use disorders, though their use can depend on the type of substance that is abused. Pharmacological treatments for substance-use disorders include acamprosate, buprenorphine, disulfiram, LAAM, methadone, and naltrexone.
Behavioral interventions
Behavioral interventions also have a fairly strong evidence base in impulse-control disorders. In ADHD, the behavioral interventions of behavioral parent training, behavioral classroom management, and intensive peer-focused behavioral interventions in recreational settings meet stringent guidelines qualifying them for evidence based treatment status. In addition, a recent meta-analysis of evidence-based ADHD treatment found organization training to be a well-established treatment method. Empirically validated behavioral treatments for substance use disorder are fairly similar across substance use disorders, and include behavioral couples therapy, CBT, contingency management, motivational enhancement therapy, and relapse prevention. Pyromania and kleptomania are understudied (due in large part to the illegality of the behaviors), though there is some evidence that psychotherapeutic interventions (CBT, short term counseling, day treatment programs) are efficacious in treating pyromania, while kleptomania seems to be best addressed using SSRIs. Additionally, therapies including CBT, family therapy, and social skill training have shown positive effects on explosive aggressive behaviors.
See also
Affect
ADHD
Addiction
Deferred gratification
Drive theory
Emotion
Feeling
Instinct
Impulse control disorder
Sensation seeking
Novelty seeking
Alternative five model of personality
References
Further reading
Evenden, J. L. (21 October 1999). "Varieties of impulsivity". Psychopharmacology. 146 (4): 348–361. doi:10.1007/PL00005481. PMID 10550486. S2CID 5972342.
Hollander, E.; Rosen, J. (March 2000). "Impulsivity". Journal of Psychopharmacology. 14 (2_suppl1): S39–S44. doi:10.1177/02698811000142S106. PMID 10888030. S2CID 243171966.
Moeller, F. Gerard; Barratt, Ernest S.; Dougherty, Donald M.; Schmitz, Joy M.; Swann, Alan C. (November 2001). "Psychiatric Aspects of Impulsivity". American Journal of Psychiatry. 158 (11): 1783–1793. doi:10.1176/appi.ajp.158.11.1783. PMID 11691682.
Chamberlain, Samuel R; Sahakian, Barbara J (May 2007). "The neuropsychiatry of impulsivity". Current Opinion in Psychiatry. 20 (3): 255–261. doi:10.1097/YCO.0b013e3280ba4989. PMID 17415079. S2CID 22198972.
External links
Media related to Impulsivity at Wikimedia Commons
Impulsive Info
Intraventricular hemorrhage | Intraventricular hemorrhage (IVH), also known as intraventricular bleeding, is bleeding into the brain's ventricular system, where the cerebrospinal fluid is produced and through which it circulates toward the subarachnoid space. It can result from physical trauma or from hemorrhagic stroke.
30% of intraventricular hemorrhages (IVH) are primary, confined to the ventricular system, and typically caused by intraventricular trauma, aneurysm, vascular malformations, or tumors, particularly of the choroid plexus. However, 70% of IVH are secondary in nature, resulting from an expansion of an existing intraparenchymal or subarachnoid hemorrhage. Intraventricular hemorrhage has been found to occur in 35% of moderate to severe traumatic brain injuries. Thus the hemorrhage usually does not occur without extensive associated damage, and so the outcome is rarely good.
Symptoms
Adults
Symptoms of IVH are similar to other intracerebral hemorrhages and include sudden onset of headache, nausea and vomiting, together with an alteration of the mental state and/or level of consciousness. Focal neurological signs are either minimal or absent, but focal and/or generalized seizures may occur. Xanthochromia, yellow-tinged CSF, is the rule.
Infants
Some infants are asymptomatic and others may present with hard to detect abnormalities of consciousness, muscle tone, breathing, movements of their eyes, and body movements.
Causes
Adults
Causes of IVH in adults include physical trauma and hemorrhagic stroke.
Infants
Preterm infants and those of very low birth weight are at particularly high risk. IVH in the preterm brain usually arises from the germinal matrix, whereas IVH in term infants originates from the choroid plexus. The cause of IVH in premature infants, unlike that in older infants, children, or adults, is rarely due to trauma. Instead it is thought to result from changes in perfusion of the delicate cellular structures that are present in the growing brain, augmented by the immaturity of the cerebral circulatory system, which is especially vulnerable to hypoxic ischemic encephalopathy. The lack of blood flow results in cell death and subsequent breakdown of the blood vessel walls, leading to bleeding. While this bleeding can result in further injury, it is itself a marker for injury that has already occurred. Most intraventricular hemorrhages occur in the first 72 hours after birth. The risk is increased with use of extracorporeal membrane oxygenation in preterm infants. Congenital cytomegalovirus infection can be an important cause.
Mechanism
Diagnosis
Diagnosis can be confirmed by the presence of blood inside the ventricles on CT.
Infants
In term and preterm infants with IVH, the amount of bleeding varies. IVH is often described in four grades:
Grade I - bleeding occurs just in the germinal matrix
Grade II - bleeding also occurs inside the ventricles, but they are not enlarged
Grade III - ventricles are enlarged by the accumulated blood
Grade IV - bleeding extends into the brain tissue around the ventricles
Grades I and II are most common, and often there are no further complications. Grades III and IV are the most serious and may result in long-term brain injury to the infant. After a grade III or IV IVH, blood clots may form which can block the flow of cerebrospinal fluid, leading to increased fluid in the brain (hydrocephalus).
Prevention
Head positioning in very preterm infants has been suggested as an approach to prevent germinal matrix haemorrhage; however, further research is required to determine the effectiveness at reducing mortality and the most appropriate positioning technique. Approaches include bed tilting, supine mid-line head positioning, supine head rotation 90 degrees, prone mid-line head positioning, and head tilting.
Treatment
Treatment focuses on monitoring and should be accomplished with inpatient floor service for individuals responsive to commands or neurological ICU observation for those with impaired levels of consciousness. Extra attention should be placed on intracranial pressure (ICP) monitoring via an intraventricular catheter and medications to maintain ICP, blood pressure, and coagulation. In more severe cases an external ventricular drain may be required to maintain ICP and evacuate the hemorrhage, and in extreme cases an open craniotomy may be required. In cases of unilateral IVH with small intraparenchymal hemorrhage the combined method of stereotaxy and open craniotomy has produced promising results.
Infants
Various therapies have been employed to prevent the high rates of morbidity and mortality, including diuretic therapy, repeated lumbar puncture, streptokinase therapy, and a combined novel intervention called DRIFT (drainage, irrigation and fibrinolytic therapy). More research is required, in the form of high quality randomized controlled trials, to determine the safety, dosing, and effectiveness of prophylactic heparin and antithrombin treatment for preterm neonates.
Prognosis
In infants, germinal matrix haemorrhage is associated with cerebral palsy, problems with cognition, and hydrocephalus. With improved technological advances in science and medicine, survival for preterm infants with this type of neurological disorder has improved, and fewer preterm infants with germinal matrix haemorrhage have severe cerebral palsy. An estimated 15% of preterm infants who survive develop cerebral palsy, and 27% of the infants who survive experience moderate to severe neurosensory deficits by the time they reach 18–24 months old. Prognosis is very poor when IVH results from intracerebral hemorrhage related to high blood pressure, and is even worse when hydrocephalus follows. It can result in dangerous increases in ICP and can cause potentially fatal brain herniation. Even independently, IVH can cause morbidity and mortality. First, intraventricular blood can lead to a clot in the CSF conduits, blocking its flow and leading to obstructive hydrocephalus, which may quickly result in increased intracranial pressure and death. Second, the breakdown products from the blood clot may generate an inflammatory response that damages the arachnoid granulations, inhibiting the regular reabsorption of CSF and resulting in permanent communicating hydrocephalus.
Associated conditions
Brain contusions and subarachnoid hemorrhages are commonly associated with IVH. The bleeding can involve the anterior communicating artery or the posterior communicating artery.
In both adults and infants, IVH can cause dangerous increases in ICP, damage to the brain tissue, and hydrocephalus.
Epidemiology
IVH has been reported to occur in approximately 25% of infants who are born with a very low birth weight. In preterm infants, intraventricular haemorrhage and germinal matrix haemorrhage are the most widely reported neurological disorders. Approximately 12,000 infants each year are diagnosed with germinal matrix haemorrhage or intraventricular haemorrhage in the United States.
Research
In 2002, a Dutch retrospective study analysed cases where neonatologists had intervened and drained CSF by lumbar or ventricular punctures if ventricular width (as shown on ultrasound) exceeded the 97th centile, as opposed to the 97th centile plus 4 mm. Professor Whitelaw's original Cochrane review, published in 2001, as well as evidence from previous randomised controlled trials, indicated that interventions should be based on clinical signs and symptoms of ventricular dilatation. An international trial has instead compared early (97th centile) versus late (97th centile plus 4 mm) thresholds for intervening and draining CSF. DRIFT has been tested in an international randomised clinical trial; although it did not significantly lower the need for shunt surgery, severe cognitive disability at two years (Bayley MDI <55) was significantly reduced. Repeated lumbar punctures are used widely to reduce the effects of increased intracranial pressure, and as an alternative to ventriculoperitoneal (VP) shunt surgery, which cannot be performed in cases of intraventricular haemorrhage. The relative risk of repeated lumbar puncture is close to 1.0; it is therefore not statistically therapeutic compared with conservative management, and it raises the risk of subsequent CSF infection.
References
External links
00511 at CHORUS
Ultrasound Pictures of Germinal Matrix IVH (MedPix Image Database)
Aging brain | Aging is a major risk factor for most common neurodegenerative diseases, including mild cognitive impairment, dementias including Alzheimer's disease, cerebrovascular disease, Parkinson's disease, and Lou Gehrig's disease. While much research has focused on diseases of aging, there are few informative studies on the molecular biology of the aging brain (usually spelled ageing brain in British English) in the absence of neurodegenerative disease or the neuropsychological profile of healthy older adults. However, research suggests that the aging process is associated with several structural, chemical, and functional changes in the brain as well as a host of neurocognitive changes. Recent reports in model organisms suggest that as organisms age, there are distinct changes in the expression of genes at the single neuron level. This page is devoted to reviewing the changes associated with healthy aging.
Structural changes
Aging entails many physical, biological, chemical, and psychological changes, and the brain is no exception to this phenomenon. Conceptual models such as the Scaffolding Theory of Aging and Cognition (STAC), proposed in 2009, have attempted to map these various changes. The STAC model looks at factors like neural changes to the white matter, dopamine depletion, shrinkage, and cortical thinning. CT scans have found that the cerebral ventricles expand as a function of age. More recent MRI studies have reported age-related regional decreases in cerebral volume. Regional volume reduction is not uniform; some brain regions shrink at a rate of up to 1% per year, whereas others remain relatively stable until the end of the life-span. The brain is very complex and is composed of many different areas and types of tissue, or matter. The different functions of different tissues in the brain may be more or less susceptible to age-induced changes. Brain matter can be broadly classified as either grey matter or white matter. Grey matter consists of cell bodies in the cortex and subcortical nuclei, whereas white matter consists of tightly packed myelinated axons connecting the neurons of the cerebral cortex to each other and with the periphery.
Loss of neural circuits and brain plasticity
Brain plasticity refers to the brain's ability to change structure and function. This ties into the common phrase, "if you don't use it, you lose it," which is another way of saying that if you don't use it, your brain will devote less somatotopic space to it. One proposed mechanism for the observed age-related plasticity deficits in animals is age-induced alterations in calcium regulation. Changes in the ability to handle calcium will ultimately influence neuronal firing and the ability to propagate action potentials, which in turn would affect the ability of the brain to alter its structure or function (i.e. its plastic nature). Due to the complexity of the brain, with all of its structures and functions, it is logical to assume that some areas would be more vulnerable to aging than others. Two circuits worth mentioning here are the hippocampal and neocortical circuits. It has been suggested that age-related cognitive decline is due in part not to neuronal death but to synaptic alterations. Evidence in support of this idea from animal work has also suggested that this cognitive deficit is due to functional and biochemical factors such as changes in enzymatic activity, chemical messengers, or gene expression in cortical circuits.
Thinning of the cortex
Advances in MRI technology have provided the ability to see brain structure in great detail in an easy, non-invasive manner in vivo. Bartzokis et al. noted that there is a decrease in grey matter volume between adulthood and old age, whereas white matter volume was found to increase from age 19–40 and decline after this age. Studies using voxel-based morphometry have identified areas such as the insula and superior parietal gyri as being especially vulnerable to age-related losses in grey matter of older adults. Sowell et al. reported that the first 6 decades of an individual's life were correlated with the most rapid decreases in grey matter density, and this occurred over dorsal, frontal, and parietal lobes on both interhemispheric and lateral brain surfaces. It is also worth noting that areas such as the cingulate gyrus and the occipital cortex surrounding the calcarine sulcus appear exempt from this decrease in grey matter density over time. Age effects on grey matter density in the posterior temporal cortex appear more predominantly in the left versus the right hemisphere and were confined to posterior language cortices. Certain language functions, such as word retrieval and production, were found to be located in more anterior language cortices and deteriorate as a function of age. Sowell et al. also reported that these anterior language cortices were found to mature and decline earlier than the more posterior language cortices. It has also been found that the width of sulci increases not only with age but also with cognitive decline in the elderly.
Age-related neuronal morphology
There is converging evidence from cognitive neuroscientists around the world that age-induced cognitive deficits may not be due to neuronal loss or cell death, but rather may be the result of small, region-specific changes to the morphology of neurons. Studies by Duan et al. have shown that dendritic arbors and dendritic spines of cortical pyramidal neurons decrease in size and/or number in specific regions and layers of human and non-human primate cortex as a result of age (Duan et al., 2003). A 46% decrease in spine number and spine density has been reported in humans older than 50 compared with younger individuals. An electron microscopy study in monkeys reported a 50% loss in spines on the apical dendritic tufts of pyramidal cells in the prefrontal cortex of old animals (27–32 years old) compared with young ones (6–9 years old).
Neurofibrillary tangles
Age-related neuropathologies such as Alzheimer's disease, Parkinson's disease, diabetes, hypertension and arteriosclerosis make it difficult to distinguish the patterns of normal aging from those of disease. One of the important differences between normal aging and pathological aging is the location of neurofibrillary tangles. Neurofibrillary tangles are composed of paired helical filaments (PHF). In normal, non-demented aging, the number of tangles in each affected cell body is relatively low, and tangles are restricted to the olfactory nucleus, parahippocampal gyrus, amygdala and entorhinal cortex. As the non-demented individual ages, there is a general increase in the density of tangles, but no significant difference in where tangles are found. The other main neurodegenerative contributor commonly found in the brain of patients with AD is amyloid plaques. However, unlike tangles, plaques have not been found to be a consistent feature of normal aging.
Role of oxidative stress
Cognitive impairment has been attributed to oxidative stress, inflammatory reactions and changes in the cerebral microvasculature. The exact impact of each of these mechanisms on cognitive aging is unknown. Oxidative stress is the most controllable risk factor and is the best understood. The online Merriam-Webster Medical Dictionary defines oxidative stress as "physiological stress on the body that is caused by the cumulative damage done by free radicals inadequately neutralized by antioxidants and that is held to be associated with aging." Hence oxidative stress is the damage done to the cells by free radicals that have been released from the oxidation process.
Compared to other tissues in the body, the brain is deemed unusually sensitive to oxidative damage. Increased oxidative damage has been associated with neurodegenerative diseases, mild cognitive impairment and individual differences in cognition in healthy elderly people. In normal aging, the brain undergoes oxidative stress in a multitude of ways, the main contributors being protein oxidation, lipid peroxidation and oxidative modifications in nuclear and mitochondrial DNA. Oxidative stress can damage DNA replication and inhibit repair through many complex processes, including telomere shortening. Each time a somatic cell replicates, the telomeric DNA component shortens. As telomere length is partly heritable, there are individual differences in the age of onset of cognitive decline.
DNA damage
At least 25 studies have demonstrated that DNA damage accumulates with age in the mammalian brain. This DNA damage includes the oxidized nucleoside 8-hydroxydeoxyguanosine (8-OHdG), single- and double-strand breaks, DNA-protein crosslinks and malondialdehyde adducts (reviewed in Bernstein et al.). Increasing DNA damage with age has been reported in the brains of the mouse, rat, gerbil, rabbit, dog, and human. Young 4-day-old rats have about 3,000 single-strand breaks and 156 double-strand breaks per neuron, whereas in rats older than 2 years the level of damage increases to about 7,400 single-strand breaks and 600 double-strand breaks per neuron.

Lu et al. studied the transcriptional profiles of the human frontal cortex of individuals ranging from 26 to 106 years of age. This led to the identification of a set of genes whose expression was altered after age 40. They further found that the promoter sequences of these particular genes accumulated oxidative DNA damage, including 8-OHdG, with age (see DNA damage theory of aging). They concluded that DNA damage may reduce the expression of selectively vulnerable genes involved in learning, memory and neuronal survival, initiating a pattern of brain aging that starts early in life.
Chemical changes
In addition to the structural changes that the brain incurs with age, the aging process also entails a broad range of biochemical changes. More specifically, neurons communicate with each other via specialized chemical messengers called neurotransmitters. Several studies have identified a number of these neurotransmitters, as well as their receptors, that exhibit a marked alteration in different regions of the brain as part of the normal aging process.
Dopamine
A large number of studies have reported age-related changes in dopamine synthesis, binding sites, and number of receptors. Studies using positron emission tomography (PET) in living human subjects have shown a significant age-related decline in dopamine synthesis, notably in the striatum and extrastriatal regions (excluding the midbrain). Significant age-related decreases in the dopamine receptors D1, D2, and D3 have also been widely reported. A general decrease in D1 and D2 receptors has been shown, and more specifically a decrease of D1 and D2 receptor binding in the caudate nucleus and putamen. A general decrease in D1 receptor density has also been shown to occur with age. Significant age-related declines in the dopamine receptors D2 and D3 were detected in the anterior cingulate cortex, frontal cortex, lateral temporal cortex, hippocampus, medial temporal cortex, amygdala, medial thalamus, and lateral thalamus. One study also indicated a significant inverse correlation between dopamine binding in the occipital cortex and age. Postmortem studies also show that the number of D1 and D2 receptors declines with age in both the caudate nucleus and the putamen, although the ratio of these receptors did not show age-related changes. The loss of dopamine with age is thought to be responsible for many neurological symptoms that increase in frequency with age, such as decreased arm swing and increased rigidity. Changes in dopamine levels may also cause age-related changes in cognitive flexibility.
Serotonin
Decreasing levels of different serotonin receptors and of the serotonin transporter, 5-HTT, have also been shown to occur with age. Studies conducted using PET methods on humans, in vivo, show that levels of the 5-HT2 receptor in the caudate nucleus, putamen, and frontal cerebral cortex decline with age. A decreased binding capacity of the 5-HT2 receptor in the frontal cortex was also found, as was a decreased binding capacity of the serotonin transporter, 5-HTT, in the thalamus and the midbrain. Postmortem studies on humans have indicated decreased binding capacities of serotonin and a decrease in the number of S1 receptors in the frontal cortex and hippocampus, as well as a decrease in affinity in the putamen.
Glutamate
Glutamate is another neurotransmitter that tends to decrease with age. Studies have shown older subjects to have lower glutamate concentration in the motor cortex compared to younger subjects. A significant age-related decline, especially in the parietal gray matter and basal ganglia and, to a lesser degree, the frontal white matter, has also been noted. Although these levels were studied in the normal human brain, the parietal and basal ganglia regions are often affected in degenerative brain diseases associated with aging, and it has therefore been suggested that brain glutamate may be useful as a marker of brain diseases that are affected by aging.
Neuropsychological changes
Changes in orientation
Orientation is defined as the awareness of self in relation to one's surroundings. Orientation is often examined by distinguishing whether a person has a sense of time, place, and person. Deficits in orientation are one of the most common symptoms of brain disease, hence tests of orientation are included in almost all medical and neuropsychological evaluations. While research has primarily focused on levels of orientation among clinical populations, a small number of studies have examined whether there is a normal decline in orientation among healthy aging adults. Results have been somewhat inconclusive. Some studies suggest that orientation does not decline over the lifespan. For example, in one study 92% of normal elderly adults (65–84 years) presented with perfect or near-perfect orientation. However, some data suggest that mild changes in orientation may be a normal part of aging. For example, Sweet and colleagues concluded that "older persons with normal, healthy memory may have mild orientation difficulties. In contrast, younger people with normal memory have virtually no orientation problems" (p. 505). So although current research suggests that normal aging is not usually associated with significant declines in orientation, mild difficulties may be a part of normal aging and not necessarily a sign of pathology.
Changes in attention
Many older adults notice a decline in their attentional abilities. Attention is a broad construct that refers to "the cognitive ability that allows us to deal with the inherent processing limitations of the human brain by selecting information for further processing" (p. 334). Since the human brain has limited resources, people use their attention to focus on specific stimuli and block out others.
If older adults have fewer attentional resources than younger adults, we would expect that when two tasks must be carried out at the same time, older adults' performance will decline more than that of younger adults. However, a large review of studies on cognition and aging suggests that this hypothesis has not been wholly supported. While some studies have found that older adults have a more difficult time encoding and retrieving information when their attention is divided, other studies have not found meaningful differences from younger adults. Similarly, one might expect older adults to do poorly on tasks of sustained attention, which measure the ability to attend to and respond to stimuli for an extended period of time. However, studies suggest that sustained attention shows no decline with age. Results suggest that sustained attention increases in early adulthood and then remains relatively stable, at least through the seventh decade of life. More research is needed on how normal aging impacts attention after age eighty.
It is worth noting that factors other than true attentional abilities might relate to difficulty paying attention. For example, it is possible that sensory deficits impact older adults' attentional abilities: impaired hearing or vision may make it more difficult for older adults to do well on tasks of visual and verbal attention.
Changes in memory
Many different types of memory have been identified in humans, such as declarative memory (including episodic memory and semantic memory), working memory, spatial memory, and procedural memory. Studies have found that memory functions, more specifically those associated with the medial temporal lobe, are especially vulnerable to age-related decline. A number of studies utilizing a variety of methods such as histology, structural imaging, functional imaging, and receptor binding have supplied converging evidence that the frontal lobes and frontal-striatal dopaminergic pathways are especially affected by age-related processes, resulting in memory changes.
Changes in language
Changes in performance on verbal tasks, as well as the location, extent, and signal intensity of BOLD signal changes measured with functional MRI, vary in predictable patterns with age. For example, behavioral changes associated with age include compromised performance on tasks related to word retrieval, comprehension of sentences with high syntactic and/or working memory demands, and production of such sentences.
Genetic changes
Variation in the effects of aging among individuals can be attributed to genetic, health, and environmental factors. As in so many other scientific disciplines, the nature versus nurture debate is an ongoing conflict in the field of cognitive neuroscience. The search for genetic factors has always been an important aspect of trying to understand neuropathological processes. Research focused on discovering the genetic component in developing Alzheimer's disease (AD) has also contributed greatly to understanding the genetics behind normal or "non-pathological" aging.
The human brain shows a decline in function and a change in gene expression with age. This modulation in gene expression may be due to oxidative DNA damage at promoter regions in the genome. Genes that are down-regulated over the age of 40 include:
GluR1 AMPA receptor subunit
NMDA R2A receptor subunit (involved in learning)
Subunits of the GABA-A receptor
Genes involved in long-term potentiation, e.g. calmodulin 1 and CaM kinase II alpha
Calcium signaling genes
Synaptic plasticity genes
Synaptic vesicle release and recycling genes
Genes that are upregulated include:
Genes associated with stress response and DNA repair
Antioxidant defence
Epigenetic age analysis of brain regions
The cerebellum is the youngest brain region (and probably body part) in centenarians according to an epigenetic biomarker of tissue age known as the epigenetic clock: it is about 15 years younger than expected in a centenarian. By contrast, all brain regions and brain cells appear to have roughly the same epigenetic age in subjects younger than 80. These findings suggest that the cerebellum is protected from aging effects, which in turn could explain why the cerebellum exhibits fewer neuropathological hallmarks of age-related dementias compared to other brain regions.
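Conceptually, an epigenetic clock is a weighted sum of DNA methylation levels at selected CpG sites, calibrated against chronological age. The sketch below illustrates only this arithmetic; the CpG names, weights, and intercept are hypothetical placeholders, not values from any published clock, which typically fit hundreds of sites by penalized regression.

```python
# Minimal sketch of an epigenetic-clock-style age estimate.
# CpG names, weights, and intercept are hypothetical placeholders,
# not values from any published clock.

sample_betas = {          # methylation beta values (fraction methylated, 0-1)
    "cg_hypothetical_1": 0.82,
    "cg_hypothetical_2": 0.35,
    "cg_hypothetical_3": 0.61,
}
clock_weights = {
    "cg_hypothetical_1": 21.0,
    "cg_hypothetical_2": -14.5,
    "cg_hypothetical_3": 33.0,
}
clock_intercept = 12.0

def epigenetic_age(betas, weights, intercept):
    """Weighted linear combination of methylation levels -> age estimate."""
    return intercept + sum(weights[cpg] * beta for cpg, beta in betas.items())

dnam_age = epigenetic_age(sample_betas, clock_weights, clock_intercept)
chronological_age = 100  # e.g. a centenarian
# A tissue reading ~15 years "younger" corresponds to a difference of -15.
print(f"DNAm age: {dnam_age:.1f}, difference: {dnam_age - chronological_age:+.1f}")
```

Comparing a tissue's predicted epigenetic age with the subject's chronological age is what yields statements such as a cerebellum being "15 years younger than expected".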
Delaying the effects of aging
The process of aging may be inevitable; however, one may potentially delay the effects and severity of this progression.
While there is no consensus on efficacy, the following are reported to delay cognitive decline:
High level of education
Physical exercise
Staying intellectually engaged, e.g. through reading and mental activities (such as crossword puzzles)
Maintaining social and friendship networks
Maintaining a healthy diet, including omega-3 fatty acids, and protective antioxidants.
"Super Agers"
Longitudinal research studies have recently conducted genetic analyses of centenarians and their offspring to identify biomarkers that serve as protective factors against the negative effects of aging. In particular, the cholesteryl ester transfer protein (CETP) gene is linked to prevention of cognitive decline and Alzheimer's disease. Specifically, valine CETP homozygotes, but not heterozygotes, experienced 51% less memory decline relative to a reference group after adjusting for demographic factors and APOE status.
Cognitive reserve
The ability of an individual to demonstrate no cognitive signs of aging despite an aging brain is called cognitive reserve. This hypothesis suggests that two patients might have the same brain pathology, with one person experiencing noticeable clinical symptoms, while the other continues to function relatively normally. Studies of cognitive reserve explore the specific biological, genetic and environmental differences which make one person susceptible to cognitive decline, and allow another to age more gracefully.
Nun Study
A study funded by the National Institute on Aging followed a group of 678 Roman Catholic sisters and recorded the effects of aging. The researchers used autobiographical essays collected as the nuns joined their sisterhood. Findings suggest that early idea density, defined by the number of ideas expressed and the use of complex prepositions in these essays, was a significant predictor of lower risk for developing Alzheimer's disease in old age. Lower idea density was found to be significantly associated with lower brain weight, higher brain atrophy, and more neurofibrillary tangles.
Hypothalamus inflammation and GnRH
A 2013 study suggested that inflammation of the hypothalamus may be connected to the aging of the body as a whole. The researchers focused on the activation of the protein complex NF-κB in mice, which showed increased activation as the mice aged over the course of the study. This activation not only affects aging but also affects a hormone known as GnRH, which showed anti-aging properties when injected into mice outside the hypothalamus, while causing the opposite effect when injected into the hypothalamus. It will be some time before these findings can be applied to humans in a meaningful way, as more studies on this pathway are needed to understand the mechanics of GnRH's anti-aging properties.
Inflammation
A study found that myeloid cells are drivers of a maladaptive inflammatory element of brain aging in mice and that this can be reversed or prevented via inhibition of their EP2 signalling.
Aging disparities
For certain demographics, the effects of normal cognitive aging are especially pronounced. Differences in cognitive aging might be tied to lack of, or reduced access to, medical care; as a result, these groups suffer disproportionately from negative health outcomes. As the global population grows, diversifies, and grays, there is an increasing need to understand these inequities.
Race
African Americans
In the United States, Black and African American demographics disproportionately experience metabolic dysfunction with age. This has many downstream effects, but the most prominent of these is the toll on cardiovascular health. Metabolite profiles of the healthy aging index - a score that assesses neurocognitive function, among other correlates of health through the years - are associated with cardiovascular disease. Healthy cardiovascular function is critical for maintaining neurocognitive efficiency into old age. Attention, verbal learning, and cognitive set ability are related to diastolic blood pressure, triglyceride levels, and HDL cholesterol levels, respectively.
Latinos
The Latino demographic is most likely to develop metabolic syndrome - the combination of high blood pressure, high blood sugar, elevated triglyceride levels, and abdominal obesity - which not only increases the risk of cardiac events and type II diabetes but also is associated with lower neurocognitive function during midlife. Among different Latin heritages, frequency of the dementia-predisposing apoE4 allele was highest for Caribbean Latinos (Cubans, Dominicans, and Puerto Ricans) and lowest among mainland Latinos (Mexicans, Central Americans, and South Americans). Conversely, frequency of the neuroprotective apoE2 allele was highest for Caribbean Latinos and lowest for those of mainland heritage.
Indigenous Peoples
Indigenous populations are often understudied in research. Reviews of the current literature studying Indigenous peoples in Australia, Brazil, Canada, and the United States, with participants aged 45 to 94 years, reveal varied prevalence rates for cognitive impairment not related to dementia, from 4.4% to 17.7%. These results can be interpreted in the context of culturally biased neurocognitive tests, preexisting health conditions, poor access to healthcare, lower educational attainment, and/or old age.
Sex
Women
Compared to their male counterparts, women's scores on the Mini-Mental State Exam (MMSE) tend to decline at slightly faster rates with age. Males with mild cognitive impairment (MCI) tend to show more microstructural damage than females with MCI, but seem to have a greater cognitive reserve due to larger absolute brain size and neuronal density. As a result, women tend to manifest symptoms of cognitive decline at lower thresholds than men do. This effect seems to be moderated by educational attainment - higher education is associated with later diagnosis of mild cognitive impairment as neuropathological load increases.
Transgender Individuals
LGBT elders face numerous disparities as they approach the end of life. Members of the transgender community fear the risk of hate crime, elder abuse, homelessness, loss of identity, and loss of independence as they age. As a result, depression and suicidality are particularly high within the demographic. Intersectionality - the overlap of several minority identities - can play a major role in health outcomes, as transgender people can be discriminated against for their race, sexuality, gender identity, and age. These considerations are especially important for the oldest old, as members of this generation survived systematic prejudice and discrimination at a time when their identity was outlawed and labeled by the Diagnostic and Statistical Manual of Mental Disorders as a mental illness.
Socioeconomic status
Socioeconomic status is the interaction between social and economic factors. It has been demonstrated that sociodemographic factors can be used to predict cognitive profiles within older individuals to some extent. This may be because families of higher socioeconomic status are equipped to provide their children with resources early on to facilitate cognitive development. For children in families of low SES, relatively small changes in parental income were associated with large changes in brain surface area; these losses were seen in areas associated with language, reading, executive functions, and spatial skills. Meanwhile, for children in families of high SES, small changes in parental income were associated with small changes in surface area within these regions. With respect to global cortical thickness, low SES children showed a curvilinear decrease in thickness with age while those of high SES demonstrated a steeper linear decline, suggesting that synaptic pruning is more efficient in the latter group. This trend was especially evident in the left fusiform and left superior temporal gyri - critical language and literacy supporting areas.
See also
References
External links
National Institute on Aging: Instruments to Detect Cognitive Impairment in Older Adults. |
Or | Or or OR may refer to:
Arts and entertainment
Film and television
"O.R.", a 1974 episode of M*A*S*H
Or (My Treasure), a 2004 movie from Israel (Or means "light" in Hebrew)
Music
Or (album), a 2002 album by Golden Boy with Miss Kittin
O*R, the original title of Olivia Rodrigo's album Sour, 2021
"Or", a song by Israeli singer Chen Aharoni in Kdam Eurovision 2011
Or Records, a record label
Organized Rhyme, a Canadian hip-hop group featuring Tom Green
Businesses and organizations
Or (political party) (lit. light), Israel
OR Books, an American publisher
Owasco River Railway, Auburn, New York, U.S. (by reporting mark)
TUI fly Netherlands, formerly Arke, a Dutch charter airline (by IATA designator)
Language and linguistics
Or (digraph), in the Uzbek alphabet
Or (letter) (or forfeda), in Ogham, the Celtic tree alphabet
Odia language, an ancient Indo-Aryan tongue spoken in East India (ISO 639)
Or, an English grammatical conjunction
-or, an English agent noun suffix
Or, a digraph in the Taiwanese Daī-ghî tōng-iōng pīng-im phonetic transcription
Places
Europe
Or (Crimea), an isthmus of the Black Sea
Or (river), a tributary of the Ural
Province of Oristano, Italy (by vehicle code)
United States
Oregon, a U.S. state (by postal abbreviation)
Science, technology, and mathematics
Computing and mathematics
Or (logic), logical disjunction (several of the operator entries below are contrasted in the code sketch after this list)
Exclusive or (XOR), a logical operation
Bitwise OR, an operator in computer programming, typically notated as or or |
The short-circuit operator or, notated or, ||, or or else
Elvis operator, an operator in computer programming that returns its first operand if its value is considered true, and its right operand if not
Null coalescing operator, an operator in computer programming
Onion routing, anonymous networking technique (also Onion Router)
OR gate, an integrated circuit in electronics
Object-relational mapping
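Several of the programming senses listed above can be contrasted in a few lines of code. The sketch below uses Python, where the or keyword is the short-circuit logical operator and also behaves like an Elvis operator (it returns its first operand when that operand is truthy), while | and ^ are the bitwise OR and exclusive-or operators; notation varies across languages.

```python
# Short-circuit logical OR: the right operand is never evaluated
# when the left operand is already truthy.
def noisy_check():
    print("evaluated")
    return True

print(True or noisy_check())   # True -- noisy_check() never runs

# Because Python's `or` returns an operand rather than a strict boolean,
# it doubles as an Elvis-style / null-coalescing-like default:
name = "" or "anonymous"       # empty string is falsy -> "anonymous"
print(name)

# Bitwise OR and exclusive or (XOR) act on the individual bits of integers.
print(bin(0b1100 | 0b1010))    # 0b1110 (bit set if set in either operand)
print(bin(0b1100 ^ 0b1010))    # 0b110  (bit set only where operands differ)
```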
Other uses in science and technology
Odds ratio, a measure of effect size in statistics
OR, a previous title of the Journal of the Operational Research Society
Operating room, in medicine
Operations research, or operational research, in British English
Operations readiness
Titles and ranks
Official receiver, a statutory office holder in England and Wales
Order of Roraima of Guyana, an award of the Republic of Guyana
Other ranks, Denmark (disambiguation), military personnel in all branches of the Danish military that are not officers by the NATO system of ranks and insignia
Other ranks (UK), personnel who are not commissioned officers, usually including non-commissioned officers (NCOs), in militaries of many Commonwealth countries
Other uses
Or (name), Hebrew given name and surname
Official Records of the American Civil War
Olympic record, a term for the best performances in Olympic Games
Or (heraldry), a gold or yellow tincture (from the French word for "gold")
Own Recognizance, the basis for releasing someone awaiting trial without bail
See also
0r (zero r), meaning "no roods", in old measurements of land area
And (disambiguation)
OAR (disambiguation)
Ore (disambiguation)
Either/Or (disambiguation) |
Sandfly fever | Sandfly fever can refer to:
Visceral leishmaniasis, or kala-azar
Pappataci fever, or Papatasi fever, an acute febrile arboviral infection (the condition most commonly meant when not otherwise specified) |
Cephalohematoma | A cephalohematoma (also spelled cephalohaematoma) is a hemorrhage of blood between the skull and the periosteum, occurring in a person of any age, including a newborn baby, secondary to rupture of blood vessels crossing the periosteum. Because the swelling is subperiosteal, its boundaries are limited by the individual bones, in contrast to a caput succedaneum.
Symptoms and signs
Swelling appears 2–3 days after birth. If severe, the child may develop jaundice, anemia or hypotension. In some cases it may be an indication of a linear skull fracture or carry a risk of infection leading to osteomyelitis or meningitis. The swelling of a cephalohematoma takes weeks to resolve as the blood clot is slowly absorbed from the periphery towards the centre. In time the swelling hardens (calcification), leaving a relatively softer centre so that it appears as a depressed fracture. A cephalohematoma should be distinguished from another type of scalp bleeding called subgaleal hemorrhage (also called subaponeurotic hemorrhage), which is blood between the scalp and the skull bone (above the periosteum) and is more extensive. Subgaleal hemorrhage is more prone to complications, especially anemia and bruising.
Causes
The usual causes of a cephalohematoma are a prolonged second stage of labor or instrumental delivery, particularly forceps delivery. Ventouse application does not increase the incidence of cephalohematoma. Vitamin C deficiency has been reported to possibly be associated with the development of cephalohematomas.
Management
Skull X-ray or CT scanning is used if neurological symptoms appear, or if a concomitant depressed skull fracture is a possibility. Usual management is mainly observation. Phototherapy may be necessary if blood accumulation is significant and leads to jaundice. Rarely, anaemia can develop, requiring blood transfusion. The presence of a bleeding disorder should be considered but is rare.

Cephalohematomas typically resolve spontaneously within weeks or months of birth; however, calcification can occur in 3-5% of cases. While aspiration to remove accumulated blood and prevent calcification has generally been recommended against due to risk of infection, modern surgical standards and antibiotics may make this concern unfounded, and needle aspiration can be considered a safe intervention for significantly-sized cephalohematomas that do not resolve spontaneously after one month.
See also
Caput succedaneum
Cephal
Chignon
Hematoma
Subgaleal hemorrhage
References
External links
Differentiating Cephalhematoma from Caput Succedaneum |
Retrograde ejaculation | Retrograde ejaculation occurs when semen which would be ejaculated via the urethra is redirected to the urinary bladder. Normally, the sphincter of the bladder contracts before ejaculation, sealing the bladder; besides inhibiting the release of urine, this also prevents a reflux of seminal fluids into the bladder during ejaculation. The semen is forced to exit via the urethra, the path of least resistance. When the bladder sphincter does not function properly, retrograde ejaculation may occur. It can also be induced deliberately by a male as a primitive form of male birth control (known as coitus saxonicus) or as part of certain alternative medicine practices. The retrograde-ejaculated semen, which goes into the bladder, is excreted with the next urination.
Signs and symptoms
Retrograde ejaculation is sometimes referred to as a "dry orgasm." Retrograde ejaculation is one symptom of male infertility. A man may notice during masturbation that despite the occurrence of orgasm, no accompanying ejaculation was produced. Another underlying cause for this phenomenon may be ejaculatory duct obstruction.
During a male orgasm, sperm are released from the epididymis and travel via small tubes called the vas deferens. The sperm mix with seminal fluid in the seminal vesicles, prostate fluid from the prostate gland, and lubricants from the bulbourethral gland. During climax, muscles at the end of the bladder neck tighten to prevent retrograde flow of semen. In retrograde ejaculation, these bladder neck muscles are either very weak or the nerves controlling the muscles have been damaged.
Causes
A malfunctioning bladder sphincter, leading to retrograde ejaculation, may be a result either of:
Autonomic nervous system dysfunction (dysautonomia)
Operation on the prostate. It is a common complication of transurethral resection of the prostate, a procedure in which prostate tissue is removed, slice by slice, through a resectoscope passed along the urethra.

It can also be caused by a retroperitoneal lymph node dissection for testicular cancer if nerve pathways to the bladder sphincter are damaged, with the resulting retrograde ejaculation being either temporary or permanent. Modern nerve-sparing techniques seek to reduce this risk; however, it may also occur as the result of Green Light Laser prostate surgery.
Surgery on the bladder neck accounted for about ten percent of the cases of retrograde ejaculation or anejaculation reported in a literature review.

Retrograde ejaculation is a common side effect of medications, such as tamsulosin, that are used to relax the muscles of the urinary tract, treating conditions such as benign prostatic hyperplasia. By relaxing the bladder sphincter muscle, such medications increase the likelihood of retrograde ejaculation.
Antidepressant and antipsychotic medications, as well as NRIs such as atomoxetine, are the drugs that most often cause it; patients experiencing this phenomenon tend to discontinue the medications.

Retrograde ejaculation can also be a complication of diabetes, especially in cases of diabetics with long-term poor blood sugar control. This is due to neuropathy of the bladder sphincter. Post-pubertal males (aged 17 to 20 years) who experience repeated episodes of retrograde ejaculation are often diagnosed with urethral stricture disease shortly after the initial complaint arises. It is currently not known whether a congenital malformation of the bulbous urethra is responsible, or whether pressure applied to the base of the penis or perineum immediately preceding ejaculatory inevitability may have inadvertently damaged the urethra. This damage is most often seen within 0.5 cm of the ejaculatory duct (usually distal to the duct).
Conditions which can affect bladder neck muscle
Conditions and treatments that can affect the bladder neck muscle include medications used to treat high blood pressure, benign prostatic hyperplasia, and mood disorders; surgery on the prostate; and nerve injury (which may occur in multiple sclerosis, spinal cord injury or diabetes).
Diagnosis
Diagnosis is usually determined after a medical professional performs a urinalysis on a urine specimen obtained shortly after ejaculation. In cases of retrograde ejaculation, the specimen will contain a significant amount of sperm. Especially in cases of orgasmic anejaculation, anejaculation can often be confused with retrograde ejaculation, and they share some fundamental aspects of the cause. Urinalysis is used to distinguish between them.
Tests
The genitals are physically examined to ensure that there are no anatomical problems. The urine will be tested for the presence of semen. If there are no sperm in the urine, it may be due to damage to the prostate as a result of surgery or prior radiation therapy.
Treatments
The treatment depends on the cause. Medications may work for retrograde ejaculation but only in a few cases. Surgery rarely is the first option for retrograde ejaculation and the results have proven to be inconsistent. Medications do not help retrograde ejaculation if there has been permanent damage to the prostate or the testes from radiation. Medications also do not help if prostate surgery has resulted in damage to the muscles or nerves. Medications only work if there has been mild nerve damage caused by diabetes, multiple sclerosis, or mild spinal cord injury.
Medications
Tricyclic antidepressants like imipramine.
Antihistamines like chlorphenamine.
Decongestants like ephedrine and phenylephrine.

These medications tighten the bladder neck muscles and prevent semen from going backwards into the bladder. However, the medications do have many side effects and they have to be taken at least 1–2 hours prior to sexual intercourse. In many cases, the medications fail to work at the right time because most men are not able to predict when they will have an orgasm.
Infertility treatments
If a couple is experiencing infertility as a result of retrograde ejaculation and medications are not helping, the semen may be collected using a special procedure. First, the patient alkalinizes his urine by taking sodium bicarbonate (3 g dissolved in water in the evening before bed, and then another dose after complete bladder emptying right before going to the laboratory). Before semen collection the patient must empty his bladder. The patient then masturbates into one container and immediately afterwards urinates into another container. The sperm may then be isolated by centrifuging the voided urine and injected directly into the woman through the use of intrauterine insemination. In more severe cases, in-vitro fertilization with intracytoplasmic sperm injection may be used.
Intentional induction
Retrograde ejaculation can be deliberately induced by squeezing the urethra at the base or applying pressure to the perineum during orgasm. The retrograde-ejaculated sperm goes into the bladder and is excreted with the next urination.
Contraception
In certain cultures, such as in the Oneida Community, retrograde ejaculation is performed as a form of primitive male birth control (coitus saxonicus). However, the practice is not considered a reliable method compared to most modern types of birth control. Besides the lack of protection from STDs, the technique itself can be hard to execute correctly during the act of coitus, especially if the male does not fully understand the anatomy involved. Many doctors also do not recommend coitus saxonicus due to the risk of putting pressure on the pudendal nerve, which can cause numbness in the penis.
Taoism
Taoists and some fields of alternative medicine recommend and teach deliberate retrograde ejaculation as a way of "conserving the body's energy". It was believed that retrograde ejaculation caused the sperm to travel into the head and nourish the brain, or that energy is conserved physically by keeping the sperm (and thereby the "intelligence" that created it) in the body. However, there are other Taoist perspectives on the general subject of ejaculation and techniques that do not involve retrograde ejaculation (see Taoist sexual practices).
See also
Aspermia
Ejaculatory duct obstruction
Hypospermia
Internal urethral sphincter
Spermaturia
Notes
External links |
Flushing | Flushing may refer to:
Places
Flushing, Cornwall, a village in the United Kingdom
Flushing, Queens, New York City
Flushing Bay, a bay off the north shore of Queens
Flushing Chinatown (法拉盛華埠), a community in Queens
Flushing Meadows, a park in Queens which includes multiple venues, such as the location of the US Open tennis tournament
Flushing River, in Queens
Flushing, Michigan, a city in Genesee County
Flushing, Netherlands, an English name for the city of Vlissingen, Netherlands
Flushing, Ohio, a village in Belmont County
The Flushing, a building in Suffolk, England
Flushing Township, Belmont County, Ohio
Flushing Township, Michigan
Other uses
Flushing (military tactic), related to skirmishing
Flushing (physiology), the warm, red condition of human skin
Flushing dog, a hunting dog
Flushing hydrant, a device to flush water mains
Flushing Remonstrance, a demand for religious liberty made to Peter Stuyvesant, the Governor of the Dutch colony of New Netherland, in 1657
See also
Vlissingen (disambiguation), also called "Flushing"
All pages with titles beginning with Flushing
All pages with titles containing Flushing
Flush (disambiguation) |
Agenesis | In medicine, agenesis refers to the failure of an organ to develop during embryonic growth and development due to the absence of primordial tissue. Many forms of agenesis are referred to by individual names, depending on the organ affected:
Agenesis of the corpus callosum - failure of the corpus callosum to develop
Renal agenesis - failure of one or both of the kidneys to develop
Amelia - failure of the arms or legs to develop
Penile agenesis - failure of penis to develop
Müllerian agenesis - failure of the uterus and part of the vagina to develop
Agenesis of the gallbladder - failure of the gallbladder to develop. A person may not realize they have this condition unless they undergo surgery or medical imaging, since the gallbladder is neither externally visible nor essential.
Eye agenesis
Eye agenesis is a medical condition in which people are born with no eyes.
Dental & oral agenesis
Anodontia, absence of all primary or permanent teeth.
Aglossia, absence of the tongue.
Agnathia, absence of the jaw.
Wisdom tooth agenesis - most adult humans have three molars on each upper and lower, left and right side, with the third being referred to as the wisdom tooth, but many people have fewer than four wisdom teeth in total. Agenesis of wisdom teeth is a normal variation whose frequency differs widely by population, ranging from practically zero in Tasmanian Aborigines to nearly 100% in Indigenous Mexicans.
Ear agenesis
Ear agenesis is a medical condition in which people are born without ears.
Because the middle and inner ears are necessary for hearing, people with complete agenesis of the ears are totally deaf. Minor agenesis that affects only the visible parts of the outer ear, which may be called microtia, typically produces cosmetic concerns and perhaps hearing impairment if the opening to the ear canal is blocked, but not deafness.
References |
Melasma | Melasma (also known as chloasma faciei, or the mask of pregnancy when present in pregnant women) is a tan or dark skin discoloration. Melasma is thought to be caused by sun exposure, genetic predisposition, hormone changes, and skin irritation. Although it can affect anyone, it is particularly common in women, especially pregnant women and those who are taking oral or patch contraceptives or hormone replacement therapy medications.
Signs and symptoms
The symptoms of melasma are dark, irregular, well-demarcated, hyperpigmented macules to patches. These patches often develop gradually over time. Melasma does not cause any other symptoms beyond the cosmetic discoloration. Patches can vary in size from 0.5 cm to larger than 10 cm depending on the person. Its location can be categorized as centrofacial, malar, or mandibular. The most common is centrofacial, in which patches appear on the cheeks, nose, upper lip, forehead, and chin. The mandibular category accounts for patches on the bilateral rami, while the malar location accounts for patches only on the nose and cheeks.
Cause
The exact cause of melasma is unknown. Melasma is thought to result from stimulation of melanocytes (cells in the basal layer of the epidermis that transfer the pigment melanin to the keratinocytes of the skin) when the skin is exposed to ultraviolet light from the sun. Small amounts of sun exposure can make melasma return after it has faded, which is why people with melasma often get it again and again, particularly in the summer.

Pregnant women often get melasma, or chloasma, known as the mask of pregnancy. Birth-control pills and hormone replacement therapy can also trigger melasma. The discoloration usually disappears spontaneously over a period of several months after giving birth or stopping the oral contraceptives or hormone treatment.

Genetic predisposition is also a major factor in determining whether someone will develop melasma. People with Fitzpatrick skin type III or greater of African, Asian, or Hispanic descent are at a much higher risk than others. In addition, women with a light brown skin type who are living in regions with intense sun exposure are particularly susceptible to developing this condition.

The incidence of melasma also increases in patients with thyroid disease. It is thought that the overproduction of melanocyte-stimulating hormone brought on by stress can cause outbreaks of this condition. Other rare causes of melasma include allergic reaction to medications and cosmetics.
Addison's disease
Melasma suprarenale (Latin - above the kidneys) is a symptom of Addison's disease, particularly when caused by pressure or minor injury to the skin, as discovered by FJJ Schmidt of Rotterdam in 1859.
Diagnosis
Types
The two different kinds of melasma are epidermal and dermal.
Epidermal melasma results from melanin pigment that is elevated in the suprabasal layers of the epidermis. Dermal melasma occurs when the dermal macrophages contain an elevated melanin level. Melasma is usually diagnosed visually or with the assistance of a Wood's lamp (340–400 nm wavelength). Under a Wood's lamp, excess melanin in the epidermis can be distinguished from that in the dermis by how dark the melasma appears: epidermal pigmentation is accentuated and appears darker under the Wood's lamp, while dermal pigmentation is not enhanced.
Severity
The severity of facial melasma may be assessed by colorimetry, mexametry, and the melasma area and severity index (MASI) score.
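For illustration, the MASI score is commonly described as a weighted sum over four facial regions, multiplying an area score by the sum of darkness and homogeneity ratings. The sketch below assumes that commonly cited formulation (forehead, left malar and right malar weighted 0.3 each, chin 0.1; area rated 0–6, darkness and homogeneity 0–4) and should be read as a worked example rather than a clinical tool.

```python
# Sketch of the Melasma Area and Severity Index (MASI), assuming the
# commonly cited formulation: weight * area * (darkness + homogeneity)
# summed over four facial regions. The patient ratings are hypothetical.
REGION_WEIGHTS = {
    "forehead": 0.3,
    "left_malar": 0.3,
    "right_malar": 0.3,
    "chin": 0.1,
}

def masi_score(ratings):
    """ratings: {region: (area 0-6, darkness 0-4, homogeneity 0-4)} -> 0..48."""
    total = 0.0
    for region, weight in REGION_WEIGHTS.items():
        area, darkness, homogeneity = ratings[region]
        total += weight * area * (darkness + homogeneity)
    return total

example_patient = {
    "forehead": (3, 2, 2),
    "left_malar": (4, 3, 2),
    "right_malar": (4, 3, 2),
    "chin": (1, 1, 1),
}
print(masi_score(example_patient))  # 3.6 + 6.0 + 6.0 + 0.2 = 15.8
```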
Differential diagnoses
Melasma should be differentiated from freckles, solar lentigo, toxic melanoderma, Riehl's melanosis, post-inflammatory hyperpigmentation, friction melanosis, ochronosis (endogenous and exogenous), and cutaneous lupus erythematosus. Additionally, it should not be confused with phytophotodermatosis, pellagra, endogenous phototoxicity, nevus of Ota, café au lait macules, seborrheic keratosis, poikiloderma of Civatte, acquired bilateral nevus of Ota-like macules (Hori's nevus), periorbital hyperpigmentation, erythrose pigmentaire peribuccale of Brocq, erythromelanosis follicularis faciei, facial acanthosis nigricans, and actinic lichen planus. Cases of drug-induced pigmentation have also been reported, caused by amiodarone or by hydroquinone-induced exogenous ochronosis (see ochronosis treatment).
Treatment
Assessment by a dermatologist can help guide treatment. Treatments to hasten the fading of the discolored patches include:
Topical depigmenting agents, such as hydroquinone (HQ) either in over-the-counter (OTC - 2%) or prescription (4%) strength. HQ inhibits tyrosinase, an enzyme involved in the production of melanin.
Tretinoin, a retinoid, increases skin cell (keratinocyte) turnover. This treatment is not used during pregnancy due to risk of harm to the fetus.
Azelaic acid (20%) is thought to decrease the activity of melanocytes.
Tranexamic acid by mouth has been shown to provide rapid and sustained lightening in melasma by decreasing melanogenesis in epidermal melanocytes.
Cysteamine hydrochloride (5%), available over the counter; its mechanism of action seems to involve inhibition of the melanin synthesis pathway
Kojic acid (2%) OTC
Flutamide (1%)
Chemical peels
Microdermabrasion to dermabrasion (light to deep)
Galvanic or ultrasound facials with a combination topical crème/gel, either in an aesthetician's office or as a home massager unit
Laser treatment, but not intense pulsed light (which can make the melasma darker)
Effectiveness
Evidence-based reviews found that the most effective therapy for melasma includes a combination of topical agents. Triple-combination creams formulated with hydroquinone, tretinoin, and a steroid component have been shown to be more effective than dual-combination therapy or hydroquinone alone. More recently, a systematic review found that oral medications also have a role in melasma treatment and have been shown to be efficacious with a minimal number and severity of adverse events. Oral medications and dietary supplements employed in the treatment of melasma include tranexamic acid, Polypodium leucotomos extract, beta-carotenoid, melatonin, and procyanidin.

Oral procyanidin combined with vitamins A, C, and E shows promise as safe and effective for epidermal melasma. In an 8-week randomized, double-blind, placebo-controlled trial in 56 Filipino women, treatment was associated with significant improvements in the left and right malar regions, and was safe and well tolerated.

In all of these treatments, the effects are gradual and strict avoidance of sunlight is required. The use of broad-spectrum sunscreens with physical blockers, such as titanium dioxide and zinc oxide, is preferred, because UV-A, UV-B, and visible light are all capable of stimulating pigment production.
Many negative side effects can accompany these treatments, and results are often unsatisfying overall. Scarring, irritation, lighter patches of skin, and contact dermatitis are all commonly seen. Patients should avoid other precipitants, including hormonal triggers. Cosmetic camouflage can also be used to hide melasma.
See also
Linea nigra
List of cutaneous conditions
References
External links
DermNet colour/melasma |
Pneumocystosis | Pneumocystosis is a fungal infection that most often presents as Pneumocystis pneumonia in people with HIV/AIDS or poor immunity. It usually causes cough, difficulty breathing and fever, and can lead to respiratory failure. Involvement outside the lungs is rare but can occur as a disseminated type affecting lymph nodes, spleen, liver, bone marrow, eyes, kidneys, thyroid, gastrointestinal tract or other organs. If occurring in the skin, it usually presents as nodular growths in the ear canals or underarms.

It is caused by Pneumocystis jirovecii, a fungus which is usually breathed in and found in the lungs of healthy people without causing disease until the person's immune system becomes weakened.

Diagnosis is by identifying the organism from a sample of fluid from affected lungs or a biopsy. Prevention in high-risk people, and treatment in those affected, is usually with trimethoprim/sulfamethoxazole (co-trimoxazole).

The prevalence is unknown. Less than 3% of cases do not involve the lungs. The first cases of pneumocystosis affecting lungs were described in premature infants in Europe following the Second World War.
Signs and symptoms
Pneumocystosis is generally an infection in the lungs. Involvement outside the lungs is rare but can occur as a disseminated type affecting lymph nodes, bone marrow, liver or spleen. It may also affect skin, eyes, kidneys, thyroid, heart, adrenals and the gastrointestinal tract.
Lungs
When the lungs are affected there is usually a dry cough, difficulty breathing and fever, usually present for longer than four weeks. There may be chest pain, shivering or tiredness. The oxygen saturation is low. The lungs may fail to function.
Eyes
Pneumocystosis in the eyes may appear as a single plaque or as multiple (up to 50) yellow-white plaques in the eye's choroid layer or just beneath the retina. Vision is usually not affected, and the condition is typically found by chance.
Skin
If occurring in the skin, pneumocystosis most often presents as nodular growths in the ear canals of a person with HIV/AIDS. There may be fluid in the ear. Skin involvement may appear outside the ear, usually on the palms, soles or underarms, as a rash or as small, centrally dimpled bumps. It can occur on the face as brownish bumps and plaques. The bumps may be tender and may ulcerate. Infection in the ear may result in a perforated ear drum or destruction of the mastoid bone. The nerves in the head may be affected.
Cause
Pneumocystosis is caused by Pneumocystis jirovecii, a fungus which is generally found in the lungs of healthy people without causing disease until the person's immune system becomes weakened.
Risk factors
Pneumocystosis occurs predominantly in people with HIV/AIDS. Other risk factors include chronic lung disease, cancer, autoimmune diseases, organ transplant, or taking corticosteroids.
Diagnosis
Diagnosis of Pneumocystis pneumonia is by identifying the organism from a sample of sputum, fluid from affected lungs or a biopsy. A chest X-ray of affected lungs shows widespread shadowing in both lungs, with a "bat-wing" pattern and ground-glass appearance. Giemsa or silver stains can be used to identify the organism, as can direct immunofluorescence of infected cells.

Diagnosis in the eye involves fundoscopy. A biopsy of the retina and choroid layer may be performed. In an affected liver, biopsy shows focal areas of necrosis and sinusoidal widening. H&E staining shows extracellular frothy pink material. Typical cysts with a solid dark dot can be seen using a Grocott silver stain.
Differential diagnosis
Pneumocystosis may appear similar to pulmonary embolism or adult respiratory distress syndrome. Other infections can present similarly such as tuberculosis, Legionella, and severe flu.
Prevention
There is no vaccine that prevents pneumocystosis. Trimethoprim/sulfamethoxazole (co-trimoxazole) might be prescribed for people at high risk.
Treatment
Treatment is usually with co-trimoxazole. Other options include pentamidine, dapsone and atovaquone.
Outcomes
It is fatal in 10-20% of people with HIV/AIDS. Pneumocystosis in people without HIV/AIDS is frequently diagnosed late, and the death rate is therefore higher, at 30-50%.
Epidemiology
The exact number of people in the world affected is not known. Pneumocystosis affects lungs in around 97% of cases and is often fatal without treatment.
History
The first cases of pneumocystosis affecting lungs were described in premature infants in Europe following the Second World War; it was then known as plasma cellular interstitial pneumonitis of the newborn. Pneumocystis jirovecii (previously called Pneumocystis carinii) is named for Otto Jírovec, who first described it in 1952.
References |
Bladder exstrophy | Bladder exstrophy is a congenital anomaly that exists along the spectrum of the exstrophy-epispadias complex, and most notably involves protrusion of the urinary bladder through a defect in the abdominal wall. Its presentation is variable, often including abnormalities of the bony pelvis, pelvic floor, and genitalia. The underlying embryologic mechanism leading to bladder exstrophy is unknown, though it is thought to be in part due to failed reinforcement of the cloacal membrane by underlying mesoderm.
Exstrophy means the inversion of a hollow organ.
Signs and symptoms
The classic manifestation of bladder exstrophy presents with:
A defect in the abdominal wall occupied by both the exstrophied bladder and a portion of the urethra
A flattened puborectal sling
Separation of the pubic symphysis
Shortening of the pubic rami
External rotation of the pelvis.

Females frequently have a displaced and narrowed vaginal orifice, a bifid clitoris, and divergent labia.
Cause
The cause is not yet clinically established but is thought to be in part due to failed reinforcement of the cloacal membrane by underlying mesoderm.
Diagnosis
In a small retrospective study of 25 pregnancies five factors were found to be strongly associated with a prenatal diagnosis of bladder exstrophy:
Inability to visualize the bladder on ultrasound
A lower abdominal bulge
A small penis with anteriorly displaced scrotum
A low set umbilical insertion
Abnormal widening of the iliac crests

While a diagnosis of bladder exstrophy was made retrospectively in a majority of pregnancies, in only three cases was a prenatal diagnosis made.
Management
The extreme rarity of the disease limits the surgical opportunities to practice the complex closure required in these patients. For this reason, patients have the best outcomes when the bladder closures are performed at high-volume centers where surgical and nursing teams have extensive experience in caring for the disease. The highest-volume center in the United States, and the world, is the Johns Hopkins Hospital in Baltimore, Maryland; they have seen over 1300 exstrophy patients in the past 50 years.

Upon delivery, the exposed bladder is irrigated and a non-adherent film is placed to prevent as much contact with the external environment as possible. In the event the child was not born at a medical center with an appropriate exstrophy support team, transfer will likely follow. Upon transfer, or for those infants born at a medical center able to care for bladder exstrophy, imaging may take place in the first few hours of life prior to the child undergoing surgery.

Primary (immediate) closure is indicated only in those patients with a bladder of appropriate size, elasticity, and contractility, as those patients are most likely to develop a bladder of adequate capacity after early surgical intervention.
Surgery
Modern therapy is aimed at surgical reconstruction of the bladder and genitalia. Both males and females are born with this anomaly. Treatment is similar.
In males treatments have been:
In the modern staged repair of exstrophy (MSRE) the initial step is closure of the abdominal wall, often requiring a pelvic osteotomy. This leaves the patient with penile epispadias and urinary incontinence. At approximately 2–3 years of age, the patient then undergoes repair of the epispadias after testosterone stimulation. Finally, bladder neck repair usually occurs around the age of 4–5 years, though this is dependent upon a bladder with adequate capacity and, most importantly, an indication that the child is interested in becoming continent. In some of the bladder reconstructions, the bladder is augmented with the addition of a segment of the large intestines to increase the volume capacity of the reconstructed bladder.
In the complete primary repair of exstrophy (CPRE) the bladder closure is combined with an epispadias repair, in an effort to decrease costs and morbidity. This technique has, however, led to significant loss of penile and corporal tissue, particularly in younger patients.

In females, treatment has included:
Surgical reconstruction of the clitoris, which is separated into two distinct bodies. Surgical reconstruction to correct the split of the mons, and to redefine the structure of the bladder neck and urethra. Vaginoplasty will correct the anteriorly displaced vagina. If the anus is involved, it is also repaired. Fertility is preserved, and women who were born with bladder exstrophy usually develop prolapse due to the weaker muscles of the pelvic floor.
Prognosis
The most important criterion for improving long-term prognosis is success of the initial closure. If a patient requires more than one closure, their chance of continence drops off precipitously with each additional closure - at just two closures, the chance of voiding continence is just 17%.

Even with successful surgery, people may have long-term complications. Some of the most common include:
Vesicoureteral reflux
Bladder spasm
Bladder calculus
Urinary tract infections
Epidemiology
Occurring at a rate between 1 in 10,000 and 1 in 50,000, with a male-to-female ratio of 2.3-6:1, bladder exstrophy is relatively rare. For those individuals with bladder exstrophy who maintain their ability to reproduce, the risk of bladder exstrophy in their children is approximately 500-fold greater than that of the general population.
References
External links |
Myocardial infarction | A myocardial infarction (MI), commonly known as a heart attack, occurs when blood flow decreases or stops to the coronary artery of the heart, causing damage to the heart muscle. The most common symptom is chest pain or discomfort which may travel into the shoulder, arm, back, neck or jaw. Often it occurs in the center or left side of the chest and lasts for more than a few minutes. The discomfort may occasionally feel like heartburn. Other symptoms may include shortness of breath, nausea, feeling faint, a cold sweat or feeling tired. About 30% of people have atypical symptoms. Women more often present without chest pain and instead have neck pain, arm pain or feel tired. Among those over 75 years old, about 5% have had an MI with little or no history of symptoms. An MI may cause heart failure, an irregular heartbeat, cardiogenic shock or cardiac arrest.

Most MIs occur due to coronary artery disease. Risk factors include high blood pressure, smoking, diabetes, lack of exercise, obesity, high blood cholesterol, poor diet and excessive alcohol intake. The complete blockage of a coronary artery caused by a rupture of an atherosclerotic plaque is usually the underlying mechanism of an MI. MIs are less commonly caused by coronary artery spasms, which may be due to cocaine, significant emotional stress (commonly known as Takotsubo syndrome or broken heart syndrome) and extreme cold, among others. A number of tests are useful to help with diagnosis, including electrocardiograms (ECGs), blood tests and coronary angiography. An ECG, which is a recording of the heart's electrical activity, may confirm an ST elevation MI (STEMI) if ST elevation is present. Commonly used blood tests include troponin and less often creatine kinase MB.

Treatment of an MI is time-critical. Aspirin is an appropriate immediate treatment for a suspected MI. Nitroglycerin or opioids may be used to help with chest pain; however, they do not improve overall outcomes. Supplemental oxygen is recommended in those with low oxygen levels or shortness of breath. In a STEMI, treatments attempt to restore blood flow to the heart and include percutaneous coronary intervention (PCI), where the arteries are pushed open and may be stented, or thrombolysis, where the blockage is removed using medications. People who have a non-ST elevation myocardial infarction (NSTEMI) are often managed with the blood thinner heparin, with the additional use of PCI in those at high risk. In people with blockages of multiple coronary arteries and diabetes, coronary artery bypass surgery (CABG) may be recommended rather than angioplasty. After an MI, lifestyle modifications, along with long-term treatment with aspirin, beta blockers and statins, are typically recommended.

Worldwide, about 15.9 million myocardial infarctions occurred in 2015. More than 3 million people had an ST elevation MI, and more than 4 million had an NSTEMI. STEMIs occur about twice as often in men as women. About one million people have an MI each year in the United States. In the developed world, the risk of death in those who have had a STEMI is about 10%. Rates of MI for a given age have decreased globally between 1990 and 2010. In 2011, an MI was one of the top five most expensive conditions during inpatient hospitalizations in the US, with a cost of about $11.5 billion for 612,000 hospital stays.
Terminology
Myocardial infarction (MI) refers to tissue death (infarction) of the heart muscle (myocardium) caused by ischaemia, the lack of oxygen delivery to myocardial tissue. It is a type of acute coronary syndrome, which describes a sudden or short-term change in symptoms related to blood flow to the heart. Unlike the other type of acute coronary syndrome, unstable angina, a myocardial infarction occurs when there is cell death, which can be detected by a blood test measuring biomarkers (the cardiac protein troponin). When there is evidence of an MI, it may be classified as an ST elevation myocardial infarction (STEMI) or non-ST elevation myocardial infarction (NSTEMI) based on the results of an ECG.
The phrase "heart attack" is often used non-specifically to refer to myocardial infarction. An MI is different from—but can cause—cardiac arrest, where the heart is not contracting at all, or contracting so poorly that all vital organs cease to function, which may lead to death. It is also distinct from heart failure, in which the pumping action of the heart is impaired. However, an MI may lead to heart failure.
Signs and symptoms
Chest pain that may or may not radiate to other parts of the body is the most typical and significant symptom of myocardial infarction. It might be accompanied by other symptoms such as sweating.
Pain
Chest pain is one of the most common symptoms of acute myocardial infarction and is often described as a sensation of tightness, pressure, or squeezing. Pain radiates most often to the left arm, but may also radiate to the lower jaw, neck, right arm, back, and upper abdomen. The pain most suggestive of an acute MI, with the highest likelihood ratio, is pain radiating to the right arm and shoulder. Similarly, chest pain similar to a previous heart attack is also suggestive. The pain associated with MI is usually diffuse, does not change with position, and lasts for more than 20 minutes. It might be described as a pressure, tightness, knifelike, tearing, or burning sensation (all of which can also occur in other diseases). The pain may be felt as an unexplained anxiety, or may be absent altogether. Levine's sign, in which a person localizes the chest pain by clenching one or both fists over their sternum, has classically been thought to be predictive of cardiac chest pain, although a prospective observational study showed it had a poor positive predictive value.
Typically, chest pain because of ischemia, be it unstable angina or myocardial infarction, lessens with the use of nitroglycerin, but nitroglycerin may also relieve chest pain arising from non-cardiac causes.
Other
Chest pain may be accompanied by sweating, nausea or vomiting, and fainting, and these symptoms may also occur without any pain at all. In women, the most common symptoms of myocardial infarction include shortness of breath, weakness, and fatigue. Women are more likely to have unusual or unexplained tiredness and nausea or vomiting as symptoms. Women having heart attacks are more likely to have palpitations, back pain, labored breathing, vomiting, and left arm pain than men, although the studies showing these differences had high variability. Women are less likely to report chest pain during a heart attack and more likely to report nausea, jaw pain, neck pain, cough, and fatigue, although these findings are inconsistent across studies. Women with heart attacks also had more indigestion, dizziness, loss of appetite, and loss of consciousness. Shortness of breath is a common, and sometimes the only, symptom, occurring when damage to the heart limits the output of the left ventricle, with breathlessness arising either from low oxygen in the blood or pulmonary edema. Other less common symptoms include weakness, light-headedness, palpitations, and abnormalities in heart rate or blood pressure. These symptoms are likely induced by a massive surge of catecholamines from the sympathetic nervous system, which occurs in response to pain and, where present, low blood pressure. Loss of consciousness due to inadequate blood flow to the brain and cardiogenic shock, and sudden death, frequently due to the development of ventricular fibrillation, can occur in myocardial infarctions. Cardiac arrest, and atypical symptoms such as palpitations, occur more frequently in women, the elderly, those with diabetes, people who have just had surgery, and critically ill patients.
Absence
"Silent" myocardial infarctions can happen without any symptoms at all. These cases can be discovered later on electrocardiograms, using blood enzyme tests, or at autopsy after a person has died. Such silent myocardial infarctions represent between 22 and 64% of all infarctions, and are more common in the elderly, in those with diabetes mellitus and after heart transplantation. In people with diabetes, differences in pain threshold, autonomic neuropathy, and psychological factors have been cited as possible explanations for the lack of symptoms. In heart transplantation, the donor heart is not fully innervated by the nervous system of the recipient.
Risk factors
The most prominent risk factors for myocardial infarction are older age, actively smoking, high blood pressure, diabetes mellitus, and total cholesterol and high-density lipoprotein levels. Many risk factors of myocardial infarction are shared with coronary artery disease, the primary cause of myocardial infarction, with other risk factors including male sex, low levels of physical activity, a past family history, obesity, and alcohol use. Risk factors for myocardial infarction are often included in risk factor stratification scores, such as the Framingham Risk Score. At any given age, men are more at risk than women for the development of cardiovascular disease. High levels of blood cholesterol are a known risk factor, particularly high low-density lipoprotein, low high-density lipoprotein, and high triglycerides.
Many risk factors for myocardial infarction are potentially modifiable, with the most important being tobacco smoking (including secondhand smoke). Smoking appears to be the cause of about 36% of cases of coronary artery disease, and obesity the cause of about 20%. Lack of physical activity has been linked to 7–12% of cases. Less common causes include stress-related causes such as job stress, which accounts for about 3% of cases, and chronic high stress levels.
Diet
There is varying evidence about the importance of saturated fat in the development of myocardial infarctions. Eating polyunsaturated fat instead of saturated fats has been shown in studies to be associated with a decreased risk of myocardial infarction, while other studies find little evidence that reducing dietary saturated fat or increasing polyunsaturated fat intake affects heart attack risk. Dietary cholesterol does not appear to have a significant effect on blood cholesterol and thus recommendations about its consumption may not be needed. Trans fats do appear to increase risk. Acute and prolonged intake of high quantities of alcoholic drinks (3–4 or more daily) increases the risk of a heart attack.
Genetics
Family history of ischemic heart disease or MI, particularly if one has a male first-degree relative (father, brother) who had a myocardial infarction before age 55 years, or a female first-degree relative (mother, sister) before age 65, increases a person's risk of MI.
Genome-wide association studies have found 27 genetic variants that are associated with an increased risk of myocardial infarction. The strongest association has been found with the 9p21 locus on the short arm of chromosome 9, which contains the genes CDKN2A and CDKN2B, although the single nucleotide polymorphisms that are implicated are within a non-coding region. The majority of these variants are in regions that have not been previously implicated in coronary artery disease. The following genes have an association with MI: PCSK9, SORT1, MIA3, WDR12, MRAS, PHACTR1, LPA, TCF21, MTHFDSL, ZC3HC1, CDKN2A, 2B, ABO, PDGF0, APOA5, MNF1ASM283, COL4A1, HHIPC1, SMAD3, ADAMTS7, RAS1, SMG6, SNF8, LDLR, SLC5A3, MRPS6, KCNE2.
Other
The risk of having a myocardial infarction increases with older age, low physical activity, and low socioeconomic status. Heart attacks appear to occur more commonly in the morning hours, especially between 6 AM and noon. Evidence suggests that heart attacks are at least three times more likely to occur in the morning than in the late evening. Shift work is also associated with a higher risk of MI, and one analysis found an increase in heart attacks immediately following the start of daylight saving time.
Women who use combined oral contraceptive pills have a modestly increased risk of myocardial infarction, especially in the presence of other risk factors. The use of non-steroidal anti-inflammatory drugs (NSAIDs), even for as short a period as a week, increases risk. Endometriosis in women under the age of 40 is an identified risk factor.
Air pollution is also an important modifiable risk. Short-term exposure to air pollutants such as carbon monoxide, nitrogen dioxide, and sulfur dioxide (but not ozone) has been associated with MI and other acute cardiovascular events. For sudden cardiac deaths, every increment of 30 units in the Pollutant Standards Index correlated with an 8% increased risk of out-of-hospital cardiac arrest on the day of exposure. Extremes of temperature are also associated.
A number of acute and chronic infections, including Chlamydophila pneumoniae, influenza, Helicobacter pylori, and Porphyromonas gingivalis among others, have been linked to atherosclerosis and myocardial infarction. As of 2013, however, there is no evidence of benefit from antibiotics or vaccination, calling the association into question. Myocardial infarction can also occur as a late consequence of Kawasaki disease.
Calcium deposits in the coronary arteries can be detected with CT scans. Calcium seen in coronary arteries can provide predictive information beyond that of classical risk factors. High blood levels of the amino acid homocysteine are associated with premature atherosclerosis; whether elevated homocysteine in the normal range is causal is controversial.
In people without evident coronary artery disease, possible causes of myocardial infarction are coronary spasm or coronary artery dissection.
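The 8%-per-30-unit figure above can be made concrete with a short worked calculation. The sketch below is purely illustrative and rests on an assumption not stated in the source: that the reported risk increase compounds multiplicatively across successive 30-unit increments.

```python
def relative_risk_psi(delta_psi: float, rr_per_30_units: float = 1.08) -> float:
    """Rough relative risk of out-of-hospital cardiac arrest for a given
    rise in the Pollutant Standards Index (PSI). Assumes (illustrative
    assumption only) that the reported 8% increase per 30-unit increment
    compounds multiplicatively."""
    return rr_per_30_units ** (delta_psi / 30.0)

# Example: a 60-unit rise in PSI -> 1.08**2 ~= 1.166, i.e. ~16.6% higher risk
print(f"{(relative_risk_psi(60) - 1) * 100:.1f}% increased risk")
```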
Mechanism
Atherosclerosis
The most common cause of a myocardial infarction is the rupture of an atherosclerotic plaque on an artery supplying heart muscle. Plaques can become unstable, rupture, and additionally promote the formation of a blood clot that blocks the artery; this can occur in minutes. Blockage of an artery can lead to tissue death in the tissue supplied by that artery. Atherosclerotic plaques are often present for decades before they result in symptoms.
The gradual buildup of cholesterol and fibrous tissue in plaques in the wall of the coronary arteries or other arteries, typically over decades, is termed atherosclerosis. Atherosclerosis is characterized by progressive inflammation of the walls of the arteries. Inflammatory cells, particularly macrophages, move into affected arterial walls. Over time, they become laden with cholesterol products, particularly LDL, and become foam cells. A cholesterol core forms as foam cells die. In response to growth factors secreted by macrophages, smooth muscle and other cells move into the plaque and act to stabilize it. A stable plaque may have a thick fibrous cap with calcification. If there is ongoing inflammation, the cap may be thin or ulcerate. Exposed to the pressure associated with blood flow, plaques, especially those with a thin lining, may rupture and trigger the formation of a blood clot (thrombus). Cholesterol crystals have been associated with plaque rupture through mechanical injury and inflammation.
Other causes
Atherosclerotic disease is not the only cause of myocardial infarction, but it may exacerbate or contribute to other causes. A myocardial infarction may result from a heart with a limited blood supply being subjected to increased oxygen demands, such as in fever, a fast heart rate, hyperthyroidism, too few red blood cells in the bloodstream, or low blood pressure. Damage to, or failure of, procedures such as percutaneous coronary intervention or coronary artery bypass grafting may cause a myocardial infarction. Spasm of coronary arteries, such as in Prinzmetal's angina, may cause blockage.
Tissue death
If impaired blood flow to the heart lasts long enough, it triggers a process called the ischemic cascade; the heart cells in the territory of the blocked coronary artery die (infarction), chiefly through necrosis, and do not grow back. A collagen scar forms in their place. When an artery is blocked, cells lack the oxygen needed to produce ATP in mitochondria. ATP is required for the maintenance of electrolyte balance, particularly through the Na/K ATPase. This leads to an ischemic cascade of intracellular changes, necrosis, and apoptosis of affected cells.
Cells in the area with the worst blood supply, just below the inner surface of the heart (endocardium), are most susceptible to damage. Ischemia first affects this region, the subendocardial region, and tissue begins to die within 15–30 minutes of loss of blood supply. The dead tissue is surrounded by a zone of potentially reversible ischemia that progresses to become a full-thickness transmural infarct. The initial "wave" of infarction can take place over 3–4 hours. These changes are seen on gross pathology and cannot be predicted by the presence or absence of Q waves on an ECG. The position, size and extent of an infarct depend on the affected artery, totality of the blockage, duration of the blockage, the presence of collateral blood vessels, oxygen demand, and success of interventional procedures.
Tissue death and myocardial scarring alter the normal conduction pathways of the heart and weaken affected areas. The size and location put a person at risk of abnormal heart rhythms (arrhythmias) or heart block, aneurysm of the heart ventricles, inflammation of the heart wall following infarction, and rupture of the heart wall, which can have catastrophic consequences.
Injury to the myocardium also occurs during reperfusion. This might manifest as ventricular arrhythmia. Reperfusion injury is a consequence of calcium and sodium uptake by the cardiac cells and the release of oxygen radicals during reperfusion. The no-reflow phenomenon—when blood is still unable to be distributed to the affected myocardium despite clearing of the occlusion—also contributes to myocardial injury. Topical endothelial swelling is one of many factors contributing to this phenomenon.
Diagnosis
Criteria
A myocardial infarction, according to current consensus, is defined by elevated cardiac biomarkers with a rising or falling trend and at least one of the following (a minimal sketch of this rule follows the list):
Symptoms relating to ischemia
Changes on an electrocardiogram (ECG), such as ST segment changes, new left bundle branch block, or pathologic Q waves
Changes in the motion of the heart wall on imaging
Demonstration of a thrombus on angiogram or at autopsy.
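The consensus definition above is a conjunction: the biomarker criterion must hold together with at least one supporting finding. A minimal sketch of that rule follows; the function and parameter names are invented for illustration and are not part of any clinical standard.

```python
def meets_mi_definition(
    biomarker_rise_or_fall: bool,   # cardiac biomarkers with a rising/falling trend
    ischemic_symptoms: bool,        # symptoms relating to ischemia
    ecg_changes: bool,              # ST changes, new LBBB, or pathologic Q waves
    wall_motion_abnormality: bool,  # changes in heart wall motion on imaging
    thrombus_demonstrated: bool,    # thrombus on angiogram or at autopsy
) -> bool:
    """Hedged sketch of the consensus MI definition: elevated cardiac
    biomarkers with a rising or falling trend AND at least one supporting
    finding. Not a substitute for clinical judgement."""
    supporting = (ischemic_symptoms or ecg_changes
                  or wall_motion_abnormality or thrombus_demonstrated)
    return biomarker_rise_or_fall and supporting
```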
Types
A myocardial infarction is usually clinically classified as an ST-elevation MI (STEMI) or a non-ST elevation MI (NSTEMI). These are based on ST elevation, a portion of a heartbeat graphically recorded on an ECG. STEMIs make up about 25–40% of myocardial infarctions. A more explicit classification system, based on international consensus in 2012, also exists. This classifies myocardial infarctions into five types:
Spontaneous MI related to plaque erosion and/or rupture, fissuring, or dissection
MI related to ischemia, such as from increased oxygen demand or decreased supply, e.g. coronary artery spasm, coronary embolism, anemia, arrhythmias, high blood pressure, or low blood pressure
Sudden unexpected cardiac death, including cardiac arrest, where symptoms may suggest MI, an ECG may be taken with suggestive changes, or a blood clot is found in a coronary artery by angiography and/or at autopsy, but where blood samples could not be obtained, or at a time before the appearance of cardiac biomarkers in the blood
Associated with coronary angioplasty or stents:
  Associated with percutaneous coronary intervention (PCI)
  Associated with stent thrombosis as documented by angiography or at autopsy
Associated with CABG
Associated with spontaneous coronary artery dissection in young, fit women
Cardiac biomarkers
There are many different biomarkers used to determine the presence of cardiac muscle damage. Troponins, measured through a blood test, are considered to be the best, and are preferred because they have greater sensitivity and specificity for measuring injury to the heart muscle than other tests. A rise in troponin occurs within 2–3 hours of injury to the heart muscle, and peaks within 1–2 days. The level of the troponin, as well as its change over time, is useful in measuring and diagnosing or excluding myocardial infarctions, and the diagnostic accuracy of troponin testing is improving over time. A single high-sensitivity cardiac troponin measurement can rule out a heart attack as long as the ECG is normal.
Other tests, such as CK-MB or myoglobin, are discouraged. CK-MB is not as specific as troponins for acute myocardial injury, and may be elevated with past cardiac surgery, inflammation or electrical cardioversion; it rises within 4–8 hours and returns to normal within 2–3 days. Copeptin may be useful to rule out MI rapidly when used along with troponin.
Electrocardiogram
Electrocardiograms (ECGs) record the electrical activity associated with contraction of the heart muscle, via a series of leads placed on a person's chest. The taking of an ECG is an important part of the workup of an AMI, and ECGs are often not just taken once but may be repeated over minutes to hours, or in response to changes in signs or symptoms.
ECG readouts produce a waveform with different labelled features. In addition to a rise in biomarkers, a rise in the ST segment, changes in the shape or flipping of T waves, new Q waves, or a new left bundle branch block can be used to diagnose an AMI. In addition, ST elevation can be used to diagnose an ST segment myocardial infarction (STEMI). To qualify, the elevation must be new: ≥2 mm (0.2 mV) in leads V2 and V3 for males, ≥1.5 mm (0.15 mV) in those leads for females, or ≥1 mm (0.1 mV) in two other adjacent chest or limb leads. ST elevation is associated with infarction, and may be preceded by changes indicating ischemia, such as ST depression or inversion of the T waves. Abnormalities can help differentiate the location of an infarct, based on the leads that are affected by changes. Early STEMIs may be preceded by peaked T waves. Other ECG abnormalities relating to complications of acute myocardial infarctions may also be evident, such as atrial or ventricular fibrillation.
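The lead- and sex-specific cutoffs quoted above amount to a simple threshold rule. The sketch below is a simplified, hypothetical illustration: it evaluates a single lead only, whereas an actual STEMI diagnosis requires new elevation in two contiguous leads plus clinical context, and age-specific cutoffs are omitted.

```python
def st_elevation_meets_cutoff(lead: str, elevation_mm: float, sex: str) -> bool:
    """Simplified check of the ST-elevation cutoffs described above:
    >=2 mm (males) or >=1.5 mm (females) in V2/V3, otherwise >=1 mm.
    Evaluates one lead only; real criteria require two adjacent leads
    and clinical correlation. Illustrative sketch, not a clinical tool."""
    if lead in ("V2", "V3"):
        cutoff = 2.0 if sex == "male" else 1.5
    else:
        cutoff = 1.0
    return elevation_mm >= cutoff

print(st_elevation_meets_cutoff("V2", 1.6, "female"))  # True  (>= 1.5 mm)
print(st_elevation_meets_cutoff("V2", 1.6, "male"))    # False (< 2 mm)
```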
Imaging
Noninvasive imaging plays an important role in the diagnosis and characterisation of myocardial infarction. Tests such as chest X-rays can be used to explore and exclude alternative causes of a person's symptoms. Echocardiography may assist in modifying clinical suspicion of ongoing myocardial infarction in patients who cannot be ruled out or ruled in following initial ECG and troponin testing. Myocardial perfusion imaging has no role in the acute diagnostic algorithm; however, it can confirm a clinical suspicion of chronic coronary syndrome when the patient's history, physical examination (including cardiac examination), ECG, and cardiac biomarkers suggest coronary artery disease.
Echocardiography, an ultrasound scan of the heart, is able to visualize the heart, its size, shape, and any abnormal motion of the heart walls as they beat that may indicate a myocardial infarction. The flow of blood can be imaged, and contrast dyes may be given to improve the image. Other scans using radioactive contrast include SPECT CT-scans using thallium, sestamibi (MIBI scans) or tetrofosmin, or a PET scan using fludeoxyglucose or rubidium-82. These nuclear medicine scans can visualize the perfusion of heart muscle. SPECT may also be used to determine viability of tissue, and whether areas of ischemia are inducible.
Medical societies and professional guidelines recommend that the physician confirm a person is at high risk for chronic coronary syndrome before conducting non-invasive imaging tests to make a diagnosis, as such tests are unlikely to change management and add to costs. Patients who have a normal ECG and who are able to exercise, for example, most likely do not merit routine imaging.
Differential diagnosis
There are many causes of chest pain, which can originate from the heart, lungs, gastrointestinal tract, aorta, and other muscles, bones and nerves surrounding the chest. In addition to myocardial infarction, other causes include angina, insufficient blood supply (ischemia) to the heart muscles without evidence of cell death, gastroesophageal reflux disease, pulmonary embolism, tumors of the lungs, pneumonia, rib fracture, costochondritis, heart failure and other musculoskeletal injuries. Rarer severe differential diagnoses include aortic dissection, esophageal rupture, tension pneumothorax, and pericardial effusion causing cardiac tamponade. The chest pain in an MI may mimic heartburn. Causes of sudden-onset breathlessness generally involve the lungs or heart, including pulmonary edema, pneumonia, allergic reactions and asthma, pulmonary embolus, acute respiratory distress syndrome and metabolic acidosis. There are many different causes of fatigue, and myocardial infarction is not a common cause.
Prevention
There is a large crossover between the lifestyle and activity recommendations to prevent a myocardial infarction and those that may be adopted as secondary prevention after an initial myocardial infarction, because of shared risk factors and an aim to reduce atherosclerosis affecting heart vessels. The influenza vaccine also appears to protect against myocardial infarction, with an estimated benefit of 15 to 45%.
Primary prevention
Lifestyle
Physical activity can reduce the risk of cardiovascular disease, and people at risk are advised to engage in 150 minutes of moderate or 75 minutes of vigorous-intensity aerobic exercise a week. Keeping a healthy weight, drinking alcohol within the recommended limits, and quitting smoking reduce the risk of cardiovascular disease.
Substituting unsaturated fats such as olive oil and rapeseed oil for saturated fats may reduce the risk of myocardial infarction, although there is not universal agreement. Dietary modifications are recommended by some national authorities, with recommendations including increasing the intake of wholegrain starch, reducing sugar intake (particularly of refined sugar), consuming five portions of fruit and vegetables daily, consuming two or more portions of fish per week, and consuming 4–5 portions of unsalted nuts, seeds, or legumes per week. The dietary pattern with the greatest support is the Mediterranean diet. Vitamin and mineral supplements are of no proven benefit, and neither are plant stanols or sterols.
Public health measures may also act at a population level to reduce the risk of myocardial infarction, for example by reducing unhealthy diets (excessive salt, saturated fat, and trans fat), including through food labeling and marketing requirements as well as requirements for catering and restaurants, and by stimulating physical activity. This may be part of regional cardiovascular disease prevention programs or through the health impact assessment of regional and local plans and policies.
Most guidelines recommend combining different preventive strategies. A 2015 Cochrane review found some evidence that such an approach might help with blood pressure, body mass index and waist circumference. However, there was insufficient evidence to show an effect on mortality or actual cardiovascular events.
Medication
Statins, drugs that act to lower blood cholesterol, decrease the incidence and mortality rates of myocardial infarctions. They are often recommended in those at an elevated risk of cardiovascular disease.
Aspirin has been studied extensively in people considered at increased risk of myocardial infarction. Based on numerous studies in different groups (e.g. people with or without diabetes), there does not appear to be a benefit strong enough to outweigh the risk of excessive bleeding. Nevertheless, many clinical practice guidelines continue to recommend aspirin for primary prevention, and some researchers feel that those with very high cardiovascular risk but low risk of bleeding should continue to receive aspirin.
Secondary prevention
There is a large crossover between the lifestyle and activity recommendations to prevent a myocardial infarction and those that may be adopted as secondary prevention after an initial myocardial infarct. Recommendations include stopping smoking, a gradual return to exercise, eating a healthy diet that is low in saturated fat and cholesterol, drinking alcohol within recommended limits, and trying to achieve a healthy weight. Exercise is both safe and effective, even in people who have had stents or heart failure, and is recommended to start gradually after 1–2 weeks. Counselling should be provided relating to the medications used and on the warning signs of depression. Previous studies suggested a benefit from omega-3 fatty acid supplementation, but this has not been confirmed.
Medications
Following a heart attack, nitrates, when taken for two days, and ACE inhibitors decrease the risk of death. Other medications include:
Aspirin is continued indefinitely, as well as another antiplatelet agent such as clopidogrel or ticagrelor ("dual antiplatelet therapy" or DAPT) for up to twelve months. If someone has another medical condition that requires anticoagulation (e.g. with warfarin), this may need to be adjusted based on the risk of further cardiac events as well as the bleeding risk. In those who have had a stent, more than 12 months of clopidogrel plus aspirin does not affect the risk of death.
Beta blocker therapy, such as metoprolol or carvedilol, is recommended to be started within 24 hours, provided there is no acute heart failure or heart block. The dose should be increased to the highest tolerated. Contrary to what was long believed, the use of beta blockers does not appear to affect the risk of death, possibly because other treatments for MI have improved. When beta blocker medication is given within the first 24–72 hours of a STEMI, no lives are saved. However, a repeat heart attack is prevented in 1 in 200 people, and an abnormal heart rhythm in another 1 in 200. Additionally, in 1 in 91 the medication causes a temporary decrease in the heart's ability to pump blood.
ACE inhibitor therapy should be started within 24 hours and continued indefinitely at the highest tolerated dose, provided there is no evidence of worsening kidney failure, high potassium, low blood pressure, or known narrowing of the renal arteries. Those who cannot tolerate ACE inhibitors may be treated with an angiotensin II receptor antagonist.
Statin therapy has been shown to reduce mortality and subsequent cardiac events and should be commenced to lower LDL cholesterol. Other medications, such as ezetimibe, may also be added with this goal in mind.
Aldosterone antagonists (spironolactone or eplerenone) may be used if there is evidence of left ventricular dysfunction after an MI, ideally after beginning treatment with an ACE inhibitor.
Other
A defibrillator, an electric device connected to the heart and surgically inserted under the skin, may be recommended, particularly if there are ongoing signs of heart failure with a low left ventricular ejection fraction and New York Heart Association class II or III symptoms 40 days after the infarction. Defibrillators detect potentially fatal arrhythmias and deliver an electrical shock to the person to depolarize a critical mass of the heart muscle.
Management
A myocardial infarction requires immediate medical attention. Treatment aims to preserve as much heart muscle as possible and to prevent further complications. Treatment depends on whether the myocardial infarction is a STEMI or NSTEMI. Treatment in general aims to unblock blood vessels, reduce blood clot enlargement, reduce ischemia, and modify risk factors with the aim of preventing future MIs. The main treatments for myocardial infarctions with ECG evidence of ST elevation (STEMI) are thrombolysis and percutaneous coronary intervention, although PCI is also ideally conducted within 1–3 days for NSTEMI. In addition to clinical judgement, risk stratification may be used to guide treatment, such as with the TIMI and GRACE scoring systems.
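Risk scores such as TIMI are additive checklists. As a sketch of the scoring pattern, the function below implements the TIMI risk score for UA/NSTEMI as it is commonly described (one point per factor, total 0–7); it is an illustration of the technique, not a clinical tool.

```python
def timi_ua_nstemi_score(age_ge_65: bool,
                         cad_risk_factors_ge_3: bool,
                         known_stenosis_ge_50pct: bool,
                         aspirin_in_past_7_days: bool,
                         severe_angina_ge_2_in_24h: bool,
                         st_deviation: bool,
                         elevated_cardiac_markers: bool) -> int:
    """TIMI risk score for UA/NSTEMI as commonly described: one point
    per risk factor present, giving a total of 0-7. Illustrative only."""
    factors = [age_ge_65, cad_risk_factors_ge_3, known_stenosis_ge_50pct,
               aspirin_in_past_7_days, severe_angina_ge_2_in_24h,
               st_deviation, elevated_cardiac_markers]
    return sum(factors)

# Example: a 70-year-old on aspirin, with ST deviation and raised troponin
print(timi_ua_nstemi_score(True, False, False, True, False, True, True))  # 4
```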
Pain
The pain associated with myocardial infarction is often treated with nitroglycerin, a vasodilator, or opioid medications such as morphine. Nitroglycerin (given under the tongue or injected into a vein) may improve blood supply to the heart. It is an important part of therapy for its pain relief effects, though there is no proven benefit to mortality. Morphine or other opioid medications may also be used, and are effective for the pain associated with STEMI. There is poor evidence that morphine shows any benefit to overall outcomes, and there is some evidence of potential harm.
Antithrombotics
Aspirin, an antiplatelet drug, is given as a loading dose to reduce the clot size and reduce further clotting in the affected artery. It is known to decrease mortality associated with acute myocardial infarction by at least 50%. P2Y12 inhibitors such as clopidogrel, prasugrel and ticagrelor are given concurrently, also as a loading dose, with the dose depending on whether further surgical management or fibrinolysis is planned. Prasugrel and ticagrelor are recommended in European and American guidelines, as they are active more quickly and consistently than clopidogrel. P2Y12 inhibitors are recommended in both NSTEMI and STEMI, including in PCI, with evidence also suggesting improved mortality. Heparins, particularly in the unfractionated form, act at several points in the clotting cascade, help to prevent the enlargement of a clot, and are also given in myocardial infarction, owing to evidence suggesting improved mortality rates. In very high-risk scenarios, inhibitors of the platelet glycoprotein IIb/IIIa (αIIbβ3) receptor, such as eptifibatide or tirofiban, may be used.
There is varying evidence on the mortality benefits in NSTEMI. A 2014 review of P2Y12 inhibitors such as clopidogrel found they do not change the risk of death when given to people with a suspected NSTEMI prior to PCI, nor do heparins change the risk of death. They do decrease the risk of having a further myocardial infarction.
Angiogram
Primary percutaneous coronary intervention (PCI) is the treatment of choice for STEMI if it can be performed in a timely manner, ideally within 90–120 minutes of contact with a medical provider. Some recommend it also be done in NSTEMI within 1–3 days, particularly when considered high-risk. A 2017 review, however, did not find a difference between early versus later PCI in NSTEMI.
PCI involves small probes, inserted through peripheral blood vessels such as the femoral artery or radial artery, into the blood vessels of the heart. The probes are then used to identify and clear blockages, using small balloons that are dragged through the blocked segment, dragging away the clot, or by the insertion of stents. Coronary artery bypass grafting is only considered when the affected area of heart muscle is large and PCI is unsuitable, for example with difficult cardiac anatomy. After PCI, people are generally placed on aspirin indefinitely and on dual antiplatelet therapy (generally aspirin and clopidogrel) for at least a year.
Fibrinolysis
If PCI cannot be performed within 90 to 120 minutes in STEMI, then fibrinolysis, preferably within 30 minutes of arrival to hospital, is recommended. If a person has had symptoms for 12 to 24 hours, the evidence for effectiveness of thrombolysis is less, and if they have had symptoms for more than 24 hours it is not recommended. Thrombolysis involves the administration of medication that activates the enzymes that normally dissolve blood clots. These medications include tissue plasminogen activator, reteplase, streptokinase, and tenecteplase. Thrombolysis is not recommended in a number of situations, particularly when associated with a high risk of bleeding or the potential for problematic bleeding, such as active bleeding, past strokes or bleeds into the brain, or severe hypertension. Situations in which thrombolysis may be considered, but with caution, include recent surgery, use of anticoagulants, pregnancy, and proclivity to bleeding. Major risks of thrombolysis are major bleeding and intracranial bleeding. Pre-hospital thrombolysis reduces time to thrombolytic treatment, based on studies conducted in higher-income countries; however, it is unclear whether this has an impact on mortality rates.
Other
In the past, high-flow oxygen was recommended for everyone with a possible myocardial infarction. More recently, no evidence was found for routine use in those with normal oxygen levels, and there is potential harm from the intervention. Therefore, oxygen is currently only recommended if oxygen levels are found to be low or if someone is in respiratory distress.
If despite thrombolysis there is significant cardiogenic shock, continued severe chest pain, or less than a 50% improvement in ST elevation on the ECG recording after 90 minutes, then rescue PCI is indicated emergently.
Those who have had cardiac arrest may benefit from targeted temperature management with evaluation for implementation of hypothermia protocols. Furthermore, those with cardiac arrest and ST elevation at any time should usually have angiography. Aldosterone antagonists appear to be useful in people who have had a STEMI and do not have heart failure.
Rehabilitation and exercise
Cardiac rehabilitation benefits many who have experienced myocardial infarction, even if there has been substantial heart damage and resultant left ventricular failure. It should start soon after discharge from the hospital. The program may include lifestyle advice, exercise, and social support, as well as recommendations about driving, flying, sports participation, stress management, and sexual intercourse. Returning to sexual activity after myocardial infarction is a major concern for most patients, and is an important area to be discussed in the provision of holistic care.
In the short term, exercise-based cardiovascular rehabilitation programs may reduce the risk of a myocardial infarction, reduce hospitalizations from all causes, reduce hospital costs, improve health-related quality of life, and have a small effect on all-cause mortality. Longer-term studies indicate that exercise-based cardiovascular rehabilitation programs may reduce cardiovascular mortality and myocardial infarction.
Prognosis
The prognosis after myocardial infarction varies greatly depending on the extent and location of the affected heart muscle and the development and management of complications. Prognosis is worse with older age and social isolation. Anterior infarcts, persistent ventricular tachycardia or fibrillation, development of heart blocks, and left ventricular impairment are all associated with poorer prognosis. Without treatment, about a quarter of those affected by MI die within minutes and about forty percent within the first month. Morbidity and mortality from myocardial infarction have, however, improved over the years due to earlier and better treatment: in those who have a STEMI in the United States, between 5 and 6 percent die before leaving the hospital and 7 to 18 percent die within a year.
It is unusual for babies to experience a myocardial infarction, but when they do, about half die. In the short term, neonatal survivors seem to have a normal quality of life.
Complications
Complications may occur immediately following the myocardial infarction or may take time to develop. Disturbances of heart rhythms, including atrial fibrillation, ventricular tachycardia and fibrillation, and heart block, can arise as a result of ischemia, cardiac scarring, and infarct location. Stroke is also a risk, either as a result of clots transmitted from the heart during PCI, as a result of bleeding following anticoagulation, or as a result of disturbances in the heart's ability to pump effectively as a result of the infarction. Regurgitation of blood through the mitral valve is possible, particularly if the infarction causes dysfunction of the papillary muscle. Cardiogenic shock, as a result of the heart being unable to adequately pump blood, may develop, depending on infarct size, and is most likely to occur within the days following an acute myocardial infarction. Cardiogenic shock is the largest cause of in-hospital mortality. Rupture of the ventricular dividing wall or left ventricular wall may occur within the initial weeks. Dressler's syndrome, a reaction following larger infarcts and a cause of pericarditis, is also possible.
Heart failure may develop as a long-term consequence, with an impaired ability of heart muscle to pump, scarring, and an increase in the size of the existing muscle. Aneurysm of the left ventricular myocardium develops in about 10% of MIs and is itself a risk factor for heart failure, ventricular arrhythmia, and the development of clots.
Risk factors for complications and death include age, hemodynamic parameters (such as heart failure, cardiac arrest on admission, systolic blood pressure, or Killip class of two or greater), ST-segment deviation, diabetes, serum creatinine, peripheral vascular disease, and elevation of cardiac markers.
Epidemiology
Myocardial infarction is a common presentation of coronary artery disease. The World Health Organization estimated in 2004 that 12.2% of worldwide deaths were from ischemic heart disease, it being the leading cause of death in high- or middle-income countries and second only to lower respiratory infections in lower-income countries. Worldwide, more than 3 million people have STEMIs and 4 million have NSTEMIs a year. STEMIs occur about twice as often in men as women.
Rates of death from ischemic heart disease (IHD) have slowed or declined in most high-income countries, although cardiovascular disease still accounted for one in three of all deaths in the US in 2008. For example, rates of death from cardiovascular disease decreased by almost a third between 2001 and 2011 in the United States.
In contrast, IHD is becoming a more common cause of death in the developing world. For example, in India, IHD had become the leading cause of death by 2004, accounting for 1.46 million deaths (14% of total deaths), and deaths due to IHD were expected to double during 1985–2015. Globally, disability-adjusted life years (DALYs) lost to ischemic heart disease are predicted to account for 5.5% of total DALYs in 2030, making it the second-most-important cause of disability (after unipolar depressive disorder), as well as the leading cause of death by this date.
Social determinants of health
Social determinants such as neighborhood disadvantage, immigration status, lack of social support, social isolation, and access to health services play an important role in myocardial infarction risk and survival. Studies have shown that low socioeconomic status is associated with an increased risk of poorer survival. There are well-documented disparities in myocardial infarction survival by socioeconomic status, race, education, and census-tract-level poverty.
Race: In the U.S., African Americans carry a greater burden of myocardial infarction and other cardiovascular events. On a population level, there is a higher overall prevalence of risk factors that are unrecognized and therefore not treated, which places these individuals at a greater likelihood of experiencing adverse outcomes and therefore potentially higher morbidity and mortality.
Socioeconomic status: Among individuals who live in low-socioeconomic-status (SES) areas, home to close to 25% of the US population, myocardial infarctions occurred twice as often as among people who lived in higher-SES areas.
Immigration status: As of 2018, many lawfully present immigrants who are eligible for coverage remained uninsured because immigrant families face a range of enrollment barriers, including fear, confusion about eligibility policies, difficulty navigating the enrollment process, and language and literacy challenges. Uninsured undocumented immigrants are ineligible for coverage options due to their immigration status.
Health care access: Lack of health insurance and financial concerns about accessing care were associated with delays in seeking emergency care for acute myocardial infarction, which can have significant, adverse consequences on patient outcomes.
Education: Researchers found that, compared to people with graduate degrees, those with lower educational attainment appeared to have a higher risk of heart attack, dying from a cardiovascular event, and overall death.
Society and culture
Depictions of heart attacks in popular media often include collapsing or loss of consciousness, which are not common symptoms; these depictions contribute to widespread misunderstanding about the symptoms of myocardial infarctions, which in turn contributes to people not getting care when they should.
Legal implications
At common law, a myocardial infarction is in general a disease, but may sometimes be an injury. This can create coverage issues in the administration of no-fault insurance schemes such as workers' compensation. In general, a heart attack is not covered; however, it may be a work-related injury if it results, for example, from unusual emotional stress or unusual exertion. In addition, in some jurisdictions, heart attacks suffered by persons in particular occupations, such as police officers, may be classified as line-of-duty injuries by statute or policy. In some countries or states, a person having had an MI may be prevented from participating in activities that put other people's lives at risk, for example driving a car or flying an airplane.
References
Further reading
External links
Myocardial infarction at Curlie
American Heart Association's Heart Attack website — Information and resources for preventing, recognizing, and treating a heart attack.
TIMI Score for UA/NSTEMI Archived 2016-11-05 at the Wayback Machine and STEMI Archived 2009-03-19 at the Wayback Machine
HEART Score for Major Cardiac Events Archived 2016-10-28 at the Wayback Machine
"Heart Attack". MedlinePlus. U.S. National Library of Medicine. |
Urethrocele | A urethrocele is the prolapse of the female urethra into the vagina. Weakening of the tissues that hold the urethra in place may cause it to protrude into the vagina. Urethroceles often occur with cystoceles (involving the urinary bladder as well as the urethra). In this case, the term used is cystourethrocele.
Signs and symptoms
There are often no symptoms associated with a urethrocele. When present, symptoms include stress incontinence, increased urinary frequency, and urinary retention (difficulty in emptying the bladder). Pain during sexual intercourse may also occur.
Complications
Where a urethrocele causes difficulty in urinating, this can lead to cystitis.
Cause
Urethroceles often result from damage to the supporting structures of the pelvic floor. They can also form after treatment for gynecological cancers.
Urethroceles are often caused by childbirth, with the movement of the baby through the vagina damaging the surrounding tissues. When they occur in women who have never had children, they may be the result of a congenital weakness in the tissues of the pelvic floor.
Diagnosis
Treatment
A urethrocele can be treated surgically.
See also
Rectocele
Urethropexy
Urethral bulking injections
References
== External links == |
Eosinophilic esophagitis | Eosinophilic esophagitis (EoE) is an allergic inflammatory condition of the esophagus that involves eosinophils, a type of white blood cell. In healthy individuals, the esophagus is typically devoid of eosinophils. In EoE, eosinophils migrate to the esophagus in large numbers. When a trigger food is eaten, the eosinophils contribute to tissue damage and inflammation. Symptoms include swallowing difficulty, food impaction, vomiting, and heartburn.
Eosinophilic esophagitis was first described in children but also occurs in adults. The condition is not well understood, but food allergy may play a significant role. The treatment may consist of removal of known or suspected triggers and medication to suppress the immune response. In severe cases, it may be necessary to enlarge the esophagus with an endoscopy procedure.
While knowledge about EoE has been increasing rapidly, diagnosis of EoE can be challenging because the symptoms and histopathologic findings are not specific.
Epidemiology
The prevalence of eosinophilic esophagitis has increased over time and currently ranges from 1 to 6 per 10,000 persons. Gender and ethnic variations exist in the prevalence of EoE, with the majority of cases reported in Caucasian males.
In addition to gender (male predominance) and race (mainly a disease of Caucasian individuals), established risk factors for EoE include atopy and other allergic conditions. Other recognized genetic and environmental risk factors for EoE include alterations in gut barrier function (e.g. GERD), variation in the nature and timing of oral antigen exposure, lack of early exposure to microbes, and an altered microbiome. A study comparing children with active EoE to children without EoE found an altered microbiome: a relatively high abundance of Haemophilus correlated positively with disease activity, as measured by increasing Eosinophilic Esophagitis Endoscopic Reference Score and Eosinophilic Esophagitis Histologic Scoring System values (q value = 5e-10). Measuring the relative abundance of specific taxa in children's salivary microbiome could serve as a noninvasive marker for eosinophilic esophagitis.
Signs and symptoms
EoE often presents with difficulty swallowing, food impaction, stomach pains, regurgitation or vomiting, and decreased appetite. Although the typical onset of EoE is in childhood, the disease can be found in all age groups, and symptoms vary depending on the age of presentation. In addition, young children with EoE may present with feeding difficulties and poor weight gain. It is more common in males, and affects both adults and children.
Predominant symptoms in school-aged children and adolescents include difficulty swallowing, food impaction, and choking/gagging with meals, particularly when eating foods with coarse textures. Other symptoms in this age group can include abdominal/chest pain, vomiting, and regurgitation. The predominant symptom in adults is difficulty swallowing; however, intractable heartburn and food avoidance may also be present. Due to long-standing inflammation and possible resultant scarring that may have gone unrecognized, adults presenting with EoE tend to have more episodes of esophageal food impaction, as well as other esophageal abnormalities such as Schatzki ring, esophageal webs, and, in some cases, achalasia.
Although many of these symptoms overlap with the symptoms of GERD, the majority of patients with EoE exhibit a poor response to acid-suppression therapy. Many people with EoE have other autoimmune and allergic diseases such as asthma and celiac disease. Mast cell disorders such as mast cell activation syndrome or mastocytosis are also frequently associated with it.
Pathophysiology
Eosinophils are inflammatory cells that release a variety of chemical signals which inflame the surrounding esophageal tissue. This results in the signs and symptoms of pain, visible redness on endoscopy, and a natural history that may include stricturing. While eosinophils are normally present in other parts of a healthy gastrointestinal tract, these white blood cells are not normally found in the esophagus of a healthy individual. The reason for the migration of eosinophils to the tissue of the esophagus is not fully understood but is being studied extensively. The migration of eosinophils to the esophagus is thought to be due to genetic, environmental, and host immune system factors.
At a tissue level, EoE is characterized by a dense infiltrate of white blood cells of the eosinophil type into the epithelial lining of the esophagus. This is thought to be an allergic reaction against ingested food, based on the important role eosinophils play in allergic reactions. The eosinophils are recruited into the tissue in response to local production of eotaxin-3 by IL-13-stimulated esophageal epithelial cells.
Diagnosis
The diagnosis of EoE is typically made on the combination of symptoms and findings on diagnostic testing. To properly diagnose EoE, various diseases such as GERD, esophageal cancer, achalasia, hypereosinophilic syndrome, infection, Crohn's disease, and drug allergies need to be ruled out.
Prior to the development of the EE Diagnostic Panel, EoE could only be diagnosed if gastroesophageal reflux did not respond to a six-week trial of twice-a-day high-dose proton-pump inhibitors (PPIs) or if a negative ambulatory pH study ruled out gastroesophageal reflux disease (GERD).
Radiologically, the term "ringed esophagus" has been used for the appearance of eosinophilic esophagitis on barium swallow studies, to contrast with the appearance of transient transverse folds sometimes seen with esophageal reflux (termed "feline esophagus").
Endoscopy
Endoscopically, ridges, furrows, or rings may be seen in the esophageal wall. Sometimes, multiple rings may occur in the esophagus, leading to the term "corrugated esophagus" or "feline esophagus" due to the similarity of the rings to the esophagus of a cat. The presence of white exudates in the esophagus is also suggestive of the diagnosis. On biopsy taken at the time of endoscopy, numerous eosinophils can be seen in the superficial epithelium. A minimum of 15 eosinophils per high-power field is required to make the diagnosis. Eosinophilic inflammation is not limited to the esophagus alone, and does extend through the whole gastrointestinal tract. Profoundly degranulated eosinophils may also be present, as may micro-abscesses and an expansion of the basal layer.
Patients found to have signs of EoE on endoscopy should undergo an empiric 8-week trial of high-dose proton pump inhibitor therapy (twice daily) before repeat endoscopy, in order to rule out GERD. Although endoscopic findings are helpful in identifying patients with EoE, they are not diagnostic of the disease if the patient has no clinical symptoms.
Esophageal mucosal biopsy
Currently, endoscopic mucosal biopsy remains the most important diagnostic test for EoE, and is required to confirm the diagnosis. Biopsy specimens from both the proximal/mid and distal esophagus should be obtained regardless of the gross appearance of the mucosa. Specimens should also be obtained from areas revealing endoscopic abnormalities. At least four biopsies are required to obtain adequate sensitivity for the detection of EoE. A definitive diagnosis of EoE is based on the presence of at least 15 eosinophils/HPF in the esophageal biopsies of patients despite treatment with high-dose PPI. GERD can increase eosinophilic infiltration in the distal esophagus, however, eosinophils associated with GERD generally occur at a lower density (i.e. < 15/HPF).
Allergy assessment
A thorough personal and family history of other atopic conditions is recommended in all patients with EoE. Testing for allergic sensitization may be considered using skin prick testing or blood testing for allergen-specific IgE. This is particularly important for the 10–20% of EoE patients who also have symptoms of immediate IgE-mediated food allergy. Atopy patch testing has been used in some cases for the potential identification of delayed, non-IgE (cell-mediated) reactions.
Diagnostic criteria
The diagnosis of eosinophilic esophagitis requires all of the following (a minimal sketch of this rule follows the list):
Symptoms related to esophageal dysfunction.
Eosinophil-predominant inflammation on esophageal biopsy, characteristically consisting of a peak value of ≥15 eosinophils per high power field (HPF).
Exclusion of other causes that may be responsible for symptoms and esophageal eosinophilia.
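As with the MI definition earlier in this document, these criteria form a simple conjunction. A minimal sketch, with hypothetical function and parameter names invented for illustration, might look like this:

```python
def meets_eoe_criteria(esophageal_dysfunction_symptoms: bool,
                       peak_eosinophils_per_hpf: int,
                       other_causes_excluded: bool) -> bool:
    """Hedged sketch of the EoE diagnostic criteria described above:
    symptoms of esophageal dysfunction, a peak of >=15 eosinophils per
    high-power field (HPF) on esophageal biopsy, and exclusion of other
    causes of the symptoms and eosinophilia. Illustrative only."""
    return (esophageal_dysfunction_symptoms
            and peak_eosinophils_per_hpf >= 15
            and other_causes_excluded)

# Example: dysphagia with 22 eosinophils/HPF and other causes ruled out
print(meets_eoe_criteria(True, 22, True))  # True
```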
Treatment
The goal of EoE treatment is to control the symptoms by decreasing the number of eosinophils in the esophagus and, subsequently, reducing the esophageal inflammation. Management consists of dietary, pharmacological, and endoscopic treatment.
Dietary management
Dietary treatment can be effective, as there does appear to be a role for allergy in the development of EoE. Allergy testing is not particularly effective in predicting which foods are driving the disease process. If no specific allergenic food or agent is identified, a trial of the six-food elimination diet (SFED) can be pursued. Various approaches have been tried, in which either six food groups (cow's milk, wheat, egg, soy, nuts and fish/seafood), four groups (animal milk, gluten-containing cereals, egg, legumes) or two groups (animal milk and gluten-containing cereals) are excluded for a period of time, usually six weeks. A "top down" approach (starting with six foods, then reintroducing) may be very restrictive. Four- or even two-group exclusion diets may be less difficult to follow and reduce the need for many endoscopies if the response to the limited restriction is good.
An alternative to the SFED is the elemental diet, an amino-acid-based diet. The elemental diet demonstrates a high rate of response (almost 90% in children, 70% in adults), with rapid relief of symptoms associated with histological remission. This diet involves using amino-acid-based liquid formulas for 4–6 weeks, followed by histological evaluation of the response. If remission is achieved, foods are slowly reintroduced.
Pharmacologic treatment
In patients diagnosed with EoE, a trial of a proton-pump inhibitor (PPI), such as esomeprazole 20 mg to 40 mg orally once or twice daily, is a reasonable first-line option. Esomeprazole (brand name Nexium) may be preferred, as these tablets can be dispersed in half a glass of water and drunk by those with difficulty swallowing pills. Those who respond to PPI therapy with symptomatic improvement should have repeat endoscopy with esophageal biopsy. If no eosinophils are present in the repeat biopsy, the diagnosis is either acid-mediated GERD with eosinophilia or non-GERD PPI-responsive EoE with an unknown mechanism. If both symptoms and eosinophils persist after treatment with a PPI, the diagnosis is immune-mediated EoE.
Medical therapy for immune-mediated EoE primarily involves corticosteroids. Systemic (oral) corticosteroids were one of the first treatment options shown to be effective in patients with EoE. Both clinical and histologic improvement have been noted in approximately 95% of EoE patients using systemic corticosteroids. However, upon discontinuation of therapy, 90% of patients using corticosteroids experience a recurrence of symptoms. In May 2022, the U.S. Food and Drug Administration approved dupilumab (Dupixent) to treat eosinophilic esophagitis in adults and pediatric patients 12 years and older weighing at least 40 kilograms (about 88 pounds), making it the first FDA-approved treatment for EoE.
Endoscopic dilatation
In patients who present with food impaction, flexible upper endoscopy is recommended to remove the impacted food. In EoE, dilation is deferred until the patient has been adequately treated with pharmacological or dietary therapy and the response to that therapy is known. The goals of therapy are to improve the patient's symptoms and to reduce the number of eosinophils on biopsy. Dilation is effective in 84% of people who require it.
Esophageal strictures and rings can be safely dilated in EoE. A graduated balloon catheter is recommended for gradual dilation. Patients should be informed that they might experience chest pain after dilation, and that there is a risk of esophageal perforation and bleeding.
Prognosis
The long-term prognosis for patients with EoE is unknown. Some patients may follow a “waxing and waning” course characterized by symptomatic episodes followed by periods of remission. There have also been reports of apparent spontaneous disease remission in some patients; however, the risk of recurrence in these patients is unknown. It is possible that long-standing, untreated disease may result in esophageal remodeling, leading to strictures, Schatzki ring and, eventually, achalasia.
Risk factors
Both environmental and genetic factors can increase the risk of developing EoE. The prevalence of EoE appears to be increasing, and many ongoing studies are trying to determine why. Risk factors for EoE include autoimmune conditions such as inflammatory bowel disease and rheumatoid arthritis. Those with celiac disease, another autoimmune condition, are also at higher risk of developing EoE. Living in dry or cold climates, as well as in areas of low population density, is associated with higher rates of EoE. Food allergens are a risk factor for EoE and can often be directly attributed to the disease; removing these food allergens from the diet can often resolve EoE symptoms.
History
The first case of eosinophilic esophagitis was reported in 1978. In the early 1990s, it became recognized as a distinct disease.
See also
Eosinophilic gastroenteritis
References
== External links == |
Overflow incontinence | Overflow incontinence is a form of urinary incontinence, characterized by the involuntary release of urine from an overfull urinary bladder, often in the absence of any urge to urinate. This condition occurs in people who have a blockage of the bladder outlet (benign prostatic hyperplasia, prostate cancer, or narrowing of the urethra), or when the muscle that expels urine from the bladder is too weak to empty the bladder normally. Overflow incontinence may also be a side effect of certain medications.
Causes
Lesions affecting the sacral segments or peripheral autonomic fibres result in an atonic bladder with loss of sphincteric coordination. This results in loss of detrusor contraction, difficulty in initiating micturition, and overflow incontinence. Anticholinergic side effects of certain medications (for example, certain antipsychotics and antidepressants) may cause urinary retention which may lead to overflow incontinence. Alpha-adrenergic agonists may cause urinary retention by stimulating the contraction of the urethral sphincter. Calcium channel blockers may decrease the contractility of the smooth muscle tissue in the urinary bladder, causing urinary retention with overflow incontinence. Epidural anesthesia and childbirth can also cause overflow incontinence.
Pathophysiology
Overflow incontinence occurs when the patient's bladder is always full, so that it frequently leaks urine. Weak bladder muscles, resulting in incomplete emptying of the bladder, or a blocked urethra can cause this type of incontinence. Autonomic neuropathy from diabetes or other diseases (e.g. multiple sclerosis) can decrease neural signals from the bladder (allowing for overfilling) and may also decrease the expulsion of urine by the detrusor muscle (allowing for urinary retention). Additionally, tumors and kidney stones can block the urethra. Spinal cord injuries or nervous system disorders are additional causes of overflow incontinence. In men, benign prostatic hyperplasia (BPH) may also restrict the flow of urine. Overflow incontinence is rare in women, although it is sometimes caused by fibroid or ovarian tumors. Overflow incontinence can also result from increased outlet resistance, whether from advanced vaginal prolapse causing a "kink" in the urethra or from an anti-incontinence procedure that has overcorrected the problem. Early symptoms include a hesitant or slow stream of urine during voluntary urination. Anticholinergic medications and NSAIDs may worsen overflow incontinence.
Criticism
The concept of overflow incontinence has been criticised because it is difficult to define and because the definitions that have been proposed have little clinical significance. The concept is a purely theoretical one that is not based on evidence. Overflow incontinence cannot be measured and therefore cannot be reliably diagnosed. In the urological literature and in medical care the concept is therefore of little importance, with the related concept of chronic urinary retention being the much more relevant and useful one.
In 2017 the Quality Improvement and Patient Safety (QIPS) committee of the American Urological Association (AUA) published a definition of nonneurogenic chronic urinary retention as a post-void residual of greater than 300 mL, measured at least twice over a period of at least six months. Measurement of post-void residual by medical ultrasound is an easy procedure that is sufficient in most cases.
Patients with this condition presenting additionally with hydronephrosis, stage 3 chronic kidney disease, or recurrent urinary tract infection or urosepsis are considered high-risk groups. For these patients, catheterization is often mandatory as immediate short-term management of chronic urinary retention.
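The AUA QIPS definition lends itself to a simple decision rule. The sketch below assumes post-void residuals recorded with dates; the function name, data layout, and the 182-day stand-in for "six months" are illustrative only, not from the AUA publication:

    from datetime import date

    def meets_cur_definition(pvr_records, threshold_ml=300, min_span_days=182):
        """pvr_records: list of (measurement date, post-void residual in mL).
        True if at least two measurements exceed 300 mL and the qualifying
        measurements span at least roughly six months (182 days assumed)."""
        qualifying = sorted(d for d, ml in pvr_records if ml > threshold_ml)
        return (len(qualifying) >= 2
                and (qualifying[-1] - qualifying[0]).days >= min_span_days)

    # Two residuals above 300 mL, about seven months apart: meets the definition.
    print(meets_cur_definition([(date(2023, 1, 10), 420), (date(2023, 8, 2), 350)]))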
See also
Bladder sphincter dyssynergia
Overactive bladder
References
== External links == |
Burn | A burn is an injury to skin, or other tissues, caused by heat, cold, electricity, chemicals, friction, or ultraviolet radiation (like sunburn). Most burns are due to heat from hot liquids (called scalding), solids, or fire. Burns occur mainly in the home or the workplace. In the home, risks are associated with domestic kitchens, including stoves, flames, and hot liquids. In the workplace, risks are associated with fire and chemical and electric burns. Alcoholism and smoking are other risk factors. Burns can also occur as a result of self-harm or violence between people (assault).
Burns that affect only the superficial skin layers are known as superficial or first-degree burns. They appear red without blisters and pain typically lasts around three days. When the injury extends into some of the underlying skin layer, it is a partial-thickness or second-degree burn. Blisters are frequently present and they are often very painful. Healing can require up to eight weeks and scarring may occur. In a full-thickness or third-degree burn, the injury extends to all layers of the skin. Often there is no pain and the burnt area is stiff. Healing typically does not occur on its own. A fourth-degree burn additionally involves injury to deeper tissues, such as muscle, tendons, or bone. The burn is often black and frequently leads to loss of the burned part.
Burns are generally preventable. Treatment depends on the severity of the burn. Superficial burns may be managed with little more than simple pain medication, while major burns may require prolonged treatment in specialized burn centers. Cooling with tap water may help pain and decrease damage; however, prolonged cooling may result in low body temperature. Partial-thickness burns may require cleaning with soap and water, followed by dressings. It is not clear how to manage blisters, but it is probably reasonable to leave them intact if small and drain them if large. Full-thickness burns usually require surgical treatments, such as skin grafting. Extensive burns often require large amounts of intravenous fluid, due to capillary fluid leakage and tissue swelling. The most common complications of burns involve infection. Tetanus toxoid should be given if not up to date.
In 2015, fire and heat resulted in 67 million injuries. This resulted in about 2.9 million hospitalizations and 176,000 deaths. Among women in much of the world, burns are most commonly related to the use of open cooking fires or unsafe cook stoves. Among men, they are more likely a result of unsafe workplace conditions. Most deaths due to burns occur in the developing world, particularly in Southeast Asia. While large burns can be fatal, treatments developed since 1960 have improved outcomes, especially in children and young adults. In the United States, approximately 96% of those admitted to a burn center survive their injuries. The long-term outcome is related to the size of burn and the age of the person affected.
Signs and symptoms
The characteristics of a burn depend upon its depth. Superficial burns cause pain lasting two or three days, followed by peeling of the skin over the next few days. Individuals with more severe burns may indicate discomfort or complain of feeling pressure rather than pain. Full-thickness burns may be entirely insensitive to light touch or puncture. While superficial burns are typically red in color, severe burns may be pink, white or black. Burns around the mouth or singed hair inside the nose may indicate that burns to the airways have occurred, but these findings are not definitive. More worrisome signs include: shortness of breath, hoarseness, and stridor or wheezing. Itchiness is common during the healing process, occurring in up to 90% of adults and nearly all children. Numbness or tingling may persist for a prolonged period of time after an electrical injury. Burns may also produce emotional and psychological distress.
Cause
Burns are caused by a variety of external sources classified as thermal (heat-related), chemical, electrical, and radiation. In the United States, the most common causes of burns are: fire or flame (44%), scalds (33%), hot objects (9%), electricity (4%), and chemicals (3%). Most (69%) burn injuries occur at home or at work (9%), and most are accidental, with 2% due to assault by another, and 1–2% resulting from a suicide attempt. These sources can cause inhalation injury to the airway and/or lungs, occurring in about 6%.
Burn injuries occur more commonly among the poor. Smoking and alcoholism are other risk factors. Fire-related burns are generally more common in colder climates. Specific risk factors in the developing world include cooking with open fires or on the floor as well as developmental disabilities in children and chronic diseases in adults.
Thermal
In the United States, fire and hot liquids are the most common causes of burns. Of house fires that result in death, smoking causes 25% and heating devices cause 22%. Almost half of injuries are due to efforts to fight a fire. Scalding is caused by hot liquids or gases and most commonly occurs from exposure to hot drinks, high temperature tap water in baths or showers, hot cooking oil, or steam. Scald injuries are most common in children under the age of five and, in the United States and Australia, this population makes up about two-thirds of all burns. Contact with hot objects is the cause of about 20–30% of burns in children. Generally, scalds are first- or second-degree burns, but third-degree burns may also result, especially with prolonged contact. Fireworks are a common cause of burns during holiday seasons in many countries; this is a particular risk for adolescent males. In the United States, white males under six years of age account for most non-fatal burn injuries. Thermal burns from grabbing/touching and spilling/splashing were the most common type of burn and mechanism, while the bodily areas most impacted were the hands and fingers, followed by the head/neck.
Chemical
Chemical burns can be caused by over 25,000 substances, most of which are either a strong base (55%) or a strong acid (26%). Most chemical burn deaths are secondary to ingestion. Common agents include: sulfuric acid as found in toilet cleaners, sodium hypochlorite as found in bleach, and halogenated hydrocarbons as found in paint remover, among others. Hydrofluoric acid can cause particularly deep burns that may not become symptomatic until some time after exposure. Formic acid may cause the breakdown of significant numbers of red blood cells.
Electrical
Electrical burns or injuries are classified as high voltage (greater than or equal to 1000 volts), low voltage (less than 1000 volts), or as flash burns secondary to an electric arc. The most common causes of electrical burns in children are electrical cords (60%) followed by electrical outlets (14%). Lightning may also result in electrical burns. Risk factors for being struck include involvement in outdoor activities such as mountain climbing, golf and field sports, and working outside. Mortality from a lightning strike is about 10%.
While electrical injuries primarily result in burns, they may also cause fractures or dislocations secondary to blunt force trauma or muscle contractions. In high voltage injuries, most damage may occur internally and thus the extent of the injury cannot be judged by examination of the skin alone. Contact with either low voltage or high voltage may produce cardiac arrhythmias or cardiac arrest.
Radiation
Radiation burns may be caused by protracted exposure to ultraviolet light (such as from the sun, tanning booths or arc welding) or from ionizing radiation (such as from radiation therapy, X-rays or radioactive fallout). Sun exposure is the most common cause of radiation burns and the most common cause of superficial burns overall. There is significant variation in how easily people sunburn based on their skin type. Skin effects from ionizing radiation depend on the amount of exposure to the area, with hair loss seen after 3 Gy, redness seen after 10 Gy, wet skin peeling after 20 Gy, and necrosis after 30 Gy. Redness, if it occurs, may not appear until some time after exposure. Radiation burns are treated the same as other burns. Microwave burns occur via thermal heating caused by the microwaves. While exposures as short as two seconds may cause injury, overall this is an uncommon occurrence.
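The dose thresholds above can be read as a simple lookup. A toy sketch only — real skin effects also depend on fractionation, exposed area, and skin type, none of which is modeled here:

    def expected_skin_effect(dose_gy):
        """Map an acute skin dose in gray (Gy) to the threshold effects
        quoted in the text: 3 Gy hair loss, 10 Gy redness, 20 Gy wet
        skin peeling, 30 Gy necrosis."""
        for threshold, effect in [(30, "necrosis"), (20, "wet skin peeling"),
                                  (10, "redness"), (3, "hair loss")]:
            if dose_gy >= threshold:
                return effect
        return "below threshold for visible skin effects"

    print(expected_skin_effect(12))  # redness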
Non-accidental
In those hospitalized from scalds or fire burns, 3–10% are from assault. Reasons include: child abuse, personal disputes, spousal abuse, elder abuse, and business disputes. An immersion injury or immersion scald may indicate child abuse. It is created when an extremity, or sometimes the buttocks, is held under the surface of hot water. It typically produces a sharp upper border and is often symmetrical; such burns are known as "sock burns", "glove burns", or "zebra stripes", where folds have prevented certain areas from burning. Deliberate cigarette burns are most often found on the face, or the back of the hands and feet. Other high-risk signs of potential abuse include: circumferential burns, the absence of splash marks, a burn of uniform depth, and association with other signs of neglect or abuse.
Bride burning, a form of domestic violence, occurs in some cultures, such as India, where women have been burned in revenge for what the husband or his family consider an inadequate dowry. In Pakistan, acid burns represent 13% of intentional burns, and are frequently related to domestic violence. Self-immolation (setting oneself on fire) is also used as a form of protest in various parts of the world.
Pathophysiology
At temperatures greater than 44 °C (111 °F), proteins begin losing their three-dimensional shape and start breaking down. This results in cell and tissue damage. Many of the direct health effects of a burn are caused by failure of the skin to perform its normal functions, which include: protection from bacteria, skin sensation, body temperature regulation, and prevention of evaporation of the body's water. Disruption of these functions can lead to infection, loss of skin sensation, hypothermia, and hypovolemic shock via dehydration (i.e., evaporative loss of the body's water). Disruption of cell membranes causes cells to lose potassium to the spaces outside the cell and to take up water and sodium.
In large burns (over 30% of the total body surface area), there is a significant inflammatory response. This results in increased leakage of fluid from the capillaries, and subsequent tissue edema. This causes overall blood volume loss, with the remaining blood suffering significant plasma loss, making the blood more concentrated. Poor blood flow to organs like the kidneys and gastrointestinal tract may result in kidney failure and stomach ulcers.
Increased levels of catecholamines and cortisol can cause a hypermetabolic state that can last for years. This is associated with increased cardiac output, metabolism, a fast heart rate, and poor immune function.
Diagnosis
Burns can be classified by depth, mechanism of injury, extent, and associated injuries. The most commonly used classification is based on the depth of injury. The depth of a burn is usually determined via examination, although a biopsy may also be used. It may be difficult to accurately determine the depth of a burn on a single examination and repeated examinations over a few days may be necessary. In those who have a headache or are dizzy and have a fire-related burn, carbon monoxide poisoning should be considered. Cyanide poisoning should also be considered.
Size
The size of a burn is measured as a percentage of total body surface area (TBSA) affected by partial thickness or full thickness burns. First-degree burns that are only red in color and are not blistering are not included in this estimation. Most burns (70%) involve less than 10% of the TBSA.
There are a number of methods to determine the TBSA, including the Wallace rule of nines, Lund and Browder chart, and estimations based on a person's palm size. The rule of nines is easy to remember but only accurate in people over 16 years of age. More accurate estimates can be made using Lund and Browder charts, which take into account the different proportions of body parts in adults and children. The size of a person's handprint (including the palm and fingers) is approximately 1% of their TBSA.
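As a worked example of the rule of nines, the sketch below uses the conventional adult segment values (head and neck 9%, each arm 9%, each leg 18%, anterior and posterior trunk 18% each, perineum 1%); the text names the rule but not these numbers, so treat them as the standard published assignments rather than something stated here, and remember the rule is only accurate over age 16:

    # Conventional adult rule-of-nines segment values (assumed, see lead-in).
    RULE_OF_NINES = {
        "head_and_neck": 9, "anterior_trunk": 18, "posterior_trunk": 18,
        "left_arm": 9, "right_arm": 9, "left_leg": 18, "right_leg": 18,
        "perineum": 1,
    }

    def estimate_tbsa(burned_regions):
        """Sum the rule-of-nines percentages for fully burned regions."""
        return sum(RULE_OF_NINES[region] for region in burned_regions)

    # One whole arm plus the front of the trunk: 9 + 18 = 27% TBSA.
    print(estimate_tbsa(["left_arm", "anterior_trunk"]))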
Severity
To determine the need for referral to a specialized burn unit, the American Burn Association devised a classification system. Under this system, burns can be classified as major, moderate, and minor. This is assessed based on a number of factors, including total body surface area affected, the involvement of specific anatomical zones, the age of the person, and associated injuries. Minor burns can typically be managed at home, moderate burns are often managed in a hospital, and major burns are managed by a burn center. Severe burn injury represents one of the most devastating forms of trauma. Despite improvements in burn care, patients can be left to suffer for as many as three years post-injury.
Signs of smoke inhalation
Signs of smoke inhalation include a hoarse voice, dyspnea, facial burns, singed nasal hairs, and sputum containing carbonaceous material. Stridor and wheezing may be present in later stages.
Prevention
Historically, about half of all burns were deemed preventable. Burn prevention programs have significantly decreased rates of serious burns. Preventive measures include: limiting hot water temperatures, smoke alarms, sprinkler systems, proper construction of buildings, and fire-resistant clothing. Experts recommend setting water heaters below 48.8 °C (119.8 °F). Other measures to prevent scalds include using a thermometer to measure bath water temperatures, and splash guards on stoves. While the effect of the regulation of fireworks is unclear, there is tentative evidence of benefit with recommendations including the limitation of the sale of fireworks to children.
Management
Resuscitation begins with the assessment and stabilization of the persons airway, breathing and circulation. If inhalation injury is suspected, early intubation may be required. This is followed by care of the burn wound itself. People with extensive burns may be wrapped in clean sheets until they arrive at a hospital. As burn wounds are prone to infection, a tetanus booster shot should be given if an individual has not been immunized within the last five years. In the United States, 95% of burns that present to the emergency department are treated and discharged; 5% require hospital admission. With major burns, early feeding is important. Protein intake should also be increased, and trace elements and vitamins are often required. Hyperbaric oxygenation may be useful in addition to traditional treatments.
Intravenous fluids
In those with poor tissue perfusion, boluses of isotonic crystalloid solution should be given. In children with more than 10–20% TBSA (total body surface area) burns, and adults with more than 15% TBSA burns, formal fluid resuscitation and monitoring should follow. This should be begun pre-hospital if possible in those with burns greater than 25% TBSA. The Parkland formula can help determine the volume of intravenous fluids required over the first 24 hours. The formula is based on the affected individual's TBSA and weight. Half of the fluid is administered over the first 8 hours, and the remainder over the following 16 hours. The time is calculated from when the burn occurred, and not from the time that fluid resuscitation began. Children require additional maintenance fluid that includes glucose. Additionally, those with inhalation injuries require more fluid. While inadequate fluid resuscitation may cause problems, over-resuscitation can also be detrimental. The formulas are only a guide, with infusions ideally tailored to a urinary output of >30 mL/h in adults or >1 mL/kg/h in children and a mean arterial pressure greater than 60 mmHg.
While lactated Ringer's solution is often used, there is no evidence that it is superior to normal saline. Crystalloid fluids appear just as good as colloid fluids, and as colloids are more expensive they are not recommended. Blood transfusions are rarely required. They are typically only recommended when the hemoglobin level falls below 60–80 g/L (6–8 g/dL) due to the associated risk of complications. Intravenous catheters may be placed through burned skin if needed, or intraosseous infusions may be used.
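A minimal sketch of the Parkland calculation and the 8 h/16 h split described above. The text gives the split but not the coefficient; the widely published version uses 4 mL of crystalloid per kilogram per %TBSA, assumed here. This is an illustration, not dosing guidance:

    def parkland_volumes(weight_kg, tbsa_percent, ml_per_kg_per_pct=4.0):
        """Return (total 24 h volume, first 8 h volume, next 16 h volume)
        in mL, timed from the moment of the burn rather than from the
        start of resuscitation. 4 mL/kg/%TBSA is the commonly published
        coefficient (assumed; not stated in the text above)."""
        total = ml_per_kg_per_pct * weight_kg * tbsa_percent
        return total, total / 2, total / 2

    # 70 kg adult with 30% TBSA burns: 8400 mL total, 4200 mL in the first 8 h.
    print(parkland_volumes(70, 30))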
Wound care
Early cooling (within 30 minutes of the burn) reduces burn depth and pain, but care must be taken as over-cooling can result in hypothermia. It should be performed with cool water 10–25 °C (50.0–77.0 °F) and not ice water as the latter can cause further injury. Chemical burns may require extensive irrigation. Cleaning with soap and water, removal of dead tissue, and application of dressings are important aspects of wound care. If intact blisters are present, it is not clear what should be done with them. Some tentative evidence supports leaving them intact. Second-degree burns should be re-evaluated after two days.
In the management of first and second-degree burns, little quality evidence exists to determine which dressing type to use. It is reasonable to manage first-degree burns without dressings. While topical antibiotics are often recommended, there is little evidence to support their use. Silver sulfadiazine (a type of antibiotic) is not recommended as it potentially prolongs healing time. There is insufficient evidence to support the use of dressings containing silver or negative-pressure wound therapy. Silver sulfadiazine does not appear to differ from silver containing foam dressings with respect to healing.
Medications
Burns can be very painful and a number of different options may be used for pain management. These include simple analgesics (such as ibuprofen and acetaminophen) and opioids such as morphine. Benzodiazepines may be used in addition to analgesics to help with anxiety. During the healing process, antihistamines, massage, or transcutaneous nerve stimulation may be used to aid with itching. Antihistamines, however, are only effective for this purpose in 20% of people. There is tentative evidence supporting the use of gabapentin and its use may be reasonable in those who do not improve with antihistamines. Intravenous lidocaine requires more study before it can be recommended for pain.
Intravenous antibiotics are recommended before surgery for those with extensive burns (>60% TBSA). As of 2008, guidelines do not recommend their general use due to concerns regarding antibiotic resistance and the increased risk of fungal infections. Tentative evidence, however, shows that they may improve survival rates in those with large and severe burns. Erythropoietin has not been found effective to prevent or treat anemia in burn cases. In burns caused by hydrofluoric acid, calcium gluconate is a specific antidote and may be used intravenously and/or topically. Recombinant human growth hormone (rhGH) in those with burns that involve more than 40% of their body appears to speed healing without affecting the risk of death. The evidence regarding the use of steroids is unclear.
Surgery
Wounds requiring surgical closure with skin grafts or flaps (typically anything more than a small full thickness burn) should be dealt with as early as possible. Circumferential burns of the limbs or chest may need urgent surgical release of the skin, known as an escharotomy. This is done to treat or prevent problems with distal circulation, or ventilation. It is uncertain if it is useful for neck or digit burns. Fasciotomies may be required for electrical burns.
Skin grafts can involve temporary skin substitutes, derived from animal (human donor or pig) skin or synthesized. They are used to cover the wound as a dressing, preventing infection and fluid loss, but will eventually need to be removed. Alternatively, human skin can be treated to be left on permanently without rejection.
There is no evidence that the use of copper sulphate to visualise phosphorus particles for removal can help with wound healing due to phosphorus burns. Meanwhile, absorption of copper sulphate into the blood circulation can be harmful.
Alternative medicine
Honey has been used since ancient times to aid wound healing and may be beneficial in first- and second-degree burns. There is moderate evidence that honey helps heal partial thickness burns. The evidence for aloe vera is of poor quality. While it might be beneficial in reducing pain, and a review from 2007 found tentative evidence of improved healing times, a subsequent review from 2012 did not find improved healing over silver sulfadiazine. There were only three randomized controlled trials for the use of plants for burns, two for aloe vera and one for oatmeal.
There is little evidence that vitamin E helps with keloids or scarring. Butter is not recommended. In low income countries, burns are treated up to one-third of the time with traditional medicine, which may include applications of eggs, mud, leaves or cow dung. Surgical management is limited in some cases due to insufficient financial resources and availability. There are a number of other methods that may be used in addition to medications to reduce procedural pain and anxiety including: virtual reality therapy, hypnosis, and behavioral approaches such as distraction techniques.
Patient support
Burn patients require support and care – both physiological and psychological. Respiratory failure, sepsis, and multi-organ system failure are common in hospitalized burn patients. To prevent hypothermia and maintain normal body temperature, burn patients with over 20% TBSA burn injuries should be kept in an environment with the temperature at or above 30 °C.
Metabolism in burn patients proceeds at a higher than normal speed due to the whole-body process and rapid fatty acid substrate cycles, which can be countered with an adequate supply of energy, nutrients, and antioxidants. Enteral feeding beginning a day after resuscitation is required to reduce the risk of infection, recovery time, non-infectious complications, hospital stay, long-term damage, and mortality. Controlling blood glucose levels can have an impact on liver function and survival.
The risk of thromboembolism is high, and acute respiratory distress syndrome (ARDS) that does not resolve with maximal ventilator use is also a common complication. Scars are long-term after-effects of a burn injury. Psychological support is required to cope with the aftermath of a fire accident. To prevent scars and long-term damage to the skin and other body structures, consulting with burn specialists, preventing infections, consuming nutritious foods, early and aggressive rehabilitation, and using compressive clothing are recommended.
Prognosis
The prognosis is worse in those with larger burns, those who are older, and females. The presence of a smoke inhalation injury, other significant injuries such as long bone fractures, and serious co-morbidities (e.g. heart disease, diabetes, psychiatric illness, and suicidal intent) also influence prognosis. On average, of those admitted to the United States burn centers, 4% die, with the outcome for individuals dependent on the extent of the burn injury. For example, admittees with burn areas less than 10% TBSA had a mortality rate of less than 1%, while admittees with over 90% TBSA had a mortality rate of 85%. In Afghanistan, people with more than 60% TBSA burns rarely survive. The Baux score has historically been used to determine prognosis of major burns. However, with improved care, it is no longer very accurate. The score is determined by adding the size of the burn (% TBSA) to the age of the person and taking that to be more or less equal to the risk of death. Burns in 2013 resulted in 1.2 million years lived with disability and 12.3 million disability adjusted life years.
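The classic Baux score described above is simple enough to state in a line of code; as the text notes, it overestimates mortality under modern care, so this is purely historical arithmetic:

    def baux_score(age_years, tbsa_percent):
        """Classic Baux score: age plus %TBSA, historically read as a
        rough percentage risk of death (capped at 100 here)."""
        return min(age_years + tbsa_percent, 100)

    print(baux_score(60, 35))  # 95, historically a very poor prognosis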
Complications
A number of complications may occur, with infections being the most common. In order of frequency, potential complications include: pneumonia, cellulitis, urinary tract infections and respiratory failure. Risk factors for infection include: burns of more than 30% TBSA, full-thickness burns, extremes of age (young or old), or burns involving the legs or perineum. Pneumonia occurs particularly commonly in those with inhalation injuries.
Anemia secondary to full thickness burns of greater than 10% TBSA is common. Electrical burns may lead to compartment syndrome or rhabdomyolysis due to muscle breakdown. Blood clotting in the veins of the legs is estimated to occur in 6 to 25% of people. The hypermetabolic state that may persist for years after a major burn can result in a decrease in bone density and a loss of muscle mass. Keloids may form subsequent to a burn, particularly in those who are young and dark skinned. Following a burn, children may have significant psychological trauma and experience post-traumatic stress disorder. Scarring may also result in a disturbance in body image. In the developing world, significant burns may result in social isolation, extreme poverty and child abandonment.
Epidemiology
In 2015 fire and heat resulted in 67 million injuries. This resulted in about 2.9 million hospitalizations and 238,000 dying. This is down from 300,000 deaths in 1990. This makes it the fourth leading cause of injuries after motor vehicle collisions, falls, and violence. About 90% of burns occur in the developing world. This has been attributed partly to overcrowding and an unsafe cooking situation. Overall, nearly 60% of fatal burns occur in Southeast Asia with a rate of 11.6 per 100,000. The number of fatal burns has changed from 280,000 in 1990 to 176,000 in 2015.
In the developed world, adult males have twice the mortality as females from burns. This is most probably due to their higher risk occupations and greater risk-taking activities. In many countries in the developing world, however, females have twice the risk of males. This is often related to accidents in the kitchen or domestic violence. In children, deaths from burns occur at more than ten times the rate in the developing than the developed world. Overall, in children it is one of the top fifteen leading causes of death. From the 1980s to 2004, many countries have seen both a decrease in the rates of fatal burns and in burns generally.
Developed countries
An estimated 500,000 burn injuries receive medical treatment yearly in the United States. They resulted in about 3,300 deaths in 2008. Most burns (70%) and deaths from burns occur in males. The highest incidence of fire burns occurs in those 18–35 years old, while the highest incidence of scalds occurs in children less than five years old and adults over 65. Electrical burns result in about 1,000 deaths per year. Lightning results in the death of about 60 people a year. In Europe, intentional burns occur most commonly in middle aged men.
Developing countries
In India, about 700,000 to 800,000 people per year sustain significant burns, though very few are looked after in specialist burn units. The highest rates occur in women 16–35 years of age. Part of this high rate is related to unsafe kitchens and loose-fitting clothing typical to India. It is estimated that one-third of all burns in India are due to clothing catching fire from open flames. Intentional burns are also a common cause and occur at high rates in young women, secondary to domestic violence and self-harm.
History
Cave paintings from more than 3,500 years ago document burns and their management. The earliest Egyptian records on treating burns describe dressings prepared with milk from mothers of baby boys, and the 1500 BCE Edwin Smith Papyrus describes treatments using honey and the salve of resin. Many other treatments have been used over the ages, including the use of tea leaves by the Chinese documented to 600 BCE, pig fat and vinegar by Hippocrates documented to 400 BCE, and wine and myrrh by Celsus documented to 100 CE. French barber-surgeon Ambroise Paré was the first to describe different degrees of burns in the 1500s. Guillaume Dupuytren expanded these degrees into six different severities in 1832.
The first hospital to treat burns opened in 1843 in London, England, and the development of modern burn care began in the late 1800s and early 1900s. During World War I, Henry D. Dakin and Alexis Carrel developed standards for the cleaning and disinfecting of burns and wounds using sodium hypochlorite solutions, which significantly reduced mortality. In the 1940s, the importance of early excision and skin grafting was acknowledged, and around the same time, fluid resuscitation and formulas to guide it were developed. In the 1970s, researchers demonstrated the significance of the hypermetabolic state that follows large burns.
See also
Blister
Frostbite
Scalding
References
Citations
General and cited references
National Burn Repository (PDF). American Burn Association. 2012. Archived from the original (PDF) on 3 March 2016. Retrieved 20 April 2013.
External links
Parkland Formula
"Burns". MedlinePlus. U.S. National Library of Medicine. |
Microtia | Microtia is a congenital deformity where the auricle (external ear) is underdeveloped. A completely undeveloped pinna is referred to as anotia. Because microtia and anotia have the same origin, it can be referred to as microtia-anotia. Microtia can be unilateral (one side only) or bilateral (affecting both sides). Microtia occurs in 1 out of about 8,000–10,000 births. In unilateral microtia, the right ear is most commonly affected. It may occur as a complication of taking Accutane (isotretinoin) during pregnancy.
Classification
According to the Altman classification, there are four grades of microtia (summarized in the sketch after this list):
Grade I: A less than complete development of the external ear with identifiable structures and a small but present external ear canal
Grade II: A partially developed ear (usually the top portion is underdeveloped) with a closed stenotic external ear canal producing a conductive hearing loss.
Grade III: Absence of the external ear with a small peanut-like vestige structure and an absence of the external ear canal and ear drum. Grade III microtia is the most common form of microtia.
Grade IV: Total absence of the ear, or anotia.
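For reference, the four grades reduce to a small lookup table; the wording of each entry is a paraphrase of the list above, not official classification text:

    # Paraphrased from the Altman grades described above.
    ALTMAN_GRADES = {
        1: "Underdeveloped but identifiable external ear; small canal present",
        2: "Partially developed ear; stenotic canal, conductive hearing loss",
        3: "Peanut-like vestige; no external canal or ear drum (most common)",
        4: "Total absence of the ear (anotia)",
    }

    print(ALTMAN_GRADES[3])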
Causes and risk factors
The etiology of microtia in children remains uncertain, but some cases have been associated with genetic defects in single or multiple genes, altitude, and gestational diabetes. Risk factors identified in studies include low birth weight, male sex, maternal gravidity and parity, and maternal medication use while pregnant. Genetic inheritance has not been fully studied, but in the few studies available, the malformation has been shown to arise during the early stages of pregnancy.
Diagnosis
At birth, lower grade microtia is difficult to diagnose visually with a physical exam, while higher grade microtia can be diagnosed visually due to noticeable abnormalities. Infants with noticeable abnormalities are closely monitored by physicians and hearing specialists.
Treatment
The goal of medical intervention is to provide the best form and function to the underdeveloped ear.
Hearing
Typically, testing is first done to determine the quality of hearing. This can be done as early as in the first two weeks with a BAER test (Brain Stem Auditory Response Test). At age 5–6, CT or CAT scans of the middle ear can be done to elucidate its development and clarify which patients are appropriate candidates for surgery to improve hearing. For younger individuals, this is done under sedation.
The hearing loss associated with congenital aural atresia is a conductive hearing loss—hearing loss caused by inefficient conduction of sound to the inner ear. Essentially, children with aural atresia have hearing loss because the sound cannot travel into the (usually) healthy inner ear—there is no ear canal, no eardrum, and the small ear bones (malleus/hammer, incus/anvil, and stapes/stirrup) are underdeveloped. "Usually" is in parentheses because rarely, a child with atresia also has a malformation of the inner ear leading to a sensorineural hearing loss (as many as 19% in one study). Sensorineural hearing loss is caused by a problem in the inner ear, the cochlea. Sensorineural hearing loss is not correctable by surgery, but properly fitted and adjusted hearing amplification (hearing aids) generally provide excellent rehabilitation for this hearing loss. If the hearing loss is severe to profound in both ears, the child may be a candidate for a cochlear implant (beyond the scope of this discussion).
Unilateral sensorineural hearing loss was not generally considered a serious disability by the medical establishment before the 1990s; it was thought that the affected person was able to adjust to it from birth. In general, there are exceptional advantages to gain from an intervention to enable hearing in the microtic ear, especially in bilateral microtia. Children with untreated unilateral sensorineural hearing loss are more likely to have to repeat a grade in school and/or need supplemental services (e.g., an FM system – see below) than their peers.
Children with unilateral sensorineural hearing loss often require years of speech therapy in order to learn how to enunciate and understand spoken language. What is truly unclear, and the subject of an ongoing research study, is the effect of unilateral conductive hearing loss (in children with unilateral aural atresia) on scholastic performance. If atresia surgery or some form of amplification is not used, special steps should be taken to ensure that the child is accessing and understanding all of the verbal information presented in school settings. Recommendations for improving a child's hearing in the academic setting include preferential seating in class, an FM system (the teacher wears a microphone, and the sound is transmitted to a speaker at the child's desk or to an ear bud or hearing aid the child wears), a bone-anchored hearing aid (BAHA), or conventional hearing aids. The age for BAHA implantation depends on whether the child is in Europe (18 months) or the US (age 5). Until then it is possible to fit a BAHA on a softband.
It is important to note that not all children with aural atresia are candidates for atresia repair. Candidacy for atresia surgery is based on the hearing test (audiogram) and CT scan imaging. If a canal is built where one does not exist, minor complications can arise from the body's natural tendency to heal an open wound closed. Repairing aural atresia is a very detailed and complicated surgical procedure which requires an expert in atresia repair. While complications from this surgery can arise, the risk of complications is greatly reduced when using a highly experienced otologist. Atresia patients who opt for surgery will temporarily have the canal packed with gelatin sponge and silicone sheeting to prevent closure. The timing of ear canal reconstruction (canalplasty) depends on the type of external ear (microtia) repair desired by the patient and family. Two surgical teams in the USA are currently able to reconstruct the canal at the same time as the external ear in a single surgical stage (one stage ear reconstruction).
In cases where a later surgical reconstruction of the external ear of the child might be possible, positioning of the BAHA implant is critical. It may be necessary to position the implant further back than usual to enable successful reconstructive surgery – but not so far as to compromise hearing performance. If the reconstruction is ultimately successful, it is easy to remove the percutaneous BAHA abutment. If the surgery is unsuccessful, the abutment can be replaced and the implant re-activated to restore hearing.
External ear
The age when outer ear surgery can be attempted depends upon the technique chosen. The earliest is age 7 for rib cartilage grafts. However, some surgeons recommend waiting until a later age, such as 8–10, when the ear is closer to adult size. External ear prostheses have been made for children as young as 5.
For auricular reconstruction, there are several different options:
Rib Cartilage Graft Reconstruction: This surgery may be performed by specialists in the technique. It involves sculpting the patient's own rib cartilage into the form of an ear. Because the cartilage is the patient's own living tissue, the reconstructed ear continues to grow as the child does. In order to be sure that the rib cage is large enough to provide the necessary donor tissue, some surgeons wait until the patient is 8 years of age; however, some surgeons with more experience with this technique may begin the surgery on a child aged six. The major advantage of this surgery is that the patient's own tissue is used for the reconstruction. This surgery varies from two to four stages depending on the surgeon's preferred method. A novel one stage ear reconstruction technique is performed by a few select surgeons. One team is able to reconstruct the entire external ear and ear canal in one operation.
Reconstruct the ear using a polyethylene plastic implant (also called Medpor): This is a 1–2 stage surgery that can start at age 3 and can be done as an outpatient without hospitalization. Using the porous framework, which allows the patient's tissue to grow into the material, and the patient's own tissue flap, a new ear is constructed in a single surgery. A small second surgery is performed in 3–6 months if needed for minor adjustments. Medpor was developed by John Reinisch. This surgery should only be performed by experts in the techniques involved. The use of porous polyethylene implants for ear reconstruction was initiated in the 1980s by Alexander Berghaus.
Ear Prosthesis: An auricular (ear) prosthesis is custom made by an anaplastologist to mirror the other ear. Prosthetic ears can appear very realistic. They require a few minutes of daily care. They are typically made of silicone, which is colored to match the surrounding skin, and can be attached using either adhesive or titanium screws inserted into the skull, to which the prosthetic is attached with a magnetic or bar/clip type system. These screws are the same as the BAHA (bone anchored hearing aid) screws and can be placed simultaneously. The biggest advantage over any surgery is a prosthetic ear that matches the natural ear as closely as possible. The biggest disadvantage is the daily care involved and knowing that the prosthesis is not real. In 2022, the successful transplantation of a 3D-bioprinted auricle made from the microtia patient's own cells was reported, also achieving a first in 3D bioprinting for transplants.
Related conditions
Aural atresia is the underdevelopment of the middle ear and canal and usually occurs in conjunction with microtia. Atresia occurs because patients with microtia may not have an external opening to the ear canal. However, the cochlea and other inner ear structures are usually present. The grade of microtia usually correlates to the degree of development of the middle ear.
Microtia is usually isolated, but may occur in conjunction with hemifacial microsomia, Goldenhar Syndrome or Treacher-Collins Syndrome. It is also occasionally associated with kidney abnormalities (rarely life-threatening) and jaw problems, and more rarely, heart defects and vertebral deformities.
Notable cases
Paul Stanley, vocalist and rhythm guitarist of Kiss, was born with grade III microtia of his right ear.
References
Further reading
Bennun RD, Mulliken JB, Kaban LB, Murray JE (December 1985). "Microtia: a microform of hemifacial microsomia". Plast. Reconstr. Surg. 76 (6): 859–65. doi:10.1097/00006534-198512000-00010. PMID 4070453. S2CID 25652076.
Thorne, Charles (2013). "Ear Reconstruction: Microtia". Grabb & Smith's Plastic Surgery, 7th ed. Pages 283–294.
== External links == |
Liver failure | Liver failure is the inability of the liver to perform its normal synthetic and metabolic functions as part of normal physiology. Two forms are recognised, acute and chronic (cirrhosis). Recently, a third form of liver failure known as acute-on-chronic liver failure (ACLF) is increasingly being recognized.
Acute
Acute liver failure is defined as "the rapid development of hepatocellular dysfunction, specifically coagulopathy and mental status changes (encephalopathy) in a patient without known prior liver disease".
The disease process is associated with the development of a coagulopathy of liver aetiology, and clinically apparent altered level of consciousness due to hepatic encephalopathy. Several important measures are immediately necessary when the patient presents for medical attention. The diagnosis of acute liver failure is based on a physical exam, laboratory findings, patient history, and past medical history to establish mental status changes, coagulopathy, rapidity of onset, and absence of known prior liver disease respectively.
The exact definition of "rapid" is somewhat questionable, and different sub-divisions exist, which are based on the time from onset of first hepatic symptoms to onset of encephalopathy. One scheme defines "acute hepatic failure" as the development of encephalopathy within 26 weeks of the onset of any hepatic symptoms. This is sub-divided into "fulminant hepatic failure", which requires onset of encephalopathy within 8 weeks, and "subfulminant", which describes onset of encephalopathy after 8 weeks but before 26 weeks. Another scheme defines "hyperacute" as onset within 7 days, "acute" as onset between 7 and 28 days, and "subacute" as onset between 28 days and 24 weeks.
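The second timing scheme above maps directly onto a classification function. A sketch under the stated cutoffs, with 24 weeks taken as 168 days; the function name is illustrative:

    def classify_alf_onset(days_to_encephalopathy):
        """Classify onset per the hyperacute/acute/subacute scheme in the
        text: <7 days, 7-28 days, 28 days to 24 weeks."""
        if days_to_encephalopathy < 7:
            return "hyperacute"
        if days_to_encephalopathy <= 28:
            return "acute"
        if days_to_encephalopathy <= 168:  # 24 weeks
            return "subacute"
        return "outside the acute liver failure time frame"

    print(classify_alf_onset(14))  # acute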
Chronic
Chronic liver failure usually occurs in the context of cirrhosis, itself potentially the result of many possible causes, such as excessive alcohol intake, hepatitis B or C, autoimmune, hereditary and metabolic causes (such as iron or copper overload, steatohepatitis or non-alcoholic fatty liver disease).
Acute on chronic
"Acute on chronic liver failure (ACLF)" is said to exist when someone with chronic liver disease develops features of liver failure. A number of underlying causes may precipitate this, such as alcohol misuse or infection. People with ACLF can be critically ill and require intensive care treatment, and occasionally a liver transplant. Mortality with treatment is 50%.
References
== External links == |
Kayser–Fleischer ring | Kayser–Fleischer rings (KF rings) are dark rings that appear to encircle the cornea of the eye. They are due to copper deposition in Descemet's membrane as a result of particular liver diseases. They are named after German ophthalmologists Bernhard Kayser and Bruno Fleischer who first described them in 1902 and 1903. Initially thought to be due to the accumulation of silver, they were first demonstrated to contain copper in 1934.
Presentation
The rings, which consist of copper deposits where the cornea meets the sclera, in Descemet's membrane, first appear as a crescent at the top of the cornea. Eventually, a second crescent forms below, at the "six o'clock position", and ultimately completely encircles the cornea.
Associations
Kayser–Fleischer rings are a sign of Wilson's disease, which involves abnormal copper handling by the liver resulting in copper accumulation in the body and is characterised by abnormalities of the basal ganglia of the brain, liver cirrhosis, splenomegaly, involuntary movements, muscle rigidity, psychiatric disturbances, dystonia and dysphagia. The combination of neurological symptoms, a low blood ceruloplasmin level and KF rings is diagnostic of Wilson's disease.
Other causes of KF rings are cholestasis (obstruction of the bile ducts), primary biliary cirrhosis and "cryptogenic" cirrhosis (cirrhosis in which no cause can be identified).
Diagnosis
As Kayser–Fleischer rings do not cause any symptoms, it is common for them to be identified during investigations for other medical conditions. In certain situations, they are actively sought; in that case, the early stages may be detected by slit lamp examination before they become visible to the naked eye.
See also
Fleischer ring
Hudson-Stahli line
Limbal ring
References
== External links == |
Chondrolysis | Chondrolysis (ICD code M94.3) is the breakdown of cartilage. It can occur as a result of trauma (traumatic chondrolysis). Intra-articular infusions of certain local anesthetic agents such as bupivacaine, lidocaine, ropivacaine and levobupivacaine can also lead to this effect.
See also
Chondritis
Osteochondritis
Relapsing polychondritis
References
External links
Chondrolysis at Radiopedia |
Emotional lability | In medicine and psychology, emotional lability is a sign or symptom typified by exaggerated changes in mood or affect in quick succession. Sometimes the emotions expressed outwardly are very different from how the person feels on the inside. These strong emotions can be a disproportionate response to something that happened, but other times there might be no trigger at all. The person experiencing emotional lability usually feels like they do not have control over their emotions. For example, someone might cry uncontrollably in response to any strong emotion even if they do not feel sad or unhappy.
Emotional lability is seen or reported in various conditions including borderline personality disorder, histrionic personality disorder, post-traumatic stress disorder, hypomanic or manic episodes of bipolar disorder, and neurological disorders or brain injury (where it is termed pseudobulbar affect), such as after a stroke. It has sometimes been found to have been a harbinger, or early warning, of certain forms of thyroid disease. Emotional lability also results from intoxication with certain substances, such as alcohol and benzodiazepines. It is also an associated feature of ADHD and autism.
Children who display a high degree of emotional lability generally have low frustration tolerance and frequent crying spells or tantrums. During preschool, ADHD with emotional lability is associated with increased impairment and may be a sign of internalizing problems or multiple comorbid disorders. Children who are neglected are more likely to experience emotional dysregulation, including emotional lability.
Potential triggers of emotional lability include excessive tiredness, stress or anxiety, overstimulated senses (too much noise, being in large crowds, etc.), being around others exhibiting strong emotions, very sad or funny situations (such as jokes, movies, certain stories or books), death of a loved one, or other situations that elicit stress or strong emotions.
== References == |
Pain | Pain is a distressing feeling often caused by intense or damaging stimuli. The International Association for the Study of Pain defines pain as "an unpleasant sensory and emotional experience associated with, or resembling that associated with, actual or potential tissue damage." In medical diagnosis, pain is regarded as a symptom of an underlying condition.
Pain motivates the individual to withdraw from damaging situations, to protect a damaged body part while it heals, and to avoid similar experiences in the future. Most pain resolves once the noxious stimulus is removed and the body has healed, but it may persist despite removal of the stimulus and apparent healing of the body. Sometimes pain arises in the absence of any detectable stimulus, damage or disease.
Pain is the most common reason for physician consultation in most developed countries. It is a major symptom in many medical conditions, and can interfere with a person's quality of life and general functioning. Simple pain medications are useful in 20% to 70% of cases. Psychological factors such as social support, cognitive behavioral therapy, excitement, or distraction can affect pain's intensity or unpleasantness.
In some debates regarding physician-assisted suicide or euthanasia, pain has been used as an argument to permit people who are terminally ill to end their lives.
Etymology
First attested in English in 1297, the word peyn comes from the Old French peine, in turn from Latin poena meaning "punishment, penalty" (also meaning "torment, hardship, suffering" in Late Latin) and that from Greek ποινή (poine), generally meaning "price paid, penalty, punishment".
Classification
The International Association for the Study of Pain recommends using specific features to describe a patient's pain (a simple way to record these features is sketched after this list):
region of the body involved (e.g. abdomen, lower limbs),
system whose dysfunction may be causing the pain (e.g., nervous, gastrointestinal),
duration and pattern of occurrence,
intensity, and
cause
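One way to record these five features in code is a small data structure; the field names and example values below are illustrative, not an official IASP schema:

    from dataclasses import dataclass

    @dataclass
    class PainDescription:
        """Container for the five IASP descriptive features above;
        field names are illustrative."""
        region: str               # e.g. "abdomen", "lower limbs"
        system: str               # e.g. "nervous", "gastrointestinal"
        duration_and_pattern: str
        intensity: int            # e.g. a 0-10 numeric rating
        cause: str

    print(PainDescription("lower limbs", "nervous", "constant for 4 months", 6, "unknown"))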
Chronic versus acute
Pain is usually transitory, lasting only until the noxious stimulus is removed or the underlying damage or pathology has healed, but some painful conditions, such as rheumatoid arthritis, peripheral neuropathy, cancer and idiopathic pain, may persist for years. Pain that lasts a long time is called "chronic" or "persistent", and pain that resolves quickly is called "acute". Traditionally, the distinction between acute and chronic pain has relied upon an arbitrary interval of time between onset and resolution; the two most commonly used markers being 3 months and 6 months since the onset of pain, though some theorists and researchers have placed the transition from acute to chronic pain at 12 months. Others apply "acute" to pain that lasts less than 30 days, "chronic" to pain of more than six months' duration, and "subacute" to pain that lasts from one to six months. A popular alternative definition of "chronic pain", involving no arbitrarily fixed duration, is "pain that extends beyond the expected period of healing". Chronic pain may be classified as "cancer-related" or "benign."
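The 30-day/six-month convention mentioned above is easy to state as code. A sketch only — the text stresses that these cutoffs are arbitrary and that other markers (3, 6, or 12 months) are also used; six months is taken as 182 days here:

    def classify_pain_duration(days):
        """One convention from the text: acute (<30 days), subacute
        (one to six months), chronic (more than six months)."""
        if days < 30:
            return "acute"
        if days <= 182:  # ~six months
            return "subacute"
        return "chronic"

    print(classify_pain_duration(90))  # subacute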
Allodynia
Allodynia is pain experienced in response to a normally painless stimulus. It has no biological function and is classified by stimuli into dynamic mechanical, punctate and static.
Phantom
Phantom pain is pain felt in a part of the body that has been amputated, or from which the brain no longer receives signals. It is a type of neuropathic pain.
The prevalence of phantom pain in upper limb amputees is nearly 82%, and in lower limb amputees is 54%. One study found that eight days after amputation, 72% of patients had phantom limb pain, and six months later, 67% reported it. Some amputees experience continuous pain that varies in intensity or quality; others experience several bouts of pain per day, or it may reoccur less often. It is often described as shooting, crushing, burning or cramping. If the pain is continuous for a long period, parts of the intact body may become sensitized, so that touching them evokes pain in the phantom limb. Phantom limb pain may accompany urination or defecation.
Local anesthetic injections into the nerves or sensitive areas of the stump may relieve pain for days, weeks, or sometimes permanently, despite the drug wearing off in a matter of hours; and small injections of hypertonic saline into the soft tissue between vertebrae produce local pain that radiates into the phantom limb for ten minutes or so and may be followed by hours, weeks or even longer of partial or total relief from phantom pain. Vigorous vibration or electrical stimulation of the stump, or current from electrodes surgically implanted onto the spinal cord, all produce relief in some patients. Mirror box therapy produces the illusion of movement and touch in a phantom limb which in turn may cause a reduction in pain.
Paraplegia, the loss of sensation and voluntary motor control after serious spinal cord damage, may be accompanied by girdle pain at the level of the spinal cord damage, visceral pain evoked by a filling bladder or bowel, or, in five to ten per cent of paraplegics, phantom body pain in areas of complete sensory loss. This phantom body pain is initially described as burning or tingling but may evolve into severe crushing or pinching pain, or the sensation of fire running down the legs or of a knife twisting in the flesh. Onset may be immediate or may not occur until years after the disabling injury. Surgical treatment rarely provides lasting relief.
Breakthrough
Breakthrough pain is transitory pain that comes on suddenly and is not alleviated by the patient's regular pain management. It is common in cancer patients, who often have background pain that is generally well-controlled by medications but who also sometimes experience bouts of severe pain that from time to time "breaks through" the medication. The characteristics of breakthrough cancer pain vary from person to person and according to the cause. Management of breakthrough pain can entail intensive use of opioids, including fentanyl.
Asymbolia and insensitivity
The ability to experience pain is essential for protection from injury, and recognition of the presence of injury. Episodic analgesia may occur under special circumstances, such as in the excitement of sport or war: a soldier on the battlefield may feel no pain for many hours from a traumatic amputation or other severe injury. Although unpleasantness is an essential part of the IASP definition of pain, it is possible to induce a state described as intense pain devoid of unpleasantness in some patients, with morphine injection or psychosurgery. Such patients report that they have pain but are not bothered by it; they recognize the sensation of pain but suffer little, or not at all. Indifference to pain can also rarely be present from birth; these people have normal nerves on medical investigations, and find pain unpleasant, but do not avoid repetition of the pain stimulus.

Insensitivity to pain may also result from abnormalities in the nervous system. This is usually the result of acquired damage to the nerves, such as spinal cord injury, diabetes mellitus (diabetic neuropathy), or leprosy in countries where that disease is prevalent. These individuals are at risk of tissue damage and infection due to undiscovered injuries. People with diabetes-related nerve damage, for instance, sustain poorly-healing foot ulcers as a result of decreased sensation.

A much smaller number of people are insensitive to pain due to an inborn abnormality of the nervous system, known as "congenital insensitivity to pain". Children with this condition incur carelessly-repeated damage to their tongues, eyes, joints, skin, and muscles. Some die before adulthood, and others have a reduced life expectancy. Most people with congenital insensitivity to pain have one of five hereditary sensory and autonomic neuropathies (which include familial dysautonomia and congenital insensitivity to pain with anhidrosis). These conditions feature decreased sensitivity to pain together with other neurological abnormalities, particularly of the autonomic nervous system. A very rare syndrome with isolated congenital insensitivity to pain has been linked with mutations in the SCN9A gene, which codes for a sodium channel (Nav1.7) necessary in conducting pain nerve stimuli.
Functional effects
Experimental subjects challenged by acute pain and patients in chronic pain experience impairments in attention control, working memory, mental flexibility, problem solving, and information processing speed. Acute and chronic pain are also associated with increased depression, anxiety, fear, and anger.
If I have matters right, the consequences of pain will include direct physical distress, unemployment, financial difficulties, marital disharmony, and difficulties in concentration and attention…
On subsequent negative emotion
Although pain is considered to be aversive and unpleasant and is therefore usually avoided, a meta-analysis that summarized and evaluated numerous studies from various psychological disciplines found a reduction in negative affect following pain. Across studies, participants who were subjected to acute physical pain in the laboratory subsequently reported feeling better than those in non-painful control conditions, a finding also reflected in physiological parameters. A potential mechanism to explain this effect is provided by the opponent-process theory.
Theory
Historical
Before the relatively recent discovery of neurons and their role in pain, various body functions were proposed to account for pain. There were several competing early theories of pain among the ancient Greeks: Hippocrates, for instance, believed that it was due to an imbalance in vital fluids. In the 11th century, Avicenna theorized that there were a number of feeling senses, including touch, pain and titillation.
In 1644, René Descartes theorized that pain was a disturbance that passed along nerve fibers until the disturbance reached the brain. Descartes's work, along with Avicenna's, prefigured the 19th-century development of specificity theory. Specificity theory saw pain as "a specific sensation, with its own sensory apparatus independent of touch and other senses". Another theory that came to prominence in the 18th and 19th centuries was intensive theory, which conceived of pain not as a unique sensory modality, but as an emotional state produced by stronger than normal stimuli such as intense light, pressure or temperature. By the mid-1890s, specificity was backed mostly by physiologists and physicians, and the intensive theory was mostly backed by psychologists. However, after a series of clinical observations by Henry Head and experiments by Max von Frey, the psychologists migrated to specificity almost en masse, and by century's end most textbooks on physiology and psychology were presenting pain specificity as fact.
Modern
Some sensory fibers do not differentiate between noxious and non-noxious stimuli, while others, nociceptors, respond only to noxious, high intensity stimuli. At the peripheral end of the nociceptor, noxious stimuli generate currents that, above a given threshold, send signals along the nerve fiber to the spinal cord. The "specificity" of a nociceptor (whether it responds to thermal, chemical or mechanical features of its environment) is determined by which ion channels it expresses at its peripheral end. Dozens of different types of nociceptor ion channels have so far been identified, and their exact functions are still being determined.

The pain signal travels from the periphery to the spinal cord along A-delta and C fibers. Because the A-delta fiber is thicker than the C fiber, and is thinly sheathed in an electrically insulating material (myelin), it carries its signal faster (5–30 m/s) than the unmyelinated C fiber (0.5–2 m/s). Pain evoked by the A-delta fibers is described as sharp and is felt first. This is followed by a duller pain, often described as burning, carried by the C fibers. These A-delta and C fibers enter the spinal cord via Lissauer's tract and connect with spinal cord nerve fibers in the central gelatinous substance of the spinal cord. These spinal cord fibers then cross the cord via the anterior white commissure and ascend in the spinothalamic tract. Before reaching the brain, the spinothalamic tract splits into the lateral, neospinothalamic tract and the medial, paleospinothalamic tract. The neospinothalamic tract carries the fast, sharp A-delta signal to the ventral posterolateral nucleus of the thalamus. The paleospinothalamic tract carries the slow, dull, C-fiber pain signal. Some of the paleospinothalamic fibers peel off in the brain stem, connecting with the reticular formation or midbrain periaqueductal gray, and the remainder terminate in the intralaminar nuclei of the thalamus.

Pain-related activity in the thalamus spreads to the insular cortex (thought to embody, among other things, the feeling that distinguishes pain from other homeostatic emotions such as itch and nausea) and anterior cingulate cortex (thought to embody, among other things, the affective/motivational element, the unpleasantness of pain), and pain that is distinctly located also activates primary and secondary somatosensory cortex.

Spinal cord fibers dedicated to carrying A-delta fiber pain signals, and others that carry both A-delta and C fiber pain signals to the thalamus, have been identified. Other spinal cord fibers, known as wide dynamic range neurons, respond to A-delta and C fibers, but also to the much larger, more heavily myelinated A-beta fibers that carry touch, pressure and vibration signals. Ronald Melzack and Patrick Wall introduced their gate control theory in the 1965 Science article "Pain Mechanisms: A New Theory". The authors proposed that the thin C and A-delta (pain) and large diameter A-beta (touch, pressure, vibration) nerve fibers carry information from the site of injury to two destinations in the dorsal horn of the spinal cord, and that A-beta fiber signals acting on inhibitory cells in the dorsal horn can reduce the intensity of pain signals sent to the brain.
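The conduction velocities quoted above account for the two-phase character of pain. The sketch below is illustrative arithmetic only: it converts those velocities into arrival times over an assumed one-metre path (a hypothetical example distance, not a figure from the article).

```python
# Illustrative arithmetic only: converts the conduction velocities quoted
# above (A-delta: 5-30 m/s, C fiber: 0.5-2 m/s) into arrival times.
# The 1.0 m path length is an assumed example distance, not a quoted figure.

def arrival_time_ms(distance_m: float, velocity_m_per_s: float) -> float:
    """Milliseconds for a signal to travel distance_m at velocity_m_per_s."""
    return 1000.0 * distance_m / velocity_m_per_s

DISTANCE_M = 1.0  # hypothetical path from periphery toward the spinal cord

# Fastest and slowest arrival for each fiber type:
print(f"A-delta: {arrival_time_ms(DISTANCE_M, 30.0):.0f}-"
      f"{arrival_time_ms(DISTANCE_M, 5.0):.0f} ms")     # ~33-200 ms
print(f"C fiber: {arrival_time_ms(DISTANCE_M, 2.0):.0f}-"
      f"{arrival_time_ms(DISTANCE_M, 0.5):.0f} ms")     # ~500-2000 ms
# The two ranges do not overlap, consistent with sharp "first" pain
# preceding dull "second" pain.
```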
Three dimensions of pain
In 1968 Ronald Melzack and Kenneth Casey described chronic pain in terms of its three dimensions:
"sensory-discriminative" (sense of the intensity, location, quality and duration of the pain),
"affective-motivational" (unpleasantness and urge to escape the unpleasantness), and
"cognitive-evaluative" (cognitions such as appraisal, cultural values, distraction and hypnotic suggestion).They theorized that pain intensity (the sensory discriminative dimension) and unpleasantness (the affective-motivational dimension) are not simply determined by the magnitude of the painful stimulus, but "higher" cognitive activities can influence perceived intensity and unpleasantness. Cognitive activities may affect both sensory and affective experience or they may modify primarily the affective-motivational dimension. Thus, excitement in games or war appears to block both the sensory-discriminative and affective-motivational dimensions of pain, while suggestion and placebos may modulate only the affective-motivational dimension and leave the sensory-discriminative dimension relatively undisturbed. (p. 432) The paper ends with a call to action: "Pain can be treated not only by trying to cut down the sensory input by anesthetic block, surgical intervention and the like, but also by influencing the motivational-affective and cognitive factors as well." (p. 435)
Evolutionary and behavioral role
Pain is part of the body's defense system, producing a reflexive retraction from the painful stimulus, and tendencies to protect the affected body part while it heals, and avoid that harmful situation in the future. It is an important part of animal life, vital to healthy survival. People with congenital insensitivity to pain have reduced life expectancy.

In The Greatest Show on Earth: The Evidence for Evolution, biologist Richard Dawkins addresses the question of why pain should have the quality of being painful. He describes the alternative as a mental raising of a "red flag". To argue why that red flag might be insufficient, Dawkins argues that drives must compete with one another within living beings. The most "fit" creature would be the one whose pains are well balanced. Those pains which mean certain death when ignored will become the most powerfully felt. The relative intensities of pain, then, may resemble the relative importance of that risk to our ancestors. This resemblance will not be perfect, however, because natural selection can be a poor designer. This may have maladaptive results such as supernormal stimuli.

Pain, however, does not only wave a "red flag" within living beings but may also act as a warning sign and a call for help to other living beings. Especially in humans, who readily helped each other in case of sickness or injury throughout their evolutionary history, pain might be shaped by natural selection to be a credible and convincing signal of need for relief, help, and care.

Idiopathic pain (pain that persists after the trauma or pathology has healed, or that arises without any apparent cause) may be an exception to the idea that pain is helpful to survival, although some psychodynamic psychologists argue that such pain is psychogenic, enlisted as a protective distraction to keep dangerous emotions unconscious.
Thresholds
In pain science, thresholds are measured by gradually increasing the intensity of a stimulus in a procedure called quantitative sensory testing, in which stimuli such as electric current, thermal (heat or cold), mechanical (pressure, touch, vibration), ischemic, or chemical stimuli are applied to the subject to evoke a response. The "pain perception threshold" is the point at which the subject begins to feel pain, and the "pain threshold intensity" is the stimulus intensity at which the stimulus begins to hurt. The "pain tolerance threshold" is reached when the subject acts to stop the pain.
Assessment
A person's self-report is the most reliable measure of pain. Some health care professionals may underestimate pain severity. A definition of pain widely employed in nursing, emphasizing its subjective nature and the importance of believing patient reports, was introduced by Margo McCaffery in 1968: "Pain is whatever the experiencing person says it is, existing whenever he says it does". To assess intensity, the patient may be asked to locate their pain on a scale of 0 to 10, with 0 being no pain at all and 10 the worst pain they have ever felt. Quality can be established by having the patient complete the McGill Pain Questionnaire, indicating which words best describe their pain.
Visual analogue scale
The visual analogue scale is a common, reproducible tool in the assessment of pain and pain relief. The scale is a continuous line anchored by verbal descriptors, one for each extreme of pain, where a higher score indicates greater pain intensity. It is usually 10 cm in length, with no intermediate descriptors, so as to avoid the clustering of scores around a preferred numeric value. When applied as a pain descriptor, these anchors are often "no pain" and "worst imaginable pain". Cut-offs for pain classification have been recommended as no pain (0–4 mm), mild pain (5–44 mm), moderate pain (45–74 mm) and severe pain (75–100 mm).
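The recommended cut-offs above map directly onto a simple threshold classifier. The function below is a hypothetical helper written for illustration only (it is not part of any clinical software); it encodes the cut-offs exactly as quoted.

```python
# Hypothetical helper, for illustration only: maps a mark on a 100 mm
# visual analogue scale to the cut-off categories recommended above.

def classify_vas(score_mm: float) -> str:
    """Classify a visual analogue scale score (0-100 mm) into a pain category."""
    if not 0 <= score_mm <= 100:
        raise ValueError("VAS score must be between 0 and 100 mm")
    if score_mm <= 4:
        return "no pain"        # 0-4 mm
    if score_mm <= 44:
        return "mild pain"      # 5-44 mm
    if score_mm <= 74:
        return "moderate pain"  # 45-74 mm
    return "severe pain"        # 75-100 mm

# Quick self-check with one value from each band:
assert classify_vas(3) == "no pain"
assert classify_vas(30) == "mild pain"
assert classify_vas(60) == "moderate pain"
assert classify_vas(90) == "severe pain"
```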
Multidimensional pain inventory
The Multidimensional Pain Inventory (MPI) is a questionnaire designed to assess the psychosocial state of a person with chronic pain. Combining the MPI characterization of the person with their IASP five-category pain profile is recommended for deriving the most useful case description.
Assessment in non-verbal people
Non-verbal people cannot use words to tell others that they are experiencing pain. However, they may be able to communicate through other means, such as blinking, pointing, or nodding. With a non-communicative person, observation becomes critical, and specific behaviors can be monitored as pain indicators. Behaviors such as facial grimacing and guarding (trying to protect part of the body from being bumped or touched) indicate pain, as do an increase or decrease in vocalizations, changes in routine behavior patterns, and mental status changes. Patients experiencing pain may exhibit withdrawn social behavior and possibly experience a decreased appetite and decreased nutritional intake. A change in condition that deviates from baseline, such as moaning with movement or when manipulating a body part, and limited range of motion are also potential pain indicators. In patients who possess language but are incapable of expressing themselves effectively, such as those with dementia, an increase in confusion or display of aggressive behaviors or agitation may signal that discomfort exists, and further assessment is necessary. Changes in behavior may be noticed by caregivers who are familiar with the person's normal behavior.

Infants do feel pain, but lack the language needed to report it, and so communicate distress by crying. A non-verbal pain assessment should be conducted involving the parents, who will notice changes in the infant which may not be obvious to the health care provider. Pre-term babies are more sensitive to painful stimuli than those carried to full term.

Another approach, when pain is suspected, is to give the person treatment for pain, and then watch to see whether the suspected indicators of pain subside.
Other reporting barriers
The way in which one experiences and responds to pain is related to sociocultural characteristics, such as gender, ethnicity, and age. An aging adult may not respond to pain in the same way that a younger person might. Their ability to recognize pain may be blunted by illness or the use of medication. Depression may also keep older adults from reporting they are in pain. Decline in self-care may also indicate the older adult is experiencing pain. They may be reluctant to report pain because they do not want to be perceived as weak, may feel it is impolite or shameful to complain, or may feel the pain is a form of deserved punishment.

Cultural barriers may also affect the likelihood of reporting pain. Patients may feel that certain treatments go against their religious beliefs. They may not report pain because they feel it is a sign that death is near. Many people fear the stigma of addiction, and avoid pain treatment so as not to be prescribed potentially addicting drugs. Many Asians do not want to lose respect in society by admitting they are in pain and need help, believing the pain should be borne in silence, while other cultures feel they should report pain immediately to receive immediate relief.

Gender can also be a perceived factor in reporting pain. Gender differences can be the result of social and cultural expectations, with women expected to be more emotional and show pain, and men more stoic. As a result, female pain is often stigmatized, leading to less urgent treatment of women based on social expectations of their ability to accurately report it. This leads to extended emergency room wait times for women and frequent dismissal of their ability to accurately report pain.
Diagnostic aid
Pain is a symptom of many medical conditions. Knowing the time of onset, location, intensity, pattern of occurrence (continuous, intermittent, etc.), exacerbating and relieving factors, and quality (burning, sharp, etc.) of the pain will help the examining physician to accurately diagnose the problem. For example, chest pain described as extreme heaviness may indicate myocardial infarction, while chest pain described as tearing may indicate aortic dissection.
Physiological measurement
Functional magnetic resonance imaging brain scanning has been used to measure pain, and correlates well with self-reported pain.
Mechanisms
Nociceptive
Nociceptive pain is caused by stimulation of sensory nerve fibers that respond to stimuli approaching or exceeding harmful intensity (nociceptors), and may be classified according to the mode of noxious stimulation. The most common categories are "thermal" (e.g. heat or cold), "mechanical" (e.g. crushing, tearing, shearing, etc.) and "chemical" (e.g. iodine in a cut or chemicals released during inflammation). Some nociceptors respond to more than one of these modalities and are consequently designated polymodal.
Nociceptive pain may also be classed according to the site of origin and divided into "visceral", "deep somatic" and "superficial somatic" pain. Visceral structures (e.g., the heart, liver and intestines) are highly sensitive to stretch, ischemia and inflammation, but relatively insensitive to other stimuli that normally evoke pain in other structures, such as burning and cutting. Visceral pain is diffuse, difficult to locate and often referred to a distant, usually superficial, structure. It may be accompanied by nausea and vomiting and may be described as sickening, deep, squeezing, and dull. Deep somatic pain is initiated by stimulation of nociceptors in ligaments, tendons, bones, blood vessels, fasciae and muscles, and is dull, aching, poorly-localized pain. Examples include sprains and broken bones. Superficial somatic pain is initiated by activation of nociceptors in the skin or other superficial tissue, and is sharp, well-defined and clearly located. Examples of injuries that produce superficial somatic pain include minor wounds and minor (first degree) burns.
Neuropathic
Neuropathic pain is caused by damage or disease affecting any part of the nervous system involved in bodily feelings (the somatosensory system). Neuropathic pain may be divided into peripheral, central, or mixed (peripheral and central) neuropathic pain. Peripheral neuropathic pain is often described as "burning", "tingling", "electrical", "stabbing", or "pins and needles". Bumping the "funny bone" elicits acute peripheral neuropathic pain.
Some manifestations of neuropathic pain include: traumatic neuropathy, tic douloureux, painful diabetic neuropathy, and postherpetic neuralgia.
Nociplastic
Nociplastic pain is pain characterized by a changed nociception, but without evidence of real or threatened tissue damage, and without disease or damage in the somatosensory system.
Psychogenic
Psychogenic pain, also called psychalgia or somatoform pain, is pain caused, increased or prolonged by mental, emotional or behavioral factors. Headache, back pain and stomach pain are sometimes diagnosed as psychogenic. Those affected are often stigmatized, because both medical professionals and the general public tend to think that pain from a psychological source is not "real". However, specialists consider that it is no less actual or hurtful than pain from any other source.

People with long-term pain frequently display psychological disturbance, with elevated scores on the Minnesota Multiphasic Personality Inventory scales of hysteria, depression and hypochondriasis (the "neurotic triad"). Some investigators have argued that it is this neuroticism that causes acute pain to turn chronic, but clinical evidence points in the other direction, to chronic pain causing neuroticism. When long-term pain is relieved by therapeutic intervention, scores on the neurotic triad and anxiety fall, often to normal levels. Self-esteem, often low in chronic pain patients, also shows improvement once pain has resolved.
Management
Pain can be treated through a variety of methods. The most appropriate method depends upon the situation. Management of chronic pain can be difficult and may require the coordinated efforts of a pain management team, which typically includes medical practitioners, clinical pharmacists, clinical psychologists, physiotherapists, occupational therapists, physician assistants, and nurse practitioners.

Inadequate treatment of pain is widespread throughout surgical wards, intensive care units, and accident and emergency departments, in general practice, in the management of all forms of chronic pain including cancer pain, and in end of life care. This neglect extends to all ages, from newborns to the medically frail elderly. In the US, African and Hispanic Americans are more likely than others to suffer unnecessarily while in the care of a physician, and women's pain is more likely to be undertreated than men's.

The International Association for the Study of Pain advocates that the relief of pain should be recognized as a human right, that chronic pain should be considered a disease in its own right, and that pain medicine should have the full status of a medical specialty. It is a specialty only in China and Australia at this time. Elsewhere, pain medicine is a subspecialty under disciplines such as anesthesiology, physiatry, neurology, palliative medicine and psychiatry. In 2011, Human Rights Watch warned that tens of millions of people worldwide are still denied access to inexpensive medications for severe pain.
Medication
Acute pain is usually managed with medications such as analgesics and anesthetics. Caffeine, when added to pain medications such as ibuprofen, may provide some additional benefit. Ketamine can be used instead of opioids for short-term pain. Pain medications can cause paradoxical side effects, such as opioid-induced hyperalgesia (severe pain caused by long-term opioid use).
Sugar (sucrose) taken by mouth reduces pain in newborn babies undergoing some medical procedures (lancing of the heel, venipuncture, and intramuscular injections). Sugar does not remove pain from circumcision, and it is unknown whether sugar reduces pain for other procedures. Sugar did not affect pain-related electrical activity in the brains of newborns one second after the heel lance procedure. Sweet liquid by mouth moderately reduces the rate and duration of crying caused by immunization injection in children between one and twelve months of age.
Psychological
Individuals with more social support experience less cancer pain, take less pain medication, report less labor pain and are less likely to use epidural anesthesia during childbirth, or suffer from chest pain after coronary artery bypass surgery.

Suggestion can significantly affect pain intensity. About 35% of people report marked relief after receiving a saline injection they believed to be morphine. This placebo effect is more pronounced in people who are prone to anxiety, and so anxiety reduction may account for some of the effect, but it does not account for all of it. Placebos are more effective for intense pain than mild pain, and they produce progressively weaker effects with repeated administration. It is possible for many with chronic pain to become so absorbed in an activity or entertainment that the pain is no longer felt, or is greatly diminished.

A number of meta-analyses have found clinical hypnosis to be effective in controlling pain associated with diagnostic and surgical procedures in both adults and children, as well as pain associated with cancer and childbirth. A 2007 review of 13 studies found evidence for the efficacy of hypnosis in the reduction of chronic pain under some conditions, though the number of patients enrolled in the studies was low, raising issues related to the statistical power to detect group differences, and most lacked credible controls for placebo or expectation. The authors concluded that "although the findings provide support for the general applicability of hypnosis in the treatment of chronic pain, considerably more research will be needed to fully determine the effects of hypnosis for different chronic-pain conditions."
Alternative medicine
An analysis of the 13 highest quality studies of pain treatment with acupuncture, published in January 2009, concluded there was little difference in the effect of real, faked and no acupuncture. However, more recent reviews have found some benefit. Additionally, there is tentative evidence for a few herbal medicines. There has been some interest in the relationship between vitamin D and pain, but the evidence so far from controlled trials for such a relationship, other than in osteomalacia, is inconclusive.

For chronic (long-term) lower back pain, spinal manipulation produces tiny, clinically insignificant, short-term improvements in pain and function, compared with sham therapy and other interventions. For acute (short-term) lower back pain, spinal manipulation produces the same outcome as other treatments, such as general practitioner care, pain-relief drugs, physical therapy, and exercise.
Epidemiology
Pain is the main reason for visiting an emergency department in more than 50% of cases, and is present in 30% of family practice visits. Several epidemiological studies have reported widely varying prevalence rates for chronic pain, ranging from 12 to 80% of the population. It becomes more common as people approach death. A study of 4,703 patients found that 26% had pain in the last two years of life, increasing to 46% in the last month.

A survey of 6,636 children (0–18 years of age) found that, of the 5,424 respondents, 54% had experienced pain in the preceding three months. A quarter reported having experienced recurrent or continuous pain for three months or more, and a third of these reported frequent and intense pain. The intensity of chronic pain was higher for girls, and girls' reports of chronic pain increased markedly between ages 12 and 14.
Society and culture
Physical pain is a universal experience, and a strong motivator of human and animal behavior. As such, physical pain is used politically in relation to various issues such as pain management policy, drug control, animal rights or animal welfare, torture, and pain compliance. In some debates, such as those regarding physician-assisted suicide or euthanasia, pain has been used as an argument to permit people who are terminally ill to end their lives. The deliberate infliction of pain and the medical management of pain are both important aspects of biopower, a concept that encompasses the "set of mechanisms through which the basic biological features of the human species became the object of a political strategy".

In various contexts, the deliberate infliction of pain in the form of corporal punishment is used as retribution for an offence, for the purpose of disciplining or reforming a wrongdoer, or to deter attitudes or behaviour deemed unacceptable. In Western societies, the intentional infliction of severe pain (torture) was principally used to extract confession prior to its abolition in the latter part of the 19th century. Torture as a means to punish the citizen has been reserved for offences posing a severe threat to the social fabric (for example, treason). The administration of torture on bodies othered by the cultural narrative, those observed as not full members of society, saw a resurgence in the 20th century, possibly due to heightened warfare.

Many cultures use painful ritual practices as a catalyst for psychological transformation. The use of pain to transition to a cleansed and purified state is seen in Catholic self-flagellation practices, or personal catharsis in neo-primitive body suspension experiences.

Beliefs about pain play an important role in sporting cultures. Pain may be viewed positively, exemplified by the "no pain, no gain" attitude, with pain seen as an essential part of training. Sporting culture tends to normalise experiences of pain and injury and to celebrate athletes who "play hurt". Pain has psychological, social, and physical dimensions, and is greatly influenced by cultural factors.
Non-humans
René Descartes argued that animals lack consciousness and therefore do not experience pain and suffering in the way that humans do. Bernard Rollin of Colorado State University, the principal author of two U.S. federal laws regulating pain relief for animals, wrote that researchers remained unsure into the 1980s as to whether animals experience pain, and that veterinarians trained in the U.S. before 1989 were simply taught to ignore animal pain. The ability of invertebrate species of animals, such as insects, to feel pain and suffering is unclear. Specialists believe that all vertebrates can feel pain, and that certain invertebrates, like the octopus, may also. The presence of pain in an animal cannot be known for certain, but it can be inferred through physical and behavioral reactions, such as paw withdrawal from various noxious mechanical stimuli in rodents.
See also
Feeling, a perceptual state of conscious experience.
Hedonic adaptation, the tendency to quickly return to a relatively stable level of happiness despite major positive or negative events
Pain (philosophy), the branch of philosophy concerned with suffering and physical pain
Pain and suffering, the legal term for the physical and emotional stress caused from an injury
Explanatory notes
References
Casey K (2019). Chasing Pain: The Search for a Neurobiological Mechanism. New York: Oxford University Press. ISBN 978-0-19-088023-1.
External links
Pain at Curlie
"Pain", Stanford Encyclopedia of Philosophy |
Castleman disease | Castleman disease (CD) describes a group of rare lymphoproliferative disorders that involve enlarged lymph nodes, and a broad range of inflammatory symptoms and laboratory abnormalities. Whether Castleman disease should be considered an autoimmune disease, cancer, or infectious disease is currently unknown.
Castleman disease includes at least three distinct subtypes: unicentric Castleman disease (UCD), human herpesvirus 8 associated multicentric Castleman disease (HHV-8-associated MCD), and idiopathic multicentric Castleman disease (iMCD). These are differentiated by the number and location of affected lymph nodes and the presence of human herpesvirus 8, a known causative agent in a portion of cases. Correctly classifying the Castleman disease subtype is important, as the three subtypes vary significantly in symptoms, clinical findings, disease mechanism, treatment approach, and prognosis. All forms involve overproduction of cytokines and other inflammatory proteins by the body's immune system, as well as characteristic abnormal lymph node features that can be observed under the microscope. In the United States, approximately 4,300 to 5,200 new cases are diagnosed each year.

Castleman disease is named after Benjamin Castleman, who first described the disease in 1956. The Castleman Disease Collaborative Network is the largest organization dedicated to accelerating research and treatment for Castleman disease as well as improving patient care.
Classification
Castleman disease (CD) can involve one or more enlarged lymph nodes in a single region of the body (unicentric CD, UCD) or it can involve multiple enlarged lymph node regions (multicentric CD, MCD). Doctors classify the disease into different categories based on the number of enlarged lymph node regions and the underlying cause. There are four established subtypes of Castleman disease:
Unicentric Castleman disease
Unicentric Castleman disease (UCD) involves a single enlarged lymph node, or multiple enlarged lymph nodes within a single region of the body, that display microscopic features consistent with Castleman disease. It is also sometimes called localized Castleman disease.

The exact cause of UCD is unknown, but it appears to be due to a genetic change that occurs in the lymph node tissue, most similar to a benign tumor. In most cases of UCD, individuals exhibit no symptoms (asymptomatic). UCD symptoms tend to be mild and occur secondary to compression of surrounding structures by rapidly enlarging lymph nodes. Less commonly, some UCD patients can experience systemic inflammatory symptoms such as fever, fatigue, excessive sweating, weight loss, and skin rash, as well as laboratory abnormalities such as low hemoglobin and elevated C-reactive protein. These symptoms are more typically seen in MCD.
Surgery is considered by experts to be the first-line treatment option for all cases of UCD. Sometimes, however, removing the enlarged lymph node(s) is not possible; if surgical excision is not possible, treatment is recommended for symptomatic patients. If symptoms are due to compression, then rituximab is recommended. If symptoms are due to an inflammatory syndrome, then anti-interleukin-6 (IL-6) therapy is recommended. If these treatments are not effective, then radiation may be needed.
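Read as a decision sequence, the recommendations above look roughly like the sketch below. This is an illustration only, with invented function and parameter names; the observation branch for asymptomatic, unresectable disease is an assumption inferred from the text, and real management decisions rest with treating clinicians.

```python
# Schematic sketch of the UCD treatment sequence described above.
# All names are invented for illustration; this is not clinical software.

def ucd_first_step(resectable: bool, symptomatic: bool,
                   compression_symptoms: bool) -> str:
    """Suggest a first treatment step for unicentric Castleman disease."""
    if resectable:
        return "surgical excision"   # first-line for all UCD
    if not symptomatic:
        return "observation"         # assumption: the text recommends
                                     # treatment only for symptomatic patients
    if compression_symptoms:
        return "rituximab"           # symptoms due to compression
    return "anti-IL-6 therapy"       # symptoms due to inflammatory syndrome
    # If these treatments fail, radiation may be needed (not modeled here).

print(ucd_first_step(resectable=False, symptomatic=True,
                     compression_symptoms=True))  # rituximab
```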
Multicentric Castleman disease (MCD)
In this form, patients have multiple regions of enlarged lymph nodes with characteristic microscopic features, flu-like symptoms, and organ dysfunction due to excessive cytokines or inflammatory proteins. MCD is further classified into three categories based on underlying cause: POEMS-associated MCD, HHV-8-associated MCD, and idiopathic MCD (iMCD).
POEMS-associated MCD
A cancerous cell population found in patients with POEMS syndrome (polyneuropathy, organomegaly, endocrinopathy, monoclonal plasma cell disorder, and skin changes) can cause MCD in a fraction of patients by producing cytokines that initiate a cytokine storm. In patients who have both POEMS syndrome and MCD, treatment should be directed at the POEMS syndrome.
HHV-8-associated multicentric Castleman disease (HHV-8-MCD)
HHV-8-associated MCD patients have multiple regions of enlarged lymph nodes and episodic inflammatory symptoms due to uncontrolled infection with HHV-8. HHV-8-associated MCD is most commonly diagnosed in HIV-infected or otherwise immunocompromised individuals who are unable to control HHV-8 infection. Thus, HHV-8-associated MCD patients may experience additional symptoms related to their HIV infection or other conditions. First-line treatment of HHV-8-associated MCD is rituximab, a drug used to eliminate a type of immune cell called the B lymphocyte. It is highly effective for HHV-8-associated MCD, but occasionally antivirals and/or cytotoxic chemotherapies are needed.
Idiopathic multicentric Castleman disease (iMCD)
Idiopathic multicentric Castleman disease (iMCD), the most common form of MCD, has no known cause. There is no evidence of POEMS syndrome, HHV-8, or any other cancer or infectious disease. Though all forms of MCD involve excessive production of cytokines and a cytokine storm, iMCD has important differences in symptoms, disease course, and treatment from POEMS-associated MCD and HHV-8-associated MCD. First-line treatment for iMCD is anti-IL-6 therapy with siltuximab (or tocilizumab, if siltuximab is not available). Siltuximab is the only FDA-approved treatment for iMCD, and patients who respond to siltuximab tend to have long-term responses. In critically ill patients, chemotherapy and corticosteroids are recommended if the patient is demonstrating disease progression while on siltuximab. Approximately half of iMCD patients do not improve with anti-IL-6 therapy. In patients for whom siltuximab is not effective, other treatments such as rituximab and sirolimus can be used.

iMCD can be further sub-classified into three clinical subgroups (a schematic summary of the full subtype logic follows the list below):
iMCD with TAFRO syndrome (iMCD-TAFRO): characterized by acute episodes of Thrombocytopenia, Anasarca, Fever, Renal dysfunction or myelofibrosis, and Organomegaly.
iMCD with idiopathic plasmacytic lymphadenopathy (iMCD-IPL): characterized by thrombocytosis, hypergammaglobulinemia, and a more chronic disease course.
iMCD, not otherwise specified (iMCD-NOS): diagnosed in iMCD patients who do not have iMCD-TAFRO or iMCD-IPL.
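Taken together, the subtype definitions in this section amount to a small decision tree. The sketch below is schematic only: the function and parameter names are invented for the example, and actual classification requires histopathology and clinical judgment, not a handful of booleans.

```python
# Schematic decision tree for the Castleman disease subtypes described
# above. Invented names, for illustration only; not a diagnostic tool.

def classify_castleman(enlarged_regions: int, hhv8_positive: bool,
                       poems_syndrome: bool, tafro: bool = False,
                       ipl: bool = False) -> str:
    """Return the Castleman disease subtype suggested by the inputs."""
    if enlarged_regions <= 1:
        return "UCD"                   # single lymph node region
    if hhv8_positive:
        return "HHV-8-associated MCD"  # uncontrolled HHV-8 infection
    if poems_syndrome:
        return "POEMS-associated MCD"  # treat the underlying POEMS syndrome
    # Idiopathic MCD, further split into the clinical subgroups above
    if tafro:
        return "iMCD-TAFRO"
    if ipl:
        return "iMCD-IPL"
    return "iMCD-NOS"

print(classify_castleman(enlarged_regions=3, hhv8_positive=True,
                         poems_syndrome=False))  # HHV-8-associated MCD
```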
Pathology
Castleman disease is defined by a range of characteristic features seen on microscopic analysis (histology) of tissue from enlarged lymph nodes. Variations in the lymph node tissues of patients with CD have led to 4 histological classifications:
Plasmacytic: increased number of follicles with large hyperplastic germinal centers and sheetlike plasmacytosis (increased number of plasma cells). Germinal centers may also show regressed features.
Hyaline vascular: regressed germinal centers, follicular dendritic cell prominence or dysplasia, hypervascularity in interfollicular regions, sclerotic vessels, prominent mantle zones with an "onion-skin" appearance.
Hypervascular: similar to hyaline vascular features, but seen in iMCD rather than UCD. Includes regressed germinal centers, follicular dendritic cell prominence, hypervascularity in interfollicular regions, and prominent mantle zones with an "onion-skin" appearance.
Mixed: presence of a combination of hyaline vascular/hypervascular and plasmacytic features in the same lymph node.

UCD most commonly demonstrates hyaline vascular features, but plasmacytic features or a mix of features may also be seen. iMCD more commonly demonstrates plasmacytic features, but hypervascular features or a mix of features are also seen. All cases of HHV-8-associated MCD are thought to demonstrate plasmablastic features (similar to plasmacytic features, but with plasmablasts present). The clinical utility of subtyping Castleman disease by histologic features is uncertain, as histologic subtypes do not consistently predict disease severity or treatment response, and guidelines recommend against using histologic subtype to guide treatment decisions. Staining for latency-associated nuclear antigen (LANA-1), a marker of HHV-8 infection, should be performed in all forms of Castleman disease but is positive only in HHV-8-associated MCD.

Diseases other than Castleman disease can present with similar histologic findings in lymph node tissue, including:
Infectious causes: Epstein-Barr virus, human immunodeficiency virus, tuberculosis
Autoimmune diseases: Systemic lupus erythematosus, rheumatoid arthritis
Lymphoproliferative disorders: lymphoma, autoimmune lymphoproliferative syndrome
History
Unicentric Castleman disease was first described in a case series by Benjamin Castleman in 1956. By 1984, a number of case reports had been published describing a multicentric variant of the disease, with some reports describing an association with Kaposi's sarcoma. In 1995, the association between HHV-8 and Castleman disease was described in patients with HIV. A formal diagnostic definition of the disease was established in 2016, allowing for better understanding and the ability to appropriately track and research CD. In 2017, international consensus diagnostic criteria for idiopathic multicentric Castleman disease (iMCD) were established for the first time. In 2018, the first treatment guidelines for iMCD were established. In 2020, the first evidence-based diagnostic criteria and treatment guidelines were established for unicentric Castleman disease.
World Castleman Disease Day was established in 2018 and is held every year on July 23. The date combines the month of Benjamin Castleman's initial case series describing Castleman disease, which was published in July 1956, with the day of the diagnostic criteria for idiopathic multicentric Castleman disease, which were published in the journal Blood on March 23, 2017.
Castleman Disease Collaborative Network
The Castleman Disease Collaborative Network (CDCN) was founded in 2012 and is the largest organization focused on Castleman disease. It is a global initiative dedicated to research and treatment for Castleman disease (CD) and to improving survival for all patients with CD. The CDCN works to achieve this by facilitating collaboration among the global research community, mobilizing resources, strategically investing in high-impact research, and supporting patients and their loved ones.
References
Further reading
Fajgenbaum, David (2019). Chasing My Cure: A Doctors Race to Turn Hope into Action; a Memoir. New York: Ballantine Books. ISBN 9781524799618. OCLC 1144129598. Book by the founder of the Castleman Disease Collaborative Network. |
Encounter | Encounter or Encounters may refer to:
Film
Encounter, a 1997 Indian film by Nimmala Shankar
Encounter (2013 film), a Bengali film
Encounter (2018 film), an American sci-fi film
Encounter (2021 film), a British sci-fi film
Encounters, a section of the Berlin International Film Festival
Encounters (film), a 1993 Australian thriller
Music
Encounter!, a 1968 album by Pepper Adams
Coleman Hawkins Encounters Ben Webster or Encounters, an album by Coleman Hawkins and Ben Webster
Encounter (Mark Holden album) (1977)
Encounter (Michael Stearns album) (1988)
Place Vendôme (Swingle Singers with MJQ album) or Encounter
Encounter (Trio 3 album) (2000)
Encounters (album), a 1984 album by Mal Waldron
Encounters, an album by Sylvan
"Encounter", a 2016 song by Chris Quilala from Split the Sky
"Encounter", a song in the video game Metal Gear Solid
Ships
HMS Encounter (1846)
HMS Encounter (1873), a wooden-screw corvette
HMS Encounter (H10), an E-class destroyer launched in 1934
HMAS Encounter (1902), a Challenger-class protected cruiser
HMAS Encounter (naval base), a former naval depot in South Australia
Television
Encounter (1958 TV series), a 1958 CBC/ABC anthology television series
Encounter (1960 TV program), a Canadian talk show television program
Encounter (1970 TV program), a Canadian political affairs television program
Encounters (TV series), a 1994 American television series
Encounter (Indian TV series) (2014)
Encounter (South Korean TV series) (2018)
Other uses
Encounter (psychology), an authentic, congruent meeting between individuals
Encounter (magazine), a literary magazine
Encounter Books, a book publisher in the United States, named after the magazine
Encounter (game), an international network of active urban games
Encounter (video game), a 1983 game by Novagen
Encounters (anthology), a 2004 anthology of speculative fiction
Encounter (sculpture), a bronze sculpture by Bruce Beasley
See also
Close encounter, a claimed UFO sighting
The Encounter (disambiguation)
Encounter Bay (disambiguation)
Police encounter
Encounter killings by police, killings in gun fights with the police in the Indian subcontinent, sometimes extrajudicial killings
HMS Encounter, a list of ships
All pages with titles containing Encounter |
Rickettsiosis | A rickettsiosis is a disease caused by intracellular bacteria.
Cause
Rickettsioses can be divided into a spotted fever group (SFG) and a typhus group (TG). In the past, rickettsioses were considered to be caused by species of Rickettsia. However, scrub typhus is still considered a rickettsiosis, even though the causative organism has been reclassified from Rickettsia tsutsugamushi to Orientia tsutsugamushi. Examples of rickettsioses include typhus, both endemic and epidemic, Rocky Mountain spotted fever, and rickettsialpox.
Organisms involved include Rickettsia parkeri. Many new causative organisms have been identified in the last few decades. Most are in the genus Rickettsia, but the agent of scrub typhus is in the genus Orientia.
Diagnosis
No rapid laboratory tests are available to diagnose rickettsial diseases early in the course of illness, and serologic assays usually take 10–12 days to become positive. Research indicates that swabs of eschars may be used for molecular detection of rickettsial infections.
Treatment
Doxycycline has been used in the treatment of rickettsial infection.
References
External links
Media related to Rickettsioses at Wikimedia Commons |
Pulmonary heart disease | Pulmonary heart disease, also known as cor pulmonale, is the enlargement and failure of the right ventricle of the heart as a response to increased vascular resistance (such as from pulmonic stenosis) or high blood pressure in the lungs.

Chronic pulmonary heart disease usually results in right ventricular hypertrophy (RVH), whereas acute pulmonary heart disease usually results in dilatation. Hypertrophy is an adaptive response to a long-term increase in pressure: individual muscle cells grow larger (in thickness) and change to drive the increased contractile force required to move the blood against greater resistance. Dilatation is a stretching (in length) of the ventricle in response to acute increased pressure.

To be classified as pulmonary heart disease, the cause must originate in the pulmonary circulation system; RVH due to a systemic defect is not classified as pulmonary heart disease. Two causes are vascular changes as a result of tissue damage (e.g. disease, hypoxic injury) and chronic hypoxic pulmonary vasoconstriction. If left untreated, death may result. The heart and lungs are intricately related; whenever the heart is affected by a disease, the lungs risk following, and vice versa.
Signs and symptoms
The symptoms/signs of pulmonary heart disease (cor pulmonale) can be non-specific and depend on the stage of the disorder, and can include blood backing up into the systemic venous system, including the hepatic vein. As pulmonary heart disease progresses, most individuals will develop symptoms like:
Shortness of breath
Wheezing
Cyanosis
Ascites
Jaundice
Enlargement of the liver
Raised jugular venous pressure (JVP)
Third heart sound
Intercostal recession
Presence of abnormal heart sounds
Causes
The causes of pulmonary heart disease (cor pulmonale) are the following:
Acute respiratory distress syndrome (ARDS)
COPD
Primary pulmonary hypertension
Blood clots in lungs
Kyphoscoliosis
Interstitial lung disease
Cystic fibrosis
Sarcoidosis
Obstructive sleep apnea (untreated)
Sickle cell anemia
Bronchopulmonary dysplasia (in infants)
Pathophysiology
The pathophysiology of pulmonary heart disease (cor pulmonale) has long centered on the idea that an increase in right ventricular afterload causes RV failure (pulmonary vasoconstriction, anatomic disruption of the pulmonary vascular bed, and increased blood viscosity are usually involved); however, most of the time the right ventricle adjusts to a chronic pressure overload. According to Voelkel et al., pressure overload is the initial step in the changes to the RV; other factors include:
Ischemia
Inflammation
Oxidative damage
Epigenetics
Abnormal cardiac energetics
Diagnosis
Investigations available to determine the cause of cor pulmonale include the following:
Chest x-ray – right ventricular hypertrophy, right atrial dilatation, prominent pulmonary artery
ECG – right ventricular hypertrophy, dysrhythmia, P pulmonale (characteristic peaked P wave)
Thrombophilia screen – to detect chronic venous thromboembolism (proteins C and S, antithrombin III, homocysteine levels)
Differential diagnosis
The diagnosis of pulmonary heart disease is not easy as both lung and heart disease can produce similar symptoms. Therefore, the differential diagnosis (DDx) should assess:
Atrial myxoma
Congestive heart failure
Constrictive pericarditis
Infiltrative cardiomyopathies
Right heart failure (right ventricular infarction)
Ventricular septal defect
Treatment
The treatment for cor pulmonale can include the following: antibiotics, expectorants, oxygen therapy, diuretics, digitalis, vasodilators, and anticoagulants. Some studies have indicated that Shenmai injection combined with conventional treatment is safe and effective for chronic cor pulmonale.

Treatment requires diuretics (to decrease strain on the heart). Oxygen is often required to resolve the shortness of breath; oxygen to the lungs also helps relax the blood vessels and eases right heart failure. When wheezing is present, the majority of individuals require a bronchodilator. A variety of medications have been developed to relax the blood vessels in the lung; calcium channel blockers are used but work in only a few cases, and according to NICE they are not recommended for use at all.

Anticoagulants are used when venous thromboembolism is present. Venesection is used in severe secondary polycythemia (caused by hypoxia), and improves symptoms, though an improvement in survival has not been proven. Finally, single or double lung transplantation is also an option in extreme cases of cor pulmonale.
Epidemiology
Pulmonary heart disease (cor pulmonale) accounts for 7% of all heart disease in the U.S. According to Weitzenblum et al., the mortality related to cor pulmonale is not easy to ascertain, as cor pulmonale is a complication of COPD.
See also
Bilharzial cor pulmonale.
References
Further reading
Forfia, Paul R.; Vaidya, Anjali; Wiegers, Susan E. (2013-01-01). "Pulmonary heart disease: The heart-lung interaction and its impact on patient phenotypes". Pulmonary Circulation. 3 (1): 5–19. doi:10.4103/2045-8932.109910. ISSN 2045-8932. PMC 3641739. PMID 23662171.
Taussig, Lynn M.; Landau, Louis I. (2008-04-09). Pediatric Respiratory Medicine. Elsevier Health Sciences. ISBN 978-0323070720.
Jamal, K.; Fleetham, J. A.; Thurlbeck, W. M. (1990-05-01). "Cor Pulmonale: Correlation with Central Airway Lesions, Peripheral Airway Lesions, Emphysema, and Control of Breathing". American Review of Respiratory Disease. 141 (5_pt_1): 1172–1177. doi:10.1164/ajrccm/141.5_Pt_1.1172. ISSN 0003-0805. PMID 2339840.
External links
Interstitial lung disease | Interstitial lung disease (ILD), or diffuse parenchymal lung disease (DPLD), is a group of respiratory diseases affecting the interstitium (the tissue and space around the alveoli (air sacs)) of the lungs. It concerns alveolar epithelium, pulmonary capillary endothelium, basement membrane, and perivascular and perilymphatic tissues. It may occur when an injury to the lungs triggers an abnormal healing response. Ordinarily, the body generates just the right amount of tissue to repair damage, but in interstitial lung disease, the repair process is disrupted and the tissue around the air sacs (alveoli) becomes scarred and thickened. This makes it more difficult for oxygen to pass into the bloodstream. The disease presents with shortness of breath, nonproductive coughing, fatigue, and weight loss, which tend to develop slowly, over several months. The average survival time for someone with this disease is between three and five years. The term ILD is used to distinguish these diseases from obstructive airways diseases.
There are specific types in children, known as children's interstitial lung diseases. The acronym ChILD is sometimes used for this group of diseases.

Prolonged ILD may result in pulmonary fibrosis, but this is not always the case. Idiopathic pulmonary fibrosis is interstitial lung disease for which no obvious cause can be identified (idiopathic) and is associated with typical findings both radiographic (basal and pleural-based fibrosis with honeycombing) and pathologic (temporally and spatially heterogeneous fibrosis, histopathologic honeycombing, and fibroblastic foci).
In 2015, interstitial lung disease, together with pulmonary sarcoidosis, affected 1.9 million people. They resulted in 122,000 deaths.
Causes
An ILD may be classified as to whether its cause is not known (idiopathic) or known (secondary).
Idiopathic
Idiopathic interstitial pneumonia is the term given to ILDs with an unknown cause. They represent the majority of cases of interstitial lung diseases (up to two-thirds of cases). They were subclassified by the American Thoracic Society in 2002 into 7 subgroups:
Idiopathic pulmonary fibrosis (IPF): the most common subgroup
Desquamative interstitial pneumonia (DIP)
Acute interstitial pneumonia (AIP): also known as Hamman-Rich syndrome
Nonspecific interstitial pneumonia (NSIP)
Respiratory bronchiolitis-associated interstitial lung disease (RB-ILD)
Cryptogenic organizing pneumonia (COP): also known by the older name bronchiolitis obliterans organizing pneumonia (BOOP)
Lymphoid interstitial pneumonia (LIP)
Secondary
Secondary ILDs are those diseases with a known etiology, including:
Connective tissue and Autoimmune diseases
Sarcoidosis
Rheumatoid arthritis
Systemic lupus erythematosus
Systemic sclerosis
Polymyositis
Dermatomyositis
Antisynthetase syndrome
Inhaled substances (Pneumoconiosis)
Inorganic
Silicosis
Asbestosis
Berylliosis
Industrial printing chemicals (e.g. carbon black, ink mist)
Organic
Hypersensitivity pneumonitis (extrinsic allergic alveolitis)
Drug-induced
Antibiotics
Chemotherapeutic drugs
Antiarrhythmic agents
Cigarette smoking
Infection
Coronavirus disease 2019
Atypical pneumonia
Pneumocystis pneumonia (PCP)
Tuberculosis
Chlamydia trachomatis
Respiratory Syncytial Virus
Malignancy
Lymphangitic carcinomatosis
Predominantly in children
Diffuse developmental disorders
Growth abnormalities reflecting deficient alveolarisation
Infant conditions of undefined cause
ILD related to alveolar surfactant region
Diagnosis
Investigation is tailored towards the symptoms and signs. A proper and detailed history looking for occupational exposures and for signs of the conditions listed above is the first and probably the most important part of the workup in patients with interstitial lung disease. Pulmonary function tests usually show a restrictive defect with decreased diffusion capacity (DLCO).

A lung biopsy is required if the clinical history and imaging are not clearly suggestive of a specific diagnosis, or if malignancy cannot otherwise be ruled out. In cases where a lung biopsy is indicated, a trans-bronchial biopsy is usually unhelpful, and a surgical lung biopsy is often required.
X-rays
Chest radiography is usually the first test to detect interstitial lung diseases, but the chest radiograph can be normal in up to 10% of patients, especially early in the disease process.

High resolution CT (HRCT) of the chest is the preferred modality, and differs from routine CT of the chest. Conventional (regular) chest CT examines 7–10 mm slices obtained at 10 mm intervals; high resolution CT examines 1–1.5 mm slices at 10 mm intervals using a high spatial frequency reconstruction algorithm. HRCT therefore provides approximately 10 times more resolution than conventional chest CT, allowing it to reveal details that cannot otherwise be visualized.

Radiologic appearance alone is not adequate, however, and should be interpreted in the clinical context, keeping in mind the temporal profile of the disease process. Interstitial lung diseases can be classified according to radiologic patterns.
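As a rough back-of-the-envelope check on the protocol comparison above, the figures quoted in the text can be turned into a simple calculation. The scan length below is an assumed example value, and the slice thicknesses are mid-range values from the quoted ranges.

```python
# Back-of-the-envelope comparison of conventional chest CT vs HRCT, using
# the figures quoted above. The scan length is an assumed example value.

SCAN_LENGTH_MM = 250          # assumed cranio-caudal coverage
INTERVAL_MM = 10              # both protocols sample every 10 mm

CONVENTIONAL_SLICE_MM = 8.5   # mid-range of the quoted 7-10 mm
HRCT_SLICE_MM = 1.25          # mid-range of the quoted 1-1.5 mm

slices = SCAN_LENGTH_MM // INTERVAL_MM   # same slice count for both protocols
print(f"Slices acquired: {slices}")
print(f"Slice-thickness ratio: {CONVENTIONAL_SLICE_MM / HRCT_SLICE_MM:.1f}x")
# The much thinner slices, combined with the high spatial frequency
# reconstruction algorithm, underlie the roughly tenfold resolution
# advantage attributed to HRCT above.
```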
Pattern of opacities
ConsolidationAcute: Alveolar hemorrhage syndromes, acute eosinophilic pneumonia, acute interstitial pneumonia, cryptogenic organizing pneumonia
Chronic: Chronic eosinophilic pneumonia, cryptogenic organizing pneumonia, lymphoproliferative disorders, pulmonary alveolar proteinosis, sarcoidosis
Linear or reticular opacitiesAcute: Pulmonary edema
Chronic: Idiopathic pulmonary fibrosis, connective tissue-associated interstitial lung diseases, asbestosis, sarcoidosis, hypersensitivity pneumonitis, drug-induced lung disease
Small nodules
Acute: Hypersensitivity pneumonitis
Chronic: Hypersensitivity pneumonitis, sarcoidosis, silicosis, coal workers' pneumoconiosis, respiratory bronchiolitis, alveolar microlithiasis
Cystic airspaces
Chronic: Pulmonary Langerhans cell histiocytosis, pulmonary lymphangioleiomyomatosis, honeycomb lung caused by IPF or other diseases
Ground glass opacities
Acute: Alveolar hemorrhage syndromes, pulmonary edema, hypersensitivity pneumonitis, acute inhalational exposures, drug-induced lung diseases, acute interstitial pneumonia
Chronic: Nonspecific interstitial pneumonia, respiratory bronchiolitis associated interstitial lung disease, desquamative interstitial pneumonia, drug-induced lung diseases, pulmonary alveolar proteinosis
Thickened alveolar septa
Acute: Pulmonary edema
Chronic: Lymphangitic carcinomatosis, pulmonary alveolar proteinosis, sarcoidosis, pulmonary veno-occlusive disease
Distribution
Upper lung predominance
Pulmonary Langerhans cell histiocytosis, silicosis, coal workers' pneumoconiosis, carmustine-related pulmonary fibrosis, respiratory bronchiolitis-associated interstitial lung disease
Lower lung predominance
Idiopathic pulmonary fibrosis, pulmonary fibrosis associated with connective tissue diseases, asbestosis, chronic aspiration
Central predominance (perihilar)
Sarcoidosis, berylliosis
Peripheral predominance
Idiopathic pulmonary fibrosis, chronic eosinophilic pneumonia, cryptogenic organizing pneumonia
Associated findings
Pleural effusion or thickening
Pulmonary edema, connective tissue diseases, asbestosis, lymphangitic carcinomatosis, lymphoma, lymphangioleiomyomatosis, drug-induced lung diseases
Lymphadenopathy
Sarcoidosis, silicosis, berylliosis, lymphangitic carcinomatosis, lymphoma, lymphocytic interstitial pneumonia
Genetic testing
For some types of paediatric ILDs and a few forms of adult ILDs, genetic causes have been identified. These may be identified by blood tests. For a limited number of cases this is a definite advantage, as a precise molecular diagnosis can be made, frequently removing the need for a lung biopsy. Testing is available for
ILDs related to alveolar surfactant region
Surfactant-Protein-B Deficiency (Mutations in SFTPB)
Surfactant-Protein-C Deficiency (Mutations in SFTPC)
ABCA3-Deficiency (Mutations in ABCA3)
Brain Lung Thyroid Syndrome (Mutations in TTF1)
Congenital Pulmonary Alveolar Proteinosis (Mutations in CSF2RA, CSF2RB)
Diffuse developmental disorder
Alveolar Capillary Dysplasia (Mutations in FoxF1)
Idiopathic pulmonary fibrosis
Mutations in telomerase reverse transcriptase (TERT)
Mutations in telomerase RNA component (TERC)
Mutations in the regulator of telomere elongation helicase 1 (RTEL1)
Mutations in poly(A)-specific ribonuclease (PARN)
Treatment
ILD is not a single disease but encompasses many different pathological processes, so treatment is different for each disease. If a specific occupational exposure cause is found, the person should avoid that environment. If a drug cause is suspected, that drug should be discontinued. Many cases due to unknown or connective tissue-based causes are treated with corticosteroids, such as prednisolone. Some people respond to immunosuppressant treatment. Oxygen therapy at home is recommended in those with significantly low oxygen levels. Pulmonary rehabilitation appears to be useful, with the benefits being sustained longer term through improvement in exercise capacity, dyspnoea, and quality of life. Lung transplantation is an option if the ILD progresses despite therapy in appropriately selected patients with no other contraindications. On October 16, 2014, the Food and Drug Administration approved a new drug for the treatment of idiopathic pulmonary fibrosis (IPF). This drug, Ofev (nintedanib), is marketed by Boehringer Ingelheim Pharmaceuticals, Inc. It has been shown to slow the decline of lung function, although it has not been shown to reduce mortality or improve lung function. The estimated cost of the drug per year is approximately $94,000.
References
External links
00736 at CHORUS |
Lactation failure | In breastfeeding, lactation failure may refer to:
Primary lactation failure, a cause of low milk supply in breastfeeding mothers
Cessation of breastfeeding before the mother had planned to stop, usually as a result of breastfeeding difficulties
Low milk supply in general
Lactation failure can result in neonatal jaundice.
References
External links |
Pathogenic bacteria | Pathogenic bacteria are bacteria that can cause disease. This article focuses on the bacteria that are pathogenic to humans. Most species of bacteria are harmless and are often beneficial but others can cause infectious diseases. The number of these pathogenic species in humans is estimated to be fewer than a hundred. By contrast, several thousand species are part of the gut flora present in the digestive tract.
The body is continually exposed to many species of bacteria, including beneficial commensals, which grow on the skin and mucous membranes, and saprophytes, which grow mainly in the soil and in decaying matter. The blood and tissue fluids contain nutrients sufficient to sustain the growth of many bacteria. The body has defence mechanisms that enable it to resist microbial invasion of its tissues and give it a natural immunity or innate resistance against many microorganisms.
Pathogenic bacteria are specially adapted and endowed with mechanisms for overcoming the normal body defences, and can invade parts of the body, such as the blood, where bacteria are not normally found. Some pathogens invade only the surface epithelium, skin or mucous membrane, but many travel more deeply, spreading through the tissues and disseminating by the lymphatic and blood streams. In some rare cases a pathogenic microbe can infect an entirely healthy person, but infection usually occurs only if the body's defence mechanisms are damaged by some local trauma or by an underlying debilitating condition such as wounding, intoxication, chilling, fatigue, or malnutrition. In many cases, it is important to differentiate infection from colonization, which is when the bacteria are causing little or no harm.
Caused by Mycobacterium tuberculosis bacteria, one of the diseases with the highest disease burden is tuberculosis, which killed 1.4 million people in 2019, mostly in sub-Saharan Africa. Pathogenic bacteria contribute to other globally important diseases, such as pneumonia, which can be caused by bacteria such as Staphylococcus, Streptococcus and Pseudomonas, and foodborne illnesses, which can be caused by bacteria such as Shigella, Campylobacter, and Salmonella. Pathogenic bacteria also cause infections such as tetanus, typhoid fever, diphtheria, syphilis, and leprosy, and are a major cause of high infant mortality rates in developing countries. Most pathogenic bacteria can be grown in cultures and identified by Gram stain and other methods. Bacteria grown in this way are often tested to find which antibiotics will be an effective treatment for the infection. For hitherto unknown pathogens, Koch's postulates are the standard to establish a causative relationship between a microbe and a disease.
Diseases
Each species has specific effects and causes symptoms in people who are infected. Some people who are infected with pathogenic bacteria do not have symptoms. Immunocompromised individuals are more susceptible to pathogenic bacteria.
Pathogenic susceptibility
Some pathogenic bacteria cause disease under certain conditions, such as entry through the skin via a cut, through sexual activity or through a compromised immune function.
Some species of Streptococcus and Staphylococcus are part of the normal skin microbiota and typically reside on healthy skin or in the nasopharyngeal region. Yet these species can potentially initiate skin infections. Streptococcal infections include sepsis, pneumonia, and meningitis. These infections can become serious, creating a systemic inflammatory response resulting in massive vasodilation, shock, and death. Other bacteria are opportunistic pathogens and cause disease mainly in people with immunosuppression or cystic fibrosis. Examples of these opportunistic pathogens include Pseudomonas aeruginosa, Burkholderia cenocepacia, and Mycobacterium avium.
Intracellular
Obligate intracellular parasites (e.g. Chlamydophila, Ehrlichia, Rickettsia) can grow and replicate only inside other cells. Even these intracellular infections may be asymptomatic, requiring an incubation period. An example is Rickettsia, one species of which causes typhus and another Rocky Mountain spotted fever. Chlamydia are intracellular parasites. These pathogens can cause pneumonia or urinary tract infection and may be involved in coronary heart disease. Other groups of intracellular bacterial pathogens include Salmonella, Neisseria, Brucella, Mycobacterium, Nocardia, Listeria, Francisella, Legionella, and Yersinia pestis. These can exist intracellularly, but can also exist outside of host cells.
Infections in specific tissue
Bacterial pathogens often cause infection in specific areas of the body. Others are generalists.
Bacterial vaginosis is a condition of the vaginal microbiota in which an excessive growth of Gardnerella vaginalis and other mostly anaerobic bacteria displace the beneficial Lactobacilli species that maintain healthy vaginal microbial populations.
Bacterial meningitis is a bacterial inflammation of the meninges, which are the protective membranes covering the brain and spinal cord.
Bacterial pneumonia is a bacterial infection of the lungs.
Urinary tract infection is predominantly caused by bacteria. Symptoms include a strong and frequent urge to urinate, pain during urination, and cloudy urine. The most frequent cause is Escherichia coli. Urine is typically sterile but contains a variety of salts and waste products. Bacteria can ascend into the bladder or kidney, causing cystitis and nephritis.
Bacterial gastroenteritis is caused by enteric, pathogenic bacteria. These pathogenic species are usually distinct from the usually harmless bacteria of the normal gut flora, but a different strain of the same species may be pathogenic. The distinction is sometimes difficult, as in the case of Escherichia.
Bacterial skin infections include:
Impetigo is a highly contagious bacterial skin infection commonly seen in children. It is caused by Staphylococcus aureus and Streptococcus pyogenes.
Erysipelas is an acute streptococcal bacterial infection of the deeper skin layers that spreads via the lymphatic system.
Cellulitis is a diffuse inflammation of connective tissue with severe inflammation of dermal and subcutaneous layers of the skin. Cellulitis can be caused by normal skin flora or by contagious contact, and usually occurs through open skin, cuts, blisters, cracks in the skin, insect bites, animal bites, burns, surgical wounds, intravenous drug injection, or sites of intravenous catheter insertion. In most cases it is the skin on the face or lower legs that is affected, though cellulitis can occur in other tissues.
Mechanisms of damage
The symptoms of disease appear as pathogenic bacteria damage host tissues or interfere with their function. The bacteria can damage host cells directly or indirectly by provoking an immune response that inadvertently damages host cells, or by releasing toxins.
Direct
Once pathogens attach to host cells, they can cause direct damage as the pathogens use the host cell for nutrients and produce waste products. For example, Streptococcus mutans, a component of dental plaque, metabolizes dietary sugar and produces acid as a waste product. The acid decalcifies the tooth surface to cause dental caries.
Toxin production
Endotoxins are the lipid portions of lipopolysaccharides that are part of the outer membrane of the cell wall of gram-negative bacteria. Endotoxins are released when the bacteria lyse, which is why symptoms can worsen at first after antibiotic treatment as the bacteria are killed and release their endotoxins. Exotoxins are secreted into the surrounding medium or released when the bacteria die and the cell wall breaks apart.
Indirect
An excessive or inappropriate immune response triggered by an infection may damage host cells.
Survival in host
Nutrients
Iron is required by humans, as well as for the growth of most bacteria. To obtain free iron, some pathogens secrete proteins called siderophores, which take the iron away from iron-transport proteins by binding to the iron even more tightly. Once the iron-siderophore complex is formed, it is taken up by siderophore receptors on the bacterial surface and the iron is brought into the bacterium. Bacterial pathogens also require access to carbon and energy sources for growth. To avoid competition with host cells for glucose, which is the main energy source used by human cells, many pathogens, including the respiratory pathogen Haemophilus influenzae, specialise in using other carbon sources, such as lactate, that are abundant in the human body.
Identification
Typically identification is done by growing the organism in a wide range of cultures which can take up to 48 hours. The growth is then visually or genomically identified. The cultured organism is then subjected to various assays to observe reactions to help further identify species and strain.
Treatment
Bacterial infections may be treated with antibiotics, which are classified as bactericidal if they kill bacteria or bacteriostatic if they just prevent bacterial growth. There are many types of antibiotics, and each class inhibits a process that differs in the pathogen from that found in the host. For example, the antibiotics chloramphenicol and tetracycline inhibit the bacterial ribosome but not the structurally different eukaryotic ribosome, so they exhibit selective toxicity. Antibiotics are used both in treating human disease and in intensive farming to promote animal growth. Both uses may be contributing to the rapid development of antibiotic resistance in bacterial populations. Phage therapy, using bacteriophages, can also be used to treat certain bacterial infections.
Prevention
Infections can be prevented by antiseptic measures such as sterilizing the skin prior to piercing it with the needle of a syringe and by proper care of indwelling catheters. Surgical and dental instruments are also sterilized to prevent infection by bacteria. Disinfectants such as bleach are used to kill bacteria or other pathogens on surfaces to prevent contamination and further reduce the risk of infection. Bacteria in food are killed by cooking to temperatures above 73 °C (163 °F).
List of genera and microscopy features
Many genera contain pathogenic bacterial species. They often possess characteristics that help to classify and organize them into groups. The following is a partial listing.
List of species and clinical characteristics
This is a description of the more common genera and species, presented with their clinical characteristics and treatments.
Genetic transformation
Of the 59 species listed in the table with their clinical characteristics, 11 species (19%) are known to be capable of natural genetic transformation. Natural transformation is a bacterial adaptation for transferring DNA from one cell to another. This process includes the uptake of exogenous DNA from a donor cell by a recipient cell and its incorporation into the recipient cell's genome by recombination. Transformation appears to be an adaptation for repairing damage in the recipient cell's DNA. Among pathogenic bacteria, transformation capability likely serves as an adaptation that facilitates survival and infectivity. The pathogenic bacteria able to carry out natural genetic transformation (of those listed in the table) are Campylobacter jejuni, Enterococcus faecalis, Haemophilus influenzae, Helicobacter pylori, Klebsiella pneumoniae, Legionella pneumophila, Neisseria gonorrhoeae, Neisseria meningitidis, Staphylococcus aureus, Streptococcus pneumoniae and Vibrio cholerae.
See also
Human microbiome project
List of antibiotics
Pathogenic viruses
Notes
References
External links
Bacterial Pathogen Pronunciation by Neal R. Chamberlain, Ph.D. at A.T. Still University
Pathogenic bacteria genomes and related information at PATRIC, a Bioinformatics Resource Center funded by NIAID |
Pneumonitis | Pneumonitis describes general inflammation of lung tissue. Possible causative agents include radiation therapy of the chest, exposure to medications used during chemotherapy, the inhalation of debris (e.g., animal dander), aspiration, herbicides or fluorocarbons, and some systemic diseases. If unresolved, continued inflammation can result in irreparable damage such as pulmonary fibrosis. Pneumonitis is distinguished from pneumonia on the basis of causation as well as its manifestation. Pneumonia can be described as pneumonitis combined with consolidation and exudation of lung tissue due to infection with microorganisms. The distinction can be further understood with pneumonitis as the broader term encompassing respiratory inflammation (incorporating pneumonia and pulmonary fibrosis as major diseases), and pneumonia as a localized infection. For most infections, the immune response of the body is enough to control and clear the infection within a couple of days, but if the tissue and the cells cannot fight off the infection, pus will begin to form in the lungs, which then hardens into lung abscess or suppurative pneumonitis. Patients who are immunodeficient and do not get treated immediately for any type of respiratory infection may develop more severe infections or die. Pneumonitis can be classified into several different specific subcategories, including hypersensitivity pneumonitis, radiation pneumonitis, acute interstitial pneumonitis, and chemical pneumonitis. These all share similar symptoms, but differ in causative agents. Diagnosis of pneumonitis remains challenging, but several different treatment paths (corticosteroids, oxygen therapy, avoidance) have seen success.
Causes
Alveoli are the primary structure affected by pneumonitis. Any particles smaller than 5 microns can enter the alveoli of the lungs. These tiny air sacs facilitate the passage of oxygen from inhaled air to the bloodstream. In the case of pneumonitis, this exchange of oxygen is more difficult since irritants have caused inflammation of the alveoli. Because no single definitive irritant has been determined, there are several possible causes.
Viral infection. Measles can cause severe pneumonitis, and ribavirin has been proposed as a possible treatment. CMV is another cause.
Pneumonia
Radiation therapy
Inhaling chemicals, such as sodium hydroxide
Interstitial lung disease
Sepsis
Adverse reaction to medications
Hypersensitivity to inhaled agents
Inhalation of spores of some species of mushroom (bronchoalveolar allergic syndrome)
Mercury exposure
Smoking
Overexposure to chlorine
Bronchial obstruction (obstructive pneumonitis or post-obstructive pneumonitis)
Ascariasis (during parasite migration)
Aspirin overdose, some antibiotics, and chemotherapy drugs
“Farmer’s lung” and “hot tub lung” are common names for types of hypersensitivity pneumonitis that result from exposure to some types of thermophilic actinomyces, mycobacteria and molds.
Avian proteins in bird feces and feathers
Whole body or chest radiation therapy used for cancer treatment
Symptoms
Physical manifestations of pneumonitis range from mild cold-like symptoms to respiratory failure. Most frequently, those with pneumonitis experience shortness of breath, and sometimes a dry cough. Symptoms usually appear a few hours after exposure and peak at approximately eighteen to twenty-four hours. Other symptoms may include:
Malaise
Fever
Dyspnea
Flushed and/or discolored skin
Sweating
Small and fast inhalations
Without proper treatment, pneumonitis may become chronic pneumonitis, resulting in fibrosis of the lungs and its effects:
Difficulty breathing
Food aversion
Lethargy
End-stage fibrosis and respiratory failure eventually lead to death in cases without proper management of chronic pneumonitis.
Diagnosis
A chest X-ray or CT is necessary to differentiate between pneumonitis and pneumonia of an infectious etiology. Some degree of pulmonary fibrosis may be evident in a CT, which is indicative of chronic pulmonary inflammatory processes. Diagnosis of pneumonitis is often difficult, as it depends on a high degree of clinical suspicion when evaluating a patient with a recent onset of a possible interstitial lung disease. In addition, interpreting pathologic and radiographic test results remains a challenge to clinicians. Pneumonitis is often difficult to recognize and discern from other interstitial lung diseases. Diagnostic procedures currently available include:
Evaluation of patient history and possible exposure to a known causative agent
High-Resolution Computed Tomography (HRCT) consistent with pneumonitis
Bronchoalveolar lavage with lymphocytosis
Lung biopsy consistent with pneumonitis histopathology
Exposure to causative agents of pneumonitis in a specific environment can be confirmed through aero/microbiologic analysis to verify their presence. Subsequent testing of patient serum for evidence of serum-specific IgG antibodies confirms patient exposure. Clinical tests include chest radiography or HRCT, which may show centrilobular nodular and ground-glass opacities with air-trapping in the middle and upper lobes of the lungs. Fibrosis may also be evident. Bronchoalveolar lavage (BAL) findings coinciding with pneumonitis typically include a lymphocytosis with a low CD4:CD8 ratio. Reticular or linear patterns may be observed in diagnostic imaging. Pneumonitis may cause subpleural honeycombing, changing the shape of the air spaces in an image, which may be used to identify the disease. The interlobular septa may also thicken and indicate pneumonitis when viewed on a scan. Histological samples of lung tissue with pneumonitis include the presence of poorly formed granulomas or mononuclear cell infiltrates. The presence of bronchocentric lymphohistiocytic interstitial pneumonia with chronic bronchiolitis and non-necrotising granulomas coincides with pneumonitis. Since pneumonitis manifests in all areas of the lungs, imaging such as chest X-rays and computerized tomography (CT) scans are useful diagnostic tools. While pneumonia is a localized infection, pneumonitis is widespread. A spirometer may also be used to measure pulmonary function.
During external examination, clubbing (swelling of fingertip tissue and increase in angle at the nail bed), and basal crackles may be observed.
For hypersensitivity pneumonitis, diagnosis often relies on blood tests and chest X-rays, and depending on the severity of infection doctors may recommend a bronchoscopy. Blood tests are important for early detection of other causative substances and can eliminate possible causes of the hypersensitivity pneumonitis.
Classification
Pneumonitis can be separated into several distinct categories based upon causative agent.
Hypersensitivity pneumonitis (extrinsic allergic alveolitis) describes the inflammation of alveoli which occurs after inhalation of organic dusts. These particles can be proteins, bacteria, or mold spores and are usually specific to an occupation.
Acute interstitial pneumonitis can result from many different irritants in the lungs and usually resolves within a month.
Chemical pneumonitis is caused by toxic substances reaching the lower airways of the bronchial tree, causing a chemical burn and severe inflammation.
Radiation pneumonitis, also known as radiation-induced lung injury, describes the initial damage done to the lung tissue by ionizing radiation. Radiation, used to treat cancer, can cause pneumonitis when applied to the chest or full body. Radiation pneumonitis occurs in approximately 30% of advanced lung cancer patients treated with radiation therapy.
Aspiration pneumonitis is caused by chemical inhalation of harmful gastric contents; causes include:
Aspiration due to a drug overdose
A lung injury after the inhalation of gastric contents
The development of colonized oropharyngeal material after inhalation
Bacteria entering the lungs
Treatment
Typical treatment for pneumonitis includes conservative use of corticosteroids such as a short course of oral prednisone or methylprednisolone. Inhaled corticosteroids such as fluticasone or budesonide may also be effective for reducing inflammation and preventing re-inflammation on a chronic level by suppressing inflammatory processes that may be triggered by environmental exposures such as allergens. Severe cases of pneumonitis may require corticosteroids and oxygen therapy, as well as elimination of exposure to known irritants. Corticosteroid dose and treatment duration vary from case to case; however, a common regimen begins at 0.5 mg/kg per day for a couple of days before tapering to a smaller dose for several months to a year. Corticosteroids effectively reduce inflammation by switching off several genes activated during an inflammatory reaction. The production of anti-inflammatory proteins, and the degradation of mRNA encoding inflammatory proteins, can also be increased by a high concentration of corticosteroids. These responses can help mitigate the inflammation seen in pneumonitis and reduce symptoms. Certain immune-modulating treatments may be appropriate for patients with chronic pneumonitis. Azathioprine and mycophenolate are two particular treatments that have been associated with an improvement of gas exchange. Patients with chronic pneumonitis may also be evaluated for lung transplantation.
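As a worked illustration of the weight-based regimen above (the 70 kg body mass is an assumed example, not a dosing recommendation):

0.5\ \mathrm{mg/kg/day} \times 70\ \mathrm{kg} = 35\ \mathrm{mg/day}

with the dose then tapered over the following months.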
Images
See also
Hypersensitivity pneumonitis, also known as extrinsic allergic alveolitis (EAA)
Acute Interstitial Pneumonitis
Radiation Pneumonitis
Chemical Pneumonitis
References
External links |
Soft tissue | Soft tissue is all the tissue in the body that is not hardened by the processes of ossification or calcification such as bones and teeth. Soft tissue connects, surrounds or supports internal organs and bones, and includes muscle, tendons, ligaments, fat, fibrous tissue, lymph and blood vessels, fasciae, and synovial membranes.
It is sometimes defined by what it is not – such as "nonepithelial, extraskeletal mesenchyme exclusive of the reticuloendothelial system and glia".
Composition
The characteristic substances inside the extracellular matrix of soft tissue are collagen, elastin and ground substance. Soft tissue is normally highly hydrated because of the ground substance. Fibroblasts are the most common cells responsible for the production of soft tissue fibers and ground substance. Variations of fibroblasts, like chondroblasts, may also produce these substances.
Mechanical characteristics
At small strains, elastin confers stiffness to the tissue and stores most of the strain energy. The collagen fibers are comparatively inextensible and are usually loose (wavy, crimped). With increasing tissue deformation the collagen is gradually stretched in the direction of deformation. When taut, these fibers produce a sharp increase in tissue stiffness. The composite behavior is analogous to a nylon stocking, whose elastic band plays the role of elastin and whose nylon plays the role of collagen. In soft tissues, the collagen limits the deformation and protects the tissues from injury.
Human soft tissue is highly deformable, and its mechanical properties vary significantly from one person to another. Impact testing results showed that the stiffness and the damping resistance of a test subject’s tissue are correlated with the mass, velocity, and size of the striking object. Such properties may be useful for forensics investigation when contusions were induced. When a solid object impacts a human soft tissue, the energy of the impact will be absorbed by the tissues to reduce the effect of the impact or the pain level; subjects with more soft tissue thickness tended to absorb the impacts with less aversion.
Soft tissues have the potential to undergo large deformations and still return to the initial configuration when unloaded, i.e. they are hyperelastic materials, and their stress-strain curve is nonlinear. The soft tissues are also viscoelastic, incompressible and usually anisotropic. Some viscoelastic properties observable in soft tissues are: relaxation, creep and hysteresis. In order to describe the mechanical response of soft tissues, several methods have been used. These methods include: hyperelastic macroscopic models based on strain energy, mathematical fits where nonlinear constitutive equations are used, and structurally based models where the response of a linear elastic material is modified by its geometric characteristics.
Pseudoelasticity
Even though soft tissues have viscoelastic properties, i.e. stress as a function of strain rate, they can be approximated by a hyperelastic model after preconditioning to a load pattern. After some cycles of loading and unloading the material, the mechanical response becomes independent of strain rate:
\mathbf{S} = \mathbf{S}(\mathbf{E}, \dot{\mathbf{E}}) \quad \rightarrow \quad \mathbf{S} = \mathbf{S}(\mathbf{E})
Despite the independence of strain rate, preconditioned soft tissues still present hysteresis, so the mechanical response can be modeled as hyperelastic with different material constants at loading and unloading. By this method the elasticity theory is used to model an inelastic material. Fung called this model pseudoelastic to point out that the material is not truly elastic.
Residual stress
In the physiological state soft tissues usually present residual stress that may be released when the tissue is excised. Physiologists and histologists must be aware of this fact to avoid mistakes when analyzing excised tissues. This retraction usually causes a visual artifact.
Fung-elastic material
Fung developed a constitutive equation for preconditioned soft tissues, which is
W = \frac{1}{2}\left[q + c\left(e^{Q} - 1\right)\right]
with
q = a_{ijkl} E_{ij} E_{kl} \qquad Q = b_{ijkl} E_{ij} E_{kl}
where q and Q are quadratic forms of the Green-Lagrange strains E_ij, and a_ijkl, b_ijkl and c are material constants. W is the strain energy function per unit volume, which is the mechanical strain energy for a given temperature.
Isotropic simplification
The Fung model, simplified with the isotropic hypothesis (same mechanical properties in all directions), can be written in terms of the principal stretches (λ_i):
W = \frac{1}{2}\left[a\left(\lambda_1^{2} + \lambda_2^{2} + \lambda_3^{2} - 3\right) + b\left(e^{c\left(\lambda_1^{2} + \lambda_2^{2} + \lambda_3^{2} - 3\right)} - 1\right)\right]
where a, b and c are material constants.
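To make the stiffening behavior of this isotropic form concrete, the following short Python sketch evaluates W for an incompressible uniaxial stretch; the constants a, b and c are illustrative assumptions, not measured tissue values.

import math

def fung_isotropic_w(l1, l2, l3, a=1.0, b=0.5, c=10.0):
    # Isotropic Fung strain energy:
    #   W = 0.5 * [a*(I1 - 3) + b*(exp(c*(I1 - 3)) - 1)]
    # where I1 = l1^2 + l2^2 + l3^2 is the first invariant of the stretches.
    # The constants a, b, c are illustrative placeholders, not tissue data.
    i1_minus_3 = l1 ** 2 + l2 ** 2 + l3 ** 2 - 3.0
    return 0.5 * (a * i1_minus_3 + b * (math.exp(c * i1_minus_3) - 1.0))

# Incompressible uniaxial stretch: lateral stretches equal 1/sqrt(axial stretch).
for axial in (1.00, 1.05, 1.10, 1.15):
    lateral = 1.0 / math.sqrt(axial)
    print(f"stretch = {axial:.2f}  W = {fung_isotropic_w(axial, lateral, lateral):.4f}")

The exponential term makes W grow rapidly with stretch, mirroring the collagen-driven stiffening described under Mechanical characteristics.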
Simplification for small and big stretches
For small strains, the exponential term is very small, thus negligible.
W = \frac{1}{2} a_{ijkl} E_{ij} E_{kl}
On the other hand, the linear term is negligible when the analysis relies only on large strains.
W = \frac{1}{2} c \left(e^{b_{ijkl} E_{ij} E_{kl}} - 1\right)
Gent-elastic material
W = -\frac{\mu J_m}{2} \ln\left(1 - \frac{\lambda_1^{2} + \lambda_2^{2} + \lambda_3^{2} - 3}{J_m}\right)
where μ > 0 is the shear modulus for infinitesimal strains and J_m > 0 is a stiffening parameter, associated with limiting chain extensibility. This constitutive model cannot be stretched in uniaxial tension beyond a maximal stretch λ_m which, for an incompressible material (lateral stretches λ_m^{-1/2}), is the positive root of
\lambda_m^{2} + \frac{2}{\lambda_m} - 3 = J_m
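A small numerical sketch of this limiting stretch, assuming the incompressible uniaxial form above; the J_m values are illustrative. Multiplying the root condition by λ_m gives the cubic λ_m³ − (J_m + 3)λ_m + 2 = 0, which is solved below.

import numpy as np

def gent_limit_stretch(jm):
    # Multiplying lm^2 + 2/lm - 3 = jm by lm gives the cubic
    #   lm^3 - (jm + 3)*lm + 2 = 0;
    # the root greater than 1 is the tensile limiting stretch.
    roots = np.roots([1.0, 0.0, -(jm + 3.0), 2.0])
    real_roots = roots.real[np.abs(roots.imag) < 1e-9]
    return real_roots[real_roots > 1.0].max()

for jm in (1.0, 5.0, 30.0):  # illustrative stiffening parameters
    print(f"Jm = {jm:4.1f}  limiting stretch ~ {gent_limit_stretch(jm):.3f}")

Larger J_m allows more extension before the logarithm diverges, consistent with J_m measuring limiting chain extensibility.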
Remodeling and growth
Soft tissues have the potential to grow and remodel in reaction to long-term chemical and mechanical changes. The rate at which fibroblasts produce tropocollagen is proportional to these stimuli. Diseases, injuries and changes in the level of mechanical load may induce remodeling. An example of this phenomenon is the thickening of farmers' hands. The remodeling of connective tissues is well known in bones through Wolff's law (bone remodeling). Mechanobiology is the science that studies the relation between stress and growth at the cellular level. Growth and remodeling have a major role in the cause of some common soft tissue diseases, like arterial stenosis, aneurysms and any soft tissue fibrosis. Another instance of tissue remodeling is the thickening of the cardiac muscle in response to the growth of blood pressure detected by the arterial wall.
Imaging techniques
There are certain issues that have to be kept in mind when choosing an imaging technique for visualizing soft tissue extracellular matrix (ECM) components. The accuracy of the image analysis relies on the properties and the quality of the raw data and, therefore, the choice of the imaging technique must be based upon issues such as:
Having an optimal resolution for the components of interest;
Achieving high contrast of those components;
Keeping the artifact count low;
Having the option of volume data acquisition;
Keeping the data volume low;
Establishing an easy and reproducible setup for tissue analysis.
The collagen fibers are approximately 1–2 μm thick. Thus, the resolution of the imaging technique needs to be approximately 0.5 μm. Some techniques allow the direct acquisition of volume data while others need the slicing of the specimen. In both cases, the volume that is extracted must be able to follow the fiber bundles across the volume. High contrast makes segmentation easier, especially when color information is available. In addition, the need for fixation must also be addressed. It has been shown that soft tissue fixation in formalin causes shrinkage, altering the structure of the original tissue. Some typical values of contraction for different fixations are: formalin (5%–10%), alcohol (10%), Bouin (<5%).
Imaging methods used in ECM visualization and their properties.
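The stated 0.5 μm requirement is consistent with the common rule of thumb of sampling at least twice across the smallest feature of interest (the rule is an assumption here, not stated above):

\text{required resolution} \approx \frac{\text{smallest fiber thickness}}{2} = \frac{1\ \mu\mathrm{m}}{2} = 0.5\ \mu\mathrm{m}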
Disorders
Soft tissue disorders are medical conditions affecting soft tissue.
Often soft tissue injuries are some of the most chronically painful and difficult to treat because it is very difficult to see what is going on under the skin with the soft connective tissues, fascia, joints, muscles and tendons.
Musculoskeletal specialists, manual therapists and neuromuscular physiologists and neurologists specialize in treating injuries and ailments in the soft tissue areas of the body. These specialized clinicians often develop innovative ways to manipulate the soft tissue to speed natural healing and relieve the mysterious pain that often accompanies soft tissue injuries. This area of expertise has become known as soft tissue therapy and is rapidly expanding as the technology continues to improve the ability of these specialists to identify problem areas more quickly.
A promising new method of treating wounds and soft tissue injuries is via platelet growth factor (PGF). There is a close overlap between the term "soft tissue disorder" and rheumatism. Sometimes the term "soft tissue rheumatic disorders" is used to describe these conditions.
See also
Biomaterial
Biomechanics
Davis's law
Rheology
Soft tissue sarcoma
References
External links
Media related to Soft tissues at Wikimedia Commons |
Father | A father is the male parent of a child. Besides the paternal bonds of a father to his children, the father may have a parental, legal, and social relationship with the child that carries with it certain rights and obligations. An adoptive father is a male who has become the child's parent through the legal process of adoption. A biological father is the male genetic contributor to the creation of the infant, through sexual intercourse or sperm donation. A biological father may have legal obligations to a child not raised by him, such as an obligation of monetary support. A putative father is a man whose biological relationship to a child is alleged but has not been established. A stepfather is a male who is the husband of a child's mother and they may form a family unit, but who generally does not have the legal rights and responsibilities of a parent in relation to the child.
The adjective "paternal" refers to a father and comparatively to "maternal" for a mother. The verb "to father" means to procreate or to sire a child from which also derives the noun "fathering". Biological fathers determine the sex of their child through a sperm cell which either contains an X chromosome (female), or Y chromosome (male). Related terms of endearment are dad (dada, daddy), baba, papa, pappa, papasita, (pa, pap) and pop. A male role model that children can look up to is sometimes referred to as a father-figure.
Paternal rights
The paternity rights of a father with regard to his children differ widely from country to country often reflecting the level of involvement and roles expected by that society.
Paternity leave
Parental leave is when a father takes time off to support his newly born or adopted baby. Paid paternity leave first began in Sweden in 1976, and is paid in more than half of European Union countries. In the case of male same-sex couples the law often makes no provision for either one or both fathers to take paternity leave.
Child custody
Fathers' rights movements such as Fathers 4 Justice argue that family courts are biased against fathers.
Child support
Child support is an ongoing periodic payment made by one parent to the other; it is normally paid by the parent who does not have custody.
Paternity fraud
An estimated 2% of British fathers experience paternity fraud during a non-paternity event, bringing up a child they wrongly believe to be their biological offspring.
Role of the father
In almost all cultures fathers are regarded as secondary caregivers. This perception is slowly changing with more and more fathers becoming primary caregivers, while mothers go to work, or in single parenting situations and male same-sex parenting couples.
Fatherhood in the Western World
In the West, the image of the married father as the primary wage-earner is changing. The social context of fatherhood plays an important part in the well-being of men and their children. In the United States 16% of single parents were men as of 2013.
Importance of father or father-figure
Involved fathers offer developmentally specific provisions to their children and are impacted themselves by doing so. Active father figures may play a role in reducing behavior and psychological problems in young adults. An increased amount of father–child involvement may help increase a child's social stability, educational achievement, and their potential to have a solid marriage as an adult. Their children may also be more curious about the world around them and develop greater problem solving skills. Children who were raised with fathers perceive themselves to be more cognitively and physically competent than their peers without a father. Mothers raising children together with a father reported less severe disputes with their child. The father-figure is not always a child's biological father and some children will have a biological father as well as a step- or nurturing father. When a child is conceived through sperm donation, the donor will be the "biological father" of the child.
Fatherhood as legitimate identity can be dependent on domestic factors and behaviors. For example, a study of the relationship between fathers, their sons, and home computers found that the construction of fatherhood and masculinity required that fathers display computer expertise.
Determination of parenthood
Roman law defined fatherhood as "Mater semper certa; pater est quem nuptiae demonstrant" ("The [identity of the] mother is always certain; the father is whom the marriage vows indicate"). The recent emergence of accurate scientific testing, particularly DNA testing, has resulted in the family law relating to fatherhood experiencing rapid changes.
History of fatherhood
Many male animals do not participate in the rearing of their young. The development of human men as creatures involved in their offspring's upbringing took place during the Stone Age. In medieval and most of modern European history, caring for children was predominantly the domain of mothers, whereas fathers in many societies provided for the family as a whole. Since the 1950s, social scientists and feminists have increasingly challenged gender roles in Western countries, including that of the male breadwinner. Policies are increasingly targeting fatherhood as a tool of changing gender relations. Research from various societies suggests that since the middle of the 20th century fathers have become increasingly involved in the care of their children.
Patricide
In early human history there have been notable instances of patricide. For example:
Tukulti-Ninurta I (r. 1243–1207 B.C.E.), Assyrian king, was killed by his own son after sacking Babylon.
Sennacherib (r. 704–681 B.C.E.), Assyrian king, was killed by two of his sons for his desecration of Babylon.
King Kassapa I (473 to 495 CE) creator of the Sigiriya citadel of ancient Sri Lanka killed his father king Dhatusena for the throne.
Emperor Yang of Sui in Chinese history allegedly killed his father, Emperor Wen of Sui.
Beatrice Cenci, Italian noblewoman who, according to legend, killed her father after he imprisoned and raped her. She was condemned and beheaded for the crime along with her brother and her stepmother in 1599.
Lizzie Borden (1860–1927) allegedly killed her father and her stepmother with an axe in Fall River, Massachusetts, in 1892. She was acquitted, but her innocence is still disputed.
Iyasus I of Ethiopia (1654–1706), one of the great warrior emperors of Ethiopia, was deposed by his son Tekle Haymanot in 1706 and subsequently assassinated. In more contemporary history there have also been instances of father–offspring conflicts, such as:
Chiyo Aizawa murdered her own father who had been raping her for fifteen years, on October 5, 1968, in Japan. The incident changed the Criminal Code of Japan regarding patricide.
Kip Kinkel (1982- ), an Oregon boy who was convicted of killing his parents at home and two fellow students at school on May 20, 1998.
Sarah Marie Johnson (1987- ), an Idaho girl who was convicted of killing both parents on the morning of September 2, 2003.
Dipendra of Nepal (1971–2001) reportedly massacred much of his family at a royal dinner on June 1, 2001, including his father King Birendra, mother, brother, and sister.
Christopher Porco (1983- ), was convicted on August 10, 2006, of the murder of his father and attempted murder of his mother with an axe.
Terminology
Biological fathers
Baby Daddy – A biological father who bears financial responsibility for a child, but with whom the mother has little or no contact.
Birth father – the biological father of a child who, due to adoption or parental separation, does not raise the child or cannot take care of one.
Biological father – or sometimes simply referred to as "Father" is the genetic father of a child.
Posthumous father – father died before children were born (or even conceived in the case of artificial insemination).
Putative father – unwed man whose legal relationship to a child has not been established but who is alleged to be or claims that he may be the biological father of a child.
Sperm donor – an anonymous or known biological father who provides his sperm to be used in artificial insemination or in vitro fertilisation in order to father a child for a third party female. Also used as a slang term meaning "baby daddy".
Surprise father – where the man did not know that there was a child until possibly years afterward
Teenage father/youthful father – Father who is still a teenager.
Non-biological (social and legal relationship)
Adoptive father – the father who has adopted a child
Cuckolded father – where the child is the product of the mother's adulterous relationship
DI Dad – social/legal father of children produced via donor insemination (where a donor's sperm were used to impregnate the DI Dad's spouse)
Father-in-law – the father of ones spouse
Foster father – child is raised by a man who is not the biological or adoptive father usually as part of a couple.
Mother's partner – assumption that the current partner fills the father role
Mother's husband – under some jurisdictions (e.g. in Quebec civil law), if the mother is married to another man, the latter will be defined as the father
Presumed father – where a presumption of paternity has determined that a man is a child's father regardless of whether he actually is the biological father
Social father – where a man takes de facto responsibility for a child, such as caring for one who has been abandoned or orphaned (the child is known as a "child of the family" in English law)
Stepfather – a married non-biological father where the child is from a previous relationship
Fatherhood defined by contact level
Absent father – father who cannot or will not spend time with his child(ren)
Second father – a non-parent whose contact and support is robust enough that near parental bond occurs (often used for older male siblings who significantly aid in raising a child, sometimes for older men who took care of younger friends who have no families)
Stay-at-home dad – the male equivalent of a housewife with child, where his spouse is breadwinner
Weekend/holiday father – where child(ren) only stay(s) with father on weekends, holidays, etc.
Non-human fatherhood
For some animals, it is the fathers who take care of the young.
Darwin's frog (Rhinoderma darwini) fathers carry eggs in the vocal pouch.
Most male waterfowl are very protective in raising their offspring, sharing scout duties with the female. Examples are the geese, swans, gulls, loons, and a few species of ducks. When the families of most of these waterfowl travel, they usually travel in a line and the fathers are usually the ones guarding the offspring at the end of the line while the mothers lead the way.
The female seahorse (Hippocampus) deposits eggs into the pouch on the male's abdomen. The male releases sperm into the pouch, fertilizing the eggs. The embryos develop within the male's pouch, nourished by their individual yolk sacs.
Male catfish keep their eggs in their mouth, foregoing eating until they hatch.
Male emperor penguins alone incubate their eggs; females do no incubation. Rather than building a nest, each male protects his egg by balancing it on the tops of his feet, enclosed in a special brood pouch. Once the eggs are hatched however, the females will rejoin the family.
Male beavers secure their offspring along with the females during their first few hours of their lives. As the young beavers mature, their fathers will teach them how to search for materials to build and repair their own dams, before they disperse to find their own mates.
Wolf fathers help feed, protect, and play with their pups. In some cases, several generations of wolves live in the pack, giving pups the care of grandparents, aunts/uncles, and siblings, in addition to parents. The father wolf is also the one who does most of the hunting when the females are securing their newborn pups.
Coyotes are monogamous and male coyotes hunt and bring food to their young.
Dolphin fathers help in the care of the young. Newborns are held on the surface of the water by both parents until they are ready to swim on their own.
A number of bird species have active, caring fathers who assist the mothers, such as the waterfowls mentioned above.
Apart from humans, fathers in few primate species care for their young. Those that do are tamarins and marmosets. Particularly strong care is also shown by siamangs, where fathers carry infants after their second year. In titi and owl monkeys fathers carry their infants 90% of the time, with "titi monkey infants developing a preference for their fathers over their mothers". Silverback gorillas have less of a role in the families, but most of them serve as extra protection for the families from harm, sometimes approaching enemies to distract them so that the family can escape unnoticed. Many species, though, display little or no paternal role in caring for offspring. The male leaves the female soon after mating and long before any offspring are born. It is the females who must do all the work of caring for the young.
A male bear leaves the female shortly after mating and will kill and sometimes eat any bear cub he comes across, even if the cub is his. Bear mothers spend much of their cubs' early life protecting them from males. (Many artistic works, such as advertisements and cartoons, depict kindly "papa bears" when this is the exact opposite of reality.)
Domesticated dog fathers show little interest in their offspring, and unlike wolves, are not monogamous with their mates and are thus likely to leave them after mating.
Male lions will tolerate cubs, but only allow them to eat meat from dead prey after they have had their fill. A few are quite cruel towards their young and may hurt or kill them with little provocation. A male who kills another male to take control of his pride will also usually kill any cubs belonging to that competing male. However, it is the males who are responsible for guarding the pride while the females hunt, and male lions are the only felines that actually have a role in fatherhood.
Male rabbits generally tolerate kits but unlike the females, they often show little interest in the kits and are known to play rough with their offspring when they are mature, especially towards their sons. This behaviour may also be part of an instinct to drive the young males away to prevent incest matings between the siblings. The females will eventually disperse from the warren as soon as they mature but the father does not drive them off like he normally does to the males.
Horse stallions and pig boars have little to no role in parenting, nor are they monogamous with their mates. They will tolerate young to a certain extent, but due to their aggressive male nature, they are generally annoyed by the energetic exuberance of the young, and may hurt or even kill the young. Thus, stud stallions and boars are not kept in the same pen as their young or other females. Finally, in some species neither the father nor the mother provides any care.
This is true for most insects, reptiles, and fish.
See also
Father complex
Fathers rights movement
Paternal age effect
Patricide
Paternal bond
Putative father
Putative father registry
Responsible fatherhood
Shared Earning/Shared Parenting Marriage
Sociology of fatherhood
"Father" can also refer metaphorically to a person who is considered the founder of a body of knowledge or of an institution. In such context the meaning of "father" is similar to that of "founder". See List of persons considered father or mother of a field.
Further reading
Elizabeth Preston (27 Jun 2021). "The riddle of how humans evolved to have fathers". Knowable Magazine / BBC.com.
References
Bibliography
Inhorn, Marcia C.; Chavkin, Wendy; Navarro, José-Alberto, eds. (2015). Globalized fatherhood. New York: Berghahn. ISBN 9781782384373. Studies by anthropologists, sociologists, and cultural geographers.
Kraemer, Sebastian (1991). "The Origins of Fatherhood: An Ancient Family Process". Family Process. 30 (4): 377–392. doi:10.1111/j.1545-5300.1991.00377.x. PMID 1790784.
Diamond, Michael J. (2007). My father before me : how fathers and sons influence each other throughout their lives. New York: W.W. Norton. ISBN 9780393060607.
Collier, Richard (2013). "Rethinking men and masculinities in the contemporary legal profession: the example of fatherhood, transnational business masculinities, and work-life balance in large law firms". Nevada Law Journal. 13 (2): 7. |
Myopia | Myopia, also known as near-sightedness and short-sightedness, is an eye disease where light focuses in front of, instead of on, the retina. As a result, distant objects appear blurry while close objects appear normal. Other symptoms may include headaches and eye strain. Severe near-sightedness is associated with an increased risk of retinal detachment, cataracts, and glaucoma. The underlying mechanism involves the length of the eyeball growing too long or, less commonly, the lens being too strong. It is a type of refractive error. Diagnosis is by eye examination. Tentative evidence indicates that the risk of near-sightedness can be decreased by having young children spend more time outside. This decrease in risk may be related to natural light exposure. Near-sightedness can be corrected with eyeglasses, contact lenses, or refractive surgery. Eyeglasses are the easiest and safest method of correction. Contact lenses can provide a wider field of vision, but are associated with a risk of infection. Refractive surgery permanently changes the shape of the cornea. Near-sightedness is the most common eye problem and is estimated to affect 1.5 billion people (22% of the world population). Rates vary significantly in different areas of the world. Rates among adults are between 15% and 49%. Among children, it affects 1% of rural Nepalese, 4% of South Africans, 12% of people in the US, and 37% in some large Chinese cities. In China the proportion of girls is slightly higher than boys. Rates have increased since the 1950s. Uncorrected near-sightedness is one of the most common causes of vision impairment globally along with cataracts, macular degeneration, and vitamin A deficiency.
Etymology
The term myopia is of Koine Greek origin: μυωπία myōpia (or μυωπίασις myōpiasis) "short-sight(-ness)", from Ancient Greek μύωψ myōps "short-sighted (man), (man) with eyes getting shut", from μύειν myein "to shut the eyes" and ὤψ ōps "eye, look, sight" (GEN ὠπός ōpos). The opposite of myopia in English is hyperopia (long-sightedness).
Signs and symptoms
A myopic individual can see clearly out to a certain distance (the far point of the eye), but objects placed beyond this distance appear blurred. If the extent of the myopia is great enough, even standard reading distances can be affected. Upon routine examination of the eyes, the vast majority of myopic eyes appear structurally identical to nonmyopic eyes. Onset is often in school children, with worsening between the ages of 8 and 15.
Causes
The underlying cause is believed to be a combination of genetic and environmental factors. Risk factors include doing work that involves focusing on close objects, greater time spent indoors, urbanization, and a family history of the condition. It is also associated with a high socioeconomic class and higher level of education. A 2012 review could not find strong evidence for any single cause, although many theories have been discredited. Twin studies indicate that at least some genetic factors are involved. Myopia has been increasing rapidly throughout the developed world, suggesting environmental factors are involved. A single-author literature review in 2021 proposed that myopia is the result of corrective lenses interfering with emmetropization.
Genetics
A risk for myopia may be inherited from one's parents. Genetic linkage studies have identified 18 possible loci on 15 different chromosomes that are associated with myopia, but none of these loci is part of the candidate genes that cause myopia. Instead of a simple one-gene locus controlling the onset of myopia, a complex interaction of many mutated proteins acting in concert may be the cause. Instead of myopia being caused by a defect in a structural protein, defects in the control of these structural proteins might be the actual cause of myopia. A collaboration of all myopia studies worldwide identified 16 new loci for refractive error in individuals of European ancestry, of which 8 were shared with Asians. The new loci include candidate genes with functions in neurotransmission, ion transport, retinoic acid metabolism, extracellular matrix remodeling and eye development. The carriers of the high-risk genes have a tenfold increased risk of myopia. Aberrant genetic recombination and gene splicing in the OPNLW1 and OPNMW1 genes that code for two retinal cone photopigment proteins can produce high myopia by interfering with refractive development of the eye. Human population studies suggest that the contribution of genetic factors accounts for 60–90% of variance in refraction. However, the currently identified variants account for only a small fraction of myopia cases, suggesting the existence of a large number of yet unidentified low-frequency or small-effect variants, which underlie the majority of myopia cases.
Environmental factors
Environmental factors which increase the risk of nearsightedness include insufficient light exposure, low physical activity, near work, and more years of education. One hypothesis is that a lack of normal visual stimuli causes improper development of the eyeball. Under this hypothesis, "normal" refers to the environmental stimuli under which the eyeball evolved. Modern humans who spend most of their time indoors, in dimly or fluorescently lit buildings, may be at risk of developing myopia. People, and children especially, who spend more time doing physical exercise and outdoor play have lower rates of myopia, suggesting the increased magnitude and complexity of the visual stimuli encountered during these types of activities decrease myopic progression. There is preliminary evidence that the protective effect of outdoor activities on the development of myopia is due, at least in part, to the effect of long hours of exposure to daylight on the production and release of retinal dopamine. Myopia can be induced with minus spherical lenses, and overminus in prescription lenses can induce myopia progression. Overminus during refraction can be avoided through various techniques and tests, such as fogging, plus to blur, and the duochrome test. The near work hypothesis, also referred to as the "use-abuse theory", states that spending time involved in near work strains the intraocular and extraocular muscles. Some studies support the hypothesis, while other studies do not. While an association is present, it is not clearly causal. Nearsightedness is also more common in children with diabetes, childhood arthritis, uveitis, and systemic lupus erythematosus.
Mechanism
Because myopia is a refractive error, the physical cause of myopia is comparable to any optical system that is out of focus. Borish and Duke-Elder classified myopia by these physical causes:
Axial myopia is attributed to an increase in the eye's axial length
Refractive myopia is attributed to the condition of the refractive elements of the eye. Borish further subclassified refractive myopia:
Curvature myopia is attributed to excessive, or increased, curvature of one or more of the refractive surfaces of the eye, especially the cornea. In those with Cohen syndrome, myopia appears to result from high corneal and lenticular power.
Index myopia is attributed to variation in the index of refraction of one or more of the ocular media.
As with any optical system experiencing a defocus aberration, the effect can be exaggerated or masked by changing the aperture size. In the case of the eye, a large pupil emphasizes refractive error and a small pupil masks it. This phenomenon can cause a condition in which an individual has greater difficulty seeing in low-illumination areas, even though there are no symptoms in bright light, such as daylight. Under rare conditions, edema of the ciliary body can cause an anterior displacement of the lens, inducing a myopic shift in refractive error.
Diagnosis
A diagnosis of myopia is typically made by an eye care professional, usually an optometrist or ophthalmologist. During a refraction, an autorefractor or retinoscope is used to give an initial objective assessment of the refractive status of each eye, then a phoropter is used to subjectively refine the patient's eyeglass prescription. Other types of refractive error are hyperopia, astigmatism, and presbyopia.
Types
Various forms of myopia have been described by their clinical appearance:
Simple myopia: Myopia in an otherwise normal eye, typically less than 4.00 to 6.00 diopters. This is the most common form of myopia.
Degenerative myopia, also known as malignant, pathological, or progressive myopia, is characterized by marked fundus changes, such as posterior staphyloma, and associated with a high refractive error and subnormal visual acuity after correction. This form of myopia gets progressively worse over time. Degenerative myopia has been reported as one of the main causes of visual impairment.
Pseudomyopia is the blurring of distance vision brought about by spasm of the accommodation system.
Nocturnal myopia: Without an adequate stimulus for accurate accommodation, the accommodation system partially engages, pushing distant objects out of focus.
Nearwork-induced transient myopia (NITM): short-term myopic far point shift immediately following a sustained near visual task. Some authors argue for a link between NITM and the development of permanent myopia.
Instrument myopia: over-accommodation when looking into an instrument such as a microscope.
Induced myopia, also known as acquired myopia, results from various medications, increases in glucose levels, nuclear sclerosis, oxygen toxicity (e.g., from diving or from oxygen and hyperbaric therapy) or other anomalous conditions. Sulphonamide therapy can cause ciliary body edema, resulting in anterior displacement of the lens, pushing the eye out of focus. Elevation of blood-glucose levels can also cause edema (swelling) of the crystalline lens as a result of sorbitol accumulating in the lens. This edema often causes temporary myopia. Scleral buckles, used in the repair of retinal detachments, may induce myopia by increasing the axial length of the eye.
Index myopia is attributed to variation in the index of refraction of one or more of the ocular media. Cataracts may lead to index myopia.
Form deprivation myopia occurs when the eyesight is deprived by limited illumination and vision range, or the eye is modified with artificial lenses or deprived of clear form vision. In lower vertebrates, this kind of myopia seems to be reversible within short periods of time. Myopia is often induced this way in various animal models to study the pathogenesis and mechanism of myopia development.
Degree
The degree of myopia is described in terms of the power of the ideal correction, which is measured in diopters (a simple classification sketch follows the list below):
Myopia between 0.00 and −0.50 diopters is usually classified as emmetropia.
Low myopia usually describes myopia between −0.50 and −3.00 diopters.
Moderate myopia usually describes myopia between −3.00 and −6.00 diopters. Those with moderate amounts of myopia are more likely to have pigment dispersion syndrome or pigmentary glaucoma.
High myopia usually describes myopia of −6.00 diopters or worse. People with high myopia are more likely to have retinal detachments and primary open angle glaucoma. They are also more likely to experience floaters, shadow-like shapes which appear in the field of vision. In addition, high myopia is linked to macular degeneration, cataracts, and significant visual impairment.
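The diopter thresholds above translate directly into a simple classification rule. What follows is a minimal Python sketch, assuming a spherical-equivalent refraction value in diopters as input; the function name and the handling of boundary values are illustrative assumptions, not a clinical standard.

    def classify_myopia(diopters: float) -> str:
        """Classify degree of myopia from a spherical-equivalent
        refraction in diopters (negative values indicate myopia).
        Thresholds follow the ranges listed above; boundary handling
        is an assumption, since the quoted ranges meet at their edges."""
        if diopters > 0.0:
            return "not myopic"
        if diopters >= -0.50:
            return "emmetropia"       # 0.00 to -0.50 D
        if diopters >= -3.00:
            return "low myopia"       # -0.50 to -3.00 D
        if diopters >= -6.00:
            return "moderate myopia"  # -3.00 to -6.00 D
        return "high myopia"          # -6.00 D or worse

    print(classify_myopia(-4.25))  # -> moderate myopia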
Age at onset
Myopia is sometimes classified by the age at onset:
Congenital myopia, also known as infantile myopia, is present at birth and persists through infancy.
Youth onset myopia occurs in early childhood or the teenage years, and the ocular power can keep varying until the age of 21, before which corrective surgery is usually not recommended by ophthalmic specialists around the world.
School myopia appears during childhood, particularly the school-age years. This form of myopia is attributed to the use of the eyes for close work during the school years. A 2004–2015 Singapore-Sydney study of children of Chinese descent found that time spent on outdoor activities was a protective factor.
Adult onset myopia
Early adult onset myopia occurs between ages 20 and 40.
Late adult onset myopia occurs after age 40.
Prevention
Various methods have been employed in an attempt to decrease the progression of myopia, although studies show mixed results. Many myopia treatment studies have a number of design drawbacks: small sample sizes, lack of adequate control groups, and failure to mask examiners from knowledge of treatments used. Among myopia specialists, mydriatic eyedrops are the most favored approach, applied by almost 75% in North America and more than 80% in Australia. A 2015 review suggested that increased outdoor time protects young children from myopia. A 2020 study of global practice patterns used by paediatric ophthalmologists to decrease the progression of myopia showed behavioral intervention (counseling to spend more time outdoors and less time on near work) to be favored by 25% of specialists, usually in addition to medications.
Glasses and contacts
The use of reading glasses when doing close work may improve vision by reducing or eliminating the need to accommodate. Altering the use of eyeglasses between full-time, part-time, and not at all does not appear to alter myopia progression. The American Optometric Association's Clinical Practice Guidelines found evidence of effectiveness of bifocal lenses and recommends them as a method for "myopia control". In some studies, bifocal and progressive lenses have not shown differences in altering the progression of myopia compared to placebo. In 2019, contact lenses to prevent the worsening of nearsightedness in children were approved for use in the United States. This "MiSight" type claims to work by focusing peripheral light in front of the retina.
Medication
Anti-muscarinic topical medications in children under 18 years of age may slow the worsening of myopia. These treatments include pirenzepine gel, cyclopentolate eye drops, and atropine eye drops. While these treatments were shown to be effective in slowing the progression of myopia, side effects included light sensitivity and blurred near vision.
Other methods
Scleral reinforcement surgery aims to cover the thinning posterior pole with a supportive material to withstand intraocular pressure and prevent further progression of the posterior staphyloma. The strain is reduced, although damage from the pathological process cannot be reversed. By stopping the progression of the disease, vision may be maintained or improved.
Treatment
The National Institutes of Health says there is no known way of preventing myopia, and the use of glasses or contact lenses does not affect its progression, unless the prescription is too strong. There is no universally accepted method of preventing myopia, and proposed methods need additional study to determine their effectiveness. Optical correction using glasses or contact lenses is the most common treatment; other approaches include orthokeratology and refractive surgery. Medications (mostly atropine) and vision therapy can be effective in addressing the various forms of pseudomyopia.
Glasses and contacts
Corrective lenses bend the light entering the eye in a way that places a focused image accurately onto the retina. The power of any lens system can be expressed in diopters, the reciprocal of its focal length in meters. Corrective lenses for myopia have negative powers because a divergent lens is required to move the far point of focus out to the distance. More severe myopia needs lens powers further from zero (more negative). However, strong eyeglass prescriptions create distortions such as prismatic movement and chromatic aberration. Strongly near-sighted wearers of contact lenses do not experience these distortions because the lens moves with the cornea, keeping the optic axis in line with the visual axis and because the vertex distance has been reduced to zero.
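As a worked illustration of the diopter relation just described: a corrective lens for myopia must place its focal point at the eye's far point, so the required power is approximately the negative reciprocal of the far-point distance in meters. The following minimal Python sketch makes this concrete; the function name is an illustrative assumption, and vertex distance is ignored.

    def corrective_power(far_point_m: float) -> float:
        """Approximate spectacle power in diopters for a myopic eye
        whose far point lies far_point_m meters away. Power is the
        reciprocal of focal length in meters; a diverging (negative)
        lens with focal length equal to the far-point distance moves
        the far point out to optical infinity. Vertex distance is
        ignored, so this is only an approximation."""
        return -1.0 / far_point_m

    print(corrective_power(0.5))   # far point 0.5 m  -> -2.0 D
    print(corrective_power(0.25))  # far point 0.25 m -> -4.0 D

Consistent with the text above, the closer the far point, the further the required lens power lies from zero.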
Surgery
Refractive surgery includes procedures which alter the curvature of the cornea or which add additional refractive means inside the eye.
Photorefractive keratectomy
Photorefractive keratectomy (PRK) involves ablation of corneal tissue from the corneal surface using an excimer laser. The amount of tissue ablation corresponds to the amount of myopia. While PRK is a relatively safe procedure for up to 6 diopters of myopia, the recovery phase post-surgery is usually painful.
LASIK
In a LASIK procedure, a corneal flap is cut into the cornea and lifted to allow the excimer laser beam access to the exposed corneal tissue. After that, the excimer laser ablates the tissue according to the required correction. When the flap is replaced over the cornea, the change in curvature generated by the laser ablation is transferred to the corneal surface. Though LASIK is usually painless and involves a short rehabilitation period post-surgery, it can potentially result in flap complications and loss of corneal stability (post-LASIK keratectasia).
Phakic intra-ocular lens
Instead of modifying the corneal surface, as in laser vision correction (LVC), this procedure involves implanting an additional lens inside the eye (i.e., in addition to the already existing natural lens). While it usually results in good control of the refractive change, it can induce potentially serious long-term complications such as glaucoma, cataract, and endothelial decompensation.
Orthokeratology
Orthokeratology, or simply Ortho-K, is a temporary corneal reshaping process using rigid gas permeable (RGP) contact lenses. Overnight wearing of specially designed contact lenses temporarily reshapes the cornea, so patients may see clearly without any lenses during the day. Orthokeratology can correct myopia up to −6.00 diopters. Several studies have shown that Ortho-K can also reduce myopia progression. Risks of using Ortho-K lenses include microbial keratitis and corneal edema. Other contact-lens-related complications, such as corneal aberration, photophobia, pain, irritation, and redness, are usually temporary and may be avoided by proper use of the lenses.
Intrastromal corneal ring segment
The intrastromal corneal ring segment (ICRS), now commonly used in keratoconus treatment, was originally designed to correct mild to moderate myopia. Segment thickness is directly related to corneal flattening, while ring diameter is inversely related to it: a thicker segment or a smaller diameter produces greater flattening of the cornea, and thus greater myopic correction.
Alternative medicine
A number of alternative therapies have been claimed to improve myopia, including vision therapy, "behavioural optometry", various eye exercises and relaxation techniques, and the Bates method. Scientific reviews have concluded that there was "no clear scientific evidence" that eye exercises are effective in treating near-sightedness and as such they "cannot be advocated".
Epidemiology
Global refractive errors have been estimated to affect 800 million to 2.3 billion people. The incidence of myopia within a sampled population often varies with age, country, sex, race, ethnicity, occupation, environment, and other factors. Variability in testing and data collection methods makes comparisons of prevalence and progression difficult. The prevalence of myopia has been reported as high as 70–90% in some Asian countries, 30–40% in Europe and the United States, and 10–20% in Africa. Myopia is about twice as common in Jewish people as in people of non-Jewish ethnicity. Myopia is less common in African people and the associated diaspora. In Americans between the ages of 12 and 54, myopia has been found to affect African Americans less than Caucasians.
Asia
In some parts of Asia, myopia is very common.
Singapore is believed to have the highest prevalence of myopia in the world; up to 80% of people there have myopia, though the exact figure is unknown.
China's myopia rate is 31%: 400 million of its 1.3 billion people are myopic. The prevalence of myopia among high school students in China is 77%, and among college students it is more than 80%.
In some areas, such as China and Malaysia, up to 41% of the adult population is myopic by at least −1.00 diopters, and up to 80% by at least −0.50 diopters.
A study of Jordanian adults aged 17 to 40 found over half (54%) were myopic.
Some research suggests the prevalence of myopia in Indian children is less than 15%.
Europe
Among first-year undergraduate students in the United Kingdom, 50% of British whites and 53% of British Asians were myopic.
A recent review found 27% of Western Europeans aged 40 or over have at least −1.00 diopters of myopia and 5% have at least −5.00 diopters.
North America
Myopia is common in the United States, with research suggesting this condition has increased dramatically in recent decades. In 1971–1972, the National Health and Nutrition Examination Survey provided the earliest nationally representative estimates for myopia prevalence in the U.S., and found the prevalence in persons aged 12–54 was 25%. Using the same method, in 1999–2004, myopia prevalence was estimated to have climbed to 42%. A study of 2,523 children in grades 1 to 8 (age, 5–17 years) found nearly one in 10 (9%) had at least −0.75 diopters of myopia. In this study, 13% had at least +1.25 D of hyperopia (farsightedness), and 28% had at least 1.00 D of astigmatism (difference between the two principal meridians on cycloplegic autorefraction). For myopia, Asians had the highest prevalence (19%), followed by Hispanics (13%). Caucasian children had the lowest prevalence of myopia (4%), which was not significantly different from African Americans (7%). A recent review found 25% of Americans aged 40 or over have at least −1.00 diopters of myopia and 5% have at least −5.00 diopters.
Australia
In Australia, the overall prevalence of myopia (worse than −0.50 diopters) has been estimated to be 17%. In one recent study, less than one in 10 (8%) Australian children between the ages of four and 12 were found to have myopia greater than −0.50 diopters. A recent review found 16% of Australians aged 40 or over have at least −1.00 diopters of myopia and 3% have at least −5.00 diopters.
South America
In Brazil, a 2005 study estimated that 6% of Brazilians between the ages of 12 and 59 had −1.00 diopter of myopia or more, compared with 3% of the indigenous people in northwestern Brazil. Another study found that nearly 1 in 8 (13%) of the students in the city of Natal were myopic.
History
The difference between near-sighted and far-sighted people was noted as early as Aristotle. The Graeco-Roman physician Galen first used the term "myopia" (from the Greek words "myein", meaning "to close or shut", and "ops" (gen. opos), meaning "eye") for near-sightedness. The first spectacles for correcting myopia were invented by a German cardinal in 1451. Johannes Kepler, in his Clarification of Ophthalmic Dioptrics (1604), first demonstrated that near-sightedness was due to the incident light focusing in front of the retina. Kepler also showed that near-sightedness could be corrected by concave lenses. In 1632, Vopiscus Fortunatus Plempius examined a myopic eye and confirmed that myopia was due to a lengthening of its axial diameter.
Society and culture
The terms "myopia" and "myopic" (or the common terms "short-sightedness" or "short-sighted", respectively) have been used metaphorically to refer to cognitive thinking and decision making that is narrow in scope or lacking in foresight or in concern for wider interests or for longer-term consequences. It is often used to describe a decision that may be beneficial in the present, but detrimental in the future, or a viewpoint that fails to consider anything outside a very narrow and limited range. Hyperopia, the biological opposite of myopia, may also be used metaphorically for a value system or motivation that exhibits "farsighted" or possibly visionary thinking and behavior; that is, emphasizing long-term interests at the apparent expense of near-term benefit.
Correlations
Numerous studies have found correlations between myopia, on the one hand, and intelligence and academic achievement, on the other; it is not clear whether there is a causal relationship.
Myopia is also correlated with increased microsaccade amplitude, suggesting that blurred vision from myopia might cause instability in fixational eye movements.
See also
Myopia in animals
Myopic crescent
References
== External links == |
Traffic collision | A traffic collision, also called a motor vehicle collision, car accident or car crash, occurs when a vehicle collides with another vehicle, pedestrian, animal, road debris, or other stationary obstruction, such as a tree, pole or building. Traffic collisions often result in injury, disability, death, and property damage, as well as financial costs to both society and the individuals involved. Road transport is the most dangerous situation people deal with on a daily basis, but casualty figures from such incidents attract less media attention than other, less frequent types of tragedy. A number of factors contribute to the risk of collisions, including vehicle design, speed of operation, road design, weather, road environment, driving skills, impairment due to alcohol or drugs, and behavior, notably aggressive driving, distracted driving, speeding and street racing.
In 2013, 54 million people worldwide sustained injuries from traffic collisions. This resulted in 1.4 million deaths in 2013, up from 1.1 million deaths in 1990. About 68,000 of these occurred with children less than five years old. Almost all high-income countries have decreasing death rates, while the majority of low-income countries have increasing death rates due to traffic collisions. Middle-income countries have the highest rate with 20 deaths per 100,000 inhabitants, accounting for 80% of all road fatalities with 52% of all vehicles. While the death rate in Africa is the highest (24.1 per 100,000 inhabitants), the lowest rate is to be found in Europe (10.3 per 100,000 inhabitants).
Terminology
Traffic collisions can be classified by general types. Types of collision include head-on, road departure, rear-end, side collisions, and rollovers.
Many different terms are commonly used to describe vehicle collisions. The World Health Organization uses the term road traffic injury, while the U.S. Census Bureau uses the term motor vehicle accidents (MVA), and Transport Canada uses the term "motor vehicle traffic collision" (MVTC). Other common terms include auto accident, car accident, car crash, car smash, car wreck, motor vehicle collision (MVC), personal injury collision (PIC), road accident, road traffic accident (RTA), road traffic collision (RTC), and road traffic incident (RTI) as well as more unofficial terms including smash-up, pile-up, and fender bender.
Some organizations have begun to avoid the term accident, instead preferring terms such as collision, crash or incident. This is because the term accident implies that there is no one to blame, whereas most traffic collisions are the result of driving under the influence, excessive speed, distractions such as mobile phones, or other risky behavior. Historically, in the United States, the use of terms other than accident had been criticized for holding back safety improvements, based on the idea that a culture of blame may discourage the involved parties from fully disclosing the facts, and thus frustrate attempts to address the real root causes.
Health effects
Physical
A number of physical injuries can commonly result from the blunt force trauma caused by a collision, ranging from bruising and contusions to catastrophic physical injury (e.g., paralysis), traumatic or non-traumatic cardiac arrest and death.
Psychological
Following collisions, long-lasting psychological trauma may occur. These issues may make those who have been in a crash afraid to drive again. In some cases, psychological trauma may affect individuals' lives, causing difficulty going to work, attending school, or performing family responsibilities.
Causes
Road incidents are caused by a large number of human factors, such as failing to act according to weather conditions, road design, signage, speed limits, lighting conditions, pavement markings, and roadway obstacles. A 1985 study by K. Rumar, using British and American crash reports as data, suggested 57% of crashes were due solely to driver factors, 27% to combined roadway and driver factors, 6% to combined vehicle and driver factors, 3% solely to roadway factors, 3% to combined roadway, driver, and vehicle factors, 2% solely to vehicle factors, and 1% to combined roadway and vehicle factors. Reducing the severity of injury in crashes is more important than reducing incidence, and ranking incidence by broad categories of causes is misleading with regard to severe injury reduction. Vehicle and road modifications are generally more effective than behavioral change efforts, with the exception of certain laws such as required use of seat belts, motorcycle helmets, and graduated licensing of teenagers.
Human factors
Human factors in vehicle collisions include anything related to drivers and other road users that may contribute to a collision. Examples include driver behavior, visual and auditory acuity, decision-making ability, and reaction speed.
A 1985 report based on British and American crash data found driver error, intoxication, and other human factors contribute wholly or partly to about 93% of crashes. A 2019 report from the U.S. National Highway Traffic Safety Administration found that leading contributing factors for fatal crashes included driving too fast for conditions or in excess of the speed limit, operating under the influence, failure to yield right of way, failure to keep within the proper lane, operating a vehicle in a careless manner, and distracted driving. Drivers distracted by mobile devices had nearly four times greater risk of crashing their cars than those who were not. Research from the Virginia Tech Transportation Institute has found that drivers who are texting while driving are 23 times more likely to be involved in a crash than non-texting drivers. Dialing a phone is the most dangerous distraction, increasing a driver's chance of crashing by 12 times, followed by reading or writing, which increased the risk by ten times. An RAC survey of British drivers found 78% of drivers thought they were highly skilled at driving, and most thought they were better than other drivers, a result suggesting overconfidence in their abilities. Nearly all drivers who had been in a crash did not believe themselves to be at fault. One survey of drivers reported that they thought the key elements of good driving were:
controlling a car, including a good awareness of the car's size and capabilities
reading and reacting to road conditions, weather, road signs, and the environment
alertness, reading and anticipating the behavior of other drivers. Although proficiency in these skills is taught and tested as part of the driving exam, a "good" driver can still be at a high risk of crashing because:
the feeling of being confident in more and more challenging situations is experienced as evidence of driving ability, and that proven ability reinforces the feelings of confidence. Confidence feeds itself and grows unchecked until something happens – a near-miss or an accident.
An Axa survey concluded Irish drivers are very safety-conscious relative to other European drivers. However, this does not translate into significantly lower crash rates in Ireland. Accompanying changes to road designs have been wide-scale adoptions of rules of the road alongside law enforcement policies that included drink-driving laws, setting of speed limits, and speed enforcement systems such as speed cameras. Some countries' driving tests have been expanded to test a new driver's behavior during emergencies, and their hazard perception.
There are demographic differences in crash rates. For example, although young people tend to have good reaction times, disproportionately more young male drivers feature in collisions, with researchers observing that many exhibit behaviors and attitudes to risk that can place them in more hazardous situations than other road users. This is reflected by actuaries when they set insurance rates for different age groups, partly based on their age, sex, and choice of vehicle. Older drivers with slower reactions might be expected to be involved in more collisions, but this has not been the case, as they tend to drive less and, apparently, more cautiously. Attempts to impose traffic policies can be complicated by local circumstances and driver behavior. In 1969, Leeming warned that there is a balance to be struck when "improving" the safety of a road. Conversely, a location that does not look dangerous may have a high crash frequency. This is, in part, because if drivers perceive a location as hazardous, they take more care. Collisions may be more likely to happen when hazardous road or traffic conditions are not obvious at a glance, or where the conditions are too complicated for the limited human machine to perceive and react in the time and distance available. High incidence of crashes is not indicative of high injury risk. Crashes are common in areas of high vehicle congestion, but fatal crashes occur disproportionately on rural roads at night, when traffic is relatively light.
This phenomenon has been observed in risk compensation research, where the predicted reductions in collision rates have not occurred after legislative or technical changes. One study observed that the introduction of improved brakes resulted in more aggressive driving, and another argued that compulsory seat belt laws have not been accompanied by a clearly attributed fall in overall fatalities. Most claims of risk compensation offsetting the effects of vehicle regulation and belt use laws have been discredited by research using more refined data. In the 1990s, Hans Monderman's studies of driver behavior led him to the realization that signs and regulations had an adverse effect on a driver's ability to interact safely with other road users. Monderman developed shared space principles, rooted in the principles of the woonerven of the 1970s. He concluded that the removal of highway clutter, while allowing drivers and other road users to mingle with equal priority, could help drivers recognize environmental clues. They relied on their cognitive skills alone, reducing traffic speeds radically and resulting in lower levels of road casualties and lower levels of congestion. Some crashes are intended; staged crashes, for example, involve at least one party who hopes to crash a vehicle in order to submit lucrative claims to an insurance company. In the United States during the 1990s, criminals recruited Latin American immigrants to deliberately crash cars, usually by cutting in front of another car and slamming on the brakes. It was an illegal and risky job, and they were typically paid only $100. Jose Luis Lopez Perez, a staged crash driver, died after one such maneuver, leading to an investigation that uncovered the increasing frequency of this type of crash.
Motor vehicle speed
The U.S. Department of Transportation's Federal Highway Administration reviewed research on traffic speed in 1998. The summary says:
The evidence shows the risk of having a crash is increased both for vehicles traveling slower than the average speed, and for those traveling above the average speed.
The risk of being injured increases exponentially with speeds much faster than the median speed.
The severity / lethality of a crash depends on the vehicle speed change at impact.
There is limited evidence suggesting lower speed limits result in lower speeds on a system-wide basis.
Most crashes related to speed involve speed too fast for the conditions.
More research is needed to determine the effectiveness of traffic calming.
In the U.S. in 2018, 9,378 people were killed in motor vehicle crashes involving at least one speeding driver, which accounted for 26% of all traffic-related deaths for the year. In Michigan in 2019, excessive speed was a factor in 18.8% of the fatalities that resulted from fatal motor vehicle crashes and in 15.6% of the suspected serious injuries resulting from crashes. The Road and Traffic Authority (RTA) of the Australian state of New South Wales (NSW) asserts speeding (traveling too fast for the prevailing conditions or above the posted speed limit) is a factor in about 40 percent of road deaths. The RTA also says speeding increases the risk of a crash and its severity. On another web page, the RTA qualifies its claims by referring to one specific piece of research from 1997, and writes "research has shown that the risk of a crash causing death or injury increases rapidly, even with small increases above an appropriately set speed limit." The contributory factor report in the official British road casualty statistics shows, for 2006, that "exceeding speed limit" was a contributory factor in 5% of all casualty crashes (14% of all fatal crashes), and "traveling too fast for conditions" was a contributory factor in 11% of all casualty crashes (18% of all fatal crashes). In France, in 2018, the speed limit was reduced from 90 km/h to 80 km/h on a large part of the local road network outside built-up areas, with the sole aim of reducing the number of road fatalities.
Assured clear distance ahead
A common cause of collisions is driving faster than one can stop within one's field of vision. Such driving is illegal and is particularly responsible for an increase in fatalities at night, when it occurs most often.
Driver impairment
Driver impairment describes factors that prevent the driver from driving at their normal level of skill. Common impairments include:
Alcohol
According to the Government of Canada, coroner reports from 2008 suggested almost 40% of fatally injured drivers consumed some quantity of alcohol before the collision.
Physical impairment
Poor eyesight and/or physical impairment, with many jurisdictions setting simple sight tests and/or requiring appropriate vehicle modifications before being allowed to drive.
Youth
Insurance statistics demonstrate a notably higher incidence of collisions and fatalities among drivers aged in their teens or early twenties, with insurance rates reflecting this data. These drivers have the highest incidence of both collisions and fatalities among all driver age groups, a fact that was observed well before the advent of mobile phones. Females in this age group exhibit somewhat lower collision and fatality rates than males but still register well above the median for drivers of all ages. Also within this group, the highest collision incidence rate occurs within the first year of licensed driving. For this reason, many US states have enacted a zero-tolerance policy wherein receiving a moving violation within the first six months to one year of obtaining a license results in automatic license suspension. South Dakota is the only state that allows fourteen-year-olds to obtain driver's licenses.
Old age
Old age, with some jurisdictions requiring driver retesting for reaction speed and eyesight after a certain age.
Sleep deprivation
Various factors, such as fatigue, sleep deprivation, or long hours of driving, might increase the risk of an incident. 41% of drivers self-report having fallen asleep at the wheel. It is estimated that 15% of fatal crashes involve drowsiness (10% of daytime crashes, and 24% of night-time crashes). Work factors, such as long or irregular hours or driving at night, can increase the risk of drowsy driving.
Drug use
Including some prescription drugs, over the counter drugs (notably antihistamines, opioids and muscarinic antagonists), and illegal drugs.
Distraction
Research suggests that a driver's attention is affected by distracting sounds, such as conversations and operating a mobile phone while driving. Many jurisdictions now restrict or outlaw the use of some types of phone within the car. Recent research conducted by British scientists suggests that music can also have an effect; classical music is considered to be calming, yet too much could relax the driver to a condition of distraction. On the other hand, hard rock may encourage the driver to step on the acceleration pedal, thus creating a potentially dangerous situation on the road. Cell phone use is an increasingly significant problem on the roads; the U.S. National Safety Council has compiled more than 30 studies postulating that hands-free use is not a safer option, because the brain remains distracted by the conversation and cannot focus solely on the task of driving.
Intent
Some traffic collisions are caused intentionally by a driver. For example, a collision may be caused by a driver who intends to commit vehicular suicide. Collisions may also be intentionally caused by people who hope to make an insurance claim against the other driver, or may be staged for such purposes as insurance fraud. Motor vehicles may also be involved in collisions as part of a deliberate effort to hurt other people, such as in a vehicle-ramming attack.
Combinations of factors
Several conditions can combine to create a much worse situation, for example:
Combining low doses of alcohol and cannabis has a more severe effect on driving performance than either cannabis or alcohol in isolation.
Taking recommended doses of several drugs together, which individually do not cause impairment, may combine to bring on drowsiness or other impairment. This could be more pronounced in an elderly person whose renal function is less efficient than a younger person's. Thus, there are situations when a person may be impaired, but still legally allowed to drive, and becomes a potential hazard to themselves and other road users. Pedestrians or cyclists are affected in the same way and can similarly jeopardize themselves or others when on the road.
Road design
A 1985 US study showed that about 34% of serious crashes had contributing factors related to the roadway or its environment. Most of these crashes also involved a human factor. The road or environmental factor was either noted as making a significant contribution to the circumstances of the crash, or did not allow room to recover. In these circumstances, it is frequently the driver who is blamed rather than the road; those reporting the collisions have a tendency to overlook the human factors involved, such as the subtleties of design and maintenance that a driver could fail to observe or inadequately compensate for. Research has shown that careful design and maintenance, with well-designed intersections, road surfaces, visibility and traffic control devices, can result in significant improvements in collision rates.
Individual roads also have widely differing performance in the event of an impact. In Europe, there are now EuroRAP tests that indicate how "self-explaining" and forgiving a particular road and its roadside would be in the event of a major incident.
In the UK, research has shown that investment in a safe road infrastructure program could yield a one-third reduction in road deaths, saving as much as £6 billion per year. A consortium of 13 major road safety stakeholders has formed the Campaign for Safe Road Design, which is calling on the UK Government to make safe road design a national transport priority.
Vehicle design and maintenance
Seat belts
Research has shown that, across all collision types, it is less likely that seat belts were worn in collisions involving death or serious injury than in those involving light injury; wearing a seat belt reduces the risk of death by about 45 percent. Seat belt use is controversial, with notable critics such as Professor John Adams suggesting that their use may lead to a net increase in road casualties due to a phenomenon known as risk compensation. However, actual observation of driver behaviors before and after seat belt laws does not support the risk compensation hypothesis.
Several important driving behaviors were observed on the road before and after the belt use law was enforced in Newfoundland, and in Nova Scotia during the same period without a law. Belt use increased from 16 percent to 77 percent in Newfoundland and remained virtually unchanged in Nova Scotia. Four driver behaviors (speed, stopping at intersections when the control light was amber, turning left in front of oncoming traffic, and gaps in following distance) were measured at various sites before and after the law. Changes in these behaviors in Newfoundland were similar to those in Nova Scotia, except that drivers in Newfoundland drove slower on expressways after the law, contrary to the risk compensation theory.
Maintenance
A well-designed and well-maintained vehicle, with good brakes, tires and well-adjusted suspension, will be more controllable in an emergency and thus be better equipped to avoid collisions. Some mandatory vehicle inspection schemes include tests for some aspects of roadworthiness, such as the UK's MOT test or German TÜV conformance inspection.
The design of vehicles has also evolved to improve protection after collision, both for vehicle occupants and for those outside of the vehicle. Much of this work was led by automotive industry competition and technological innovation, leading to measures such as Saab's safety cage and reinforced roof pillars of 1946, Ford's 1956 Lifeguard safety package, and Saab's and Volvo's introduction of standard-fit seat belts in 1959. Other initiatives were accelerated as a reaction to consumer pressure, after publications such as Ralph Nader's 1965 book Unsafe at Any Speed accused motor manufacturers of indifference towards safety.
In the early 1970s, British Leyland started an intensive programme of vehicle safety research, producing a number of prototype experimental safety vehicles demonstrating various innovations for occupant and pedestrian protection such as air bags, anti-lock brakes, impact-absorbing side-panels, front and rear head restraints, run-flat tires, smooth and deformable front-ends, impact-absorbing bumpers, and retractable headlamps. Design has also been influenced by government legislation, such as the Euro NCAP impact test.
Common features designed to improve safety include thicker pillars, safety glass, interiors with no sharp edges, stronger bodies, other active or passive safety features, and smooth exteriors to reduce the consequences of an impact with pedestrians.
The UK Department for Transport publish road casualty statistics for each type of collision and vehicle through its Road Casualties Great Britain report.
These statistics show a ten to one ratio of in-vehicle fatalities between types of car. In most cars, occupants have a 2–8% chance of death in a two-car collision.
Center of gravity
Some crash types tend to have more serious consequences. Rollovers have become more common in recent years, perhaps due to increased popularity of taller SUVs, people carriers, and minivans, which have a higher center of gravity than standard passenger cars. Rollovers can be fatal, especially if the occupants are ejected because they were not wearing seat belts (83% of ejections during rollovers were fatal when the driver did not wear a seat belt, compared to 25% when they did).
After a new design of Mercedes-Benz notoriously failed a moose test (sudden swerving to avoid an obstacle), some manufacturers enhanced suspension using stability control linked to an anti-lock braking system to reduce the likelihood of rollover. After retrofitting these systems to its models in 1999–2000, Mercedes saw its models involved in fewer crashes. Now, about 40% of new US vehicles, mainly the SUVs, vans and pickup trucks that are more susceptible to rollover, are being produced with a lower center of gravity and enhanced suspension with stability control linked to the anti-lock braking system to reduce the risk of rollover and meet US federal requirements that mandate anti-rollover technology by September 2011.
Motorcycles
Motorcyclists and pillion riders have little protection other than their clothing and helmets. This difference is reflected in the casualty statistics, where they are more than twice as likely to suffer severe injury after a collision. In 2005, there were 198,735 road crashes with 271,017 reported casualties on roads in Great Britain. This included 3,201 deaths (1.1%) and 28,954 serious injuries (10.7%) overall.
Of these casualties 178,302 (66%) were car users and 24,824 (9%) were motorcyclists, of whom 569 were killed (2.3%) and 5,939 seriously injured (24%).
Sociological factors
Studies in the United States have shown that poor people have a greater risk of dying in a car crash than people who are well-off. Car deaths are also higher in poorer states. Similar studies in France and Israel have shown the same results. This may be due to working-class people having less access to secure equipment in cars, having older cars which are less protected against crashes, and needing to cover more distance to get to work each day.
COVID-19 impact on traffic incidents
While the advent of the COVID lockdown meant a decrease in road traffic in the United States, the rates of incidents, speeding, and traffic fatalities rose in 2020 and 2021 (rate as measured against vehicle miles traveled). The traffic fatality rate jumped to 1.25 per 100 million vehicle miles traveled, up from 1.06 during the same period in 2019. Reasons cited for the increases are greater speeds, not wearing seatbelts, and driving while impaired. In their preliminary report covering the first six months of 2021, the US nonprofit public safety advocacy group, the National Safety Council (NSC), estimated that total motor-vehicle deaths for the first six months of 2021 were 21,450, up 16% from 2020 and up 17% from 18,384 in 2019. The estimated mileage death rate in 2021 was 1.43 deaths per 100 million vehicle miles traveled, up 3% from 1.39 in 2020 and up 24% from 1.15 in 2019. Preliminary data also show that even as traffic levels returned to normal after the onset of COVID in March–April 2020, drivers continued to drive at excessive speeds. A 2020 study conducted by INRIX, a private company that analyzes traffic patterns, behaviors and congestion, showed that as traffic levels returned to normal during the three-month period August to October 2020, growth in collisions (57%) outpaced the growth in miles traveled (22%), resulting in a higher than normal collision rate during this period. In France, the Ministry of the Interior reported that traffic incidents, crash-related injuries, and fatalities dropped in 2020 compared with 2019. Fatalities dropped 21.4%, injuries dropped 20.9%, and incidents overall dropped 20%. In the same report, the ministry reported that the number of vehicles on the road dropped by 75%, which would indicate that the rate (incidents per vehicle-mile) in fact increased.
Other
Other possibly hazardous factors that may alter a drivers soundness on the road include:
Irritability
Following specifically distinct rules too bureaucratically, inflexibly or rigidly when unique circumstances might suggest otherwise
Sudden swerving into somebody's blind spot without first clearly making oneself visible through the wing mirror
Unfamiliarity with one's dashboard features, center console or other interior handling devices after a recent car purchase
Lack of visibility due to windshield design, dense fog or sun glare
People-watching.
Traffic safety culture: a variety of aspects of safety culture can affect the number of crashes.
Prevention
A large body of knowledge has been amassed on how to prevent car crashes, and reduce the severity of those that do occur.
United Nations
Owing to the global and massive scale of the issue, with predictions that by 2020 road traffic deaths and injuries would exceed HIV/AIDS as a burden of death and disability, the United Nations and its subsidiary bodies have passed resolutions and held conferences on the issue. The first United Nations General Assembly resolution and debate was in 2003. The World Day of Remembrance for Road Traffic Victims was declared in 2005. In 2009, the first high-level ministerial conference on road safety was held in Moscow.
The World Health Organization, a specialized agency of the United Nations, in its Global Status Report on Road Safety 2009, estimates that over 90% of the world's fatalities on the roads occur in low-income and middle-income countries, which have only 48% of the world's registered vehicles, and predicts road traffic injuries will rise to become the fifth leading cause of death by 2030. The United Nations Sustainable Development Goal 3, target 3.6, is directed at reducing road injuries and deaths. February 2020 saw a global ministerial conference which brought the Stockholm Declaration, setting a target to reduce global traffic deaths and injuries by 50% within ten years. The decade of 2021–2030 was declared the second decade of road safety.
Collision migration
Collision migration refers to a situation where action to reduce road traffic collisions in one place may result in those collisions resurfacing elsewhere. For example, an accident blackspot may occur at a dangerous bend. The treatment for this may be to increase signage, post an advisory speed limit, apply a high-friction road surface, add crash barriers or any one of a number of other visible interventions. The immediate result may be to reduce collisions at the bend, but the subconscious relaxation on leaving the "dangerous" bend may cause drivers to act with fractionally less care on the rest of the road, resulting in an increase in collisions elsewhere on the road, and no overall improvement over the area. In the same way, increasing familiarity with the treated area will often result in a reduction over time to the previous level of care and may result in faster speeds around the bend due to perceived increased safety (risk compensation).
Epidemiology
In 2004, some 50 million people were injured in motor vehicle collisions. In 2013, between 1.25 million and 1.4 million people were killed in traffic collisions, up from 1.1 million deaths in 1990. That number represents about 2.5% of all deaths. Approximately 50 million additional people were injured in traffic collisions, a number unchanged from 2004. India recorded 105,000 traffic deaths in a year, followed by China with over 96,000 deaths. This makes motor vehicle collisions the leading cause of injury and death among children worldwide aged 10–19 years (260,000 children die a year, 10 million are injured) and the sixth leading preventable cause of death in the United States. In 2019, there were 36,096 people killed and 2.74 million people injured in motor vehicle traffic crashes on roadways in the United States. In the state of Texas alone, there were a total of 415,892 traffic collisions, including 3,005 fatal crashes, in 2012. In Canada, they are the cause of 48% of severe injuries.
Crash rates
The safety performance of roadways is almost always reported as a rate. That is, some measure of harm (deaths, injuries, or number of crashes) divided by some measure of exposure to the risk of this harm. Rates are used so the safety performance of different locations can be compared, and to prioritize safety improvements.
Common rates related to road traffic fatalities include the number of deaths per capita, per registered vehicle, per licensed driver, or per vehicle mile or kilometer traveled. Simple counts are almost never used. The annual count of fatalities is a rate, namely, the number of fatalities per year.
There is no one rate that is superior to others in any general sense. The rate to be selected depends on the question being asked – and often also on what data are available. What is important is to specify exactly what rate is measured and how it relates to the problem being addressed. Some agencies concentrate on crashes per total vehicle distance traveled. Others combine rates. The U.S. state of Iowa, for example, selects high collision locations based on a combination of crashes per million miles traveled, crashes per mile per year, and value loss (crash severity).
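To make the idea of harm divided by exposure concrete, the following is a minimal Python sketch computing two of the common rates mentioned above. The function names and the example figures are illustrative assumptions, not data from any agency.

    def deaths_per_100k_population(deaths: int, population: int) -> float:
        """Fatality rate per 100,000 inhabitants."""
        return deaths / population * 100_000

    def deaths_per_100m_vmt(deaths: int, vehicle_miles: float) -> float:
        """Fatality rate per 100 million vehicle miles traveled (VMT)."""
        return deaths / vehicle_miles * 100_000_000

    # Illustrative figures only: a region with 1,000 deaths,
    # 8 million inhabitants, and 75 billion vehicle miles traveled.
    print(deaths_per_100k_population(1_000, 8_000_000))  # 12.5 per 100k
    print(deaths_per_100m_vmt(1_000, 75_000_000_000))    # ~1.33 per 100M VMT

Because the two denominators capture different exposures, the same region can look relatively safe on one rate and relatively dangerous on the other, which is why the choice of rate must match the question being asked.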
Fatality
The definition of a road-traffic fatality varies from country to country. In the United States, the definition used in the Fatality Analysis Reporting System (FARS) run by the National Highway Traffic Safety Administration (NHTSA) is a person who dies within 30 days of a crash on a US public road involving a vehicle with an engine, the death being the result of the crash. In the U.S., therefore, if a driver has a non-fatal heart attack that leads to a road-traffic crash that causes death, that is a road-traffic fatality. However, if the heart attack causes death prior to the crash, then that is not a road-traffic fatality.
The definition of a road-traffic fatality can change with time in the same country. For example, in France, a fatality was defined as a person who died within six days of the collision (pre-2005); this was subsequently changed to within 30 days of the collision (post-2005).
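The effect of these definitional windows on what gets counted can be made explicit. Below is a minimal Python sketch, assuming only crash and death dates are known; the function name is an illustrative assumption.

    from datetime import date

    def counts_as_road_fatality(crash_date: date, death_date: date,
                                window_days: int = 30) -> bool:
        """True if the death falls within the reporting window after the
        crash. window_days=30 matches the US FARS and post-2005 French
        definitions; window_days=6 matches the pre-2005 French one."""
        return 0 <= (death_date - crash_date).days <= window_days

    # A death 20 days after the crash counts under a 30-day rule,
    # but not under the older French six-day rule:
    crash, death = date(2005, 3, 1), date(2005, 3, 21)
    print(counts_as_road_fatality(crash, death, 30))  # True
    print(counts_as_road_fatality(crash, death, 6))   # False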
History
The world's first recorded road traffic death involving a motor vehicle occurred on 31 August 1869. Irish scientist Mary Ward died when she fell out of her cousin's steam car and was run over by it. The British road engineer J. J. Leeming compared the statistics for fatality rates in Great Britain for transport-related incidents both before and after the introduction of the motor vehicle, for journeys, including those once by water that now are undertaken by motor vehicle: for the period 1863–1870 there were 470 fatalities per million of population (76 on railways, 143 on roads, 251 on water); for the period 1891–1900 the corresponding figures were 348 (63, 107, 178); for the period 1931–1938: 403 (22, 311, 70); and for the year 1963: 325 (10, 278, 37). Leeming concluded that the data showed that "travel accidents may even have been more frequent a century ago than they are now, at least for men".
He also compared the circumstances around road deaths as reported in various American states before the widespread introduction of 55 mph (89 km/h) speed limits and drunk-driving laws.
They took into account thirty factors which it was thought might affect the death rate. Among these were included the annual consumption of wine, of spirits and of malt beverages—taken individually—the amount spent on road maintenance, the minimum temperature, certain of the legal measures such as the amount spent on police, the number of police per 100,000 inhabitants, the follow-up programme on dangerous drivers, the quality of driver testing, and so on. The thirty factors were finally reduced to six by eliminating those found to have small or negligible effect. The final six were:
(a) The percentage of the total state highway mileage that is rural
(b) The percent increase in motor vehicle registration
(c) The extent of motor vehicle inspection
(d) The percentage of state-administered highway that is surfaced
(e) The average yearly minimum temperature
(f) The income per capita
These are placed in descending order of importance. These six accounted for 70% of the variations in the rate.
The rate of traffic deaths in the United States doubled from 1915 to 1921, when it reached 12 deaths per 100,000 Americans. A century later, in 2021, the annual death rate was 12.9 per 100,000. The safety focus on protecting the occupants of automobiles has victimized bicyclists and pedestrians. From 2010 to 2019, fatalities rose 36% for bicyclists and nearly doubled for those on foot. Reasons include larger vehicles, faster driving, and digital distractions, making walking and biking in the United States far more dangerous than in other comparable nations. In the early 20th century, "judges defended pedestrians' rights in city streets. The convenience of drivers was no grounds for infringing these rights. Any motorist driving too fast to avoid injuring or killing a pedestrian was regarded as speeding.... (More recently) there has been a tacit agreement in the United States to treat the right to walk as dispensable, (tending) to attribute such deaths to individual failures for which individuals alone — reckless drivers or careless pedestrians — are responsible." The world's first autonomous car incident resulting in the death of a pedestrian occurred on 18 March 2018 in Arizona. The pedestrian was walking her bicycle outside of the crosswalk, and died in the hospital after she was struck by a self-driving car being tested by Uber.
Society and culture
Economic costs
The global economic cost of MVCs was estimated at $518 billion per year in 2003, and $100 billion in developing countries. The Centers for Disease Control and Prevention estimated the U.S. cost in 2000 at $230 billion. A 2010 US report estimated costs of $277 billion, which included lost productivity, medical costs, legal and court costs, emergency service costs (EMS), insurance administration costs, congestion costs, property damage, and workplace losses. "The value of societal harm from motor vehicle crashes, which includes both economic impacts and valuation for lost quality-of-life, was $870.8 billion in 2010. Sixty-eight percent of this value represents lost quality-of-life, while 32 percent are economic impacts." Traffic collisions affect the national economy, as the cost of road injuries is estimated to account for 1.0% to 2.0% of the gross national product (GNP) of every country each year. A recent study from Nepal showed that the total economic costs of road injuries were approximately $122.88 million, equivalent to 1.52% of the total Nepal GNP for 2017, indicating the growing national financial burden associated with preventable road injuries and deaths. The economic cost to the individuals involved in an MVC varies widely depending on geographic distribution, and varies largely on the depth of accident insurance cover and legislative policy. In the UK, for example, a survey conducted using 500 post-accident insurance policy customers showed an average individual financial loss of £1,300. This is due in part to voluntary excesses, which are common tactics used to reduce the overall premium, and in part due to undervaluation of vehicles. By contrast, Australian insurance policy holders are subject to an average financial loss of A$950.
Legal consequences
There are a number of possible legal consequences for causing a traffic collision, including:
Traffic citations: drivers who are involved in a collision may receive one or more traffic citations for improper driving conduct such as speeding, failure to obey a traffic control device, or driving under the influence of drugs or alcohol. Convictions for traffic violations are usually penalized with fines, and for more severe offenses, the suspension or revocation of driving privileges.
Civil lawsuits: a driver who causes a traffic collision may be sued for damages resulting from the accident, including damages to property and injuries to other persons.
Criminal prosecution: more severe driving misconduct, including impaired driving, may result in criminal charges against the driver. In the event of a fatality, a charge of vehicular homicide is occasionally prosecuted, especially in cases involving alcohol. Convictions for alcohol offenses may result in the revocation or long-term suspension of the driver's license, and sometimes jail time, mandatory drug or alcohol rehabilitation, or both.
Fraud
Sometimes, people may make false insurance claims or commit insurance fraud by staging collisions or jumping in front of moving cars.
United Kingdom
In the United Kingdom, the Pre-Action Protocol for Low Value Personal Injury Claims in Road Traffic Accidents of 31 July 2013, otherwise known as the RTA Protocol, describes the behaviour the court expects of the parties prior to the start of proceedings where a claimant claims damages valued at no more than the Protocol upper limit as a result of a personal injury sustained by that person in a road traffic accident. As of February 2022, the "upper limit" is £25,000 for an accident which occurred on or after 31 July 2013; the limit under a previous version of the protocol was £10,000 for an accident which had occurred on or after 30 April 2010 but before 31 July 2013.
United States
Motor vehicle crashes are the leading cause of death in the workplace in the United States, accounting for 35 percent of all workplace fatalities. In the United States, individuals involved in motor vehicle collisions may be held financially liable for the consequences of a collision, including property damage and injuries to passengers and drivers. Where another driver's vehicle is damaged as the result of a crash, some states allow the owner of the vehicle to recover both the cost of repair and the diminished value of the vehicle from the at-fault driver. Because the financial liability that results from causing a crash is so high, most U.S. states require drivers to carry liability insurance to cover these potential costs. In the event of serious injuries or fatalities, it is possible for injured persons to seek compensation in excess of the at-fault driver's insurance coverage. In some cases involving a defect in the design or manufacture of motor vehicles, such as where defective design results in SUV rollovers or sudden unintended acceleration, accidents caused by defective tires, or injuries caused or worsened by defective airbags, it is possible that the manufacturer will face a class action lawsuit.
Art
Cars have come to represent a part of the American Dream of ownership coupled with the freedom of the road. The violence of a car wreck provides a counterpoint to that promise and is the subject of artwork by a number of artists, such as John Salt and Li Yan. Though English, John Salt was drawn to American landscapes of wrecked vehicles like Desert Wreck (airbrushed oil on linen, 1972). Similarly, Jan Anders Nelson works with the wreck in its resting state in junkyards or forests, or as elements in his paintings and drawings. American Landscape is one example of Nelson's focus on the violence of the wreck, with cars and trucks piled into a heap, left to the forces of nature and time. This recurring theme of violence is echoed in the work of Li Yan, whose painting Accident Nº 6 looks at the energy released during a crash. Andy Warhol used newspaper pictures of car wrecks with dead occupants in a number of his Disaster series of silkscreened canvases, and John Chamberlain used components of wrecked cars (such as bumpers and crumpled sheet metal fenders) in his welded sculptures. Crash is a 1973 novel by English author J. G. Ballard, about car-crash sexual fetishism, that was made into a film by David Cronenberg in 1996.
See also
Notes
References
External links
WHO road traffic injuries
NHTSA Accident Statistics
U.S. DOT Fatality Analysis Reporting System FARS |
Rotaviral gastroenteritis | Rotavirus gastroenteritis is a major cause of severe diarrhoea among infants and young children globally. It is caused by rotavirus, a genus of double-stranded RNA virus in the family Reoviridae. The diarrhoea tends to be watery and is frequently accompanied by fever, vomiting and abdominal pain. By the age of five, nearly every child in the world has been infected with rotavirus at least once. However, with each infection, immunity develops, and subsequent infections are less severe; adults are rarely affected. There are five species of this virus, referred to as A, B, C, D, and E. Rotavirus A, the most common, causes more than 90% of infections in humans. The virus is transmitted by the faecal-oral route. It infects and damages the cells that line the small intestine and causes gastroenteritis (which is often called "stomach flu" despite having no relation to influenza). Although rotavirus was discovered in 1973 and accounts for up to 50% of hospitalisations for severe diarrhoea in infants and children, its importance is still not widely known within the public health community, particularly in developing countries. In addition to its impact on human health, rotavirus also infects animals, and is a pathogen of livestock. Rotavirus is usually an easily managed disease of childhood, but worldwide nearly 500,000 children under five years of age still die from rotavirus infection each year and almost two million more become severely ill. In the United States, before initiation of the rotavirus vaccination programme, rotavirus caused about 2.7 million cases of severe gastroenteritis in children, almost 60,000 hospitalisations, and around 37 deaths each year. Public health campaigns to combat rotavirus focus on providing oral rehydration therapy for infected children and vaccination to prevent the disease. The incidence and severity of rotavirus infections has declined significantly in countries that have added rotavirus vaccine to their routine childhood immunisation policies.
Signs and symptoms
Rotavirus gastroenteritis is a mild to severe disease characterised by vomiting, watery diarrhoea, and low-grade fever. Once a child is infected by the virus, there is an incubation period of about two days before symptoms appear. Symptoms often start with vomiting followed by profuse diarrhoea after at least four days. Dehydration is more common in rotavirus infection than in most of those caused by bacterial pathogens, and is the most common cause of death related to rotavirus infection. Rotavirus A infections can occur throughout life: the first usually produces symptoms, but subsequent infections are typically mild or asymptomatic, as the immune system provides some protection. Consequently, symptomatic infection rates are highest in children under two years of age and decrease progressively towards 45 years of age. Infection in newborn children, although common, is often associated with mild or asymptomatic disease; the most severe symptoms tend to occur in children six months to two years of age, the elderly, and those with compromised or absent immune system functions. Due to immunity acquired in childhood, most adults are not susceptible to rotavirus; gastroenteritis in adults usually has a cause other than rotavirus, but asymptomatic infections in adults may maintain the transmission of infection in the community.
Virology
Transmission
Rotavirus is transmitted by the faecal-oral route, via contact with contaminated hands, surfaces and objects, and possibly by the respiratory route. The faeces of an infected person can contain more than 10 trillion infectious particles per gram; fewer than 100 of these are required to transmit infection to another person. Rotaviruses are stable in the environment and have been found in estuary samples at levels as high as 1–5 infectious particles per US gallon. Sanitary measures adequate for eliminating bacteria and parasites seem to be ineffective in control of rotavirus, as the incidence of rotavirus infection in countries with high and low health standards is similar.
Types
There are five species of rotavirus, referred to as A, B, C, D and E. Humans are mostly infected by species A. All five species cause disease in other animals.
Within rotavirus A there are different strains, called serotypes. As with influenza virus, a dual classification system is used based on two proteins on the surface of the virus. The glycoprotein VP7 defines the G serotypes and the protease-sensitive protein VP4 defines P serotypes. Because the two genes that determine G-types and P-types can be passed on separately to progeny viruses, different combinations are found.
Replication
Rotaviruses replicate mainly in the gut, and infect enterocytes of the villi of the small intestine, leading to structural and functional changes of the epithelium. Their triple protein coats make them resistant to the acidic pH of the stomach and the digestive enzymes in the gut. The virus enters cells by receptor-mediated endocytosis and forms a vesicle known as an endosome. Proteins in the third layer (VP7 and the VP4 spike) disrupt the membrane of the endosome, creating a difference in the calcium concentration. This causes the breakdown of VP7 trimers into single protein subunits, leaving the VP2 and VP6 protein coats around the viral dsRNA, forming a double-layered particle (DLP). The eleven dsRNA strands remain within the protection of the two protein shells, and the viral RNA-dependent RNA polymerase creates mRNA transcripts of the double-stranded viral genome. By remaining in the core, the viral RNA evades innate host immune responses called RNA interference that are triggered by the presence of double-stranded RNA. During the infection, rotavirus produces mRNA for both protein biosynthesis and gene replication. Most of the rotavirus proteins accumulate in viroplasm, where the RNA is replicated and the DLPs are assembled. Viroplasm is formed around the cell nucleus as early as two hours after virus infection, and consists of viral factories thought to be made by two viral nonstructural proteins: NSP5 and NSP2. Inhibition of NSP5 by RNA interference results in a sharp decrease in rotavirus replication. The DLPs migrate to the endoplasmic reticulum where they obtain their third, outer layer (formed by VP7 and VP4). The progeny viruses are released from the cell by lysis.
Pathophysiology
The diarrhoea is caused by multiple activities of the virus. Malabsorption occurs because of the destruction of gut cells called enterocytes. The toxic rotavirus protein NSP4 induces age- and calcium ion-dependent chloride secretion, disrupts SGLT1 transporter-mediated reabsorption of water, apparently reduces activity of brush-border membrane disaccharidases, and possibly activates the calcium ion-dependent secretory reflexes of the enteric nervous system. Healthy enterocytes secrete lactase into the small intestine; milk intolerance due to lactase deficiency is a symptom of rotavirus infection, which can persist for weeks. A recurrence of mild diarrhoea often follows the reintroduction of milk into the child's diet, due to bacterial fermentation of the disaccharide lactose in the gut.
Diagnosis
Diagnosis of infection with rotavirus normally follows diagnosis of gastroenteritis as the cause of severe diarrhoea. Most children admitted to hospital with gastroenteritis are tested for rotavirus A.
Specific diagnosis of infection with rotavirus A is made by finding the virus in the child's stool by enzyme immunoassay. There are several licensed test kits on the market which are sensitive, specific and detect all serotypes of rotavirus A. Other methods, such as electron microscopy and PCR, are used in research laboratories. Reverse transcription-polymerase chain reaction (RT-PCR) can detect and identify all species and serotypes of human rotavirus.
Prevention
Because improved sanitation does not decrease the prevalence of rotaviral disease, and the rate of hospitalisations remains high despite the use of oral rehydration therapy, the primary public health intervention is vaccination. Two rotavirus vaccines against Rotavirus A infection are safe and effective in children: Rotarix by GlaxoSmithKline and RotaTeq by Merck. Both are taken orally and contain attenuated live virus. Rotavirus vaccines are licensed in more than 100 countries, but only 17 countries have introduced routine rotavirus vaccination. Following the introduction of routine rotavirus vaccination in the US in 2006, the health burden of rotavirus gastroenteritis "rapidly and dramatically reduced" despite lower coverage levels compared to other routine infant immunizations. Clinical trials of the Rotarix rotavirus vaccine in South Africa and Malawi found that the vaccine significantly reduced severe diarrhoea episodes caused by rotavirus, and that the infection was preventable by vaccination. A 2019 Cochrane systematic review of 55 clinical trials that included 216,480 participants concluded that RV1 (Rotarix), RV5 (RotaTeq), and Rotavac are effective vaccines. Additional rotavirus vaccines are under development. The World Health Organization (WHO) recommends that rotavirus vaccine be included in all national immunisation programmes. The incidence and severity of rotavirus infections has declined significantly in countries that have acted on this recommendation. The Rotavirus Vaccine Program is a collaboration between PATH, the WHO, and the U.S. Centers for Disease Control and Prevention, and is funded by the GAVI Alliance. The Program aims to reduce child morbidity and mortality from diarrhoeal disease by making a vaccine against rotavirus available for use in developing countries.
Treatment
Treatment of acute rotavirus infection is nonspecific and involves management of symptoms and, most importantly, maintenance of hydration. If untreated, children can die from the resulting severe dehydration. Depending on the severity of diarrhoea, treatment consists of oral rehydration, during which the child is given extra water to drink that contains small amounts of salt and sugar. Some infections are serious enough to warrant hospitalisation, where fluids are given by intravenous drip or nasogastric tube, and the child's electrolytes and blood sugar are monitored. Antibiotics are not recommended.
Prognosis
Rotavirus infections rarely cause other complications, and for a well-managed child the prognosis is excellent.
Epidemiology
Rotavirus A, which accounts for more than 90% of rotavirus gastroenteritis in humans, is endemic worldwide. Each year rotavirus causes millions of cases of diarrhoea in developing countries, almost 2 million resulting in hospitalisation and an estimated 453,000 resulting in the death of a child younger than five. This is about 40 per cent of all hospital admissions related to diarrhoea in children under five worldwide. In the United States alone—before initiation of the rotavirus vaccination programme—over 2.7 million cases of rotavirus gastroenteritis occurred annually, 60,000 children were hospitalised and around 37 died from the results of the infection. The major role of rotavirus in causing diarrhoea is not widely recognised within the public health community, particularly in developing countries. Almost every child has been infected with rotavirus by age five. It is the leading single cause of severe diarrhoea among infants and children, being responsible for about 20% of cases, and accounts for 50% of the cases requiring hospitalisation. Rotavirus causes 37% of deaths attributable to diarrhoea and 5% of all deaths in children younger than five. Boys are twice as likely as girls to be admitted to hospital.
Rotavirus infections occur primarily during cool, dry seasons. The number attributable to food contamination is unknown. Outbreaks of rotavirus A diarrhoea are common among hospitalised infants, young children attending day care centres, and elderly people in nursing homes. An outbreak caused by contaminated municipal water occurred in Colorado in 1981.
During 2005, the largest recorded epidemic of diarrhoea occurred in Nicaragua. This unusually large and severe outbreak was associated with mutations in the rotavirus A genome, possibly helping the virus escape the prevalent immunity in the population. A similar large outbreak occurred in Brazil in 1977. Rotavirus B, also called adult diarrhoea rotavirus or ADRV, has caused major epidemics of severe diarrhoea affecting thousands of people of all ages in China. These epidemics occurred as a result of sewage contamination of drinking water. Rotavirus B infections also occurred in India in 1998; the causative strain was named CAL. Unlike ADRV, the CAL strain is endemic. To date, epidemics caused by rotavirus B have been confined to mainland China, and surveys indicate a lack of immunity to this species in the United States.
History
In 1943, Jacob Light and Horace Hodes proved that a filterable agent in the faeces of children with infectious diarrhoea also caused scours (livestock diarrhoea) in cattle. Three decades later, preserved samples of the agent were shown to be rotavirus. In the intervening years, a virus in mice was shown to be related to the virus causing scours. In 1973, Ruth Bishop and colleagues described related viruses found in children with gastroenteritis. In 1974, Thomas Henry Flewett suggested the name rotavirus after observing that, when viewed through an electron microscope, a rotavirus particle looks like a wheel (rota in Latin); the name was officially recognised by the International Committee on Taxonomy of Viruses four years later. In 1976, related viruses were described in several other species of animals. These viruses, all causing acute gastroenteritis, were recognised as a collective pathogen affecting humans and animals worldwide. Rotavirus serotypes were first described in 1980, and in the following year, rotavirus from humans was first grown in cell cultures derived from monkey kidneys, by adding trypsin (an enzyme found in the duodenum of mammals and now known to be essential for rotavirus to replicate) to the culture medium. The ability to grow rotavirus in culture accelerated the pace of research, and by the mid-1980s the first candidate vaccines were being evaluated. In 1998, a rotavirus vaccine was licensed for use in the United States. Clinical trials in the United States, Finland, and Venezuela had found it to be 80 to 100% effective at preventing severe diarrhoea caused by rotavirus A, and researchers had detected no statistically significant serious adverse effects. The manufacturer, however, withdrew it from the market in 1999, after it was discovered that the vaccine may have contributed to an increased risk for intussusception, a type of bowel obstruction, in one of every 12,000 vaccinated infants. The experience provoked intense debate about the relative risks and benefits of a rotavirus vaccine.
In 2006, two new vaccines against rotavirus A infection were shown to be safe and effective in children, and in June 2009 the World Health Organization recommended that rotavirus vaccination be included in all national immunisation programmes to provide protection against this virus.
Other animals
Rotaviruses infect the young of many species of animals and they are a major cause of diarrhoea in wild and reared animals worldwide. As a pathogen of livestock, notably in young calves and piglets, rotaviruses cause economic loss to farmers because of costs of treatment associated with high morbidity and mortality rates. These rotaviruses are a potential reservoir for genetic exchange with human rotaviruses. There is evidence that animal rotaviruses can infect humans, either by direct transmission of the virus or by contributing one or several RNA segments to reassortants with human strains.
References
Further reading
Ramig, RF (October 2004). "Pathogenesis of intestinal and systemic rotavirus infection". Journal of Virology. 78 (19): 10213–20. doi:10.1128/JVI.78.19.10213-10220.2004. PMC 516399. PMID 15367586.
External links
WHO Rotavirus web page
CDC About Rotavirus |
Gout | Gout (/ɡaʊt/ GOWT) is a form of inflammatory arthritis characterized by recurrent attacks of a red, tender, hot and swollen joint, caused by deposition of monosodium urate monohydrate crystals. Pain typically comes on rapidly, reaching maximal intensity in less than 12 hours. The joint at the base of the big toe is affected in about half of cases. It may also result in tophi, kidney stones, or kidney damage. Gout is due to persistently elevated levels of uric acid in the blood. This occurs from a combination of diet, other health problems, and genetic factors. At high levels, uric acid crystallizes and the crystals deposit in joints, tendons, and surrounding tissues, resulting in an attack of gout. Gout occurs more commonly in those who regularly drink beer or sugar-sweetened beverages, eat foods that are high in purines such as liver, shellfish, or anchovies, or are overweight. Diagnosis of gout may be confirmed by the presence of crystals in the joint fluid or in a deposit outside the joint. Blood uric acid levels may be normal during an attack. Treatment with nonsteroidal anti-inflammatory drugs (NSAIDs), glucocorticoids, or colchicine improves symptoms. Once the acute attack subsides, levels of uric acid can be lowered via lifestyle changes, and in those with frequent attacks, allopurinol or probenecid provides long-term prevention. Taking vitamin C and eating a diet high in low-fat dairy products may be preventive. Gout affects about 1 to 2% of adults in the developed world at some point in their lives. It has become more common in recent decades. This is believed to be due to increasing risk factors in the population, such as metabolic syndrome, longer life expectancy, and changes in diet. Older males are most commonly affected. Gout was historically known as "the disease of kings" or "rich man's disease". It has been recognized at least since the time of the ancient Egyptians.
Signs and symptoms
Gout can present in several ways, although the most common is a recurrent attack of acute inflammatory arthritis (a red, tender, hot, swollen joint). The metatarsal-phalangeal joint at the base of the big toe is affected most often, accounting for half of cases. Other joints, such as the heels, knees, wrists, and fingers, may also be affected. Joint pain usually begins during the night and peaks within 24 hours of onset. This is mainly due to lower body temperature. Other symptoms may rarely occur along with the joint pain, including fatigue and a high fever. Long-standing elevated uric acid levels (hyperuricemia) may result in other symptoms, including hard, painless deposits of uric acid crystals known as tophi. Extensive tophi may lead to chronic arthritis due to bone erosion. Elevated levels of uric acid may also lead to crystals precipitating in the kidneys, resulting in stone formation and subsequent urate nephropathy.
Cause
The crystallization of uric acid, often related to relatively high levels in the blood, is the underlying cause of gout. This can occur because of diet, genetic predisposition, or underexcretion of urate, the salts of uric acid. Underexcretion of uric acid by the kidney is the primary cause of hyperuricemia in about 90% of cases, while overproduction is the cause in less than 10%. About 10% of people with hyperuricemia develop gout at some point in their lifetimes. The risk, however, varies depending on the degree of hyperuricemia. When levels are between 415 and 530 μmol/L (7 and 8.9 mg/dl), the risk is 0.5% per year, while in those with a level greater than 535 μmol/L (9 mg/dL), the risk is 4.5% per year.
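As a quick illustration of how these figures translate into a lookup by serum urate band, the sketch below encodes the annual risks quoted above (an illustrative toy, not a clinical tool; the source leaves the 530–535 μmol/L band unspecified, so it is grouped with the lower band here):

```python
def annual_gout_risk_percent(urate_umol_l: float) -> float:
    """Annual risk of developing gout, using the bands quoted above.
    Illustrative only: levels below ~415 umol/L are treated as baseline
    (near-zero) risk, and the unspecified 530-535 gap is grouped low."""
    if urate_umol_l > 535:       # greater than ~9.0 mg/dl
        return 4.5
    if urate_umol_l >= 415:      # roughly 7.0-8.9 mg/dl
        return 0.5
    return 0.0                   # baseline, treated as negligible here

print(annual_gout_risk_percent(450))  # 0.5 (% per year)
print(annual_gout_risk_percent(560))  # 4.5 (% per year)
```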
Lifestyle
Dietary causes account for about 12% of gout, and include a strong association with the consumption of alcohol, sugar-sweetened beverages, meat, and seafood. Among the foods richest in purines, yielding high amounts of uric acid, are dried anchovies, shrimp, organ meat, dried mushrooms, seaweed, and beer yeast. Consumption of chicken and potatoes also appears to be related. Other triggers include physical trauma and surgery. Studies in the early 2000s found that other dietary factors are not relevant. Specifically, a diet with moderate purine-rich vegetables (e.g., beans, peas, lentils, and spinach) is not associated with gout, and neither is total dietary protein. Alcohol consumption is strongly associated with increased risk, with wine presenting somewhat less of a risk than beer or spirits. Eating skim milk powder enriched with glycomacropeptide (GMP) and G600 milk fat extract may reduce pain but may result in diarrhea and nausea. Physical fitness, healthy weight, low-fat dairy products, and, to a lesser extent, coffee and taking vitamin C appear to decrease the risk of gout; however, taking vitamin C supplements does not appear to have a significant effect in people who already have established gout. Peanuts, brown bread, and fruit also appear protective. This is believed to be partly due to their effect in reducing insulin resistance. Other than dietary and lifestyle choices, the recurrence of gout attacks is also linked to the weather: high ambient temperature and low relative humidity may increase the risk of a gout attack.
Genetics
Gout is partly genetic, with genetics contributing to about 60% of variability in uric acid level. The SLC2A9, SLC22A12, and ABCG2 genes have been found to be commonly associated with gout, and variations in them can approximately double the risk. Loss-of-function mutations in SLC2A9 and SLC22A12 cause low blood uric acid levels by reducing urate absorption and unopposed urate secretion. The rare genetic disorders familial juvenile hyperuricemic nephropathy, medullary cystic kidney disease, phosphoribosylpyrophosphate synthetase superactivity, and hypoxanthine-guanine phosphoribosyltransferase deficiency as seen in Lesch–Nyhan syndrome are complicated by gout.
Medical conditions
Gout frequently occurs in combination with other medical problems. Metabolic syndrome, a combination of abdominal obesity, hypertension, insulin resistance, and abnormal lipid levels, occurs in nearly 75% of cases. Other conditions commonly complicated by gout include lead poisoning, kidney failure, hemolytic anemia, psoriasis, solid organ transplants, and myeloproliferative disorders such as polycythemia. A body mass index greater than or equal to 35 increases male risk of gout threefold. Chronic lead exposure and lead-contaminated alcohol are risk factors for gout due to the harmful effect of lead on kidney function.
Medication
Diuretics have been associated with attacks of gout, but a low dose of hydrochlorothiazide does not seem to increase the risk. Other medications that increase the risk include niacin, aspirin (acetylsalicylic acid), ACE inhibitors, angiotensin receptor blockers, beta blockers, ritonavir, and pyrazinamide. The immunosuppressive drugs ciclosporin and tacrolimus are also associated with gout, the former more so when used in combination with hydrochlorothiazide. Hyperuricemia may also be induced by excessive use of vitamin D supplements: levels of serum uric acid have been positively associated with 25(OH)D, and the incidence of hyperuricemia increased 9.4% for every 10 nmol/L increase in 25(OH)D (P < 0.001).
Pathophysiology
Gout is a disorder of purine metabolism, and occurs when its final metabolite, uric acid, crystallizes in the form of monosodium urate, precipitating and forming deposits (tophi) in joints, on tendons, and in the surrounding tissues. Microscopic tophi may be walled off by a ring of proteins, which blocks interaction of the crystals with cells and therefore avoids inflammation. Naked crystals may break out of walled-off tophi due to minor physical damage to the joint, medical or surgical stress, or rapid changes in uric acid levels. When they break through the tophi, they trigger a local immune-mediated inflammatory reaction in macrophages, which is initiated by the NLRP3 inflammasome protein complex. Activation of the NLRP3 inflammasome recruits the enzyme caspase 1, which converts pro-interleukin 1β into active interleukin 1β, one of the key proteins in the inflammatory cascade. An evolutionary loss of urate oxidase (uricase), which breaks down uric acid, in humans and higher primates has made this condition common. The triggers for precipitation of uric acid are not well understood. While it may crystallize at normal levels, it is more likely to do so as levels increase. Other triggers believed to be important in acute episodes of arthritis include cool temperatures, rapid changes in uric acid levels, acidosis, articular hydration, and extracellular matrix proteins. The increased precipitation at low temperatures partly explains why the joints in the feet are most commonly affected. Rapid changes in uric acid may occur due to factors including trauma, surgery, chemotherapy, and diuretics. The starting or increasing of urate-lowering medications can lead to an acute attack of gout, with febuxostat carrying a particularly high risk. Calcium channel blockers and losartan are associated with a lower risk of gout compared to other medications for hypertension.
Diagnosis
Gout may be diagnosed and treated without further investigations in someone with hyperuricemia and the classic acute arthritis of the base of the great toe (known as podagra). Synovial fluid analysis should be done if the diagnosis is in doubt. Plain X-rays are usually normal and are not useful for confirming a diagnosis of early gout. They may show signs of chronic gout such as bone erosion.
Synovial fluid
A definitive diagnosis of gout is based upon the identification of monosodium urate crystals in synovial fluid or a tophus. All synovial fluid samples obtained from undiagnosed inflamed joints by arthrocentesis should be examined for these crystals. Under polarized light microscopy, they have a needle-like morphology and strong negative birefringence. This test is difficult to perform and requires a trained observer. The fluid must be examined relatively soon after aspiration, as temperature and pH affect solubility.
Blood tests
Hyperuricemia is a classic feature of gout, but nearly half of the time gout occurs without hyperuricemia and most people with raised uric acid levels never develop gout. Thus, the diagnostic utility of measuring uric acid levels is limited. Hyperuricemia is defined as a plasma urate level greater than 420 μmol/L (7.0 mg/dl) in males and 360 μmol/L (6.0 mg/dl) in females. Other blood tests commonly performed are white blood cell count, electrolytes, kidney function and erythrocyte sedimentation rate (ESR). However, both the white blood cells and ESR may be elevated due to gout in the absence of infection. A white blood cell count as high as 40.0×10⁹/L (40,000/mm³) has been documented.
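The two unit systems quoted for these cutoffs are related through the molar mass of uric acid (about 168.1 g/mol), so μmol/L ≈ mg/dl × 59.5. A minimal conversion sketch, offered as a back-of-envelope check rather than part of the source:

```python
URIC_ACID_MOLAR_MASS_G_PER_MOL = 168.11  # C5H4N4O3

def mg_dl_to_umol_l(mg_dl: float) -> float:
    """Convert serum urate from mg/dl to umol/L.
    1 mg/dl = 10 mg/L; dividing by the molar mass gives mmol/L,
    and multiplying by 1000 gives umol/L."""
    return mg_dl * 10.0 / URIC_ACID_MOLAR_MASS_G_PER_MOL * 1000.0

print(round(mg_dl_to_umol_l(7.0)))  # ~416, close to the 420 umol/L male cutoff
print(round(mg_dl_to_umol_l(6.0)))  # ~357, close to the 360 umol/L female cutoff
```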
Differential diagnosis
The most important differential diagnosis in gout is septic arthritis. This should be considered in those with signs of infection or those who do not improve with treatment. To help with diagnosis, a synovial fluid Gram stain and culture may be performed. Other conditions that can look similar include CPPD (pseudogout), rheumatoid arthritis, psoriatic arthritis, palindromic rheumatism, and reactive arthritis. Gouty tophi, in particular when not located in a joint, can be mistaken for basal cell carcinoma or other neoplasms.
Prevention
Risk of gout attacks can be lowered by complete abstinence from drinking alcoholic beverages, reducing the intake of fructose (e.g. high fructose corn syrup) and purine-rich foods of animal origin, such as organ meats and seafood. Eating dairy products, vitamin C-rich foods, coffee, and cherries may help prevent gout attacks, as does losing weight. Gout may be secondary to sleep apnea via the release of purines from oxygen-starved cells. Treatment of apnea can lessen the occurrence of attacks.
Medications
As of 2020, allopurinol is generally the recommended preventative treatment if medications are used. A number of other medications may occasionally be considered to prevent further episodes of gout, including probenecid, febuxostat, benzbromarone, and colchicine. Long-term medications are not recommended until a person has had two attacks of gout, unless destructive joint changes, tophi, or urate nephropathy exist; it is not until this point that medications are cost-effective. They are not usually started until one to two weeks after an acute flare has resolved, due to theoretical concerns of worsening the attack, and they are often used in combination with either an NSAID or colchicine for the first three to six months. While it has been recommended that urate-lowering measures should be increased until serum uric acid levels are below 300–360 µmol/L (5.0–6.0 mg/dl), there is little evidence to support this practice over simply putting people on a standard dose of allopurinol. If these medications are in chronic use at the time of an attack, it is recommended that they be continued. Levels that cannot be brought below 6.0 mg/dl while attacks continue indicate refractory gout. While historically it has not been recommended to start allopurinol during an acute attack of gout, this practice appears acceptable. Allopurinol blocks uric acid production, and is the most commonly used agent. Long-term therapy is safe and well tolerated and can be used in people with renal impairment or urate stones, although hypersensitivity occurs in a small number of individuals. The HLA-B*58:01 allele of the human leukocyte antigen B (HLA-B) is strongly associated with severe cutaneous adverse reactions during treatment with allopurinol and is most common among Asian subpopulations, notably those of Korean, Han-Chinese, or Thai descent. Febuxostat is only recommended in those who cannot tolerate allopurinol. There are concerns about more deaths with febuxostat compared to allopurinol. Febuxostat may also increase the rate of gout flares during early treatment. However, there is tentative evidence that febuxostat may bring down urate levels more than allopurinol. Probenecid appears to be less effective than allopurinol and is a second-line agent. Probenecid may be used if undersecretion of uric acid is present (24-hour urine uric acid less than 800 mg). It is, however, not recommended if a person has a history of kidney stones. Pegloticase is an option for the 3% of people who are intolerant to other medications. It is a third-line agent, given as an intravenous infusion every two weeks, and reduces uric acid levels. Pegloticase is useful for decreasing tophi but has a high rate of side effects, and many people develop resistance to it. Potential side effects of pegloticase include kidney stones, anemia, and joint pain, and in 2016 it was withdrawn from the European market. Using lesinurad 400 mg plus febuxostat is more beneficial for tophi resolution than lesinurad 200 mg with febuxostat, with similar side effects; lesinurad plus allopurinol is not effective for tophi resolution. Lesinurad reduces blood uric acid levels by preventing uric acid absorption in the kidneys. It was approved in the United States for use together with allopurinol, among those who were unable to reach their uric acid level targets. Side effects include kidney problems and kidney stones.
Treatment
The initial aim of treatment is to settle the symptoms of an acute attack. Repeated attacks can be prevented by medications that reduce serum uric acid levels. Tentative evidence supports the application of ice for 20 to 30 minutes several times a day to decrease pain. Options for acute treatment include nonsteroidal anti-inflammatory drugs (NSAIDs), colchicine, and glucocorticoids. While glucocorticoids and NSAIDs work equally well, glucocorticoids may be safer. Options for prevention include allopurinol, febuxostat, and probenecid. Lowering uric acid levels can cure the disease. Treatment of associated health problems is also important. Lifestyle interventions have been poorly studied. It is unclear whether dietary supplements have an effect in people with gout.
NSAIDs
NSAIDs are the usual first-line treatment for gout. No specific agent is significantly more or less effective than any other. Improvement may be seen within four hours and treatment is recommended for one to two weeks. They are not recommended for those with certain other health problems, such as gastrointestinal bleeding, kidney failure, or heart failure. While indometacin has historically been the most commonly used NSAID, an alternative, such as ibuprofen, may be preferred due to its better side effect profile in the absence of superior effectiveness. For those at risk of gastric side effects from NSAIDs, an additional proton pump inhibitor may be given. There is some evidence that COX-2 inhibitors may work as well as nonselective NSAIDs for acute gout attack with fewer side effects.
Colchicine
Colchicine is an alternative for those unable to tolerate NSAIDs. At high doses, side effects (primarily gastrointestinal upset) limit its usage. At lower doses, which are still effective, it is well tolerated. Colchicine may interact with other commonly prescribed drugs, such as atorvastatin and erythromycin, among others.
Glucocorticoids
Glucocorticoids have been found to be as effective as NSAIDs and may be used if contraindications exist for NSAIDs. They also lead to improvement when injected into the joint. A joint infection must be excluded, however, as glucocorticoids worsen this condition. There were no short-term adverse effects reported.
Others
Interleukin-1 inhibitors, such as canakinumab, showed moderate effectiveness for pain relief and reduction of joint swelling, but have an increased risk of adverse events, such as back pain, headache, and increased blood pressure. They may, however, work less well than usual doses of NSAIDs. The high cost of this class of drugs may also discourage their use for treating gout.
Prognosis
Without treatment, an acute attack of gout usually resolves in five to seven days; however, 60% of people have a second attack within one year. Those with gout are at increased risk of hypertension, diabetes mellitus, metabolic syndrome, and kidney and cardiovascular disease and thus are at increased risk of death. It is unclear whether medications that lower urate affect cardiovascular disease risks. This may be partly due to its association with insulin resistance and obesity, but some of the increased risk appears to be independent. Without treatment, episodes of acute gout may develop into chronic gout with destruction of joint surfaces, joint deformity, and painless tophi. These tophi occur in 30% of those who are untreated for five years, often in the helix of the ear, over the olecranon processes, or on the Achilles tendons. With aggressive treatment, they may dissolve. Kidney stones also frequently complicate gout, affecting between 10 and 40% of people, and occur due to low urine pH promoting the precipitation of uric acid. Other forms of chronic kidney dysfunction may occur.
Epidemiology
Gout affects around 1–2% of people in the Western world at some point in their lifetimes and is becoming more common. Some 5.8 million people were affected in 2013. Rates of gout approximately doubled between 1990 and 2010. This rise is believed to be due to increasing life expectancy, changes in diet, and an increase in diseases associated with gout, such as metabolic syndrome and high blood pressure. Factors that influence rates of gout include age, race, and the season of the year. In men over 30 and women over 50, rates are 2%. In the United States, gout is twice as likely in males of African descent as in those of European descent. Rates are high among Pacific Islanders and the Māori, but the disease is rare in aboriginal Australians, despite a higher mean uric acid serum concentration in the latter group. It has become common in China, Polynesia, and urban Sub-Saharan Africa. Some studies found that attacks of gout occur more frequently in the spring. This has been attributed to seasonal changes in diet, alcohol consumption, physical activity, and temperature.
History
The term "gout" was initially used by Randolphus of Bocking, around 1200 AD. It is derived from the Latin word gutta, meaning "a drop" (of liquid). According to the Oxford English Dictionary, this is derived from humorism and "the notion of the dropping of a morbid material from the blood in and around the joints".Gout has been known since antiquity. Historically, it was referred to as "the king of diseases and the disease of kings" or "rich mans disease". The Ebers papyrus and the Edwin Smith papyrus, (c. 1550 BC) each mention arthritis of the first metacarpophalangeal joint as a distinct type of arthritis. These ancient manuscripts cite (now missing) Egyptian texts about gout that are claimed to have been written 1,000 years earlier by Imhotep. Greek physician Hippocrates around 400 BC commented on it in his Aphorisms, noting its absence in eunuchs and premenopausal women. Aulus Cornelius Celsus (30 AD) described the linkage with alcohol, later onset in women and associated kidney problems:
Again thick urine, the sediment from which is white, indicates that pain and disease are to be apprehended in the region of joints or viscera... Joint troubles in the hands and feet are very frequent and persistent, such as occur in cases of podagra and cheiragra. These seldom attack eunuchs or boys before coition with a woman, or women except those in whom the menses have become suppressed... some have obtained lifelong security by refraining from wine, mead and venery.
Benjamin Welles, an English physician, authored the first medical book on gout, A Treatise of the Gout, or Joint Evil, in 1669. In 1683, Thomas Sydenham, an English physician, described its occurrence in the early hours of the morning and its predilection for older males:
Gouty patients are, generally, either old men or men who have so worn themselves out in youth as to have brought on a premature old age—of such dissolute habits none being more common than the premature and excessive indulgence in venery and the like exhausting passions. The victim goes to bed and sleeps in good health. About two o'clock in the morning he is awakened by a severe pain in the great toe; more rarely in the heel, ankle, or instep. The pain is like that of a dislocation and yet parts feel as if cold water were poured over them. Then follows chills and shivers and a little fever... The night is passed in torture, sleeplessness, turning the part affected and perpetual change of posture; the tossing about of body being as incessant as the pain of the tortured joint and being worse as the fit comes on.
Dutch scientist Antonie van Leeuwenhoek first described the microscopic appearance of urate crystals in 1679. In 1848, English physician Alfred Baring Garrod identified excess uric acid in the blood as the cause of gout.
Other animals
Gout is rare in most other animals due to their ability to produce uricase, which breaks down uric acid. Humans and other great apes do not have this ability; thus, gout is common. Other animals with uricase include fish, amphibians and most non-primate mammals. The Tyrannosaurus rex specimen known as "Sue" is believed to have had gout.
Research
A number of new medications are under study for treating gout, including anakinra, canakinumab, and rilonacept. Canakinumab may result in better outcomes than a low dose of a glucocorticoid, but costs five thousand times more. A recombinant uricase enzyme (rasburicase) is available but its use is limited, as it triggers an immune response. Less antigenic versions are in development.
References
External links
Gout at Curlie
Chisholm, Hugh, ed. (1911). "Gout" . Encyclopædia Britannica. Vol. 12 (11th ed.). Cambridge University Press. pp. 289–291.
"Gout". MedlinePlus. U.S. National Library of Medicine. |
Wife | A wife (pl.: wives) is a woman in a marital relationship. A woman who has separated from her partner continues to be a wife until the marriage is legally dissolved with a divorce judgement. On the death of her partner, a wife is referred to as a widow. The rights and obligations of a wife in relation to her partner, and her status in the community and in law, vary between cultures and have varied over time.
Etymology
The word is of Germanic origin, from Proto-Germanic *wībam, "woman". In Middle English it had the form wif, and in Old English wīf, "woman or wife". It is related to Modern German Weib (woman, female) and Danish viv (wife, usually poetic). The original meaning of "wife" as simply "woman", unconnected with marriage or a husband, is preserved in words such as "midwife", "goodwife", "fishwife" and "spaewife".
Summary
In many cultures, it is generally expected that a woman will take her husband's surname upon marriage, though that is not universal. A married woman may indicate her marital status in a number of ways: in Western culture a married woman commonly wears a wedding ring, but in other cultures other markers of marital status may be used. A married woman is commonly given the honorific title "Mrs", but some married women prefer to be referred to as "Ms", a title which is also used by preference or when the marital status of a woman is unknown.
Related terminology
A woman on her wedding day is usually described as a bride, even after the wedding ceremony, while being described as a wife is also appropriate after the wedding or after the honeymoon. If she is marrying a man, her partner is known as the bridegroom during the wedding, and within the marriage is called her husband.
In the older custom, still followed, e.g., by Roman Catholic ritual, the word bride actually means fiancée and applies up to the exchange of matrimonial consent (the actual marriage act); from then on, even while the rest of the ceremony is ongoing, the woman is a wife and no longer a bride, and the couple is no longer referred to as the bridal couple but as the newlywed couple or "newlyweds".
"Wife" refers to the institutionalized relation to the other spouse, unlike mother, a term that puts a woman into the context of her children. In some societies, especially historically, a concubine was a woman who was in an ongoing, usually matrimonially oriented relationship with a man who could not be married to her, often because of a difference in social status.
The term wife is most commonly applied to a woman in a union sanctioned by law (including religious law), not to a woman in an informal cohabitation relationship, which may be known as a girlfriend, partner, cohabitant, significant other, concubine, mistress, etc. However, a woman in a so-called common law marriage may describe herself as a common law wife, de facto wife, or simply a wife. Those seeking to advance gender neutrality may refer to both marriage partners as "spouses", and many countries and societies are rewording their statute law by replacing "wife" and "husband" with "spouse". A wife whose spouse is deceased is a widow.
Termination of the status of a wife
The status of a wife may be terminated by divorce, annulment, or the death of a spouse. In the case of divorce, terminology such as former wife or ex-wife is often used. With regard to annulment, such terms are not, strictly speaking, correct, because annulment, unlike divorce, is usually retroactive, meaning that an annulled marriage is considered to be invalid from the beginning, almost as if it had never taken place. In the case of the death of the other spouse, the term used is widow. The social status of such women varies by culture, but in some places they may be subject to potentially harmful practices, such as widow inheritance or levirate marriage; or divorced women may be socially stigmatized. In some cultures, the termination of the status of wife made life itself meaningless, as in the case of those cultures that practiced sati, a funeral ritual within some Asian communities, in which a recently widowed woman committed suicide by fire, typically on the husband's funeral pyre.
Legal rights of the wife
The legal rights of a wife have been a subject of debate in many jurisdictions since the 19th century, and in many they still are. This subject was addressed in particular by John Stuart Mill in The Subjection of Women (1869). Historically, many societies have given sets of rights and obligations to husbands that have been very different from the sets of rights and obligations given to wives. In particular, the control of marital property, inheritance rights, and the right to dictate the activities of children of the marriage have typically been given to male marital partners. However, this practice was curtailed to a great extent in many countries in the twentieth century, and more modern statutes tend to define the rights and duties of a spouse without reference to gender. Among the last European countries to establish full gender equality in marriage were Switzerland, Greece, Spain, and France in the 1980s. In various marriage laws around the world, however, the husband continues to have authority; for instance, the Civil Code of Iran states at Article 1105: "In relations between husband and wife; the position of the head of the family is the exclusive right of the husband".
Exchanges of goods or money
Traditionally, and still in some parts of the world, the bride or her family bring her husband a dowry, or the husband or his family pay a bride price to the bride's family, or both are exchanged between the families; or the husband pays the wife a dower. The purpose of the dowry varies by culture and has varied historically. In some cultures, it was paid not only to support the establishment of a new family, but also served as a condition that if the husband committed grave offenses against his wife, the dowry had to be returned to the wife or her family; during the marriage, however, the dowry was often made inalienable by the husband. Today, dowries continue to be expected in parts of South Asia such as India, Pakistan, Nepal, Bangladesh, and Sri Lanka, and conflicts related to their payment sometimes result in violence such as dowry deaths and bride burning.
Changing of name upon marriage
In some cultures, particularly in the Anglophone West, wives often change their surnames to that of the husband upon getting married. For some, this is a controversial practice, due to its tie to the historical doctrine of coverture and to the historically subordinated roles of wives. Others argue that today this is merely a harmless tradition that should be accepted as a free choice. Some jurisdictions consider this practice discriminatory and contrary to women's rights, and have restricted or banned it; for example, since 1983, when Greece adopted a new marriage law which guaranteed gender equality between the spouses, women in Greece are required to keep their birth names for their whole life.
Childbearing
Traditionally, and still in many cultures, the role of a wife was closely tied to that of a mother, by a strong expectation that a wife ought to bear children, while, conversely, an unmarried woman should not have a child out of wedlock. These views have changed in many parts of the world, and children born outside marriage have become more common in many countries. Although some wives, particularly in Western countries, choose not to have children, such a choice is not accepted in some parts of the world. In northern Ghana, for example, the payment of bride price signifies a woman's requirement to bear children, and women using birth control are at risk of threats and coercion. In addition, some religions are interpreted as requiring children in marriage; for instance, Pope Francis said in 2015 that choosing not to have children was "selfish".
Differences in cultures
Antiquity
Many customs, such as the dower, the dowry, and the bride price, have long histories in antiquity. The exchange of items of value at marriage goes back to the oldest sources, and the wedding ring has likewise long been used as a symbol of keeping faith to a person.
Western cultures
Historical status
In ancient Rome, the Emperor Augustus introduced marriage legislation, the Lex Papia Poppaea, which rewarded marriage and childbearing. The legislation also imposed penalties on young persons who failed to marry and on those who committed adultery. Therefore, marriage and childbearing were made law between the ages of twenty-five and sixty for men, and twenty and fifty for women. Women who were Vestal Virgins were selected between the ages of 6 and 10 to serve as priestesses in the temple of the goddess Vesta in the Roman Forum for 30 years, after which time they could marry. Noble women were known to marry as young as 12 years of age, whereas women in the lower classes were more likely to marry slightly further into their teenage years. Ancient Roman law required brides to be at least 12 years old, a standard adopted by Roman Catholic canon law. In ancient Roman law, first marriages to brides aged 12–25 required the consent of the bride and her father, but by the late antique period Roman law permitted women over 25 to marry without parental consent. The father had the right and duty to seek a good and useful match for his children, and might arrange a child's betrothal long before he or she came of age. To further the interests of their birth families, daughters of the elite would marry into respectable families. If a daughter could prove the proposed husband to be of bad character, she could legitimately refuse the match. The age of lawful consent to a marriage was 12 for maidens and 14 for youths. In late antiquity, most Roman women seem to have married in their late teens to early twenties, but noble women married younger than those of the lower classes, and an aristocratic maiden was expected to be a virgin until her first marriage. In late antiquity, under Roman law, daughters inherited equally from their parents if no will was produced. In addition, Roman law recognized wives' property as legally separate from husbands' property, as did some legal systems in parts of Europe and colonial Latin America.
Christian cultures claim to be guided by the New Testament in regard to their view on the position of a wife in society as well as her marriage. The New Testament condemns divorce for both men and women (1 Cor 7:10–11), and assumes monogamy on the part of the husband: the wife is to have her "own" husband, and the husband is to have his "own" wife (1 Cor 7:2). In the medieval period, this was understood to mean that a wife should not share a husband with other wives. As a result, divorce was relatively uncommon in the pre-modern West, particularly in the medieval and early modern period, and husbands in the Roman, later medieval and early modern period did not publicly take more than one wife.
In pre-modern times, it was unusual to marry for love alone, although it became an ideal in literature by the early modern period. In the 12th century, the Roman Catholic Church drastically changed legal standards for marital consent by allowing daughters over 12 and sons over 14 to marry without their parents' approval, even if their marriage was made clandestinely. Parish studies have confirmed that late medieval women did sometimes marry against their parents' approval. The Roman Catholic Church's policy of considering clandestine marriages and marriages made without parental consent to be valid was controversial, and in the 16th century both the French monarchy and the Lutheran church sought to end these practices, with limited success. The New Testament made no pronouncements about wives' property rights, which in practice were influenced more by secular laws than religion. Most influential in the pre-modern West was the civil law, except in English-speaking countries where English common law emerged in the High Middle Ages. In addition, local customary law influenced wives' property rights; as a result, wives' property rights in the pre-modern West varied widely from region to region. Because wives' property rights and daughters' inheritance rights varied widely from region to region due to differing legal systems, the amount of property a wife might own varied greatly. Under the English common law system, which dates to the later medieval period, daughters and younger sons were usually excluded from landed property if no will was produced. Under English common law, there was a system in which a wife with a living husband ("feme couvert") could own little property in her own name. Because a woman was unable to easily support herself, marriage was very important to most women's economic status. This problem has been dealt with extensively in literature, where the most important reason for women's limited power was the denial of equal education and equal property rights for females. The situation was assessed by the English conservative moralist Sir William Blackstone: "The husband and wife are one, and the husband is the one." Married women's property rights in the English-speaking world improved with the Married Women's Property Act 1882 and similar legal changes, which allowed wives with living husbands to own property in their own names. Until late in the 20th century, women could in some regions or times sue a man for wreath money when he took her virginity without taking her as his wife. If a woman did not want to marry, another option was entering a convent as a nun, to become a "bride of Christ", a state in which her chastity and economic survival would be protected. Both a wife and a nun wore Christian headcovering, which proclaimed their state of protection by the rights of marriage. Much more significant than the option of becoming a nun was the option of non-religious spinsterhood in the West. An unmarried woman, a feme sole, had the right to own property and make contracts in her own name. As first demonstrated quantitatively by John Hajnal, in the 19th and early 20th centuries the percentage of non-clerical Western women who never married was typically as high as 10–15%, a prevalence of female celibacy never yet documented for any other major traditional civilization. In addition, early modern Western women married at quite high ages (typically mid to late 20s) relative to other major traditional cultures.
The high age at first marriage for Western women has been shown by many parish reconstruction studies to be a traditional Western marriage pattern that dates back at least as early as the mid-16th century.
Contemporary status
In the 20th century, the role of the wife in Western marriage changed in two major ways. The first was the shift from an "institutional" to a "companionate" marriage: for the first time since the Middle Ages, wives became distinct legal entities, allowed to own property and to sue in their own names. Until then, husband and wife formed a single legal entity under the doctrine of coverture, and only the husband was allowed to exercise its rights. The second was the drastic alteration of middle- and upper-class family life: beginning in the 1960s, these wives began to work outside the home, and the social acceptance of divorce, the single-parent family, and the stepfamily or "blended family" made marriage more "individualized".

Today, some women wear a wedding ring to show their status as wives.

In Western countries today, married women usually have an education and a profession, and they (or their husbands) can take time off from work under a legally provided system of ante-natal care, statutory maternity leave, and maternity pay or a maternity allowance. Marriage, as opposed to unmarried pregnancy, makes the spouse responsible for the wife's child and allows him to speak on her behalf in states where he is automatically presumed to be the child's legal parent. Conversely, a wife has more legal authority in some cases when she speaks on behalf of a spouse than she would have if they were not married; for example, when her spouse is in a coma after an accident, a wife may have the right of advocacy. If the couple divorce, she also might receive, or pay, alimony (see Law and divorce around the world).
Women's income affects the dynamics of heterosexual love relationships
The effect of women's income on the dynamics of heterosexual relationships depends on several factors, the first being the couple's cultural values. As Brown and Roberts (2014) explain, the strength of a couple's traditional values determines the effect a wife's income has on the dynamics of the relationship. If the couple holds strong traditional values, the woman's income can undermine the man's gender identity and harm his well-being. If they hold strong liberal values, the income of a woman changes the dynamics almost in the opposite way: the wife becomes the provider of the household and the man does the housework. In most cases in the study's sample, however, the dynamics settled into a mutually dependent relationship in which the woman's income is needed but the woman still has to do the majority of the housework, and her well-being suffers seriously.
Another variable that changes the dynamics is time. Research by Schwartz and Pons (2016) suggests an explanation of how the dynamics began changing over recent decades: at the beginning of the 1970s, the traditional arrangement was that women did the household labor and men worked for income, because the economic system of the time made men far more capable of bringing income to the family.
The research also noted that the few women who out-earned their husbands were better qualified than the average man, yet still earned less than the average man because of discrimination. Eventually, the feminist movement changed the patriarchal system, and women started earning more. The dynamics of couples started changing in the 1980s: the correlation between a wife's higher income and higher rates of divorce began to weaken, because the dynamics had already changed and people had grown more accustomed to departing from traditional roles in order to bring more income to the family. This effect is supported by the earlier study of Rogers (2004).
Finally, another variable that was analyzed is economic independence theory. Rogers (2004) establishes that if one side of the couple provides more than 60% of the couple's total income, there is a dependence effect. Over the past decades, women have seen a major increase in their economic independence. But at the same time that the dynamics have changed to give women the power to decide, there are sacrifices they have to make, such as the postponement of maternity. Several studies, such as Laura Gordo (2009), indicate that women are postponing maternity in order to specialize further in their jobs, which, according to experts, reduces their fertility rates. This raises the question of how the situation can change. One view is that a more significant change in the system is needed: while there is an injustice when men hold greater economic advantages than women, there is also an injustice when women must reduce their chances of having children in order to compete in the economic system. The topic merits deeper investigation in further research.
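As a minimal illustration of the 60% threshold described above, the following Python sketch classifies a couple as exhibiting the dependence effect; the function name and sample figures are hypothetical and are not drawn from Rogers (2004):

def dependence_effect(income_a: float, income_b: float) -> bool:
    """Return True if either partner provides more than 60% of total income."""
    total = income_a + income_b
    if total <= 0:
        raise ValueError("total income must be positive")
    return max(income_a, income_b) / total > 0.60

# Hypothetical examples:
print(dependence_effect(70_000, 30_000))  # True: one partner provides 70%
print(dependence_effect(55_000, 45_000))  # False: 55% is below the threshold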
Asian cultures
Hinduism
In Indo-Aryan languages, a wife is known as Patni, which means a woman who shares everything in this world with her husband, as he does with her, including their identity. Decisions are ideally made by mutual consent. A wife usually takes care of everything inside her household, including the family's health, the children's education, and the parents' needs.
The majority of Hindu marriages in rural and traditional India are arranged marriages. Once the families find a suitable match (a family of the same caste, culture, and financial status), the boy and the girl meet and talk to decide the outcome. In recent times, however, Western culture has had a significant influence, and the new generations are more open to the idea of marrying for love.
Indian law recognizes rape and the sexual, emotional, or verbal abuse of a woman by her husband as crimes.
In Hinduism, a wife is known as a Patni or Ardhangini (similar to "the better half"), meaning a part of the husband or his family. In Hinduism, a woman or a man may marry, but may have only one husband or wife, respectively.
In India, women may wear vermilion powder on their foreheads, an ornament called Mangalsutra (Hindi: मंगलसूत्र) which is a form of necklace, or rings on their toes (which are not worn by single women) to show their status as married women.
Buddhism and Chinese folk religions
China's family laws were changed by the Communist revolution; and in 1950, the People's Republic of China enacted a comprehensive marriage law including provisions giving the spouses equal rights with regard to ownership and management of marital property.
Japan
In Japan, before the enactment of the Meiji Civil Code of 1898, all of a woman's property, such as land or money, passed to her husband, except for personal clothing and a mirror stand. See Women in Japan and Law of Japan.
Wife in Abrahamic religions
Wife in Christianity
Christian marriage, as based on biblical teachings, is to be between one woman and one man, whom God Himself has joined and whom no human is to separate, according to Christ's words (Matthew 19:4–6). The New Testament states that an unmarried Christian woman is to be celibate or is to become the Christian wife of one husband, to avoid sexual immorality and on account of sexual passion (1 Cor 7:1–2 & 8–9). The New Testament permits divorce of a Christian wife by a Christian husband only if she has committed adultery (Matthew 5:32). A Christian widow is free to (re)marry a man she chooses (1 Cor 7:39), but a divorced Christian woman is forbidden to remarry, because she would be committing adultery if she did (Matthew 5:32); she is therefore to remain unmarried and celibate or be reconciled with her husband (1 Cor 7:1–2 & 8–9; 1 Cor 7:10–11). A Christian wife may divorce a non-Christian husband if he wants a divorce (1 Cor 7:12–16). Christian husbands are to love their Christian wives as Christ loved the Church (Ephesians 5:25) and as they love themselves (Ephesians 5:33), and the Christian wife is to respect her husband (Ephesians 5:33). Christian husbands are not to be harsh with their Christian wives (Colossians 3:19) and are to treat them as a delicate vessel and with honor (1 Peter 3:7).
Wife in Islam
Women in Islam have a range of rights and obligations (see main article Rights and obligations of spouses in Islam). Marriage takes place on the basis of a marriage contract. Arranged marriage is relatively common in traditionalist families, whether in Muslim countries or among first- or second-generation immigrants elsewhere.
Women in general are expected to wear specific clothing, as stated by the hadith, such as the hijab, which may take different styles depending on the culture of the country, where local traditions may seep in. The husband must pay a mahr to the bride.

Traditionally, the wife in Islam is seen as a protected, chaste person who manages the household and the family. She has the important role of raising the children and bringing up the next generation of Muslims. In Islam, it is highly recommended that the wife remain at home, although she is fully able to own property or to work. The husband is obligated to spend on the wife for all of her needs, while she is not obligated to spend even if she is wealthy. Muhammad is said to have commanded all Muslim men to treat their wives well. There is a hadith collected by Al-Tirmidhi in which Muhammad is said to have stated: "The believers who show the most perfect faith are those who have the best character and the best of you are those who are best to their wives."

Traditionally, Muslim married women are not distinguished from unmarried women by an outward symbol (such as a wedding ring). However, wedding rings for women have been adopted from Western culture over the past thirty years.
Wife in Judaism
Rabbinic Judaism
Women in Judaism have a range of rights and obligations (see main article Jewish views on marriage). Marriage takes place on the basis of a Jewish marriage contract, called a Ketubah. In traditional families, the line between arranged marriages and love marriages is often blurred.
Married women, in traditional families, wear specific clothes, like the tichel.
Hebrew Bible
Once, a man called Shechem, a Hivite, offered a dowry to obtain an Israelite wife but was rejected, since he was not an Israelite himself (Genesis 34).
In ancient times there were Israelite women who were Judges, Queens regnant, Queens regent, Queens mother, Queens consort, and Prophetesses:
Deborah was the wife of an Israelite man whose name was Lapidoth, which means "torches." Deborah was a Judge and a Prophetess.
Esther was the Jewish wife of a Persian King named Ahasuerus. Esther was Queen consort to the King of Persia and at the same time she was Queen regnant of the Jewish people in Persia and their Prophetess.
Bathsheba was the Queen consort of King-Prophet David and then the Queen mother of King-Prophet Solomon. Solomon rose from his throne when she entered, bowed to her, ordered that a throne be brought, and had her sit at his right hand; this is in stark contrast to when she was Queen consort and bowed to King-Prophet David when she entered. The Prophet Jeremiah portrays a Queen mother as sharing in her son's rule over the kingdom (Jeremiah 13:18–20).
The wife of the Prophet Isaiah was a Prophetess (Isaiah 8:3).
Expectation of fidelity and violence related to adultery
There is a widely held expectation, which has existed for most of recorded history and in most cultures, that a wife is not to have sexual relations with anyone other than her legal husband. A breach of this expectation of fidelity is commonly referred to as adultery or extramarital sex. Historically, adultery has been considered a serious offense, sometimes a crime, and a sin. Even where it is neither, adultery may still have legal consequences, particularly as a ground for divorce. Adultery may be a factor to consider in a property settlement; it may affect the status and custody of children; moreover, adultery can result in social ostracism in some parts of the world. In addition, affinity rules in Catholicism, Judaism, and Islam prohibit an ex-wife or widow from engaging in sexual relations with, and from marrying, a number of relatives of her former husband.
In parts of the world, adultery may result in violent acts, such as honor killings or stoning, and some jurisdictions, especially those that apply Sharia law, allow such acts to take place legally. As of September 2010, stoning is a legal punishment for zina al-mohsena ("adultery of married persons") in countries such as Saudi Arabia, Sudan, Iran, Yemen, the United Arab Emirates, and some states in Nigeria.
See also
Bride kidnapping
Fiancée
Husband
Marriage
Personal property or movable property
Wife acceptance factor
Wife selling
== References == |
Zygoma fracture | A zygoma fracture (zygomatic fracture) is a form of facial fracture caused by a fracture of the zygomatic bone. A zygoma fracture is often the result of facial trauma such as violence, falls, or automobile accidents. Symptoms include flattening of the face, trismus (reduced opening of the jaw), and lateral subconjunctival hemorrhage.
See also
Zygomaticomaxillary complex fracture
== References == |
Consciousness | Consciousness, at its simplest, is sentience and awareness of internal and external existence. However, the difficulty of defining it has led to millennia of analyses, explanations, and debates by philosophers, theologians, linguists, and scientists. Opinions differ about what exactly needs to be studied or even considered consciousness. In some explanations, it is synonymous with the mind; at other times, it is an aspect of mind. In the past, it was one's "inner life", the world of introspection, of private thought, imagination, and volition. Today, it often includes any kind of cognition, experience, feeling, or perception. It may be awareness, awareness of awareness, or self-awareness, either continuously changing or not. The disparate range of research, notions, and speculations raises the question of whether the right questions are being asked.

Examples of the range of descriptions, definitions, or explanations are: simple wakefulness; one's sense of selfhood or soul, explored by "looking within"; being a metaphorical "stream" of contents; or being a mental state, mental event, or mental process of the brain.
Inter-disciplinary perspectives
Western philosophers since the time of Descartes and Locke have struggled to comprehend the nature of consciousness and how it fits into a larger picture of the world. These questions remain central to both continental and analytic philosophy, in phenomenology and the philosophy of mind, respectively.
Consciousness has also become a significant topic of interdisciplinary research in cognitive science, involving fields such as psychology, linguistics, anthropology, neuropsychology and neuroscience. The primary focus is on understanding what it means biologically and psychologically for information to be present in consciousness—that is, on determining the neural and psychological correlates of consciousness.
In medicine, consciousness is assessed by observing a patient's arousal and responsiveness, and can be seen as a continuum of states ranging from full alertness and comprehension, through disorientation, delirium, loss of meaningful communication, and finally loss of movement in response to painful stimuli. Issues of practical concern include how the presence of consciousness can be assessed in severely ill, comatose, or anesthetized people, and how to treat conditions in which consciousness is impaired or disrupted. The degree of consciousness is measured by standardized behavior observation scales such as the Glasgow Coma Scale.
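To make such scoring concrete, the sketch below (in Python; the function name is illustrative, and this is not a clinical tool) computes a Glasgow Coma Scale total from its three observed components, eye opening (1–4), verbal response (1–5), and motor response (1–6), giving totals from 3 (deep coma) to 15 (fully alert):

def glasgow_coma_score(eye: int, verbal: int, motor: int) -> int:
    """Sum the three GCS components after checking each is in range."""
    if not (1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6):
        raise ValueError("out of range: eye 1-4, verbal 1-5, motor 1-6")
    return eye + verbal + motor

# Hypothetical example: eye opening to speech (3), confused speech (4),
# and obeying commands (6) give a total of 13.
print(glasgow_coma_score(3, 4, 6))  # 13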
Etymology
In the late 20th century, philosophers such as Hamlyn, Rorty, and Wilkes disagreed with Kahn, Hardie, and Modrak as to whether Aristotle even had a concept of consciousness. Aristotle does not use any single word or terminology to name the phenomenon; the term arises only much later, especially in the work of John Locke. Caston contends that for Aristotle, perceptual awareness was somewhat the same as what modern philosophers call consciousness.

The origin of the modern concept of consciousness is often attributed to Locke's Essay Concerning Human Understanding, published in 1690. Locke defined consciousness as "the perception of what passes in a man's own mind". His essay influenced the 18th-century view of consciousness, and his definition appeared in Samuel Johnson's celebrated Dictionary (1755).
"Consciousness" (French: conscience) is also defined in the 1753 volume of Diderot and dAlemberts Encyclopédie, as "the opinion or internal feeling that we ourselves have from what we do".The earliest English language uses of "conscious" and "consciousness" date back, however, to the 1500s. The English word "conscious" originally derived from the Latin conscius (con- "together" and scio "to know"), but the Latin word did not have the same meaning as the English word—it meant "knowing with", in other words, "having joint or common knowledge with another". There were, however, many occurrences in Latin writings of the phrase conscius sibi, which translates literally as "knowing with oneself", or in other words "sharing knowledge with oneself about something". This phrase had the figurative meaning of "knowing that one knows", as the modern English word "conscious" does. In its earliest uses in the 1500s, the English word "conscious" retained the meaning of the Latin conscius. For example, Thomas Hobbes in Leviathan wrote: "Where two, or more men, know of one and the same fact, they are said to be Conscious of it one to another." The Latin phrase conscius sibi, whose meaning was more closely related to the current concept of consciousness, was rendered in English as "conscious to oneself" or "conscious unto oneself". For example, Archbishop Ussher wrote in 1613 of "being so conscious unto myself of my great weakness". Lockes definition from 1690 illustrates that a gradual shift in meaning had taken place.
A related word was conscientia, which primarily means moral conscience. In the literal sense, "conscientia" means knowledge-with, that is, shared knowledge. The word first appears in Latin juridical texts by writers such as Cicero. Here, conscientia is the knowledge that a witness has of the deed of someone else. René Descartes (1596–1650) is generally taken to be the first philosopher to use conscientia in a way that does not fit this traditional meaning. Descartes used conscientia the way modern speakers would use "conscience". In Search after Truth (Regulæ ad directionem ingenii ut et inquisitio veritatis per lumen naturale, Amsterdam 1701) he says "conscience or internal testimony" (conscientiâ, vel interno testimonio).
The problem of definition
The dictionary definitions of the word consciousness extend through several centuries and reflect a range of seemingly related meanings, with some differences that have been controversial, such as the distinction between inward awareness and perception of the physical world, or the distinction between conscious and unconscious, or the notion of a "mental entity" or "mental activity" that is not physical.
The common usage definitions of consciousness in Webster's Third New International Dictionary (1966 edition, Volume 1, page 482) are as follows:
awareness or perception of an inward psychological or spiritual fact; intuitively perceived knowledge of something in one's inner self
inward awareness of an external object, state, or fact
concerned awareness; INTEREST, CONCERN—often used with an attributive noun [e.g. class consciousness]
the state or activity that is characterized by sensation, emotion, volition, or thought; mind in the broadest possible sense; something in nature that is distinguished from the physical
the totality in psychology of sensations, perceptions, ideas, attitudes, and feelings of which an individual or a group is aware at any given time or within a particular time span—compare STREAM OF CONSCIOUSNESS
waking life (as that to which one returns after sleep, trance, or fever) wherein all one's mental powers have returned...
the part of mental life or psychic content in psychoanalysis that is immediately available to the ego—compare PRECONSCIOUS, UNCONSCIOUS

The Cambridge Dictionary defines consciousness as "the state of understanding and realizing something."
The Oxford Living Dictionary defines consciousness as "The state of being aware of and responsive to one's surroundings.", "A person's awareness or perception of something.", and "The fact of awareness by the mind of itself and the world."

Philosophers have attempted to clarify technical distinctions by using a jargon of their own. The Routledge Encyclopedia of Philosophy in 1998 defines consciousness as follows:
Consciousness—Philosophers have used the term consciousness for four main topics: knowledge in general, intentionality, introspection (and the knowledge it specifically generates) and phenomenal experience... Something within one's mind is introspectively conscious just in case one introspects it (or is poised to do so). Introspection is often thought to deliver one's primary knowledge of one's mental life. An experience or other mental entity is phenomenally conscious just in case there is something it is like for one to have it. The clearest examples are: perceptual experience, such as tastings and seeings; bodily-sensational experiences, such as those of pains, tickles and itches; imaginative experiences, such as those of one's own actions or perceptions; and streams of thought, as in the experience of thinking in words or in images. Introspection and phenomenality seem independent, or dissociable, although this is controversial.
Many philosophers and scientists have been unhappy about the difficulty of producing a definition that does not involve circularity or fuzziness. In The Macmillan Dictionary of Psychology (1989 edition), Stuart Sutherland expressed a skeptical attitude more than a definition:
Consciousness—The having of perceptions, thoughts, and feelings; awareness. The term is impossible to define except in terms that are unintelligible without a grasp of what consciousness means. Many fall into the trap of equating consciousness with self-consciousness—to be conscious it is only necessary to be aware of the external world. Consciousness is a fascinating but elusive phenomenon: it is impossible to specify what it is, what it does, or why it has evolved. Nothing worth reading has been written on it.
A partisan definition such as Sutherland's can hugely affect researchers' assumptions and the direction of their work:
If awareness of the environment... is the criterion of consciousness, then even the protozoans are conscious. If awareness of awareness is required, then it is doubtful whether the great apes and human infants are conscious.
Many philosophers have argued that consciousness is a unitary concept that is understood by the majority of people despite the difficulty philosophers have had defining it. Others, though, have argued that the level of disagreement about the meaning of the word indicates that it either means different things to different people (for instance, the objective versus subjective aspects of consciousness), that it encompasses a variety of distinct meanings with no simple element in common, or that we should eliminate this concept from our understanding of the mind, a position known as consciousness semanticism.
Philosophy of mind
Most writers on the philosophy of consciousness have been concerned with defending a particular point of view, and have organized their material accordingly. For surveys, the most common approach is to follow a historical path by associating stances with the philosophers who are most strongly associated with them, for example, Descartes, Locke, Kant, etc. An alternative is to organize philosophical stances according to basic issues.
The coherence of the concept
Philosophers differ from non-philosophers in their intuitions about what consciousness is. While most people have a strong intuition for the existence of what they refer to as consciousness, skeptics argue that this intuition is false, either because the concept of consciousness is intrinsically incoherent, or because our intuitions about it are based in illusions. Gilbert Ryle, for example, argued that traditional understanding of consciousness depends on a Cartesian dualist outlook that improperly distinguishes between mind and body, or between mind and world. He proposed that we speak not of minds, bodies, and the world, but of individuals, or persons, acting in the world. Thus, by speaking of "consciousness" we end up misleading ourselves by thinking that there is any such thing as consciousness separated from behavioral and linguistic understandings.
Types of consciousness
Ned Block argued that discussions on consciousness often fail to properly distinguish phenomenal consciousness (P-consciousness) from access consciousness (A-consciousness), though these terms had been used before Block. P-consciousness, according to Block, is raw experience: it is moving, colored forms, sounds, sensations, emotions and feelings with our bodies and responses at the center. These experiences, considered independently of any impact on behavior, are called qualia. A-consciousness, on the other hand, is the phenomenon whereby information in our minds is accessible for verbal report, reasoning, and the control of behavior. So, when we perceive, information about what we perceive is access conscious; when we introspect, information about our thoughts is access conscious; when we remember, information about the past is access conscious, and so on. Although some philosophers, such as Daniel Dennett, have disputed the validity of this distinction, others have broadly accepted it. David Chalmers has argued that A-consciousness can in principle be understood in mechanistic terms, but that understanding P-consciousness is much more challenging: he calls this the hard problem of consciousness.

Some philosophers believe that Block's two types of consciousness are not the end of the story. William Lycan, for example, argued in his book Consciousness and Experience that at least eight clearly distinct types of consciousness can be identified (organism consciousness; control consciousness; consciousness of; state/event consciousness; reportability; introspective consciousness; subjective consciousness; self-consciousness), and that even this list omits several more obscure forms.

There is also debate over whether A-consciousness and P-consciousness always coexist or whether they can exist separately. Although P-consciousness without A-consciousness is more widely accepted, there have been some hypothetical examples of A without P. Block, for instance, suggests the case of a "zombie" that is computationally identical to a person but without any subjectivity. However, he remains somewhat skeptical, concluding: "I don't know whether there are any actual cases of A-consciousness without P-consciousness, but I hope I have illustrated their conceptual possibility."
Consciousness in children
Of the eight types of consciousness in the Lycan classification, some are detectable in utero and others develop years after birth. Psychologist and educator William Foulkes studied children's dreams and concluded that prior to the shift in cognitive maturation that humans experience during ages five to seven, children lack the Lockean consciousness that Lycan had labeled "introspective consciousness" and that Foulkes labels "self-reflection." In a 2020 paper, Katherine Nelson and Robyn Fivush use "autobiographical consciousness" to label essentially the same faculty, and agree with Foulkes on the timing of this faculty's acquisition. Nelson and Fivush contend that "language is the tool by which humans create a new, uniquely human form of consciousness, namely, autobiographical consciousness." Julian Jaynes had staked out these positions decades earlier. Citing the developmental steps that lead the infant to autobiographical consciousness, Nelson and Fivush point to the acquisition of "theory of mind," calling theory of mind "necessary for autobiographical consciousness" and defining it as "understanding differences between one's own mind and others' minds in terms of beliefs, desires, emotions and thoughts." They write, "The hallmark of theory of mind, the understanding of false belief, occurs... at five to six years of age."
Mind–body problem
Mental processes (such as consciousness) and physical processes (such as brain events) seem to be correlated; however, the specific nature of the connection is unknown.
The first influential philosopher to discuss this question specifically was Descartes, and the answer he gave is known as Cartesian dualism. Descartes proposed that consciousness resides within an immaterial domain he called res cogitans (the realm of thought), in contrast to the domain of material things, which he called res extensa (the realm of extension). He suggested that the interaction between these two domains occurs inside the brain, perhaps in a small midline structure called the pineal gland.

Although it is widely accepted that Descartes explained the problem cogently, few later philosophers have been happy with his solution, and his ideas about the pineal gland have especially been ridiculed. However, no alternative solution has gained general acceptance. Proposed solutions can be divided broadly into two categories: dualist solutions that maintain Descartes's rigid distinction between the realm of consciousness and the realm of matter but give different answers for how the two realms relate to each other; and monist solutions that maintain that there is really only one realm of being, of which consciousness and matter are both aspects. Each of these categories itself contains numerous variants. The two main types of dualism are substance dualism (which holds that the mind is formed of a distinct type of substance not governed by the laws of physics) and property dualism (which holds that the laws of physics are universally valid but cannot be used to explain the mind). The three main types of monism are physicalism (which holds that the mind consists of matter organized in a particular way), idealism (which holds that only thought or experience truly exists, and matter is merely an illusion), and neutral monism (which holds that both mind and matter are aspects of a distinct essence that is itself identical to neither of them). There are also, however, a large number of idiosyncratic theories that cannot cleanly be assigned to any of these schools of thought.

Since the dawn of Newtonian science, with its vision of simple mechanical principles governing the entire universe, some philosophers have been tempted by the idea that consciousness could be explained in purely physical terms. The first influential writer to propose such an idea explicitly was Julien Offray de La Mettrie, in his book Man a Machine (L'homme machine). His arguments, however, were very abstract. The most influential modern physical theories of consciousness are based on psychology and neuroscience. Theories proposed by neuroscientists such as Gerald Edelman and Antonio Damasio, and by philosophers such as Daniel Dennett, seek to explain consciousness in terms of neural events occurring within the brain. Many other neuroscientists, such as Christof Koch, have explored the neural basis of consciousness without attempting to frame all-encompassing global theories. At the same time, computer scientists working in the field of artificial intelligence have pursued the goal of creating digital computer programs that can simulate or embody consciousness.

A few theoretical physicists have argued that classical physics is intrinsically incapable of explaining the holistic aspects of consciousness, but that quantum theory may provide the missing ingredients. Several theorists have therefore proposed quantum mind (QM) theories of consciousness. Notable theories falling into this category include the holonomic brain theory of Karl Pribram and David Bohm, and the Orch-OR theory formulated by Stuart Hameroff and Roger Penrose.
Some of these QM theories offer descriptions of phenomenal consciousness, as well as QM interpretations of access consciousness. None of the quantum mechanical theories has been confirmed by experiment. Recent publications by G. Guerreshi, J. Cia, S. Popescu, and H. Briegel could falsify proposals such as those of Hameroff, which rely on quantum entanglement in protein. At present, many scientists and philosophers consider the arguments for an important role of quantum phenomena to be unconvincing.

Apart from the general question of the "hard problem" of consciousness (which is, roughly speaking, the question of how mental experience can arise from a physical basis), a more specialized question is how to square the subjective notion that we are in control of our decisions (at least in some small measure) with the customary view of causality that subsequent events are caused by prior events. The topic of free will is the philosophical and scientific examination of this conundrum.
Problem of other minds
Many philosophers consider experience to be the essence of consciousness, and believe that experience can only fully be known from the inside, subjectively. But if consciousness is subjective and not visible from the outside, why do the vast majority of people believe that other people are conscious, but rocks and trees are not? This is called the problem of other minds. It is particularly acute for people who believe in the possibility of philosophical zombies, that is, people who think it is possible in principle to have an entity that is physically indistinguishable from a human being and behaves like a human being in every way but nevertheless lacks consciousness. Related issues have also been studied extensively by Greg Littmann of the University of Illinois, and by Colin Allen (a professor at Indiana University) regarding the literature and research studying artificial intelligence in androids.

The most commonly given answer is that we attribute consciousness to other people because we see that they resemble us in appearance and behavior; we reason that if they look like us and act like us, they must be like us in other ways, including having experiences of the sort that we do. There are, however, a variety of problems with that explanation. For one thing, it seems to violate the principle of parsimony, by postulating an invisible entity that is not necessary to explain what we observe. Some philosophers, such as Daniel Dennett in an essay titled The Unimagined Preposterousness of Zombies, argue that people who give this explanation do not really understand what they are saying. More broadly, philosophers who do not accept the possibility of zombies generally believe that consciousness is reflected in behavior (including verbal behavior), and that we attribute consciousness on the basis of behavior. A more straightforward way of saying this is that we attribute experiences to people because of what they can do, including the fact that they can tell us about their experiences.
Animal consciousness
The topic of animal consciousness is beset by a number of difficulties. It poses the problem of other minds in an especially severe form, because non-human animals, lacking the ability to express human language, cannot tell humans about their experiences. Also, it is difficult to reason objectively about the question, because a denial that an animal is conscious is often taken to imply that it does not feel, its life has no value, and that harming it is not morally wrong. Descartes, for example, has sometimes been blamed for mistreatment of animals due to the fact that he believed only humans have a non-physical mind. Most people have a strong intuition that some animals, such as cats and dogs, are conscious, while others, such as insects, are not; but the sources of this intuition are not obvious, and are often based on personal interactions with pets and other animals they have observed.
Philosophers who consider subjective experience the essence of consciousness also generally believe, as a correlate, that the existence and nature of animal consciousness can never rigorously be known. Thomas Nagel spelled out this point of view in an influential essay titled What Is It Like to Be a Bat?. He said that an organism is conscious "if and only if there is something that it is like to be that organism—something it is like for the organism"; and he argued that no matter how much we know about an animal's brain and behavior, we can never really put ourselves into the mind of the animal and experience its world in the way it does itself. Other thinkers, such as Douglas Hofstadter, dismiss this argument as incoherent. Several psychologists and ethologists have argued for the existence of animal consciousness by describing a range of behaviors that appear to show animals holding beliefs about things they cannot directly perceive; Donald Griffin's 2001 book Animal Minds reviews a substantial portion of the evidence.

On July 7, 2012, eminent scientists from different branches of neuroscience gathered at the University of Cambridge to celebrate the Francis Crick Memorial Conference, which deals with consciousness in humans and pre-linguistic consciousness in nonhuman animals. After the conference, they signed, in the presence of Stephen Hawking, the Cambridge Declaration on Consciousness, which summarizes the most important findings:
"We decided to reach a consensus and make a statement directed to the public that is not scientific. Its obvious to everyone in this room that animals have consciousness, but it is not obvious to the rest of the world. It is not obvious to the rest of the Western world or the Far East. It is not obvious to the society.""Convergent evidence indicates that non-human animals..., including all mammals and birds, and other creatures,... have the necessary neural substrates of consciousness and the capacity to exhibit intentional behaviors."
Artifact consciousness
The idea of an artifact made conscious is an ancient theme of mythology, appearing for example in the Greek myth of Pygmalion, who carved a statue that was magically brought to life, and in medieval Jewish stories of the Golem, a magically animated homunculus built of clay. However, the possibility of actually constructing a conscious machine was probably first discussed by Ada Lovelace, in a set of notes written in 1842 about the Analytical Engine invented by Charles Babbage, a precursor (never built) to modern electronic computers. Lovelace was essentially dismissive of the idea that a machine such as the Analytical Engine could think in a humanlike way. She wrote:
It is desirable to guard against the possibility of exaggerated ideas that might arise as to the powers of the Analytical Engine.... The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths. Its province is to assist us in making available what we are already acquainted with.
One of the most influential contributions to this question was an essay written in 1950 by pioneering computer scientist Alan Turing, titled Computing Machinery and Intelligence. Turing disavowed any interest in terminology, saying that even "Can machines think?" is too loaded with spurious connotations to be meaningful; but he proposed to replace all such questions with a specific operational test, which has become known as the Turing test. To pass the test, a computer must be able to imitate a human well enough to fool interrogators. In his essay Turing discussed a variety of possible objections, and presented a counterargument to each of them. The Turing test is commonly cited in discussions of artificial intelligence as a proposed criterion for machine consciousness; it has provoked a great deal of philosophical debate. For example, Daniel Dennett and Douglas Hofstadter argue that anything capable of passing the Turing test is necessarily conscious, while David Chalmers argues that a philosophical zombie could pass the test, yet fail to be conscious. A third group of scholars has argued that, as technological growth leads machines to display substantial signs of human-like behavior, the dichotomy (of human consciousness compared to human-like consciousness) becomes passé and questions of machine autonomy begin to prevail, as already observed in nascent form within contemporary industry and technology. Jürgen Schmidhuber argues that consciousness is the result of compression: as an agent sees representations of itself recurring in the environment, the compression of these representations can be called consciousness.
In a lively exchange over what has come to be referred to as "the Chinese room argument", John Searle sought to refute the claim of proponents of what he calls "strong artificial intelligence (AI)" that a computer program can be conscious, though he does agree with advocates of "weak AI" that computer programs can be formatted to "simulate" conscious states. His own view is that consciousness has subjective, first-person causal powers by being essentially intentional due to the way human brains function biologically; conscious persons can perform computations, but consciousness is not inherently computational the way computer programs are. To make a Turing machine that speaks Chinese, Searle imagines a room with one monolingual English speaker (Searle himself, in fact), a book that designates a combination of Chinese symbols to be output paired with Chinese symbol input, and boxes filled with Chinese symbols. In this case, the English speaker is acting as a computer and the rulebook as a program. Searle argues that with such a machine, he would be able to process the inputs to outputs perfectly without having any understanding of Chinese, nor having any idea what the questions and answers could possibly mean. If the experiment were done in English, since Searle knows English, he would be able to take questions and give answers without any algorithms for English questions, and he would be effectively aware of what was being said and the purposes it might serve. Searle would pass the Turing test of answering the questions in both languages, but he would be conscious of what he is doing only when he speaks English. Another way of putting the argument is to say that computer programs can pass the Turing test for processing the syntax of a language, but that syntax cannot lead to semantic meaning in the way strong AI advocates hoped.

In the literature concerning artificial intelligence, Searle's essay has been second only to Turing's in the volume of debate it has generated. Searle himself was vague about what extra ingredients it would take to make a machine conscious: all he proposed was that what was needed was "causal powers" of the sort that the brain has and that computers lack. But other thinkers sympathetic to his basic argument have suggested that the necessary (though perhaps still not sufficient) extra conditions may include the ability to pass not just the verbal version of the Turing test, but the robotic version, which requires grounding the robot's words in the robot's sensorimotor capacity to categorize and interact with the things in the world that its words are about, Turing-indistinguishably from a real person. Turing-scale robotics is an empirical branch of research on embodied cognition and situated cognition.

In 2014, Victor Argonov suggested a non-Turing test for machine consciousness based on a machine's ability to produce philosophical judgments. He argues that a deterministic machine must be regarded as conscious