Muscle weakness
Muscle weakness is a lack of muscle strength. Its causes are many and can be divided into conditions involving either true or perceived muscle weakness. True muscle weakness is a primary symptom of a variety of skeletal muscle diseases, including muscular dystrophy and inflammatory myopathy. It occurs in neuromuscular junction disorders, such as myasthenia gravis. Muscle weakness can also be caused by low levels of potassium and other electrolytes within muscle cells. It can be temporary or long-lasting (from seconds or minutes to months or years). The term myasthenia is from Greek μυο- (my-), meaning "muscle", and ἀσθένεια (-asthenia), meaning "weakness".

Types
Neuromuscular fatigue can be classified as either "central" or "peripheral" depending on its cause. Central muscle fatigue manifests as an overall sense of energy deprivation, while peripheral muscle fatigue manifests as a local, muscle-specific inability to do work.

Neuromuscular fatigue
Nerves control the contraction of muscles by determining the number, sequence, and force of muscular contractions. When a nerve experiences synaptic fatigue it becomes unable to stimulate the muscle that it innervates. Most movements require a force far below what a muscle could potentially generate, and barring pathology, neuromuscular fatigue is seldom an issue. For extremely powerful contractions that are close to the upper limit of a muscle's ability to generate force, neuromuscular fatigue can become a limiting factor in untrained individuals. In novice strength trainers, the muscle's ability to generate force is most strongly limited by the nerve's ability to sustain a high-frequency signal. After an extended period of maximum contraction, the nerve's signal reduces in frequency and the force generated by the contraction diminishes. There is no sensation of pain or discomfort; the muscle appears simply to 'stop listening' and gradually ceases to move, often lengthening. As there is insufficient stress on the muscles and tendons, there will often be no delayed onset muscle soreness following the workout. Part of the process of strength training is increasing the nerve's ability to generate sustained, high-frequency signals which allow a muscle to contract with its greatest force. It is this "neural training" that causes several weeks' worth of rapid gains in strength, which level off once the nerve is generating maximum contractions and the muscle reaches its physiological limit. Past this point, training effects increase muscular strength through myofibrillar or sarcoplasmic hypertrophy, and metabolic fatigue becomes the factor limiting contractile force.

Central fatigue
Central fatigue is a reduction in the neural drive or nerve-based motor command to working muscles that results in a decline in force output. It has been suggested that the reduced neural drive during exercise may be a protective mechanism to prevent organ failure if the work were continued at the same intensity. There has been a great deal of interest in the role of serotonergic pathways for several years because serotonin concentration in the brain increases with motor activity. During motor activity, serotonin released in synapses that contact motoneurons promotes muscle contraction. During high levels of motor activity, the amount of serotonin released increases and a spillover occurs. Serotonin binds to extrasynaptic receptors located on the axon initial segment of motoneurons, with the result that nerve impulse initiation, and thereby muscle contraction, is inhibited.
Peripheral muscle fatigue
Peripheral muscle fatigue during physical work is an inability of the body to supply sufficient energy or other metabolites to the contracting muscles to meet the increased energy demand. This is the most common case of physical fatigue, affecting a national average of 72% of adults in the workforce in 2002. This causes contractile dysfunction that manifests in the eventual reduction or lack of ability of a single muscle or local group of muscles to do work. The insufficiency of energy, i.e. sub-optimal aerobic metabolism, generally results in the accumulation of lactic acid and other acidic anaerobic metabolic by-products in the muscle, causing the stereotypical burning sensation of local muscle fatigue, though recent studies have indicated otherwise, actually finding that lactic acid is a source of energy.
The fundamental difference between the peripheral and central theories of muscle fatigue is that the peripheral model of muscle fatigue assumes failure at one or more sites in the chain that initiates muscle contraction. Peripheral regulation therefore depends on the localized metabolic chemical conditions of the local muscle affected, whereas the central model of muscle fatigue is an integrated mechanism that works to preserve the integrity of the system by initiating muscle fatigue through muscle derecruitment, based on collective feedback from the periphery, before cellular or organ failure occurs. Therefore, the feedback that is read by this central regulator could include chemical and mechanical as well as cognitive cues. The significance of each of these factors will depend on the nature of the fatigue-inducing work that is being performed.
Though not universally used, "metabolic fatigue" is a common alternative term for peripheral muscle weakness, because of the reduction in contractile force due to the direct or indirect effects of the reduction of substrates or the accumulation of metabolites within the muscle fiber. This can occur through a simple lack of energy to fuel contraction, or through interference with the ability of Ca2+ to stimulate actin and myosin to contract.

Lactic acid hypothesis
It was once believed that lactic acid build-up was the cause of muscle fatigue. The assumption was that lactic acid had a "pickling" effect on muscles, inhibiting their ability to contract. The impact of lactic acid on performance is now uncertain; it may assist or hinder muscle fatigue. Produced as a by-product of fermentation, lactic acid can increase intracellular acidity of muscles. This can lower the sensitivity of the contractile apparatus to calcium ions (Ca2+) but also has the effect of increasing cytoplasmic Ca2+ concentration through an inhibition of the chemical pump that actively transports calcium out of the cell. This counters the inhibiting effects of potassium ions (K+) on muscular action potentials. Lactic acid also has a negating effect on the chloride ions in the muscles, reducing their inhibition of contraction and leaving K+ as the only restricting influence on muscle contractions, though the effects of potassium are much less than if there were no lactic acid to remove the chloride ions. Ultimately, it is uncertain whether lactic acid reduces fatigue through increased intracellular calcium or increases fatigue through reduced sensitivity of contractile proteins to Ca2+.

Pathophysiology
Muscle cells work by detecting a flow of electrical impulses from the brain which signals them to contract through the release of calcium by the sarcoplasmic reticulum.
Fatigue (reduced ability to generate force) may occur due to the nerve, or within the muscle cells themselves. New research from scientists at Columbia University suggests that muscle fatigue is caused by calcium leaking out of the muscle cell, leaving less calcium available for the muscle cell. In addition, an enzyme is proposed to be activated by this released calcium which eats away at muscle fibers.
Substrates within the muscle generally serve to power muscular contractions. They include molecules such as adenosine triphosphate (ATP), glycogen and creatine phosphate. ATP binds to the myosin head and causes the 'ratcheting' that results in contraction according to the sliding filament model. Creatine phosphate stores energy so ATP can be rapidly regenerated within the muscle cells from adenosine diphosphate (ADP) and inorganic phosphate ions, allowing for sustained powerful contractions that last between 5 and 7 seconds. Glycogen is the intramuscular storage form of glucose, used to generate energy quickly once intramuscular creatine stores are exhausted, producing lactic acid as a metabolic byproduct. Contrary to common belief, lactic acid accumulation doesn't actually cause the burning sensation felt when oxygen and oxidative metabolism are exhausted; in actuality, lactic acid in the presence of oxygen is recycled to produce pyruvate in the liver, a process known as the Cori cycle.
Substrates produce metabolic fatigue by being depleted during exercise, resulting in a lack of intracellular energy sources to fuel contractions. In essence, the muscle stops contracting because it lacks the energy to do so.

Diagnosis
Grading
The severity of muscle weakness can be classified into different "grades" based on the following criteria:
Grade 0: No contraction or muscle movement.
Grade 1: Trace of contraction, but no movement at the joint.
Grade 2: Movement at the joint with gravity eliminated.
Grade 3: Movement against gravity, but not against added resistance.
Grade 4: Movement against external resistance with less strength than usual.
Grade 5: Normal strength.

Classification
Proximal and distal
Muscle weakness can also be classified as either "proximal" or "distal" based on the location of the muscles that it affects. Proximal muscle weakness affects muscles closest to the body's midline, while distal muscle weakness affects muscles further out on the limbs. Proximal muscle weakness can be seen in Cushing's syndrome and hyperthyroidism.
True and perceived
Muscle weakness can be classified as either "true" or "perceived" based on its cause. True muscle weakness (or neuromuscular weakness) describes a condition where the force exerted by the muscles is less than would be expected, for example in muscular dystrophy. Perceived muscle weakness (or non-neuromuscular weakness) describes a condition where a person feels more effort than normal is required to exert a given amount of force but actual muscle strength is normal, for example in chronic fatigue syndrome. In some conditions, such as myasthenia gravis, muscle strength is normal when resting, but true weakness occurs after the muscle has been subjected to exercise. This is also true for some cases of chronic fatigue syndrome, where objective post-exertion muscle weakness with delayed recovery time has been measured and is a feature of some of the published definitions.

Further reading
Saguil A (April 2005). "Evaluation of the patient with muscle weakness". Am Fam Physician. 71 (7): 1327–36. PMID 15832536.
Extremities
Extremities may refer to:
Anatomy
The distal limb (forearm or lower leg) of a tetrapod animal, more specifically its distalmost portion, including:
Hand, a prehensile, multi-digited organ at the distal end of the upper limb (arm) of bipedal primates (especially humans) that is highly adapted for grasping and fine manipulation of objects
Foot, the terminal portion of a quadruped tetrapod's limb that mainly bears weight and allows terrestrial locomotion; in toe-walking ungulates, the term typically refers to the hoofed portion of the foot
Paw, a furry, padded foot with claws, common in many quadruped animals
Appendage, any external body part that protrudes outwards from an organism's core, such as a limb, tail, ear, nose, horn/antler, external genitalia, antenna, tusk/mouthpart or raptorial
Other
Extremities (play), a 1982 play by William Mastrosimone
Extremities (film), a 1986 film based on the play
See also
Extreme (disambiguation)
Accident
An accident is an unintended, normally unwanted event that was not directly caused by humans. The term accident implies that nobody should be blamed, but the event may have been caused by unrecognized or unaddressed risks. Most researchers who study unintentional injury avoid using the term accident and focus on factors that increase the risk of severe injury and that reduce injury incidence and severity. For example, when a tree falls down during a wind storm, its fall may not have been caused by humans, but the tree's type, size, health, location, or improper maintenance may have contributed to the result. Most car wrecks are not true accidents; however, English speakers started using that word in the mid-20th century as a result of media manipulation by the US automobile industry.

Types
Physical and non-physical
Physical examples of accidents include unintended motor vehicle collisions, falls, being injured by touching something sharp or hot, or bumping into something while walking. Non-physical examples are unintentionally revealing a secret or otherwise saying something incorrectly, accidental deletion of data, or forgetting an appointment.
Accidents by activity
Accidents during the execution of work or arising out of it are called work accidents. According to the International Labour Organization (ILO), more than 337 million accidents happen on the job each year, resulting, together with occupational diseases, in more than 2.3 million deaths annually. In contrast, leisure-related accidents are mainly sports injuries.
Accidents by vehicle
Vehicle collisions are not usually accidents; they mostly result from preventable causes such as drunk driving and intentionally driving too fast. The use of the word accident to describe car wrecks was promoted by the US National Automobile Chamber of Commerce in the middle of the 20th century, as a way to make vehicle-related deaths and injuries seem like an unavoidable matter of fate, rather than a problem that could be addressed. The automobile industry accomplished this by writing customized articles as a free service for newspapers that used the industry's preferred language. Since 1994, the US National Highway Traffic Safety Administration has asked media and the public not to use the word accident to describe vehicle collisions. Specific vehicle accident types include aviation, bicycles, sailing ships, traffic collisions, train wrecks, and trams.
Domino effect accidents
In the process industry, a primary accident may propagate to nearby units, resulting in a chain of accidents, which is called a domino effect accident.

Common causes
Poisons, vehicle collisions and falls are the most common causes of fatal injuries. According to a 2005 survey of injuries sustained at home, which used data from the National Vital Statistics System of the United States National Center for Health Statistics, falls, poisoning, and fire/burn injuries are the most common causes of death. The United States also collects statistically valid injury data (sampled from 100 hospitals) through the National Electronic Injury Surveillance System administered by the Consumer Product Safety Commission. This program was revised in 2000 to include all injuries rather than just injuries involving products. Data on emergency department visits is also collected through the National Health Interview Survey. In the U.S., the Bureau of Labor Statistics makes extensive statistics on workplace accidents available on its website.
Accident models
Many models to characterize and analyze accidents have been proposed, which can be classified by type. No single model is the sole correct approach. Notable types and models include:
Sequential models: Domino Theory; Loss Causation Model
Complex linear models: Energy Damage Model
Time sequence models: Generalized Time Sequence Model; Accident Evolution and Barrier Function
Epidemiological models: Gordon 1949 Onward Mappings Model based on Resident Pathogens Metaphor
Process model: Benner 1975
Systemic models: Rasmussen; Reason Model of System Safety (embedding the Swiss cheese model); Healthcare error proliferation model; Human reliability (Woods, 1994)
Non-linear models: System accident; Systems-Theoretic Accident Model and Process (STAMP); Functional Resonance Analysis Method (FRAM)
Assertions that all existing models are insufficient
Ishikawa diagrams are sometimes used to illustrate root-cause analysis and five whys discussions.

See also
General: Accident analysis; Root cause analysis; Accident-proneness; Idiot-proof; Injury; Injury prevention; List of accidents and disasters by death toll; Safety; Safety engineering; Fail-safe; Poka-yoke; Risk management
Transportation: Air safety; Aviation accidents and incidents; Bicycle safety; Car; Automobile safety; Traffic collision; List of rail accidents; Tram accident; Sailing ship accidents
Other specific topics: Aisles: Safety and regulatory considerations; Explosives safety; Nuclear and radiation accidents; Occupational safety and health; Safety data sheet; Personal protective equipment; Criticality accident; Sports injury
Pasteurellosis
Pasteurellosis is an infection with a species of the bacterial genus Pasteurella, which is found in humans and other animals. Pasteurella multocida (subspecies P. m. septica and P. m. multocida) is carried in the mouth and respiratory tract of various animals, including pigs. It is a small, Gram-negative bacillus with bipolar staining by Wayson stain. In animals, it can cause fulminant septicaemia (chicken cholera), but is also a common commensal. Until taxonomic revision in 1999, Mannheimia spp. were classified as Pasteurella spp., and infections by organisms now called Mannheimia spp., as well as by organisms now called Pasteurella spp., were designated as pasteurellosis. The term "pasteurellosis" is often still applied to mannheimiosis, although such usage has declined.

Types
The several forms of the infection are:
Skin/subcutaneous tissue disease is a septic phlegmon that develops classically in the hand and forearm after a cat bite. Inflammatory signs develop very rapidly; within 1 or 2 hours, edema, severe pain, and serosanguineous exudate appear. Fever, moderate or very high, can be seen, along with vomiting, headache, and diarrhea. Lymphangitis is common. Complications are possible, in the form of septic arthritis, osteitis, or evolution to chronicity.
Sepsis is very rare, but can be as fulminant as septicaemic plague, with high fever, rigors, and vomiting, followed by shock and coagulopathy.
Pneumonia is also rare and appears in patients with some chronic pulmonary pathology. It usually presents as bilateral consolidating pneumonia, sometimes very severe.
Zoonosis: pasteurellosis can be transmitted to humans through cats.
Other locations are possible, such as septic arthritis, meningitis, and acute endocarditis, but are very rare.

Diagnosis
Diagnosis is made by isolation of Pasteurella multocida from a normally sterile site (blood, pus, or cerebrospinal fluid).

Treatment
As the infection is usually transmitted to humans through animal bites, antibiotics usually treat the infection, but medical attention should be sought if the wound is severely swollen. Pasteurellosis is usually treated with high-dose penicillin if severe. Either tetracycline or chloramphenicol provides an alternative in beta-lactam-intolerant patients. However, it is most important to treat the wound.

Animals
P. multocida causes numerous pathological conditions in domestic animals. It often acts with other infectious agents, such as Chlamydia and Mycoplasma species and viruses. Environmental conditions (transportation, housing deficiency, and bad weather) also play a role. These diseases are considered to be caused by P. multocida, alone or associated with other pathogens:
Shipping fever in cattle and sheep ("shipping fever" may also be caused by Mannheimia haemolytica, in the absence of P. multocida, and M. haemolytica serovar A1 is known as the most common cause of the disease. The pathologic condition commonly arises where the causative organism becomes established by secondary infection, following a primary bacterial or viral infection, which may occur after stress, e.g. from handling or transport.)
Enzootic pneumonia of sheep (and goats, with frequent intervention of M. haemolytica)
Fowl cholera (chicken and other domestic poultry and cage birds)
Enzootic pneumonia and atrophic rhinitis of pigs
Pasteurellosis of chinchillas
Pasteurellosis of rabbits
Pasteurellosis is suspected to be the cause of recurrent mass mortality of Saiga antelopes.
See also
Hemorrhagic septicemia
Pasteurellaceae
Failed back syndrome
Failed back syndrome or post-laminectomy syndrome is a condition characterized by chronic pain following back surgeries. Many factors can contribute to the onset or development of FBS, including residual or recurrent spinal disc herniation, persistent post-operative pressure on a spinal nerve, altered joint mobility, joint hypermobility with instability, scar tissue (fibrosis), depression, anxiety, sleeplessness, spinal muscular deconditioning and even Cutibacterium acnes infection. An individual may be predisposed to the development of FBS due to systemic disorders such as diabetes, autoimmune disease and peripheral blood vessel (vascular) disease. Common symptoms associated with FBS include diffuse, dull and aching pain involving the back or legs. Abnormal sensations may include sharp, pricking, and stabbing pain in the extremities. The term "post-laminectomy syndrome" is used by some doctors to indicate the same condition as failed back syndrome.
Treatments for post-laminectomy syndrome include physical therapy, microcurrent electrical neuromuscular stimulation, minor nerve blocks, transcutaneous electrical nerve stimulation (TENS), behavioral medicine, non-steroidal anti-inflammatory (NSAID) medications, membrane stabilizers, antidepressants, spinal cord stimulation, and the intrathecal morphine pump. Use of epidural steroid injections may be minimally helpful in some cases. The targeted anatomic use of potent anti-inflammatory anti-TNF therapeutics is being investigated.
The number of spinal surgeries varies around the world. The United States and the Netherlands report the highest numbers of spinal surgeries, while the United Kingdom and Sweden report the fewest. Recently, there have been calls for more aggressive surgical treatment in Europe. Success rates of spinal surgery vary for many reasons.

Cause
Patients who have undergone one or more operations on the lumbar spine and continue to experience pain afterward can be divided into two groups. The first group comprises those in whom surgery was not actually indicated or the surgery performed was not likely to achieve the desired result, and those in whom surgery was indicated but technically did not achieve the intended result. Patients whose pain complaints are of a radicular nature have a better chance for a good outcome than those whose pain complaints are limited to pain in the back. The second group includes patients who had incomplete or inadequate operations. Lumbar spinal stenosis may be overlooked, especially when it is associated with disc protrusion or herniation. Removal of a disc, while not addressing the underlying presence of stenosis, can lead to disappointing results. Occasionally operating on the wrong level occurs, as does failure to recognize an extruded or sequestered disc fragment. Inadequate or inappropriate surgical exposure can lead to other problems in not reaching the underlying pathology. Hakelius reported a 3% incidence of serious nerve root damage.
In 1992, Turner et al. published a survey of 74 journal articles which reported the results after decompression for spinal stenosis. Good to excellent results were on average reported by 64% of the patients. There was, however, a wide variation in outcomes reported. There was a better result in patients who had a degenerative spondylolisthesis. A similarly designed study by Mardjekto et al. found that a concomitant spinal arthrodesis (fusion) had a greater success rate.
Herron and Trippi evaluated 24 patients, all with degenerative spondylolisthesis treated with laminectomy alone. At follow-up varying between 18 and 71 months after surgery, 20 out of the 24 patients reported a good result. Epstein reported on 290 patients treated over a 25-year period. Excellent results were obtained in 69% and good results in 13%. These optimistic reports do not correlate with "return to competitive employment" rates, which for the most part are dismal in most spinal surgery series.
In the past two decades there has been a dramatic increase in fusion surgery in the U.S.: in 2001 over 122,000 lumbar fusions were performed, a 22% increase from 1990 in fusions per 100,000 population, increasing to an estimate of 250,000 in 2003, and 500,000 in 2006. In 2003, the national bill for the hardware for fusion alone was estimated to have soared to $2.5 billion a year. For patients with continued pain after surgery which is not due to the above complications or conditions, interventional pain physicians speak of the need to identify the "pain generator", i.e. the anatomical structure responsible for the patient's pain. To be effective, the surgeon must operate on the correct anatomic structure, but it is often not possible to determine the source of the pain. The reason for this is that many patients with chronic pain often have disc bulges at multiple spinal levels, and the physical examination and imaging studies are unable to pinpoint the source of pain. In addition, spinal fusion itself, particularly if more than one spinal level is operated on, may result in "adjacent segment degeneration". This is thought to occur because the fused segments may result in increased torsional and stress forces being transmitted to the intervertebral discs located above and below the fused vertebrae. This pathology is one reason behind the development of artificial discs as a possible alternative to fusion surgery. But fusion surgeons argue that spinal fusion is more time-tested, and artificial discs contain metal hardware that is unlikely to last as long as biological material without shattering and leaving metal fragments in the spinal canal. These represent different schools of thought. (See the discussion of disc replacement below.)
Another highly relevant consideration is the increasing recognition of the importance of "chemical radiculitis" in the generation of back pain. A primary focus of surgery is to remove "pressure" or reduce mechanical compression on a neural element: either the spinal cord, or a nerve root. But it is increasingly recognized that back pain, rather than being solely due to compression, may instead entirely be due to chemical inflammation of the nerve root. It has been known for several decades that disc herniations result in a massive inflammation of the associated nerve root. In the past five years increasing evidence has pointed to a specific inflammatory mediator of this pain. This inflammatory molecule, called tumor necrosis factor-alpha (TNF), is released not only by the herniated or protruding disc, but also in cases of disc tear (annular tear), by facet joints, and in spinal stenosis. In addition to causing pain and inflammation, TNF may also contribute to disc degeneration. If the cause of the pain is not compression, but rather is inflammation mediated by TNF, then this may well explain why surgery might not relieve the pain, and might even exacerbate it, resulting in FBSS.
Role of the sacroiliac joint (SIJ) in lower back pain (LBP)
A 2005 review by Cohen concluded, "The SI joint is a real yet underappreciated pain generator in an estimated 15% to 25% of patients with axial LBP." Studies by Ha et al. show that the incidence of SI joint degeneration in post-lumbar fusion surgery is 75% at 5 years post-surgery, based on imaging. Studies by DePalma and Liliang et al. demonstrate that 40–61% of post-lumbar fusion patients were symptomatic for SI joint dysfunction based on diagnostic blocks.

Smoking
Recent studies have shown that cigarette smokers routinely have poor results from all spinal surgery if the goal of that surgery is the decrease of pain and impairment. Many surgeons consider smoking to be an absolute contraindication to spinal surgery. Nicotine appears to interfere with bone metabolism through induced calcitonin resistance and decreased osteoblastic function. It may also restrict small blood vessel diameter, leading to increased scar formation.
There is an association between cigarette smoking, back pain and chronic pain syndromes of all types. In a report of 426 spinal surgery patients in Denmark, smoking was shown to have a negative effect on fusion and overall patient satisfaction, but no measurable influence on the functional outcome. There is validation of the hypothesis that postoperative smoking cessation helps to reverse the impact of cigarette smoking on outcome after spinal fusion: if patients cease cigarette smoking in the immediate postoperative period, there is a positive impact on success. Regular smoking in adolescence was associated with low back pain in young adults. Pack-years of smoking showed an exposure-response relationship among girls.
A recent study suggested that cigarette smoking adversely affects serum hydrocodone levels. Prescribing physicians should be aware that in some cigarette smokers, serum hydrocodone levels might not be detectable. In a study from Denmark reviewing many reports in the literature, it was concluded that smoking should be considered a weak risk indicator and not a cause of low back pain. In a multitude of epidemiologic studies, an association between smoking and low back pain has been reported, but variations in approach and study results make this literature difficult to reconcile. In a large study of 3,482 patients undergoing lumbar spine surgery from the National Spine Network, the comorbidities of (1) smoking, (2) compensation, (3) self-reported poor overall health and (4) pre-existing psychological factors were predictive of a high risk of failure. Follow-up was carried out at 3 months and one year after surgery. Patients with pre-operative depressive disorders tended not to do well. Smoking has been shown to increase the incidence of post-operative infection as well as decrease fusion rates. One study showed 90% of post-operative infections occurred in smokers, as well as myonecrosis (muscle destruction) around the wound.

Pathology
Before the advent of CT scanning, the pathology in failed back syndrome was difficult to understand. Computerized tomography in conjunction with metrizamide myelography in the late 1960s and 1970s allowed direct observation of the mechanisms involved in post-operative failures.
Six distinct pathologic conditions were identified:
Recurrent or persistent disc herniation
Spinal stenosis
Post-operative infection
Epidural post-operative fibrosis
Adhesive arachnoiditis
Nerve injury

Recurrent or persistent disc herniation
Removal of a disc at one level can lead to disc herniation at a different level at a later time. Even the most complete surgical excision of the disc still leaves 30–40% of the disc, which cannot be safely removed. This retained disc can re-herniate sometime after surgery. Virtually every major structure in the abdomen and the posterior retroperitoneal space has been injured, at some point, by removing discs using posterior laminectomy/discectomy surgical procedures. The most prominent of these injuries is a laceration of the left internal iliac vein, which lies in close proximity to the anterior portion of the disc. In some studies, recurrent pain in the same radicular pattern or a different pattern can occur in as many as 50% of patients after disc surgery. Many observers have noted that the most common cause of failed back syndrome is recurrent disc herniation at the same level originally operated on. Rapid removal in a second surgery can be curative. The clinical picture of a recurrent disc herniation usually involves a significant pain-free interval. However, physical findings may be lacking, and a good history is necessary. The time period for the emergence of new symptoms can be short or long. Diagnostic signs such as the straight leg raise test may be negative even if real pathology is present. A positive myelogram may represent a new disc herniation, but can also be indicative of post-operative scarring simply mimicking a new disc. Newer MRI imaging techniques have clarified this dilemma somewhat. Conversely, a recurrent disc can be difficult to detect in the presence of post-operative scarring. Myelography is inadequate to completely evaluate the patient for recurrent disc disease, and CT or MRI scanning is necessary. Measurement of tissue density can be helpful. Even though the complications of laminectomy for disc herniation can be significant, a recent series of studies involving thousands of patients published under the auspices of Dartmouth Medical School concluded at four-year follow-up that those who underwent surgery for a lumbar disc herniation achieved greater improvement than nonoperatively treated patients in all primary and secondary outcomes except work status.

Spinal stenosis
Spinal stenosis can be a late complication after laminectomy for disc herniation or when surgery was performed for the primary pathologic condition of spinal stenosis. In the Maine Study, among patients with lumbar spinal stenosis completing 8- to 10-year follow-up, low back pain relief, predominant symptom improvement, and satisfaction with the current state were similar in patients initially treated surgically or nonsurgically. However, leg pain relief and greater back-related functional status continued to favor those initially receiving surgical treatment.
A large study of spinal stenosis from Finland found the prognostic factors for ability to work after surgery were ability to work before surgery, age under 50 years, and no prior back surgery. The very long-term outcome (mean follow-up time of 12.4 years) was excellent-to-good in 68% of patients (59% of women and 73% of men). Furthermore, in the longitudinal follow-up, the result improved between 1985 and 1991. No special complications were manifested during this very long-term follow-up time.
The patients with total or subtotal block in preoperative myelography achieved the best result. Furthermore, patients with block stenosis improved their result significantly in the longitudinal follow-up. Postoperative stenosis was seen on computed tomography (CT) scans in 65% of 90 patients, and it was severe in 23 patients (25%). However, successful or unsuccessful surgical decompression did not correlate with patients' subjective disability, walking capacity or severity of pain. Previous back surgery had a strong worsening effect on surgical results. This effect was very clear in patients with total block in the preoperative myelography. The surgical result of a patient with previous back surgery was similar to that of a patient without previous back surgery when the time interval between the last two operations was more than 18 months.
Post-operative MRI findings of stenosis are probably of limited value compared to the symptoms experienced by patients. Patients' perception of improvement had a much stronger correlation with long-term surgical outcome than structural findings seen on postoperative magnetic resonance imaging. Degenerative findings had a greater effect on patients' walking capacity than stenotic findings. Postoperative radiologic stenosis was very common in patients operated on for lumbar spinal stenosis, but this did not correlate with clinical outcome. The clinician must be cautious when reconciling clinical symptoms and signs with postoperative computed tomography findings in patients operated on for lumbar spinal stenosis.
A study from Georgetown University reported on one hundred patients who had undergone decompressive surgery for lumbar stenosis between 1980 and 1985. Four patients with postfusion stenosis were included. A 5-year follow-up period was achieved in 88 patients. The mean age was 67 years, and 80% were over 60 years of age. There was a high incidence of coexisting medical diseases, but the principal disability was lumbar stenosis with neurological involvement. Initially there was a high incidence of success, but recurrence of neurological involvement and persistence of low-back pain led to an increasing number of failures. By 5 years this number had reached 27% of the available population pool, suggesting that the failure rate could reach 50% within the projected life expectancies of most patients. Of the 26 failures, 16 were secondary to renewed neurological involvement, which occurred at new levels of stenosis in eight and as recurrence of stenosis at operative levels in eight. Reoperation was successful in 12 of these 16 patients, but two required a third operation. The incidence of spondylolisthesis at 5 years was higher in the surgical failures (12 of 26 patients) than in the surgical successes (16 of 64). Spondylolisthetic stenosis tended to recur within a few years following decompression. Because of age and associated illnesses, fusion may be difficult to achieve in this group.

Post-operative infection
A small minority of lumbar surgical patients will develop a post-operative infection. In most cases, this is a serious complication and does not bode well for eventual improvement or future employability. Reports from the surgical literature indicate an infection rate anywhere from 0% to almost 12%. The incidence of infection tends to increase as the complexity of the procedure and operating time increase. Usage of metal implants (instrumentation) tends to increase the risk of infection.
Factors associated with an increased risk of infection include diabetes mellitus, obesity, malnutrition, smoking, previous infection, rheumatoid arthritis, and immunodeficiency. Previous wound infection should be considered a contraindication to any further spinal surgery, since the likelihood of improving such patients with more surgery is small. Antimicrobial prophylaxis (giving antibiotics during or after surgery before an infection begins) reduces the rate of surgical site infection in lumbar spine surgery, but a great deal of variation exists regarding its use. In a Japanese study utilizing the Centers for Disease Control recommendations for antibiotic prophylaxis, an overall infection rate of 0.7% was noted, with an infection rate of 0.4% in the single-dose antibiotic group and 0.8% in the multiple-dose antibiotic group. The authors had previously used prophylactic antibiotics for 5 to 7 postoperative days. Based on the Centers for Disease Control and Prevention guideline, their antibiotic prophylaxis was changed to the day of surgery only. It was concluded there was no statistical difference in the rate of infection between the two different antibiotic protocols. Based on the CDC guideline, a single dose of prophylactic antibiotic was shown to be efficacious for the prevention of infection in lumbar spine surgeries.

Epidural post-operative fibrosis
Epidural scarring following a laminectomy for disc excision is a common feature when re-operating for recurrent sciatica or radiculopathy. When the scarring is associated with a disc herniation and/or recurrent spinal stenosis, it is relatively common, occurring in more than 60% of cases. For a time, it was theorized that placing a fat graft over the dura could prevent post-operative scarring, but initial enthusiasm has waned in recent years. In an extensive laminectomy involving two or more vertebrae, post-operative scarring is the norm. It is most often seen around the L5 and S1 nerve roots.

Adhesive arachnoiditis
Fibrous scarring can also be a complication within the subarachnoid space. It is notoriously difficult to detect and evaluate. Prior to the development of magnetic resonance imaging, the only way to ascertain the presence of arachnoiditis was by opening the dura. In the era of CT scanning and Pantopaque, and later metrizamide, myelography, the presence of arachnoiditis could only be inferred from radiographic findings. Often, myelography prior to the introduction of metrizamide was itself the cause of arachnoiditis. It can also be caused by the long-term pressure brought about by either a severe disc herniation or spinal stenosis. The presence of both epidural scarring and arachnoiditis in the same patient is probably quite common. Arachnoiditis is a broad term denoting inflammation of the meninges and subarachnoid space. A variety of causes exist, including infectious, inflammatory, and neoplastic processes. Infectious causes include bacterial, viral, fungal, and parasitic agents. Noninfectious inflammatory processes include surgery, intrathecal hemorrhage, and the administration of intrathecal (inside the dural canal) agents such as myelographic contrast media, anesthetics (e.g. chloroprocaine), and steroids (e.g. Depo-Medrol, Kenalog). Lately, iatrogenic arachnoiditis has been attributed to misplaced epidural steroid injection therapy accidentally administered intrathecally.
The preservatives and suspension agents found in all steroid injectates, which are not indicated for epidural administration by the U.S. Food and Drug Administration due to reports of severe adverse events including arachnoiditis, paralysis and death, have now been directly linked to the onset of the disease following the initial stage of chemical meningitis. Neoplasia includes the hematogenous spread of systemic tumors, such as breast and lung carcinoma, melanoma, and non-Hodgkin lymphoma. Neoplasia also includes direct seeding of the cerebrospinal fluid (CSF) from primary central nervous system (CNS) tumors such as glioblastoma multiforme, medulloblastoma, ependymoma, and choroid plexus carcinoma. Strictly speaking, the most common cause of arachnoiditis in failed back syndrome is neither infectious nor neoplastic; it is due to non-specific scarring secondary to the surgery or the underlying pathology.

Nerve injury
Laceration of a nerve root, or damage from cautery or traction, can lead to chronic pain; however, this can be difficult to determine. Chronic compression of the nerve root by a persistent agent such as disc, bone (osteophyte) or scarring can also permanently damage the nerve root. Epidural scarring caused by the initial pathology or occurring after the surgery can also contribute to nerve damage. In one study of failed back patients, the pathology was noted to be at the same site as the level of surgery performed in 57% of cases. The remaining cases developed pathology at a different level, or on the opposite side but at the same level as the surgery was performed. In theory, all failed back patients have some sort of nerve injury or damage which leads to a persistence of symptoms after a reasonable healing time.

Diagnosis
Avoiding post-laminectomy/laminotomy syndrome
Smaller procedures that do not remove bone (such as Endoscopic Transforaminal Lumbar Discectomy and Reconfiguration) do not cause post-laminectomy/laminotomy syndrome.

Management
Failed back syndrome (FBS) is a well-recognized complication of surgery of the lumbar spine. It can result in chronic pain and disability, often with disastrous emotional and financial consequences to the patient. Many patients have traditionally been classified as "spinal cripples" and are consigned to a life of long-term narcotic treatment with little chance of recovery. Despite extensive work in recent years, FBS remains a challenging and costly disorder.

Opioids
A study of chronic pain patients from the University of Wisconsin found that methadone is most widely known for its use in the treatment of opioid dependence, but methadone also provides effective analgesia. Patients who experience inadequate pain relief or intolerable side effects with other opioids, or who suffer from neuropathic pain, may benefit from a transition to methadone as their analgesic agent. Adverse effects, particularly respiratory depression and death, make a fundamental knowledge of methadone's pharmacological properties essential to the provider considering methadone as analgesic therapy for a patient with chronic pain.

Patient selection
Patients who have sciatic pain (pain in the back, radiating down the buttock to the leg) and clear clinical findings of an identifiable radicular nerve loss caused by a herniated disc will have a better post-operative course than those who simply have low back pain. If a specific disc herniation causing pressure on a nerve root cannot be identified, the results of surgery are likely to be disappointing. Patients involved in workers' compensation, tort litigation or other compensation systems tend to fare more poorly after surgery.
Surgery for spinal stenosis usually has a good outcome if the surgery is done in an extensive manner and within the first year or so of the appearance of symptoms.
Oaklander and North define failed back syndrome as chronic pain in a patient after one or more surgical procedures on the spine. They delineated these characteristics of the relation between the patient and the surgeon:
The patient makes increasing demands on the surgeon for pain relief. The surgeon may feel a strong responsibility to provide a remedy when the surgery has not achieved the desired goals.
The patient grows increasingly angry at the failure and may become litigious.
There is an escalation of narcotic pain medication which can be habituating or addictive.
In the face of expensive conservative treatments which are likely to fail, the surgeon is persuaded to attempt further surgery, even though this is likely to fail as well.
The probability of returning to gainful employment decreases with increasing length of disability. The financial incentives to remain disabled may be perceived as outweighing the incentive to recover.
In the absence of a financial source for disability or workers' compensation, other psychological features may limit the ability of the patient to recover from surgery. Some patients are simply unfortunate, and fall into the category of "chronic pain" despite their desire to recover and the best efforts of the physicians involved in their care. Even less invasive forms of surgery are not uniformly successful; approximately 30,000–40,000 laminectomy patients obtain either no relief of symptomatology or a recurrence of symptoms. Another less invasive form of spinal surgery, percutaneous disc surgery, has reported revision rates as high as 65%. It is no surprise, therefore, that FBSS is a significant medical concern which merits further research and attention by the medical and surgical communities.

Total disc replacement
Lumbar total disc replacement was originally designed to be an alternative to lumbar arthrodesis (fusion). The procedure was met with great excitement and heightened expectations both in the United States and Europe. In late 2004, the first lumbar total disc replacement received approval from the U.S. Food and Drug Administration (FDA). More experience existed in Europe. Since then, the initial excitement has given way to skepticism and concern. Various failure rates and strategies for revision of total disc replacement have been reported.
The role of artificial or total disc replacement in the treatment of spinal disorders remains ill-defined and unclear. Evaluation of any new technique is difficult or impossible because physician experience may be minimal or lacking. Patient expectations may be distorted. It has been difficult to establish clear-cut indications for artificial disc replacement. It may not be a replacement procedure or alternative to fusion, since recent studies have shown that 100% of fusion patients had one or more contraindications to disc replacement. The role of disc replacement must come from new indications not defined in today's literature or a relaxation of current contraindications.
A study by Regan found the result of replacement was the same at L4-5 and L5-S1 with the CHARITE disc. However, the ProDisc II had more favorable results at L4-5 compared with L5-S1. A younger age was predictive of a better outcome in several studies. In others it has been found to be a negative predictor or of no predictive value.
Older patients may have more complications. Prior spinal surgery has mixed effects on disc replacement. It has been reported to be negative in several studies. It has been reported to have no effect in other studies. Many studies are simply inconclusive. Existing evidence does not allow drawing definite conclusions about the status of disc replacement at present.

Electrical stimulation
Many failed back patients are significantly impaired by chronic pain in the back and legs. Many of these will be treated with some form of electrical stimulation. This can be either a transcutaneous electrical nerve stimulation device placed on the skin over the back or a nerve stimulator implanted into the back with electrical probes which directly touch the spinal cord. Also, some chronic pain patients use fentanyl or narcotic patches. These patients are generally severely impaired, and it is unrealistic to conclude that application of neurostimulation will reduce that impairment. For example, it is doubtful that neurostimulation will improve the patient enough to return to competitive employment. Neurostimulation is palliative. TENS units work by blocking neurotransmission, as described by the gate control theory of pain of Melzack and Wall. Success rates for implanted neurostimulation have been reported to be 25% to 55%. Success is defined as a relative decrease in pain.

Chiropractic
Limited case series have shown improvement for patients with failed back surgery who were managed with chiropractic care.

Prognosis
Under rules promulgated by Titles II and XVI of the United States Social Security Act, chronic radiculopathy, arachnoiditis and spinal stenosis are recognized as disabling conditions under Listing 1.04 A (radiculopathy), 1.04 B (arachnoiditis) and 1.04 C (spinal stenosis).

Return to work
In a groundbreaking Canadian study, Waddell et al. reported on the value of repeat surgery and the return to work in workers' compensation cases. They concluded that workers who undergo spinal surgery take longer to return to their jobs. Once two spinal surgeries are performed, few if any ever return to gainful employment of any kind. After two spinal surgeries, most people in the workers' compensation system will not be made better by more surgery. Most will be worse after a third surgery.
Episodes of back pain associated with on-the-job injuries in the workers' compensation setting are usually of short duration. About 10% of such episodes will not be simple, and will degenerate into chronic and disabling back pain conditions, even if surgery is not performed. It has been hypothesized that job dissatisfaction and individual perception of physical demands are associated with an increased time of recovery or an increased risk of no recovery at all. Individual psychological and social work factors, as well as worker-employer relations, are also likely to be associated with time and rates of recovery.
A Finnish study of return to work in patients with spinal stenosis treated by surgery found that: (1) none of the patients who had retired before the operation returned to work afterward; (2) the variables that predicted postoperative ability to work for women were being fit to work at the time of operation, age < 50 years at the time of operation, and duration of lumbar spinal stenosis symptoms < 2 years; (3) for men, these variables were being fit to work at the time of operation, age < 50 years at the time of operation, no prior surgery, and the extent of the surgical procedure equal to or less than one laminectomy. Women's and men's working capacity does not differ after lumbar spinal stenosis operation. If the aim is to maximize working capacity, then, when a lumbar spinal stenosis operation is indicated, it should be performed without delay. In lumbar spinal stenosis patients who are > 50 years old and on sick leave, it is unrealistic to expect that they will return to work. Therefore, after such an extensive surgical procedure, re-education of patients for lighter jobs could improve the chances of these patients returning to work.
In a related Finnish study, a total of 439 patients operated on for lumbar spinal stenosis during the period 1974–1987 were re-examined and evaluated for working and functional capacity approximately 4 years after the decompressive surgery. The ability to work before or after the operation and a history of no prior back surgery were variables predictive of a good outcome. Before the operation 86 patients were working, 223 patients were on sick leave, and 130 patients were retired. After the operation 52 of the employed patients and 70 of the unemployed patients returned to work. None of the retired patients returned to work. Ability to work preoperatively, age under 50 years at the time of operation and the absence of prior back surgery predicted a postoperative ability to work.
A report from Belgium noted that patients reportedly return to work an average of 12 to 16 weeks after surgery for lumbar disc herniation. However, there are studies that lend credence to the value of earlier encouragement to return to work and to perform normal activities after a limited discectomy. At follow-up assessment, it was found that no patient had changed employment because of back or leg pain. The sooner the recommendation is made to return to work and perform normal activities, the more likely the patient is to comply.
Patients with ongoing disabling back conditions have a low priority for return to work. The probability of return to work decreases as time off work increases. This is especially true in Belgium, where 20% of individuals did not resume work activities after surgery for a disc herniation of the lumbar spine. In Belgium, the medical advisers of sickness funds have an important legal role in the assessment of working capacity and medical rehabilitation measures for employees whose fitness for work is jeopardized or diminished for health reasons. The measures are laid down in the sickness and invalidity legislation. They are in accordance with the principle of preventing long-term disability. It is apparent from the authors' experience that these measures are not applied consistently in medical practice. Most of the medical advisers focus purely on evaluation of corporal damage, leaving little or no time for rehabilitation efforts. In many other countries, the evaluation of work capacity is done by social security doctors with a comparable task.
In a comprehensive set of studies carried out by the University of Washington School of Medicine, it was determined that the outcome of lumbar fusion performed on injured workers was worse than reported in most published case series. They found 68% of lumbar fusion patients still unable to return to work two years after surgery. This was in stark contrast to reports of 68% post-operative satisfaction in many series. In a follow-up study it was found that the use of intervertebral fusion devices rose rapidly after their introduction in 1996. This increase in metal usage was associated with a greater risk of complication without improving disability or re-operation rates.

Research
The identification of tumor necrosis factor-alpha (TNF) as a central cause of inflammatory spinal pain now suggests the possibility of an entirely new approach to selected patients with FBSS. Specific and potent inhibitors of TNF became available in the U.S. in 1998, and were demonstrated to be potentially effective for treating sciatica in experimental models beginning in 2001. Targeted anatomic administration of one of these anti-TNF agents, etanercept, a patented treatment method, has been suggested in published pilot studies to be effective for treating selected patients with chronic disc-related pain and FBSS. The scientific basis for pain relief in these patients is supported by many current review articles. In the future, new imaging methods may allow non-invasive identification of sites of neuronal inflammation, thereby enabling more accurate localization of the "pain generators" responsible for symptom production. These treatments are still experimental.
If chronic pain in FBSS has a chemical component producing inflammatory pain, then prior to additional surgery it may make sense to use an anti-inflammatory approach. Often this is first attempted with non-steroidal anti-inflammatory medications, but the long-term use of non-steroidal anti-inflammatory drugs (NSAIDs) for patients with persistent back pain is complicated by their possible cardiovascular and gastrointestinal toxicity, and NSAIDs have limited ability to intervene in TNF-mediated processes. An alternative often employed is the injection of cortisone into the spine adjacent to the suspected pain generator, a technique known as "epidural steroid injection".
Although this technique began more than a decade ago for FBSS, the efficacy of epidural steroid injections is now generally thought to be limited to short-term pain relief in selected patients only. In addition, epidural steroid injections, in certain settings, may result in serious complications. New methods that directly target TNF are now emerging. These TNF-targeted methods represent a highly promising new approach for patients with chronic severe spinal pain, such as those with FBSS. Ancillary approaches, such as rehabilitation, physical therapy, anti-depressants, and, in particular, graduated exercise programs, may all be useful adjuncts to anti-inflammatory approaches. In addition, more invasive modalities, such as spinal cord stimulation, may offer relief for certain patients with FBSS, but these modalities, although often referred to as "minimally invasive", require additional surgery and have complications of their own. Worldwide perspective A report from Spain noted that the investigation and development of new techniques for instrumented surgery of the spine is not free from conflicts of interest. The influence of financial forces on the development of new technologies, and their immediate application to spine surgery, shows the relationship between published results and industry support. Authors who have developed and defended fusion techniques have also published new articles praising new spinal technologies. The author calls spinal surgery the "American Stock Exchange" and "the bubble of spine surgery". The scientific literature does not show clear evidence in the cost-benefit studies of most instrumented surgical interventions of the spine compared with conservative treatments. It has not yet been demonstrated that fusion surgery and disc replacement are better options than conservative treatment. It is necessary to point out that at present "there are relationships between the industry and back pain, and there is also an industry of the back pain". Nonetheless, the "market of the spine surgery" is growing because patients are demanding solutions for their back problems. The tide of scientific evidence seems to go against spinal fusion in degenerative disc disease, discogenic pain, and specific back pain. After decades of advances in this field, the results of spinal fusion are mediocre. New epidemiological studies show that "spinal fusion must be accepted as a non proved or experimental method for the treatment of back pain". The surgical literature on spinal fusion published in the last 20 years establishes that instrumentation seems to slightly increase the fusion rate and that instrumentation does not improve the clinical results in general. Randomized studies are still needed to compare surgical results with the natural history of the disease, the placebo effect, or conservative treatment. The European Guidelines for lumbar chronic pain management show "strong evidence" indicating that complex and demanding spine surgery in which different instrumentation is used is not more effective than a simpler, safer, and cheaper posterolateral fusion without instrumentation. Recently, the literature published in this field has been sending a message to use "minimally invasive techniques" and to abandon transpedicular fusions.
Surgery in general, and the use of metal fixation, should be discarded in most cases. In Sweden, the national registry of lumbar spine surgery reported in the year 2000 that 15% of patients with spinal stenosis surgery underwent a concomitant fusion. Despite the traditionally conservative approach to spinal surgery in Sweden, there have been calls from that country for a more aggressive approach to lumbar procedures in recent years. Cherkin et al. evaluated worldwide surgical attitudes. There were twice as many surgeons per capita in the United States as in the United Kingdom; numbers were similar in Sweden. Despite having very few spinal surgeons, the Netherlands proved to be quite aggressive in surgery. Sweden, despite having a large number of surgeons, was conservative and produced relatively few surgeries. The most surgeries were done in the United States. In the UK, more than a third of non-urgent patients waited over a year to see a spinal surgeon. In Wales, more than half waited over three months for a consultation. Lower rates of referral in the United Kingdom were found to discourage surgery in general. Fee-for-service payment and easy access to care were thought to encourage spinal surgery in the United States, whereas salaried positions and a conservative philosophy led to less surgery in the United Kingdom. There were more spinal surgeons in Sweden than in the United States. However, it was speculated that the Swedish surgeons being limited to compensation of 40–48 hours a week might lead to a conservative philosophy. There have been calls for a more aggressive approach to lumbar surgery in both the United Kingdom and Sweden in recent years. References External links
Necatoriasis
Necatoriasis is the condition of infection by Necator hookworms, such as Necator americanus. This hookworm infection is a type of helminthiasis (worm infection) and a type of neglected tropical disease. Signs and symptoms When adult worms attach to the villi of the small intestine, they suck on the host's blood, which may cause abdominal pain, diarrhea, cramps, and weight loss that can lead to anorexia. Heavy infections can lead to the development of iron deficiency and hypochromic microcytic anemia. This form of anemia in children can give rise to impaired physical and cognitive development. Cutaneous larva migrans, a skin disease in humans caused by migrating larvae, is characterized by skin eruptions and severe itching. Cause Necatoriasis is caused by N. americanus, whose life cycle can be divided into two stages – larval and adult. The third-stage larvae are guided to human skin by following thermal gradients. Typically, the larvae enter through the hands and feet following contact with contaminated soil. A papular, pruritic (itchy) rash will develop around the site of entry into the human host. This is also known as "ground itch". Generally, migration through the lungs is asymptomatic, but a mild cough and pharyngeal irritation may occur during larval migration in the airways. Once larvae break through the alveoli and are swallowed, they enter the gastrointestinal tract and attach to the intestinal mucosa, where they mature into adult worms. The hookworms attach to the mucosal lining using their cutting plates, which allow them to penetrate blood vessels and feed on the host's blood supply. Each worm consumes 30 μl of blood per day. The major issue results from this intestinal blood loss, which can lead to iron-deficiency anemia in moderate to heavy infections. Other common symptoms include epigastric pain and tenderness, nausea, exertional dyspnea, pain in lower extremities and in joints, sternal pain, headache, fatigue, and impotence. Death is rare in humans. Diagnosis The standard method for diagnosing necatoriasis is through identification of N. americanus eggs in a fecal sample using a microscope. Eggs can be difficult to visualize in a lightly infected sample, so a concentration method, such as flotation or sedimentation, is generally used. However, the eggs of A. duodenale and N. americanus cannot be distinguished; thus, the larvae must be examined to identify these hookworms. Larvae cannot be found in stool specimens unless the specimen was left at ambient temperature for a day or more. The most common technique used to diagnose a hookworm infection is to take a stool sample, fix it in 10% formalin, concentrate it using the formalin-ethyl acetate sedimentation technique, and then create a wet mount of the sediment for viewing under a microscope. Prevention Education, improved sanitation, and controlled disposal of human feces are critical for prevention. In addition, wearing shoes in endemic areas helps reduce the prevalence of infection. Treatment An infection of N. americanus parasites can be treated by using benzimidazoles: albendazole or mebendazole. A blood transfusion may be necessary in severe cases of anemia. Light infections are usually left untreated in areas where reinfection is common. Iron supplements and a diet high in protein will speed the recovery process. In a case study involving 56–60 men with Trichuris trichiura and/or N. americanus infections, both albendazole and mebendazole were 90% effective in curing T. trichiura. However, albendazole had a 95% cure rate for N.
americanus, while mebendazole only had a 21% cure rate. This suggests albendazole is most effective for treating both T. trichiura and N. americanus. Cryotherapy by application of liquid nitrogen to the skin has been used to kill cutaneous larva migrans, but the procedure has a low cure rate and a high incidence of pain and severe skin damage, so it is now passed over in favor of suitable pharmaceuticals. Topical application of some pharmaceuticals has merit, but requires repeated, persistent applications and is less effective than some systemic treatments. During the 1910s, common treatments for hookworm included thymol, 2-naphthol, chloroform, gasoline, and eucalyptus oil. By the 1940s, the treatment of choice was tetrachloroethylene, given as 3 to 4 cc in the fasting state, followed by 30 to 45 g of sodium sulfate. Tetrachloroethylene was reported to have a cure rate of 80 percent for Necator infections, but only 25 percent for Ancylostoma infections, and often produced mild intoxication in the patient. Epidemiology Necator americanus was first discovered in Brazil and then was found in Texas. Later, it was found to be indigenous in Africa, China, southwest Pacific islands, India, and Southeast Asia. It is a tropical parasite and the most common hookworm species in humans. Roughly 95% of hookworms found in the southern region of the United States are N. americanus. This parasite is found in humans, but can also be found in pigs and dogs. Transmission of N. americanus infection requires the deposition of egg-containing feces on shady, well-drained soil and is favored by warm, humid (tropical) conditions. Therefore, infections worldwide are usually reported in places where direct contact with contaminated soil occurs. References External links
Tick-borne encephalitis
Tick-borne encephalitis (TBE) is a viral infectious disease involving the central nervous system. The disease most often manifests as meningitis, encephalitis or meningoencephalitis. Myelitis and spinal paralysis also occur. In about one third of cases, sequelae, predominantly cognitive dysfunction, persist for a year or more. The number of reported cases has been increasing in most countries. TBE is posing a concerning health challenge to Europe, as the number of reported human cases of TBE in all endemic regions of Europe has increased by almost 400% within the last three decades. The tick-borne encephalitis virus is known to infect a range of hosts including ruminants, birds, rodents, carnivores, horses, and humans. The disease can also be spread from animals to humans, with ruminants and dogs providing the principal source of infection for humans. Signs and symptoms The disease is most often biphasic. After an incubation period of approximately one week (range: 4–28 days) from exposure (tick bite), non-specific symptoms occur. These symptoms are fever, malaise, headache, nausea, vomiting and myalgias that persist for about 5 days. Then, after approximately one week without symptoms, some of the infected develop neurological symptoms, i.e. meningitis, encephalitis or meningoencephalitis. Myelitis also occurs with or without encephalitis. Sequelae persist for a year or more in approximately one third of people who develop neurological disease. The most common long-term symptoms are headache, concentration difficulties, memory impairment and other symptoms of cognitive dysfunction. Mortality depends on the subtype of the virus. For the European subtype, mortality rates are 0.5% to 2% for people who develop neurological disease. In dogs, the disease also manifests as a neurological disorder with signs varying from tremors to seizures and death. In ruminants, neurological disease is also present, and animals may refuse to eat, appear lethargic, and also develop respiratory signs. Cause TBE is caused by tick-borne encephalitis virus, a member of the genus Flavivirus in the family Flaviviridae. It was first isolated in 1937. Three virus subtypes exist: European or Western tick-borne encephalitis virus (transmitted by Ixodes ricinus), Siberian tick-borne encephalitis virus (transmitted by I. persulcatus), and Far-Eastern tick-borne encephalitis virus, formerly known as Russian spring-summer encephalitis virus (transmitted by I. persulcatus). The former Soviet Union conducted research on tick-borne diseases, including the TBE viruses. Transmission It is transmitted by the bite of several species of infected woodland ticks, including Ixodes scapularis, I. ricinus and I. persulcatus, or (rarely) through the non-pasteurized milk of infected cows. Infection acquired through goat milk consumed as raw milk or raw cheese (Frischkäse) has been documented in 2016 and 2017 in the German state of Baden-Württemberg. None of the infected had neurological disease. Diagnosis Detection of specific IgM and IgG antibodies in patients' sera, combined with typical clinical signs, is the principal method for diagnosis. In more complicated situations, e.g. after vaccination, testing for the presence of antibodies in cerebrospinal fluid may be necessary.
It has been stated that lumbar puncture should always be performed when diagnosing TBE and that pleocytosis in cerebrospinal fluid should be added to the diagnostic criteria. The PCR (polymerase chain reaction) method is rarely used, since TBE virus RNA is most often not present in patients' sera or cerebrospinal fluid at the time of neurological symptoms. Prevention Prevention includes non-specific (tick-bite prevention, tick checks) and specific prophylaxis in the form of a vaccination. Tick-borne encephalitis vaccines are very effective and available in many disease-endemic areas and in travel clinics. Trade names are Encepur N and FSME-Immun CC. Treatment There is no specific antiviral treatment for TBE. Symptomatic brain damage requires hospitalization and supportive care based on syndrome severity. Anti-inflammatory drugs, such as corticosteroids, may be considered under specific circumstances for symptomatic relief. Tracheal intubation and respiratory support may be necessary. Epidemiology As of 2011, the disease was most common in Central and Eastern Europe, and Northern Asia. About ten to twelve thousand cases are documented a year, but the rates vary widely from one region to another. Most of the variation has been the result of variation in host population, particularly that of deer. In Austria, an extensive vaccination program since the 1970s reduced the incidence in 2013 by roughly 85%. In Germany, during the 2010s, there have been a minimum of 95 (2012) and a maximum of 584 cases (2018) of TBE (or FSME as it is known in German). More than half of the reported cases from 2019 had meningitis, encephalitis or myelitis. The risk of infection was noted to increase with age, especially in people older than 40 years, and it was greater in men than in women. Most cases were acquired in Bavaria (46%) and Baden-Württemberg (37%), and much less frequently in Saxony, Hesse, Lower Saxony and other states. Altogether 164 Landkreise are designated TBE-risk areas, including all of Baden-Württemberg except for the city of Heilbronn. In Sweden, most cases of TBE occur in a band running from Stockholm to the west, especially around lakes and the nearby region of the Baltic Sea. This reflects the greater numbers of people involved in outdoor activities in these areas. Overall, for Europe, the estimated risk is roughly 1 case per 10,000 human-months of woodland activity. In some regions of Russia and Slovenia, however, the prevalence can be as high as 70 cases per 100,000 people per year. Travelers to endemic regions do not often become cases, with only 5 cases reported among U.S. travelers returning from Eurasia between 2000 and 2011, a rate so low that as of 2016 the U.S. Centers for Disease Control and Prevention recommended vaccination only for those who will be extensively exposed in high-risk areas. References External links Tickborne encephalitis at Centers for Disease Control and Prevention (CDC) Factsheet from Viral Special Pathogens Branch at the CDC
Dyscalculia
Dyscalculia () is a disability resulting in difficulty learning or comprehending arithmetic, such as difficulty in understanding numbers, learning how to manipulate numbers, performing mathematical calculations, and learning facts in mathematics. It is sometimes colloquially referred to as "math dyslexia", though this analogy can be misleading as they are distinct syndromes. Dyscalculia is associated with dysfunction in the region around the intraparietal sulcus and potentially also the frontal lobe. Dyscalculia does not reflect a general deficit in cognitive abilities or difficulties with time, measurement, and spatial reasoning. Estimates of the prevalence of dyscalculia range between 3 and 6% of the population. In 2015 it was established that 11% of children with dyscalculia also have ADHD. Dyscalculia has also been associated with Turner syndrome and people who have spina bifida. Mathematical disabilities can occur as the result of some types of brain injury, in which case the term acalculia is used instead of dyscalculia, which is of innate, genetic, or developmental origin. Signs and symptoms The earliest appearance of dyscalculia is typically a deficit in subitizing, the ability to know, from a brief glance and without counting, how many objects there are in a small group. Children as young as five can subitize six objects, especially when looking at a die. However, children with dyscalculia can subitize fewer objects and, even when correct, take longer to identify the number than their age-matched peers. Dyscalculia often looks different at different ages. It tends to become more apparent as children get older; however, symptoms can appear as early as preschool. Common symptoms of dyscalculia are difficulty with mental math, trouble analyzing time and reading an analog clock, difficulty with motor sequencing that involves numbers, and frequent counting on fingers when adding numbers. Common symptoms Dyscalculia is characterized by difficulties with common arithmetic tasks. These difficulties may include:
Difficulty reading analog clocks
Difficulty stating which of two numbers is larger
Sequencing issues
Inability to comprehend financial planning or budgeting, sometimes even at a basic level; for example, estimating the cost of the items in a shopping basket or balancing a checkbook
Visualizing numbers as meaningless or nonsensical symbols, rather than perceiving them as characters indicating a numerical value (hence the misnomer "math dyslexia")
Difficulty with multiplication, subtraction, addition, and division tables, mental arithmetic, etc.
Inconsistent results in addition, subtraction, multiplication and division
When writing, reading and recalling numbers, mistakes may occur in areas such as number additions, substitutions, transpositions, omissions, and reversals
Poor memory (retention and retrieval) of math concepts; may be able to perform math operations one day, but draw a blank the next; may be able to do book work but then fail tests
Ability to grasp math on a conceptual level, but an inability to put those concepts into practice
Difficulty recalling the names of numbers, or thinking that certain different numbers "feel" the same (e.g.
frequently interchanging the same two numbers for each other when reading or recalling them)
Problems with differentiating between left and right
A "warped" sense of spatial awareness, or an understanding of shapes, distance, or volume that seems more like guesswork than actual comprehension
Difficulty with time, directions, recalling schedules, sequences of events, and keeping track of time; frequently late or early
Difficulty reading maps
Difficulty working backwards in time (e.g. what time to leave if needing to be somewhere at a given time)
Difficulty reading musical notation
Difficulty with choreographed dance steps
Having particular difficulty mentally estimating the measurement of an object or distance (e.g., whether something is 3 or 6 meters (10 or 20 feet) away)
Inability to grasp and remember mathematical concepts, rules, formulae, and sequences
Inability to concentrate on mentally intensive tasks
Mistaken recollection of names, poor name/face retrieval; may substitute names beginning with the same letter.
Some people with dyscalculia also have aphantasia, which affects how they can "see" in their mind's eye. However, people without dyscalculia also report having this, but it is more prevalent in those with dyslexia and other learning disabilities. Persistence in children Although many researchers believe dyscalculia to be a persistent disorder, evidence on the persistence of dyscalculia remains mixed. For instance, in a study done by Mazzocco and Myers (2003), researchers evaluated children on a battery of measures and selected their most consistent measure as their best diagnostic criterion: a stringent 10th-percentile cut-off on the TEMA-2. Even with their best criterion, they found that dyscalculia diagnoses did not persist longitudinally; only 65% of students who were ever diagnosed over the course of four years were diagnosed for at least two years. The percentage of children who were diagnosed in two consecutive years was further reduced. It is unclear whether this was the result of misdiagnosed children improving in mathematics and spatial awareness as they progressed normally, or whether the subjects who showed improvement were accurately diagnosed but exhibited signs of a non-persistent learning disability. Persistence in adults There are very few studies of adults with dyscalculia who have had a history of it growing up, but such studies have shown that it can persist into adulthood. It can affect major parts of an adult's life. Most adults with dyscalculia have a hard time processing math at a 4th-grade level. For 1st- to 4th-grade-level math, many adults will know what to do for a math problem but will often get it wrong because of "careless errors", although they are not careless when it comes to the problem. The adults cannot process their errors on the math problems or may not even recognize that they have made these errors. Visual-spatial input, auditory input, and touch input will be affected due to these processing errors. Dyscalculics may have a difficult time adding numbers in a column format because their mind can mix up the numbers, and it is possible that they may get the same (wrong) answer twice due to their mind processing the problem incorrectly. Dyscalculics can have problems determining differences in coins and their sizes or giving the correct amount of change, and if numbers are grouped together, they may be unable to determine which has less or more.
If a dyscalculic is asked to choose the greater of two numbers, with the lesser number in a larger font than the greater number, they may take the question literally and pick the number with the bigger font. Adults with dyscalculia have a tough time with directions while driving and with controlling their finances, which causes difficulties on a day-to-day basis. College students or other adult learners College students particularly may have a difficult time due to the fast pace and change in difficulty of the work they are given. As a result, students may develop considerable anxiety and frustration. After dealing with their anxiety for a long time, students can become averse to math and try to avoid it as much as possible, which may result in lower grades in math courses. (That said, students with dyscalculia can also do exceptionally well in writing, reading, and speaking.) Causes Both domain-general and domain-specific causes have been put forth. With respect to pure developmental dyscalculia, domain-general causes are unlikely, as they should not impair one's ability in the numerical domain without also affecting other domains such as reading. Two competing domain-specific hypotheses about the causes of developmental dyscalculia have been proposed – the magnitude representation (or number module deficit) hypothesis and the access deficit hypothesis. Magnitude representation deficit Dehaene's "number sense" theory suggests that approximate numerosities are automatically ordered in an ascending manner on a mental number line. The mechanism to represent and process non-symbolic magnitude (e.g., the number of dots) is often known as the "approximate number system" (ANS), and a core deficit in the precision of the ANS, known as the "magnitude representation hypothesis" or "number module deficit hypothesis", has been proposed as an underlying cause of developmental dyscalculia. In particular, the structure of the ANS is theoretically supported by a phenomenon called the "numerical distance effect", which has been robustly observed in numerical comparison tasks. Typically developing individuals are less accurate and slower in comparing pairs of numbers closer together (e.g., 7 and 8) than further apart (e.g., 2 and 9). A related "numerical ratio effect" (in which the ratio between two numbers varies but the distance is kept constant, e.g., 2 vs. 5 and 4 vs. 7), based on Weber's law, has also been used to further support the structure of the ANS. The numerical ratio effect is observed when individuals are less accurate and slower in comparing pairs of numbers that have a larger ratio (e.g., 8 and 9, ratio = 8/9) than a smaller ratio (2 and 3; ratio = 2/3). A larger numerical distance or ratio effect with comparison of sets of objects (i.e., non-symbolic) is thought to reflect a less precise ANS, and ANS acuity has been found to correlate with math achievement in typically developing children and also in adults.
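To make the distance and ratio effects concrete, the sketch below uses one common formalization of the ANS, in which the probability of correctly picking the larger of two numerosities depends on their difference scaled by a Weber fraction w; a larger w corresponds to a less precise ANS. This is a minimal illustrative model with hypothetical Weber fractions and invented function names; the specific values and code are assumptions for this example and are not taken from the studies discussed here.

```python
import math

def p_correct(n1, n2, w):
    """Probability of correctly judging which of n1 and n2 is larger,
    under a noisy magnitude representation with Weber fraction w (a toy model)."""
    small, large = sorted((n1, n2))
    z = (large - small) / (w * math.sqrt(small**2 + large**2))
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF

pairs = [(2, 9), (7, 8), (2, 3), (8, 9)]  # far vs. close pairs from the text
weber_fractions = {"more precise ANS": 0.15, "less precise ANS": 0.35}  # hypothetical acuities

for label, w in weber_fractions.items():
    for n1, n2 in pairs:
        print(f"{label:17s} {n1} vs {n2}: p(correct) = {p_correct(n1, n2, w):.2f}")
```

Under this toy model, accuracy falls as the ratio approaches 1 (8 vs. 9 is harder than 2 vs. 3), and a larger Weber fraction lowers accuracy most for close pairs, which is the qualitative pattern the comparison studies described below report.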
More importantly, several behavioral studies have found that children with developmental dyscalculia show a more attenuated distance/ratio effect than typically developing children. Moreover, neuroimaging studies have provided additional insights even when behavioral differences in the distance/ratio effect are not clearly evident. For example, Gavin R. Price and colleagues found that children with developmental dyscalculia showed no differential distance effect on reaction time relative to typically developing children, but they did show a greater effect of distance on response accuracy. They also found that the right intraparietal sulcus in children with developmental dyscalculia was not modulated to the same extent in response to non-symbolic numerical processing as in typically developing children. With the robust implication of the intraparietal sulcus in magnitude representation, it is possible that children with developmental dyscalculia have a weak magnitude representation in the parietal region. Yet, this does not rule out an impaired ability to access and manipulate numerical quantities from their symbolic representations (e.g., Arabic digits). Moreover, findings from a cross-sectional study suggest that children with developmental dyscalculia might have a delayed development in their numerical magnitude representation by as much as five years. However, the lack of longitudinal studies still leaves the question open as to whether the deficient numerical magnitude representation reflects a developmental delay or an impairment. Access deficit hypothesis Rousselle & Noël propose that dyscalculia is caused by the inability to map preexisting representations of numerical magnitude onto symbolic Arabic digits. Evidence for this hypothesis is based on research studies that have found that individuals with dyscalculia are proficient on tasks that measure knowledge of non-symbolic numerical magnitude (i.e., non-symbolic comparison tasks) but show an impaired ability to process symbolic representations of number (i.e., symbolic comparison tasks). Neuroimaging studies also report increased activation in the right intraparietal sulcus during tasks that measure symbolic but not non-symbolic processing of numerical magnitude. However, support for the access deficit hypothesis is not consistent across research studies. Diagnosis At its most basic level, dyscalculia is a learning disability affecting the normal development of arithmetic skills. A consensus has not yet been reached on appropriate diagnostic criteria for dyscalculia. Mathematics is a specific domain that is complex (i.e. includes many different processes, such as arithmetic, algebra, word problems, geometry, etc.) and cumulative (i.e. the processes build on each other such that mastery of an advanced skill requires mastery of many basic skills). Thus dyscalculia can be diagnosed using different criteria, and frequently is; this variety in diagnostic criteria leads to variability in identified samples, and thus variability in research findings regarding dyscalculia. Other than using achievement tests as diagnostic criteria, researchers often rely on domain-specific tests (i.e. tests of working memory, executive function, inhibition, intelligence, etc.) and teacher evaluations to create a more comprehensive diagnosis. Alternatively, fMRI research has shown that the brains of neurotypical children can be reliably distinguished from those of dyscalculic children based on activation in the prefrontal cortex. However, due to the cost and time limitations associated with brain and neural research, these methods will likely not be incorporated into diagnostic criteria despite their effectiveness. Types Research on subtypes of dyscalculia has begun without consensus; preliminary research has focused on comorbid learning disorders as subtyping candidates.
The most common comorbidity in individuals with dyscalculia is dyslexia. Most studies done with comorbid samples versus dyscalculic-only samples have shown different mechanisms at work and additive effects of comorbidity, indicating that such subtyping may not be helpful in diagnosing dyscalculia. However, there is variability in results at present. Due to high comorbidity with other disabilities such as dyslexia and ADHD, some researchers have suggested the possibility of subtypes of mathematical disabilities with different underlying profiles and causes. Whether a particular subtype is specifically termed "dyscalculia" as opposed to a more general mathematical learning disability is somewhat under debate in the scientific literature.
Semantic memory: This subtype often coexists with reading disabilities such as dyslexia and is characterized by poor representation and retrieval from long-term memory. These processes share a common neural pathway in the left angular gyrus, which has been shown to be selective in arithmetic fact retrieval strategies and symbolic magnitude judgments. This region also shows low functional connectivity with language-related areas during phonological processing in adults with dyslexia. Thus, disruption to the left angular gyrus can cause both reading impairments and difficulties in calculation. This has been observed in individuals with Gerstmann syndrome, of which dyscalculia is one of a constellation of symptoms.
Procedural concepts: Research by Geary has shown that in addition to increased problems with fact retrieval, children with math disabilities may rely on immature computational strategies. Specifically, children with mathematical disabilities showed poor command of counting strategies unrelated to their ability to retrieve numeric facts. This research notes that it is difficult to discern whether poor conceptual knowledge is indicative of a qualitative deficit in number processing or simply a delay in typical mathematical development.
Working memory: Studies have found that children with dyscalculia showed impaired performance on working memory tasks compared to neurotypical children. Furthermore, research has shown that children with dyscalculia have weaker activation of the intraparietal sulcus during visuospatial working memory tasks. Brain activity in this region during such tasks has been linked to overall arithmetic performance, indicating that numerical and working memory functions may converge in the intraparietal sulcus. However, working memory problems are confounded with domain-general learning difficulties, thus these deficits may not be specific to dyscalculia but rather may reflect a greater learning deficit. Dysfunction in prefrontal regions may also lead to deficits in working memory and other executive functions, accounting for comorbidity with ADHD.
Studies have also shown indications of causes due to congenital or hereditary disorders, but evidence of this is not yet concrete. Treatment To date, very few interventions have been developed specifically for individuals with dyscalculia. Concrete manipulation activities have been used for decades to train basic number concepts for remediation purposes. This method facilitates the intrinsic relationship between a goal, the learner's action, and the informational feedback on the action.
A one-to-one tutoring paradigm designed by Lynn Fuchs and colleagues, which teaches concepts in arithmetic, number concepts, counting, and number families using games, flash cards, and manipulatives, has proven successful in children with generalized math learning difficulties, but the intervention has yet to be tested specifically on children with dyscalculia. These methods require specially trained teachers working directly with small groups or individual students. As such, instruction time in the classroom is necessarily limited. For this reason, several research groups have developed computer adaptive training programs designed to target deficits unique to dyscalculic individuals. Software intended to remediate dyscalculia has been developed. While computer adaptive training programs are modeled after one-to-one interventions, they provide several advantages. Most notably, individuals are able to practice more with a digital intervention than is typically possible with a class or teacher. As with one-to-one interventions, several digital interventions have also proven successful in children with generalized math learning difficulties. Räsänen and colleagues have found that games such as The Number Race and Graphogame-math can improve performance on number comparison tasks in children with generalized math learning difficulties. Several digital interventions have been developed for dyscalculics specifically. Each attempts to target basic processes that are associated with maths difficulties. Rescue Calcularis was one early computerized intervention that sought to improve the integrity of and access to the mental number line. Other digital interventions for dyscalculia adapt games, flash cards, and manipulatives to function through technology. While each intervention claims to improve basic numerosity skills, the authors of these interventions do admit that repetition and practice effects may be a factor involved in reported performance gains. An additional criticism is that these digital interventions lack the option to manipulate numerical quantities. While the previous two games provide the correct answer, the individual using the intervention cannot actively determine, through manipulation, what the correct answer should be. Butterworth and colleagues argued that games like The Number Bonds, which allows an individual to compare different-sized rods, should be the direction that digital interventions move toward. Such games use manipulation activities to provide intrinsic motivation toward content guided by dyscalculia research. One of these serious games is Meister Cody – Talasia, an online training that includes the CODY Assessment – a diagnostic test for detecting dyscalculia. Based on these findings, Dybuster Calcularis was extended by adaptation algorithms and game forms allowing manipulation by the learners. It was found to improve addition, subtraction, and number line tasks, and was made available as Dybuster Calcularis.
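As an illustration of what an "adaptation algorithm" in such training software can look like, the sketch below implements a simple 1-up/2-down staircase that adjusts the difficulty of number-comparison trials based on the learner's responses. It is a toy example under assumed rules and parameters, not the actual algorithm used by Calcularis, The Number Race, or any other program named above; the function and variable names are invented for this illustration.

```python
import random

def run_staircase(answer_fn, n_trials=20, start_distance=5):
    """Run a 1-up/2-down staircase over numerical distance (1 = hardest trials)."""
    distance = start_distance
    correct_streak = 0
    history = []
    for _ in range(n_trials):
        small = random.randint(1, 9)
        large = small + distance
        correct = answer_fn(small, large)      # True if the learner picks `large`
        history.append(distance)
        if correct:
            correct_streak += 1
            if correct_streak == 2:            # two correct in a row -> harder
                distance = max(1, distance - 1)
                correct_streak = 0
        else:                                  # any error -> easier
            distance = min(9, distance + 1)
            correct_streak = 0
    return history

def simulated_learner(small, large):
    """Toy learner: more likely to err when the two numbers are close together."""
    p_correct = min(0.95, 0.5 + 0.08 * (large - small))
    return random.random() < p_correct

print(run_staircase(simulated_learner))
```

The effect of such a rule is that difficulty hovers around the level the learner can just manage, keeping practice challenging without becoming discouraging, which is the general design goal of the adaptive programs described above.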
A study used transcranial direct current stimulation (tDCS) to the parietal lobe during numerical learning and demonstrated selective improvement of numerical abilities that was still present six months later in typically developing individuals. Improvements were achieved by applying anodal current to the right parietal lobe and cathodal current to the left parietal lobe and contrasting it with the reverse setup. When the same research group used tDCS in a training study with two dyscalculic individuals, the reverse setup (left anodal, right cathodal) demonstrated improvement of numerical abilities. Epidemiology Dyscalculia is thought to be present in 3–6% of the general population, but estimates by country and sample vary somewhat. Many studies have found prevalence rates by gender to be equivalent. Those that find a gender difference in prevalence rates often find dyscalculia higher in females, but a few studies have found prevalence rates higher in males. History The term dyscalculia was coined in the 1940s, but it was not completely recognized until 1974, through the work of Czechoslovakian researcher Ladislav Kosc. Kosc defined dyscalculia as "a structural disorder of mathematical abilities." His research showed that the learning disability was caused by impairments to certain parts of the brain that control mathematical calculations and not because symptomatic individuals were intellectually disabled. Researchers now sometimes use the terms "math dyslexia" or "math learning disability" when they mention the condition. Cognitive disabilities specific to mathematics were originally identified in case studies with patients who experienced specific arithmetic disabilities as a result of damage to specific regions of the brain. More commonly, dyscalculia occurs developmentally as a genetically linked learning disability which affects a person's ability to understand, remember, or manipulate numbers or number facts (e.g., the multiplication tables). The term is often used to refer specifically to the inability to perform arithmetic operations, but it is also defined by some educational professionals and cognitive psychologists, such as Stanislas Dehaene and Brian Butterworth, as a more fundamental inability to conceptualize numbers as abstract concepts of comparative quantities (a deficit in "number sense"), which these researchers consider to be a foundational skill upon which other mathematics abilities build. Symptoms of dyscalculia include delayed simple counting and an inability to memorize simple arithmetic facts such as adding and subtracting. Few symptoms have been characterized because little research has been done on the topic. Etymology The term dyscalculia dates back to at least 1949. Dyscalculia comes from Greek and Latin and means "counting badly". The prefix "dys-" comes from Greek and means "badly". The root "calculia" comes from the Latin "calculare", which means "to count"; it is also a cognate of "calculation" and "calculus". See also References Further reading Abeel, Samantha (2003). My thirteenth winter: a memoir. New York: Orchard Books. ISBN 978-0-439-33904-9. OCLC 51536704. Ardila A, Rosselli M (December 2002). "Acalculia and dyscalculia" (PDF). Neuropsychol Rev. 12 (4): 179–231. doi:10.1023/a:1021343508573. PMID 12539968. S2CID 2617160. Tony Attwood (2002). Dyscalculia in Schools: What it is and What You Can Do. First & Best in Education Ltd. ISBN 978-1-86083-614-5. OCLC 54991398. Butterworth, Brian; Yeo, Dorian (2004). Dyscalculia Guidance: Helping Pupils with Specific Learning Difficulties in Maths. London: NferNelson. ISBN 978-0-7087-1152-1. OCLC 56974589. Campbell, Jamie I. D. (2004). Handbook of Mathematical Cognition. Psychology Press (UK). ISBN 978-1-84169-411-5. OCLC 644354765. Brough, Mel; Henderson, Anne; Came, Fil (2003). Working with dyscalculia: recognising dyscalculia: overcoming barriers to learning in maths. Santa Barbara, Calif: Learning Works. ISBN 978-0-9531055-2-6.
OCLC 56467270. Chinn, Stephen J. (2004). The Trouble with Maths: A Practical Guide to Helping Learners with Numeracy Difficulties. New York: RoutledgeFalmer. ISBN 978-0-415-32498-4. OCLC 53186668. Reeve R, Humberstone J (2011). "Five- to 7-year-olds' finger gnosia and calculation abilities". Frontiers in Psychology. 2: 359. doi:10.3389/fpsyg.2011.00359. PMC 3236444. PMID 22171220. "Sharma: Publications". Dyscalculia.org. External links Dyscalculia at Curlie
Foster care
Foster care is a system in which a minor has been placed into a ward, a group home (residential child care community, treatment center, etc.), or a private home of a state-certified caregiver, referred to as a "foster parent", or with a family member approved by the state. The placement of the child is normally arranged through the government or a social service agency. The institution, group home, or foster parent is compensated for expenses, unless the child is placed with a family member. In some states, relative or "kinship" caregivers of children who are wards of the state are provided with a financial stipend. The state, via the family court and child protective services agency, stands in loco parentis to the minor, making all legal decisions, while the foster parent is responsible for the day-to-day care of the minor. Scholars and activists are concerned about the efficacy of the foster care services provided by NGOs. Specifically, this pertains to poor retention rates of social workers. Poor retention rates are attributed to being overworked in an emotionally draining field that offers minimal monetary compensation. The lack of professionals pursuing a degree in social work, coupled with poor retention rates in the field, has led to a shortage of social workers and created large caseloads for those who choose to work and stay in the field. Caseworker retention also affects the overall ability to care for clients. Low staffing leads to data limitations that infringe on caseworkers' ability to adequately serve clients and their families. Foster care is correlated with a range of negative outcomes compared to the general population. Children in foster care have a high rate of ill health, particularly psychiatric conditions such as anxiety, depression, and eating disorders. One third of foster children in a US study reported abuse from a foster parent or other adult in the foster home. Nearly half of foster children in the US become homeless when they reach the age of 18, and the poverty rate is three times higher among foster care alumni than in the general population. By country Australia In Australia, foster care was known as "boarding-out". Foster care had its beginnings in South Australia in 1867 and spread through the second half of the 19th century. It is said that the system was mostly run by women until the early 20th century. Control was then centralized in state children's departments. "Although boarding-out was also implemented by non-government[al] child rescue organizations, many large institutions remained. These institutions assumed an increasing importance from the late 1920s when the system went into decline." The system was re-energized in the postwar era, and in the 1970s. The system is still the main structure for "out-of-home care." The system took care of both local and foreign children. "The first adoption legislation was passed in Western Australia in 1896, but the remaining states did not act until the 1920s, introducing the beginnings of the closed adoption that reached its peak in the period 1940–1975. New baby adoption dropped dramatically from the mid-1970s, with the greater tolerance of and support for single mothers". Cambodia Foster care in Cambodia is relatively new as an official practice within the government. However, despite a later start, the practice is currently making great strides within the country.
Left with a large number of official and unofficial orphanages from the 1990s, the Cambodian government conducted several research projects in 2006 and 2008, pointing to the overuse of orphanages as a solution for caring for vulnerable children within the country. Most notably, the studies found that the percentage of children within orphanages who had parents approached 80%. At the same time, local NGOs like "Children In Families" began offering limited foster care services within the country. In the subsequent years, the Cambodian government began implementing policies that required the closure of some orphanages and the implementation of minimum standards for residential care institutions. These actions led to an increase in the number of NGOs providing foster care placements and helped to set the course for care reform around the country. As of 2015, the Cambodian government is working with UNICEF, USAID, several governments, and many local NGOs in continuing to build the capacity for child protection and foster care within the Kingdom. Canada Foster children in Canada are known as permanent wards (crown wards in Ontario). A ward is someone, in this case a child, placed under the protection of a legal guardian, and is the legal responsibility of the government. Census data from 2011 counted children in foster care for the first time, counting 47,885 children in care. The majority of foster children – 29,590, or about 62 per cent – were aged 14 and under. The wards remain under the care of the government until they "age out of care," at which point all ties with the government are severed and there is no longer any legal responsibility toward the youth. This age varies by province. Israel In December 2013, the Israeli Knesset approved a bill co-drafted by the Israel National Council for the Child to regulate the rights and obligations of participants in the foster care system in Israel. Japan The idea of foster care, or taking in abandoned children, came about in Japan around 1392–1490. The early Japanese system resembled the later American Orphan Trains in its reasoning: people in Japan thought the children would do better on farms rather than living in the "dusty city." The families would often send their children to a farm family outside the village and only keep their oldest son. The farm families served as the foster parents and they were financially rewarded for taking in the younger siblings. "It was considered an honor to be chosen as foster parents, and selection greatly depended on the family's reputation and status within the village". Around 1895, the foster care program became more like the system used in the United States, as the Tokyo Metropolitan Police sent children to a hospital where they would be "settled". Problems emerged in this system, such as child abuse, so the government started phasing it out and "began increasing institutional facilities". In 1948, the Child Welfare Law was passed, increasing official oversight and creating better conditions for the children to grow up in. United Kingdom In the United Kingdom, foster care and adoption have always been an option, "in the sense of taking other people's children into their homes and looking after them on a permanent or temporary basis." However, the practice had no legal foundation until the 20th century. The UK had "wardship," under which the family taking in the child was granted custody by the Chancery Court.
Wardship was not used very often because it did not give the guardian "parental rights." In the 19th century came a "series of baby farming scandals." At the end of the 19th century, the practice came to be called "boarding-out", as in Australia. Children were also placed in orphanages and workhouses. "The First World War saw an increase in organized adoption through adoption societies and child rescue organizations, and pressure grew for adoption to be given legal status." The first laws based on adoption and foster care were passed in 1926. "The peak number of adoptions was in 1968, since then there has been an enormous decline in adoption in the United Kingdom. The main reasons for children being adopted in the United Kingdom had been unmarried mothers giving up their children for adoption and stepparents adopting their new partners' children". United States In the United States, foster care started as a result of the efforts of Charles Loring Brace. "In the mid 19th Century, some 30,000 homeless or neglected children lived in the New York City streets and slums." Brace took these children off the streets and placed them with families in most states in the country. Brace believed the children would do best with a Christian farm family. He did this to save them from "a lifetime of suffering." He sent these children to families by train, which gave rise to the name the Orphan Train Movement. "This lasted from 1853 to the early 1890s and transported more than 120,000 children to new lives"; some estimates put the number as high as 250,000. When Brace died in 1890, his sons took over his work at the Children's Aid Society until they retired. The Children's Aid Society created "a foster care approach that became the basis for the federal Adoption and Safe Families Act of 1997" called Concurrent Planning. This greatly impacted the foster care system. From August 1999 to August 2019, 9,073,607 American children were removed from their families and placed in foster homes, according to the federal government's Adoption and Foster Care Analysis and Reporting System. As last reported in August 2019, 437,238 children nationally had been removed from their families and placed in foster homes, according to the same system.
- 24% of foster children are between the ages of 0 and 2
- 18% of foster children are between the ages of 3 and 5
- 28% of foster children are between the ages of 6 and 12
- 40% of foster children are between the ages of 13 and 21
- Average number of birthdays a child spends in foster care: 2
- 22% of children had three or more placements during a length of 20 months in foster care.
- 91% of foster children under the age of 2 are adopted.
France In France, foster families are called familles d'accueil (literally "welcome families"). Foster homes must obtain official approval from the government in order to welcome a minor or an elderly person. To receive this approval, they must complete training, and their home is inspected to ensure it is safe and healthy. In 2017, 344,000 minors and 15,000 elderly persons were welcomed into foster homes. Placement Family-based foster care is generally preferred to other forms of out-of-home care. Foster care is intended to be a short-term solution until a permanent placement can be made. In most states, the primary objective is to reunify children with their biological parents.
However, if the parents are unable or unwilling to care for the child, or if the child is an orphan, then the first choice of adoptive parents is a relative such as an aunt, uncle or grandparent, an arrangement known as kinship care. Most kinship care is done informally, without the involvement of a court or public organization. However, in the United States, formal kinship care is increasingly common. In 2012, a quarter of all children in formal foster care were placed with relatives rather than elsewhere in the system. If no related family member is willing or able to adopt, the next preference is for the child to be adopted by the foster parents or by someone else involved in the child's life (such as a teacher or coach). This is to maintain continuity in the child's life. If neither of the above options is available, the child may be adopted by someone who is a stranger to the child. If none of these options are viable, the plan for the minor may be to enter OPPLA (Other Planned Permanent Living Arrangement). This option allows the child to remain in the custody of the state, placed in a foster home, with a relative, or in a long-term care facility, such as a residential child care community or, for children with developmental disabilities, physical disabilities, or mental disabilities, a treatment center. 671,000 children were served by the foster care system in the United States in 2015. "After declining more than 20 percent between FY 2006 and FY 2012 to a low of 397,000, the number of children in foster care on the last day of the fiscal year increased to 428,000 in FY 2015, with a slightly higher percent change from 2014 to 2015 (3.3%) than observed from 2013 to 2014 (3.2%)." Since FY 2012, the number of children in foster care at the end of each FY has steadily increased. The median amount of time a child spent in foster care in the U.S. in 2015 was 13.5 months. That year, 74% of children spent less than two years in foster care, while 13% were in care for three or more years. Of the estimated 427,910 children in foster care on September 30, 2015: 43 percent were White, 24 percent were African-American, 21 percent were Hispanic (of any race), 10 percent were other races or multiracial, and 2 percent were unknown or unable to be determined. Children may enter foster care voluntarily or involuntarily. Voluntary placement may occur when a biological parent or lawful guardian is unable to care for a child. Involuntary placement occurs when a child is removed from their biological parent or lawful guardian due to the risk or actual occurrence of physical or psychological harm, or if the child has been orphaned. In the US, most children enter foster care due to neglect. If a biological parent or legal guardian is unwilling to care for a child, the child is deemed to be dependent and is placed under the care of the child protection agency. The policies regarding foster care, as well as the criteria to be met in order to become a foster parent, vary according to legal jurisdiction. Especially egregious failures of child protective services often serve as a catalyst for increased removal of children from the homes of biological parents. An example is the brutal torture and murder of 17-month-old Peter Connelly, a British toddler who died in the London Borough of Haringey, North London, after suffering more than 50 severe injuries over an eight-month period, including eight broken ribs and a broken back.
Throughout the period of time in which he was being tortured, he was repeatedly seen by Haringey Children's services and NHS health professionals. Haringey Children's services had already failed ten years earlier in the case of Victoria Climbié. Since his death in 2007, care cases have reached a record rate in England, surpassing 10,000 in the reporting year ending in March 2012. Abuse and negligence From 1993 through 2002, there were 107 recorded deaths; there are approximately 400,000 children in out-of-home care in the United States. Almost 10% of children in foster care have stayed in foster care for five or more years. Nearly half of all children in foster care have chronic medical problems. 8% of all children in foster care have serious emotional problems, and 11% of children exiting foster care in 2011 aged out of the system. Children in foster care experience high rates of child abuse, emotional deprivation, and physical neglect. In one study in the United Kingdom, "foster children were 7–8 times, and children in residential care 6 times more likely to be assessed by a pediatrician for abuse than a child in the general population". A study of foster children in Oregon and Washington State found that nearly one third reported being abused by a foster parent or another adult in a foster home. The "Parent Trauma Response Questionnaire" states that parental overprotection can be just as harmful psychologically as neglect. Development As of 2019, the majority of children in the foster care system were under 8 years of age. These early years are quite important for the physical and mental development of children. More specifically, these early years are most important for brain development. Stressful and traumatic experiences have been found to have long-term negative consequences for brain development in children, whereas talking, singing, and playing can help encourage brain growth. Since the majority of children are removed from their homes due to neglect, many of these children did not experience stable and stimulating environments to help promote this necessary growth. In a research study conducted at the University of Minnesota, researchers found that children placed in non-parental homes, such as foster homes, showed significant behavior problems and higher levels of internalizing problems in comparison to children in traditional families and even children who were mistreated by caregivers. According to an article by Elizabeth Curry titled "The five things you should know about how orphanage life affects children", a child who has lived in an orphanage or a home with multiple children will have learned survival skills but lack family skills, because they have never experienced permanency. Medical and psychiatric disorders A higher prevalence of physical, psychological, cognitive and epigenetic disorders among children in foster care has been established in studies in various countries. The Casey Family Programs Northwest Foster Care Alumni Study was a fairly extensive study of various aspects of children who had been in foster care.
Individuals who were in foster care experience higher rates of physical and psychiatric morbidity than the general population and often have difficulty trusting others, which can lead to placements breaking down. In the Casey study of foster children in Oregon and Washington state, foster children were found to have double the incidence of depression (20% as compared to 10%) and a higher rate of posttraumatic stress disorder (PTSD) than combat veterans, with 25% of those studied having PTSD. Children in foster care have a higher probability of having attention deficit hyperactivity disorder (ADHD), deficits in executive functioning, anxiety, and other developmental problems. These children experience higher rates of incarceration, poverty, homelessness, and suicide. Studies in the U.S. have suggested that some foster care placements may be more detrimental to children than remaining in a troubled home, but a more recent study suggested that these findings may have been affected by selection bias, and that foster care has little effect on behavioral problems. Neurodevelopment Foster children have elevated levels of cortisol, a stress hormone, in comparison to children raised by their biological parents. Elevated cortisol levels can compromise the immune system (Harden BJ, 2004). Most of the processes involved in typical neurodevelopment are predicated upon the establishment of close nurturing relationships and environmental stimulation. Negative environmental influences during this critical period of brain development can have lifelong consequences. Post traumatic stress disorder Children in foster care have a higher incidence of posttraumatic stress disorder (PTSD). In one study, 60% of children in foster care who had experienced sexual abuse had PTSD, and 42% of those who had been physically abused met the PTSD criteria. PTSD was also found in 18% of the children who were not abused. These children may have developed PTSD due to witnessing violence in the home (Marsenich, 2002). To establish whether a child has PTSD, the PTSD module of the anxiety disorder interview is considered a reliable resource for determining whether a child has developed posttraumatic stress disorder due to physical, sexual, or mental abuse. In a study conducted in Oregon and Washington state, the rate of PTSD in adults who were in foster care for one year between the ages of 14 and 18 was found to be higher than that of combat veterans, with 25 percent of those in the study meeting the diagnostic criteria, as compared to 12–13 percent of Iraq war veterans and 15 percent of Vietnam war veterans, and a rate of 4% in the general population. The recovery rate for foster home alumni was 28.2%, as opposed to 47% in the general population. "More than half the study participants reported clinical levels of mental illness, compared to less than a quarter of the general population". Eating disorders Foster children are at increased risk for a variety of eating disorders in comparison to the general population. In a study done in the United Kingdom, 35% of foster children experienced an increase in Body Mass Index (BMI) once in care. Food Maintenance Syndrome is characterized by a set of aberrant eating behaviors of children in foster care. It is "a pattern of excessive eating and food acquisition and maintenance behaviors without concurrent obesity"; it resembles "the behavioral correlates of Hyperphagic Short Stature". 
It is hypothesized that this syndrome is triggered by the stress and maltreatment foster children are subjected to; it was prevalent among 25 percent of the study group in New Zealand. Bulimia nervosa is seven times more prevalent among former foster children than in the general population. Poverty and homelessness Nearly half of foster children in the U.S. become homeless when they turn 18. One of every 10 foster children stays in foster care longer than seven years, and each year about 15,000 reach the age of majority and leave foster care without a permanent family—many to join the ranks of the homeless or to commit crimes and be imprisoned. Three out of 10 of the United States' homeless are former foster children. According to the results of the Casey Family Study of Foster Care Alumni, up to 80 percent are doing poorly—with a quarter to a third of former foster children at or below the poverty line, three times the national poverty rate. Very frequently, people who are homeless had multiple placements as children: some were in foster care, but others experienced "unofficial" placements in the homes of family or friends. Individuals with a history of foster care tend to become homeless at an earlier age than those who were not in foster care. The length of time a person remains homeless is longer in individuals who were in foster care. Suicide rate Children in foster care are at a greater risk of suicide, and the increased risk remains after leaving foster care. In a small study of twenty-two Texan youths who aged out of the system, 23 percent had a history of suicide attempts. A Swedish study utilizing the data of almost one million people, including 22,305 former foster children who had been in care prior to their teens, concluded: Former child welfare clients were in year of birth and sex standardised risk ratios (RRs) four to five times more likely than peers in the general population to have been hospitalised for suicide attempts....Individuals who had been in long-term foster care tended to have the most dismal outcome...former child welfare/protection clients should be considered a high-risk group for suicide attempts and severe psychiatric morbidity. Death rate Children in foster care have an overall higher mortality rate than children in the general population. A study conducted in Finland among current and former foster children up to age 24 found a higher mortality rate due to substance abuse, accidents, suicide and illness. The deaths due to illness were attributed to an increased incidence of acute and chronic medical conditions and developmental delays among children in foster care. Georgia Senator Nancy Schaefer published a report, "The Corrupt Business of Child Protective Services", stating: "The National Center on Child Abuse and Neglect in 1998 reported that six times as many children died in foster care than in the general public and that once removed to official "safety", these children are far more likely to suffer abuse, including sexual molestation than in the general population". Academic prospects Educational outcomes of ex-foster children in the Northwest Alumni Study: 56% completed high school compared to 82% of the general population, although an additional 29% of former foster children received a G.E.D. compared to an additional 5% of the general population. 42.7% completed some education beyond high school. 20.6% completed any degree or certificate beyond high school. 16.1% completed a vocational degree (21.9% for those over 25). 
1.8% completed a bachelor's degree (2.7% for those over 25); the completion rate for the general population in the same age group is 24%, a sizable difference. The study reviewed case records for 659 foster care alumni in the northwest USA, and interviewed 479 of them between September 2000 and January 2002. Higher Education Approximately 10% of foster youth make it to college, and of those 10%, only about 3% actually graduate and obtain a 4-year degree. Although the number of foster youth who are starting at a 4-year university after high school has increased over the years, the number of youth who graduate from college continues to remain stable. In a study of 712 youth in California, the results revealed that foster care youth are five times less likely to attend college than youth who do not go through foster care. There are different resources that offer both financial and emotional support for foster youth to continue their education. At the same time, there are also many barriers that make getting to a college or university difficult. Borton describes some of the barriers youth face in her article, Barriers to Post-Secondary Enrollment for Former Foster Youth. A few of those barriers include financial hurdles, navigating the application process with little to no support, and lack of housing. Many studies have shown that a few factors have seemingly played a role in the success of foster youth making it to and graduating from a college or university. While having financial resources for foster youth is a huge help, there are other components to consider, beginning with support for these youth at the high school level. In order for foster youth to obtain a college degree, they must enroll at a university first. Of the different factors that play a role in increasing college enrollment, such as participation in extended foster care and reading ability, youth who received assistance or had supportive relationships with adults were more likely than youth who did not have supportive relationships to enroll at a university. At colleges across the nation, there are programs that are specifically put in place to help youth who have aged out of the foster care system and continued into higher education. These programs often help youth financially by giving them supplemental funds and providing support through peer mentor programs or academic counseling services. While funding is an important key to helping youth get through college, it has not been found to be the only crucial component in aiding a youth's success. A study done by Jay and colleagues provides insight into what youth view as important in helping them thrive on a college campus. The study, which had a sample of 51 foster youth, used Conceptual Mapping to break down the different components of support that may be important for youth to receive on a college campus. It is important to take into account the different factors that can be helpful for youth at a university and to look beyond providing financial support. Psychotropic medication use Studies have revealed that youth in foster care covered by Medicaid insurance receive psychotropic medication at a rate three times higher than that of Medicaid-insured youth who qualify by low family income. In a review (September 2003 to August 2004) of the medical records of 32,135 Texas foster care children aged 0–19 years, 12,189 were prescribed psychotropic medication, resulting in an annual prevalence of 37.9% of these children being prescribed medication. 
41.3% received 3 different classes of these drugs during July 2004, and 15.9% received 4 different classes. The most frequently used medications were antidepressants (56.8%), attention-deficit/hyperactivity disorder drugs (55.9%), and antipsychotic agents (53.2%). The study also showed that youth in foster care are frequently treated with concomitant psychotropic medication, for which sufficient evidence regarding safety and effectiveness is not available. The use of expensive, brand-name, patent-protected medication was prevalent. In the case of SSRIs, the use of the most expensive medications was noted to be 74%; in the general market, only 28% of prescriptions are for brand-name SSRIs rather than generics. The average out-of-pocket expense per prescription was $34.75 for generics and $90.17 for branded products, a $55.42 difference. Therapeutic intervention Children in the child welfare system have often experienced significant and repeated traumas, and having a background in foster homes—especially in instances of sexual abuse—can be the precipitating factor in a wide variety of psychological and cognitive deficits; it may also serve to obfuscate the true cause of underlying issues. The foster care experience may have nothing to do with the symptoms, or, on the other hand, a disorder may be exacerbated by having a history of foster care and attendant abuses. The human brain, however, has been shown to have a fair degree of neuroplasticity, and adult neurogenesis has been shown to be an ongoing process. Multidimensional Treatment Foster Care (MTFC), also referred to as Treatment Foster Care Oregon (TFCO) and Treatment Foster Care (TFC), is a community-based intervention created in 1983 by Dr. Patricia Chamberlain and her colleagues, with the initial design intended to offer a replacement for group facilities. MTFC has differing approaches for different age groups. The preschool program takes “a behavior-management approach and intensively trains, supervises, and supports foster caregivers to provide positive adult support and consistent limit setting”, coupled with “coordinated interventions with the child’s biological parents.” MTFC for adolescents consists of individual placement with an intensively trained foster family providing “coordinated interventions in the home, with peers, [and] in educational settings.” MTFC has been shown to provide better results than group facilities and proves
to be more cost effective. Reports show that multidimensional treatment is effective in reducing depression, arrest rates, deviant peer affiliations, placement disruption, and pregnancy rates, and it has had positive replication trials. It is one method that attempts to incorporate trauma- and violence-informed care into its design. Researchers have faced difficulty when it comes to accurately assessing what makes MTFC and other similar programs that involve multiple levels of intervention successful. It seems to remain a "black box" scenario in which it is unclear which aspect of the treatment plan is actually producing positive effects. Multiple peer-reviewed research articles on foster care programs point out a lack of research effectively evaluating the outcomes of specific foster care programs, calling for more complete assessments to be conducted in order to properly compare outcomes between treatment plans and evaluate which practices in MTFC are most effective. Ethical concerns were also raised by Therese Åström and other associated researchers when conducting a systematic review on behalf of the Swedish Agency for Health Technology Assessment and Assessment of Social Services in 2018, noting that while MTFC is evaluated as effective, it tends to be implemented in a way that diminishes the child's agency. Cross-cultural adoption policies George Shanti, Nico Van Oudenhoven, and Ekha Wazir, co-authors of Foster Care Beyond the Crossroads: Lessons from an International Comparative Analysis, say that there are four types of government foster care systems. The first is that of developing countries. These countries do not have policies implemented to take care of the basic needs of these children, and the children mostly receive assistance from relatives. The second system is that of former socialist governments. The historical context of these states has not allowed for the evolution of their foster care systems; NGOs have urged them to evolve, but the traditional system of institutionalizing these children is still in place. Thirdly, liberal democracies lack the support from their political systems to take care of these children, even though they have the resources. Finally, social democracies are the most advanced governments with regard to their foster care systems. These governments have a massive infrastructure, funding, and support system in place to help foster care children. Adoption Foster care adoption is a type of domestic adoption where the child is initially placed into a foster care system and is subsequently placed for adoption. Children may be placed into foster care for a variety of reasons, including removal from the home by a governmental agency because of maltreatment. In some jurisdictions, adoptive parents are licensed as, and technically considered, foster parents while the adoption is being finalized. According to the U.S. Department of Health and Human Services Children's Bureau, there were approximately 408,425 children in foster care in 2010. Of those children, twenty-five percent had a goal of adoption. In 2015, 243,060 children exited foster care, and twenty-two percent were adopted. Nationwide, there are more than one hundred thousand children in the U.S. foster care system waiting for permanent families. Outcomes Youth who are aging out of foster care often face difficulties in transitioning into adulthood, especially in terms of finding stable housing, employment, finances, and educational opportunities. 
The suspected reason for these difficulties involves the lack of stability experienced while in the foster care system and the reported abuse and/or neglect in their childhood, which may affect their ability to cope with significant life changes. In the United States, there are independent living programs designed with the intent to serve the needs of transitioning foster youth. However, youth aging out of foster care have indicated that these programs are failing to fully address the needs of young adults without familial assistance. A study by Gypen et al. (2017), involving a cross-database analysis of research articles relevant to the outcomes of former foster youth, found that the educational, mental health, employment, income, housing stability, criminal involvement and substance abuse outcomes for youth who have aged out of the foster care system are substantially poorer than those of their peers. For example, Gypen et al. (2017) indicated that only 45% of former foster youth received a high school diploma, which is 23% lower than the general population. There are also significantly poorer outcomes for children who were formerly in foster care than for children from low-income households. Children who are eventually adopted by their placement family show greater outcomes, in terms of finding stable housing, employment, finances and education opportunities, than those who aged out of the foster care system without a permanent placement. It has also been reported that former foster youth have a higher chance of ending up in prostitution and even of falling prey to sex trafficking. This has also been called the "foster care to prostitution pipeline". A 2012 study in Los Angeles found that 59% of juveniles arrested for prostitution were or had been in foster care, but the generalizability of these findings has been disputed. See also References Further reading External links The Mental Health of Children in Out-of-Home Care: Scale and Complexity of Mental Health Problems Effects of Enhanced Foster Care on the Long-term Physical and Mental Health of Foster Care Alumni The impact of foster care on development Effects of early psychosocial deprivation on the development of memory and executive function Enduring neurobehavioral effects of early life trauma mediated through learning and corticosterone suppression Chisholm, Hugh, ed. (1911). "Boarding-Out System". Encyclopædia Britannica (11th ed.). Cambridge University Press.
Intracerebral hemorrhage
Intracerebral hemorrhage (ICH), also known as cerebral bleed, intraparenchymal bleed, and hemorrhagic stroke, or haemorrhagic stroke, is a sudden bleeding into the tissues of the brain, into its ventricles, or into both. It is one kind of bleeding within the skull and one kind of stroke. Symptoms can include headache, one-sided weakness, vomiting, seizures, decreased level of consciousness, and neck stiffness. Often, symptoms get worse over time. Fever is also common.Causes include brain trauma, aneurysms, arteriovenous malformations, and brain tumors. The biggest risk factors for spontaneous bleeding are high blood pressure and amyloidosis. Other risk factors include alcoholism, low cholesterol, blood thinners, and cocaine use. Diagnosis is typically by CT scan. Other conditions that may present similarly include ischemic stroke.Treatment should typically be carried out in an intensive care unit. Guidelines recommend decreasing the blood pressure to a systolic of 140 mmHg. Blood thinners should be reversed if possible and blood sugar kept in the normal range. Surgery to place a ventricular drain may be used to treat hydrocephalus, but corticosteroids should not be used. Surgery to remove the blood is useful in certain cases.Cerebral bleeding affects about 2.5 per 10,000 people each year. It occurs more often in males and older people. About 44% of those affected die within a month. A good outcome occurs in about 20% of those affected. Intracerebral hemorrhage, a type of hemorrhagic stroke, was first distinguished from ischemic strokes due to insufficient blood flow, so called "leaks and plugs", in 1823. Signs and symptoms People with intracerebral bleeding have symptoms that correspond to the functions controlled by the area of the brain that is damaged by the bleed. These localizing signs and symptoms can include hemiplegia (or weakness localized to one side of the body) and paresthesia (loss of sensation) including hemisensory loss (if localized to one side of the body). These symptoms are usually rapid in onset, sometimes occurring in minutes, but not as rapid as the symptom onset in ischemic stroke. Other symptoms include those that indicate a rise in intracranial pressure caused by a large mass (due to hematoma expansion) putting pressure on the brain. These symptoms include headaches, nausea, vomiting, a depressed level of consciousness, stupor and death. Continued elevation in the intracranial pressure and the accompanying mass effect may eventually cause brain herniation (when different parts of the brain are displaced or shifted to new areas in relation to the skull and surrounding dura mater supporting structures). Brain herniation is associated with hyperventilation, extensor rigidity, pupillary asymmetry, pyramidal signs, coma and death.Hemorrhage into the basal ganglia or thalamus causes contralateral hemiplegia due to damage to the internal capsule. Other possible symptoms include gaze palsies or hemisensory loss. Intracerebral hemorrhage into the cerebellum may cause ataxia, vertigo, incoordination of limbs and vomiting. Some cases of cerebellar hemorrhage lead to blockage of the fourth ventricle with subsequent impairment of drainage of cerebrospinal fluid from the brain. The ensuing hydrocephalus, or fluid buildup in the ventricles of the brain leads to a decreased level of consciousness and coma. 
Brainstem hemorrhage most commonly occurs in the pons and is associated with cranial nerve palsies, pinpoint (but reactive) pupils, gaze palsies, facial weakness, and coma (if there is damage to the reticular activating system). Causes Intracerebral bleeds are the second most common cause of stroke, accounting for 10% of hospital admissions for stroke. High blood pressure raises the risk of spontaneous intracerebral hemorrhage by two to six times. More common in adults than in children, intraparenchymal bleeds are usually due to penetrating head trauma, but can also be due to depressed skull fractures. Acceleration-deceleration trauma, rupture of an aneurysm or arteriovenous malformation (AVM), and bleeding within a tumor are additional causes. Amyloid angiopathy is not an uncommon cause of intracerebral hemorrhage in patients over the age of 55. A very small proportion is due to cerebral venous sinus thrombosis. Risk factors for ICH include hypertension (high blood pressure), diabetes mellitus, menopause, excessive alcohol consumption, and severe migraine. Hypertension is the strongest risk factor associated with intracerebral hemorrhage, and long-term control of elevated blood pressure has been shown to reduce the incidence of hemorrhage. Cerebral amyloid angiopathy, a disease characterized by deposition of amyloid beta peptides in the walls of the small blood vessels of the brain, leading to weakened blood vessel walls and an increased risk of bleeding, is also an important risk factor for the development of intracerebral hemorrhage. Other risk factors include advancing age (usually with a concomitant increase of cerebral amyloid angiopathy risk in the elderly), use of anticoagulants or antiplatelet medications, the presence of cerebral microbleeds, chronic kidney disease, and low levels of low-density lipoprotein (LDL), usually below 70 mg/dL. The direct oral anticoagulants (DOACs), such as the factor Xa inhibitors or direct thrombin inhibitors, are thought to have a lower risk of intracerebral hemorrhage compared to the vitamin K antagonists such as warfarin. Cigarette smoking may be a risk factor, but the association is weak. Traumatic intracerebral hematomas are divided into acute and delayed. Acute intracerebral hematomas occur at the time of the injury, while delayed intracerebral hematomas have been reported from as early as 6 hours post-injury to as long as several weeks afterward. Diagnosis Both computed tomography angiography (CTA) and magnetic resonance angiography (MRA) have been proven effective in diagnosing intracranial vascular malformations after ICH. Consequently, a CT angiogram is frequently performed to exclude a secondary cause of hemorrhage or to detect a "spot sign". Intraparenchymal hemorrhage can be recognized on CT scans because blood appears brighter than other tissue and is separated from the inner table of the skull by brain tissue. The tissue surrounding a bleed is often less dense than the rest of the brain because of edema, and therefore shows up darker on the CT scan. The edema surrounding the hemorrhage increases rapidly in size over the first 48 hours and reaches its maximum extent around day 14; the larger the hematoma, the larger its surrounding edema. Location When due to high blood pressure, intracerebral hemorrhages typically occur in the putamen (50%) or thalamus (15%), cerebrum (10–20%), cerebellum (10–13%), pons (7–15%), or elsewhere in the brainstem (1–6%). Treatment Treatment depends substantially on the type of ICH. 
Rapid CT scan and other diagnostic measures are used to determine proper treatment, which may include both medication and surgery. Tracheal intubation is indicated in people with a decreased level of consciousness or other risk of airway obstruction. IV fluids are given to maintain fluid balance, using isotonic rather than hypotonic fluids. Rapid lowering of the blood pressure in those with hypertensive emergency can result in higher functional recovery at 90 days post intracerebral hemorrhage, compared to those who underwent other treatments such as mannitol administration, reversal of anticoagulation (in those previously on anticoagulant treatment for other conditions), surgery to evacuate the hematoma, and standard rehabilitation care in hospital, while showing a similar rate of death at 12%. Medication One review found that antihypertensive therapy to bring down the blood pressure in acute phases appears to improve outcomes. Other reviews found an unclear difference between intensive and less intensive blood pressure control. The American Heart Association and American Stroke Association guidelines in 2015 recommended decreasing the blood pressure to an SBP of 140 mmHg; however, as of 2015 the evidence for this was only tentative. Giving Factor VIIa within 4 hours limits the bleeding and formation of a hematoma; however, it also increases the risk of thromboembolism, and thus overall does not result in better outcomes in those without hemophilia. Frozen plasma, vitamin K, protamine, or platelet transfusions may be given in case of a coagulopathy. Platelet transfusions, however, appear to worsen outcomes in those with spontaneous intracerebral bleeding who are on antiplatelet medication. The specific reversal agents idarucizumab and andexanet alfa may be used to stop continued intracerebral hemorrhage in people taking direct-acting oral anticoagulants (such as factor Xa inhibitors or direct thrombin inhibitors). However, if these specialized medications are not available, prothrombin complex concentrate may also be used. Fosphenytoin or another anticonvulsant is given in case of seizures or lobar hemorrhage. H2 antagonists or proton pump inhibitors are commonly given to try to prevent stress ulcers, a condition linked with ICH. Corticosteroids were once thought to reduce swelling; however, in large controlled studies, corticosteroids have been found to increase mortality rates and are no longer recommended. Surgery Surgery is required if the hematoma is greater than 3 cm (1 in), or if there is a structural vascular lesion or lobar hemorrhage in a young patient. A catheter may be passed into the brain vasculature to close off or dilate blood vessels, avoiding invasive surgical procedures. Aspiration by stereotactic surgery or endoscopic drainage may be used in basal ganglia hemorrhages, although successful reports are limited. A craniectomy holds promise of reduced mortality, but its effects on long-term neurological outcome remain controversial. Prognosis The risk of death from an intraparenchymal bleed in traumatic brain injury is especially high when the injury occurs in the brain stem. Intraparenchymal bleeds within the medulla oblongata are almost always fatal, because they cause damage to cranial nerve X, the vagus nerve, which plays an important role in blood circulation and breathing. This kind of hemorrhage can also occur in the cortex or subcortical areas, usually in the frontal or temporal lobes when due to head injury, and sometimes in the cerebellum. 
Larger volumes of hematoma at hospital admission, as well as greater expansion of the hematoma on subsequent evaluation (usually occurring within 6 hours of symptom onset), are associated with a worse prognosis. Perihematomal edema, or secondary edema surrounding the hematoma, is associated with secondary brain injury, worsening neurological function, and poor outcomes. Intraventricular hemorrhage, or bleeding into the ventricles of the brain, which may occur in 30–50% of patients, is also associated with long-term disability and a poor prognosis. Brain herniation is associated with poor prognoses. For spontaneous intracerebral hemorrhage seen on CT scan, the death rate (mortality) is 34–50% by 30 days after the injury, and half of the deaths occur in the first 2 days. Even though the majority of deaths occur in the first few days after ICH, survivors have a long-term excess mortality rate of 27% compared to the general population. Of those who survive an intracerebral hemorrhage, 12–39% are independent with regard to self-care; others are disabled to varying degrees and require supportive care. Epidemiology The incidence of intracerebral hemorrhage is estimated at 24.6 cases per 100,000 person-years, with the incidence rate being similar in men and women. The incidence is much higher in the elderly, especially those who are 85 or older, who are 9.6 times more likely to have an intracerebral hemorrhage than those of middle age. It accounts for 20% of all cases of cerebrovascular disease in the United States, behind cerebral thrombosis (40%) and cerebral embolism (30%). History Intracerebral hemorrhage was first distinguished from strokes due to insufficient blood flow, so called "leaks and plugs", in 1823. Franklin D. Roosevelt, the 32nd President of the United States, died from a cerebral hemorrhage in 1945, as did Soviet dictator Joseph Stalin in 1953. Research The inflammatory response triggered by stroke has been viewed as harmful in the early stage, focusing on blood-borne leukocytes, neutrophils and macrophages, and resident microglia and astrocytes. A human postmortem study shows that inflammation occurs early and persists for several days after ICH. Modulating microglial activation and polarization might mitigate intracerebral hemorrhage-induced brain injury and improve brain repair. A new area of interest is the role of mast cells in ICH. References Further reading Hemphill JC, 3rd; Greenberg, SM; Anderson, CS; Becker, K; Bendok, BR; Cushman, M; Fung, GL; Goldstein, JN; Macdonald, RL; Mitchell, PH; Scott, PA; Selim, MH; Woo, D (28 May 2015). "Guidelines for the Management of Spontaneous Intracerebral Hemorrhage: A Guideline for Healthcare Professionals From the American Heart Association/American Stroke Association". Stroke: A Journal of Cerebral Circulation. 46 (7): 2032–60. doi:10.1161/STR.0000000000000069. PMID 26022637.
Legionnaires disease
Legionnaires disease is a form of atypical pneumonia caused by any species of Legionella bacteria, quite often Legionella pneumophila. Signs and symptoms include cough, shortness of breath, high fever, muscle pains, and headaches. Nausea, vomiting, and diarrhea may also occur. This often begins 2–10 days after exposure.A legionellosis is any disease caused by Legionella, including Legionnaires disease (a pneumonia), Pontiac fever (a nonpneumonia illness), and Pittsburgh pneumonia, but Legionnaires disease is the most common, so mentions of legionellosis often refer to Legionnaires disease. The bacterium is found naturally in fresh water. It can contaminate hot water tanks, hot tubs, and cooling towers of large air conditioners. It is usually spread by breathing in mist that contains the bacteria. It can also occur when contaminated water is aspirated. It typically does not spread directly between people, and most people who are exposed do not become infected. Risk factors for infection include older age, a history of smoking, chronic lung disease, and poor immune function. Those with severe pneumonia and those with pneumonia and a recent travel history should be tested for the disease. Diagnosis is by a urinary antigen test and sputum culture.No vaccine is available. Prevention depends on good maintenance of water systems. Treatment of Legionnaires disease is with antibiotics. Recommended agents include fluoroquinolones, azithromycin, or doxycycline. Hospitalization is often required. The fatality rate is around 10% for healthy persons and 25% for those with underlying conditions.The number of cases that occur globally is not known. Legionnaires disease is the cause of an estimated 2–9% of pneumonia cases that are acquired outside of a hospital. An estimated 8,000 to 18,000 cases a year in the United States require hospitalization. Outbreaks of disease account for a minority of cases. While it can occur any time of the year, it is more common in the summer and fall. The disease is named after the outbreak where it was first identified, at a 1976 American Legion convention in Philadelphia. Signs and symptoms The length of time between exposure to the bacteria and the appearance of symptoms (incubation period) is generally 2–10 days, but can more rarely extend to as long as 20 days. For the general population, among those exposed, between 0.1 and 5.0% develop the disease, while among those in hospital, between 0.4 and 14% develop the disease.Those with Legionnaires disease usually have fever, chills, and a cough, which may be dry or may produce sputum. Almost all experience fever, while around half have cough with sputum, and one-third cough up blood or bloody sputum. Some also have muscle aches, headache, tiredness, loss of appetite, loss of coordination (ataxia), chest pain, or diarrhea and vomiting. Up to half of those with Legionnaires disease have gastrointestinal symptoms, and almost half have neurological symptoms, including confusion and impaired cognition. "Relative bradycardia" may also be present, which is low to normal heart rate despite the presence of a fever.Laboratory tests may show that kidney functions, liver functions, and electrolyte levels are abnormal, which may include low sodium in the blood. Chest X-rays often show pneumonia with consolidation in the bottom portion of both lungs. 
Distinguishing Legionnaires disease from other types of pneumonia by symptoms or radiologic findings alone is difficult; other tests are required for definitive diagnosis.People with Pontiac fever, a much milder illness caused by the same bacterium, experience fever and muscle aches without pneumonia. They generally recover in 2–5 days without treatment. For Pontiac fever, the time between exposure and symptoms is generally a few hours to two days. Cause Over 90% of cases of Legionnaires disease are caused by Legionella pneumophila. Other types include L. longbeachae, L. feeleii, L. micdadei, and L. anisa. Transmission Legionnaires disease is usually spread by the breathing in of aerosolized water or soil contaminated with the Legionella bacteria. Experts have stated that Legionnaires disease is not transmitted from person to person. In 2014, one case of possible spread from someone sick to the caregiver occurred. Rarely, it has been transmitted by direct contact between contaminated water and surgical wounds. The bacteria grow best at warm temperatures and thrive at water temperatures between 25 and 45 °C (77 and 113 °F), with an optimum temperature of 35 °C (95 °F). Temperatures above 60 °C (140 °F) kill the bacteria. Sources where temperatures allow the bacteria to thrive include hot water tanks, cooling towers, and evaporative condensers of large air conditioning systems, such as those commonly found in hotels and large office buildings. Though the first known outbreak was in Philadelphia, cases of legionellosis have occurred throughout the world. Reservoirs L. pneumophila thrives in aquatic systems, where it is established within amoebae in a symbiotic relationship. Legionella bacteria survive in water as intracellular parasites of water-dwelling protozoa, such as amoebae. Amoebae are often part of biofilms, and once Legionella and infected amoebae are protected within a biofilm, they are particularly difficult to destroy.In the built environment, central air conditioning systems in office buildings, hotels, and hospitals are sources of contaminated water. Other places the bacteria can dwell include cooling towers used in industrial cooling systems, evaporative coolers, nebulizers, humidifiers, whirlpool spas, hot water systems, showers, windshield washers, fountains, room-air humidifiers, ice-making machines, and misting systems typically found in grocery-store produce sections.The bacteria may also be transmitted from contaminated aerosols generated in hot tubs if the disinfection and maintenance programs are not followed rigorously. Freshwater ponds, creeks, and ornamental fountains are potential sources of Legionella. The disease is particularly associated with hotels, fountains, cruise ships, and hospitals with complex potable water systems and cooling systems. Respiratory-care devices such as humidifiers and nebulizers used with contaminated tap water may contain Legionella species, so using sterile water is very important. Other sources include exposure to potting mix and compost. Mechanism Legionella spp. enter the lungs either by aspiration of contaminated water or inhalation of aerosolized contaminated water or soil. In the lung, the bacteria are consumed by macrophages, a type of white blood cell, inside of which the Legionella bacteria multiply, causing the death of the macrophage. Once the macrophage dies, the bacteria are released from the dead cell to infect other macrophages. 
Virulent strains of Legionella kill macrophages by blocking the fusion of phagosomes with lysosomes inside the host cell; normally, the bacteria are contained inside the phagosome, which merges with a lysosome, allowing enzymes and other chemicals to break down the invading bacteria. Diagnosis People of any age may develop Legionnaires disease, but the illness most often affects middle-aged and older people, particularly those who smoke cigarettes or have chronic lung disease. Immunocompromised people are also at higher risk. Pontiac fever most commonly occurs in those who are otherwise healthy.The most useful diagnostic tests detect the bacteria in coughed-up mucus, find Legionella antigens in urine samples, or allow comparison of Legionella antibody levels in two blood samples taken 3–6 weeks apart. A urine antigen test is simple, quick, and very reliable, but only detects L. pneumophila serogroup 1, which accounts for 70% of disease caused by L. pneumophila, which means use of the urine antigen test alone may miss as many as 30% of cases. This test was developed by Richard Kohler in 1982. When dealing with L. pneumophila serogroup 1, the urine antigen test is useful for early detection of Legionnaires disease and initiation of treatment, and has been helpful in early detection of outbreaks. However, it does not identify the specific subtypes, so it cannot be used to match the person with the environmental source of infection. The Legionella bacteria can be cultured from sputum or other respiratory samples. Legionella spp. stain poorly with Gram stain, stain positive with silver, and are cultured on charcoal yeast extract with iron and cysteine (CYE agar). A significant under-reporting problem occurs with legionellosis. Even in countries with effective health services and readily available diagnostic testing, about 90% of cases of Legionnaires disease are missed. This is partly due to the disease being a relatively rare form of pneumonia, which many clinicians may not have encountered before, thus may misdiagnose. A further issue is that people with legionellosis can present with a wide range of symptoms, some of which (such as diarrhea) may distract clinicians from making a correct diagnosis. Prevention Although the risk of Legionnaires disease being spread by large-scale water systems cannot be eliminated, it can be greatly reduced by writing and enforcing a highly detailed, systematic water safety plan appropriate for the specific facility involved (office building, hospital, hotel, spa, cruise ship, etc.) Some of the elements that such a plan may include are: Keep water temperature either above or below the 20–50 °C (68–122 °F) range in which the Legionella bacterium thrives. Prevent stagnation, for example, by removing from a network of pipes any sections that have no outlet (dead ends). Where stagnation is unavoidable, as when a wing of a hotel is closed for the off-season, systems must be thoroughly disinfected just prior to resuming normal operation. Prevent the buildup of biofilm, for example, by not using (or by replacing) construction materials that encourage its development, and by reducing the quantity of nutrients for bacterial growth that enter the system. Periodically disinfect the system, by high heat or a chemical biocide, and use chlorination where appropriate. Treatment of water with copper-silver ionization or ultraviolet light may also be effective. 
System design (or renovation) can reduce the production of aerosols and reduce human exposure to them, by directing them well away from building air intakes. An effective water safety plan also covers such matters as training, record-keeping, communication among staff, contingency plans, and management responsibilities. The format and content of the plan may be prescribed by public health laws or regulations. To inform the water safety plan, undertaking a site-specific Legionella risk assessment is often recommended in the first instance. The Legionella risk assessment identifies the hazards and the level of risk they pose, and provides recommendations for control measures to put in place within the overarching water safety plan. Treatment Effective antibiotics include most macrolides, tetracyclines, ketolides, and quinolones. Legionella spp. multiply within the cell, so any effective treatment must have excellent intracellular penetration. Current treatments of choice are the respiratory tract quinolones (levofloxacin, moxifloxacin, gemifloxacin) or newer macrolides (azithromycin, clarithromycin, roxithromycin). The antibiotics used most frequently have been levofloxacin, doxycycline, and azithromycin. Macrolides (azithromycin) are used in all age groups, while tetracyclines (doxycycline) are prescribed for children above the age of 12 and quinolones (levofloxacin) above the age of 18. Rifampicin can be used in combination with a quinolone or macrolide, although whether rifampicin is an effective antibiotic for treatment is uncertain; the Infectious Diseases Society of America does not recommend the use of rifampicin with added regimens. Tetracyclines and erythromycin led to improved outcomes compared to other antibiotics in the original American Legion outbreak. These antibiotics are effective because they have excellent intracellular penetration in Legionella-infected cells. The recommended treatment is 5–10 days of levofloxacin or 3–5 days of azithromycin, but in people who are immunocompromised, have severe disease, or have other pre-existing health conditions, longer antibiotic use may be necessary. During outbreaks, prophylactic antibiotics have been used to prevent Legionnaires disease in high-risk individuals who have possibly been exposed. The mortality at the original American Legion convention in 1976 was high (29 deaths in 182 infected individuals) because the antibiotics used (including penicillins, cephalosporins, and aminoglycosides) had poor intracellular penetration. Mortality has plunged to less than 5% if therapy is started quickly; delay in giving the appropriate antibiotic leads to higher mortality. Prognosis The fatality rate of Legionnaires disease has ranged from 5% to 30% during various outbreaks and approaches 50% for nosocomial infections, especially when treatment with antibiotics is delayed. Hospital-acquired Legionella pneumonia has a fatality rate of 28%, and the principal source of infection in such cases is the drinking-water distribution system. Epidemiology Legionnaires disease acquired its name in July 1976, when an outbreak of pneumonia occurred among people attending a convention of the American Legion at the Bellevue-Stratford Hotel in Philadelphia. Of the 182 reported cases, mostly men, 29 died. On 18 January 1977, the causative agent was identified as a previously unknown strain of bacteria, subsequently named Legionella, and the species that caused the outbreak was named Legionella pneumophila. 
Following this discovery, unexplained outbreaks of severe respiratory disease from the 1950s were retrospectively attributed to Legionella. Legionnaires disease also became a prominent historical example of an emerging infectious disease.Outbreaks of Legionnaires disease receive significant media attention, but this disease usually occurs in single, isolated cases not associated with any recognized outbreak. When outbreaks do occur, they are usually in the summer and early autumn, though cases may occur at any time of year. Most infections occur in those who are middle-aged or older. National surveillance systems and research studies were established early, and in recent years, improved ascertainment and changes in clinical methods of diagnosis have contributed to an upsurge in reported cases in many countries. Environmental studies continue to identify novel sources of infection, leading to regular revisions of guidelines and regulations. About 8,000 to 18,000 cases of Legionnaires disease occur each year in the United States, according to the Bureau of Communicable Disease Control.Between 1995 and 2005, over 32,000 cases of Legionnaires disease and more than 600 outbreaks were reported to the European Working Group for Legionella Infections. The data on Legionella are limited in developing countries, and Legionella-related illnesses likely are underdiagnosed worldwide. Improvements in diagnosis and surveillance in developing countries would be expected to reveal far higher levels of morbidity and mortality than are currently recognised. Similarly, improved diagnosis of human illness related to Legionella species and serogroups other than Legionella pneumophila would improve knowledge about their incidence and spread.A 2011 study successfully used modeling to predict the likely number of cases during Legionnaires outbreaks based on symptom onset dates from past outbreaks. In this way, the eventual likely size of an outbreak can be predicted, enabling efficient and effective use of public-health resources in managing an outbreak.During the COVID-19 pandemic, some researchers and organisations raised concerns about the impact of the COVID-19 lockdowns on Legionnaires disease outbreaks. Additionally, at least two people in England died from a co-infection of Legionella and SARS-CoV-2. Outbreaks An outbreak is defined as two or more cases where the onset of illness is closely linked in time (weeks rather than months) and space, where a suspicion or evidence exists of a common source of infection, with or without microbiological support (i.e. common spatial location of cases from travel history).In April 1985, 175 people in Stafford, England, were admitted to the District or Kingsmead Stafford Hospitals with chest infection or pneumonia. A total of 28 people died. Medical diagnosis showed that Legionnaires disease was responsible and the immediate epidemiological investigation traced the source of the infection to the air-conditioning cooling tower on the roof of Stafford District Hospital.In March 1999, a large outbreak in the Netherlands occurred during the Westfriese Flora flower exhibition in Bovenkarspel; 318 people became ill and at least 32 people died. This was the second-deadliest outbreak since the 1976 outbreak and possibly the deadliest, as several people were buried before Legionnaires disease had been diagnosed.The worlds largest outbreak of Legionnaires disease happened in July 2001, with people appearing at the hospital on 7 July, in Murcia, Spain. 
More than 800 suspected cases were recorded by the time the last case was treated on 22 July; 636–696 of these cases were estimated and 449 confirmed (so, at least 16,000 people were exposed to the bacterium) and six died, a case-fatality rate around 1%.In September 2005, 127 residents of a nursing home in Canada became ill with L. pneumophila. Within a week, 21 of the residents had died. Culture results at first were negative, which is not unusual, as L. pneumophila is a "fastidious" bacterium, meaning it requires specific nutrients, living conditions, or both to grow. The source of the outbreak was traced to the air-conditioning cooling towers on the nursing homes roof.In an outbreak in lower Quebec City, Canada, 180 people were affected with 13 resulting deaths due to contaminated water in a cooling tower.In November 2014, 302 people were hospitalized following an outbreak of legionellosis in Portugal, and seven related deaths were reported. All cases emerged in three civil parishes from the municipality of Vila Franca de Xira in the northern outskirts of Lisbon, and were treated in hospitals of the greater Lisbon area. The source is suspected to be located in the cooling towers of the fertilizer plant Fertibéria.Twelve people were diagnosed with the disease in an outbreak in the Bronx, New York, in December 2014; the source was traced to contaminated cooling towers at a housing development. In July and August 2015, another, unrelated outbreak in the Bronx killed 12 people and made about 120 people sick; the cases arose from a cooling tower on top of a hotel. At the end of September, another person died of the disease and 13 were sickened in yet another unrelated outbreak in the Bronx. The cooling towers from which the people were infected in the latter outbreak had been cleaned during the summer outbreak, raising concerns about how well the bacteria could be controlled.On 28 August 2015, an outbreak of Legionnaires disease was detected at San Quentin State Prison in Northern California; 81 people were sickened and the cause was sludge that had built up in cooling towers.Between June 2015, and January 2016, 87 cases of Legionnaires disease were reported by the Michigan Department of Health and Human Services for the city of Flint, Michigan, and surrounding areas. The outbreak may have been linked to the Flint water crisis, in which the citys water source was changed to a cheaper and inadequately treated source. Ten of those cases were fatal.In November 2017, an outbreak was detected at Hospital de São Francisco Xavier, Lisbon, Portugal, with up to 53 people being diagnosed with the disease and five of them dying from it.In Quincy, Illinois, at the Illinois Veterans Home, a 2015 outbreak of the disease killed 12 people and sickened more than 50 others. It was believed to be caused by infected water supply. Three more cases were identified by November 2017.In the autumn of 2017, 22 cases were reported in a Legionnaires disease outbreak at Disneyland in Anaheim, California. It was believed to have been caused by a cooling tower that releases mist for the comfort of visitors. 
The contaminated droplets likely spread to the people in and beyond the park. In July 2019, 11 former guests of the Sheraton Atlanta hotel were diagnosed with the disease, with 55 additional probable cases. In September 2019, 141 visitors to the Western North Carolina Mountain State Fair were diagnosed with Legionnaires disease, with four reported deaths, after a hot tub exhibit was suspected of having developed and spread the bacteria. At least one additional exposure apparently occurred during the Asheville Quilt Show, which took place a few weeks after the fair in the same building where the hot tub exhibit was held; the building had been sanitized after the outbreak. In December 2019, the government of Western Australia's Department of Health was notified of four cases of Legionnaires disease. Those exposed had recently visited near Bali's Ramayana Resort and Spa in central Kuta. References External links Legionnaires disease at Curlie "Legionnaires Disease". MedlinePlus. U.S. National Library of Medicine.
Kernicterus
Kernicterus is a bilirubin-induced brain dysfunction. The term was coined in 1904 by Christian Georg Schmorl. Bilirubin is a naturally occurring substance in the body of humans and many other animals, but it is neurotoxic when its concentration in the blood is too high, a condition known as hyperbilirubinemia. Hyperbilirubinemia may cause bilirubin to accumulate in the grey matter of the central nervous system, potentially causing irreversible neurological damage. Depending on the level of exposure, the effects range from clinically unnoticeable to severe brain damage and even death. When hyperbilirubinemia increases past a mild level, it leads to jaundice, raising the risk of progressing to kernicterus. When this happens in adults, it is usually because of liver problems. Newborns are especially vulnerable to hyperbilirubinemia-induced neurological damage because, in the earliest days of life, the still-developing liver is heavily exercised by the breakdown of fetal hemoglobin as it is replaced with adult hemoglobin, and the blood–brain barrier is not as developed. Mildly elevated serum bilirubin levels are common in newborns, and neonatal jaundice is not unusual, but bilirubin levels must be carefully monitored in case they start to climb, in which case more aggressive therapy is needed, usually via light therapy but sometimes even via exchange transfusion. Classification Acute bilirubin encephalopathy (ABE) ABE is an acute state of elevated bilirubin in the central nervous system. Clinically, it encompasses a wide range of symptoms. These include lethargy, decreased feeding, hypotonia or hypertonia, a high-pitched cry, spasmodic torticollis, opisthotonus, setting sun sign, fever, seizures, and even death. If the bilirubin is not rapidly reduced, ABE quickly progresses to chronic bilirubin encephalopathy. Chronic bilirubin encephalopathy (CBE) CBE is a chronic state of severe bilirubin-induced neurological lesions. Reduction of bilirubin in this state will not reverse the sequelae. Clinically, manifestations of CBE include: movement disorders (dyskinetic cerebral palsy, often with spasticity; 60% have severe motor disability and are unable to walk); auditory dysfunction (auditory neuropathy, ANSD); visual/oculomotor impairments (nystagmus, strabismus, impaired upward or downward gaze, and/or cortical visual impairment; in rare cases, decreased visual acuity or blindness can occur); dental enamel hypoplasia/dysplasia of the deciduous teeth; gastroesophageal reflux and impaired digestive function; and slightly decreased intellectual function, although most individuals with kernicterus (approximately 85%) fall in the normal or dull-normal range. Epilepsy is uncommon. These impairments are associated with lesions in the basal ganglia, the auditory nuclei of the brain stem, and the oculomotor nuclei of the brain stem. The cortex and white matter are subtly involved, and the cerebellum may be involved; severe cortical involvement is uncommon. Subtle bilirubin encephalopathy (SBE) SBE is a chronic state of mild bilirubin-induced neurological dysfunction (BIND). Clinically, this may result in neurological, learning and movement disorders, isolated hearing loss, and auditory dysfunction. In the past it was thought that kernicterus (KI) often caused an intellectual disability. This was assumed because of the accompanying difficulty with hearing, which is typically not detected on a normal audiogram, together with impairments of speech and choreoathetosis. 
With advances in technology, this has proven not to be the case, as those living with KI have repeatedly demonstrated their intelligence using augmentative communication devices. Although most individuals with kernicteric cerebral palsy have normal intelligence, some children with mild choreoathetosis develop dull-normal intelligence or mild intellectual disability even without auditory dysfunction. Causes In the vast majority of cases, kernicterus is associated with unconjugated hyperbilirubinemia during the neonatal period. The blood–brain barrier is not fully functional in neonates and therefore bilirubin is able to cross into the central nervous system. Moreover, neonates have much higher levels of bilirubin in their blood due to: Rapid breakdown of fetal red blood cells immediately prior to birth, with subsequent replacement by normal adult human red blood cells. This breakdown of fetal red blood cells releases large amounts of bilirubin. Severe hemolytic disease of the newborn. Many children who survive exhibit permanent mental impairment or damage to motor areas of the brain because of precipitation of bilirubin in neurons. Neonates have a limited ability to metabolize and excrete bilirubin. The sole pathway for bilirubin elimination is through the uridine diphosphate glucuronosyltransferase isoform 1A1 (UGT1A1) enzyme, which performs a reaction called "glucuronidation". This reaction adds a large sugar (glucuronic acid) to the bilirubin, which makes the compound more water-soluble, so it can be more readily excreted via the urine and/or the feces. The UGT1A1 enzyme is not active in appreciable amounts until several months after birth. Apparently, this is a developmental compromise, since the maternal liver and placenta perform glucuronidation for the fetus. Administration of aspirin to neonates and infants. Aspirin displaces bilirubin from serum albumin, thus generating an increased level of free bilirubin which can cross the developing blood–brain barrier. This can be life-threatening. Bilirubin is known to accumulate in the gray matter of neurological tissue, where it exerts direct neurotoxic effects. It appears that its neurotoxicity is due to mass destruction of neurons by apoptosis and necrosis. Risk factors Premature birth Rh incompatibility Polycythemia – often present in neonates Sulfonamides (e.g. co-trimoxazole) – displace bilirubin from serum albumin Crigler–Najjar syndrome, type I G6PD deficiency Bruising Gilbert's syndrome and G6PD deficiency occurring together especially increase the risk for kernicterus. Diagnosis In neonates with kernicterus, the Moro reflex may be absent or symmetrically reduced. Prevention Measuring the serum bilirubin is helpful in evaluating a baby's risk for developing kernicterus. These numbers can then be plotted on the Bhutani nomogram. In neonates with hyperbilirubinemia, light therapy may be effective in reducing the serum bilirubin level. More severe cases may require the use of exchange transfusion. Treatment Currently no effective treatment exists for kernicterus. Future therapies may include neuroregeneration. A handful of patients have undergone deep brain stimulation and experienced some benefit. Drugs such as baclofen, clonazepam, gabapentin, and Artane (trihexyphenidyl) are often used to manage the movement disorders associated with kernicterus. Proton pump inhibitors are also used to help with reflux. Cochlear implants and hearing aids have also been known to improve the hearing loss that can come with kernicterus (auditory neuropathy – ANSD).
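As noted above under Prevention, serum bilirubin measurements are interpreted by plotting them against the newborn's age in hours on the Bhutani nomogram, whose percentile curves separate readings into low, low-intermediate, high-intermediate and high risk zones. The short Python sketch below illustrates only the mechanics of such a lookup, assuming nomogram-style percentile curves are supplied as data: the function name risk_zone is invented for illustration, and the curve points at the bottom are placeholders rather than clinical values; real use would require the published curves and clinical judgment.

```python
from bisect import bisect_left

def risk_zone(age_hours, bilirubin_mg_dl, p40_curve, p75_curve, p95_curve):
    """Classify a total serum bilirubin value against nomogram-style
    percentile curves. Each curve is a list of (age_hours, bilirubin) points;
    the real Bhutani nomogram provides hour-specific percentile tracks."""
    def interp(curve, t):
        # Linear interpolation between the two nearest tabulated ages.
        ages = [a for a, _ in curve]
        if t <= ages[0]:
            return curve[0][1]
        if t >= ages[-1]:
            return curve[-1][1]
        i = bisect_left(ages, t)
        (a0, b0), (a1, b1) = curve[i - 1], curve[i]
        return b0 + (b1 - b0) * (t - a0) / (a1 - a0)

    if bilirubin_mg_dl >= interp(p95_curve, age_hours):
        return "high risk"
    if bilirubin_mg_dl >= interp(p75_curve, age_hours):
        return "high-intermediate risk"
    if bilirubin_mg_dl >= interp(p40_curve, age_hours):
        return "low-intermediate risk"
    return "low risk"

# Placeholder curve points for illustration only -- NOT clinical values.
p40 = [(24, 5.0), (48, 8.0), (72, 11.0), (96, 13.0)]
p75 = [(24, 7.0), (48, 11.0), (72, 13.5), (96, 15.5)]
p95 = [(24, 8.5), (48, 13.0), (72, 16.0), (96, 17.5)]
print(risk_zone(60, 14.2, p40, p75, p95))
```

The design point is simply that the nomogram is a set of age-indexed percentile thresholds, so the classification reduces to interpolating the curves at the infant's age and comparing the measured value against them.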
Notable people Claudio Tiribelli, Italian hepatologist, studies on kernicterus References External links
Tungiasis
Tungiasis is an inflammatory skin disease caused by infection with the female ectoparasitic flea Tunga penetrans, also known as the chigoe, chigo, chigoe flea, chigo flea, jigger, nigua, sand flea, or burrowing flea (and not to be confused with the chigger, a different arthropod). The flea and the disease that it causes are found in the tropical parts of Africa, the Caribbean, Central and South America, and India. Tunga penetrans is the smallest known flea, measuring 1 mm across. It is also known in Latin America as the nigua and bicho de pie (Spanish) or bicho de pé (Portuguese), literally "foot bug". Tunga penetrans is a member of the genus Tunga, which comprises 13 species. Tungiasis causes skin inflammation, severe pain, itching, and a lesion at the site of infection that is characterized by a black dot at the center of a swollen red area, surrounded by what looks like a white halo. Desquamation of the skin is always seen, especially after the flea expands during hypertrophy. As of 2009, tungiasis is present worldwide in 88 countries with varying degrees of incidence. This disease is of special public health concern in highly endemic areas such as Nigeria, Trinidad and Tobago, and Brazil, where its prevalence, especially in poor communities, has been known to approach 50%. The chigoe flea is properly classified as a member of the order Siphonaptera, as it is a flea. Although commonly referred to as chiggers, true chiggers are mites, which are minute arachnids. Mites penetrate the skin and feed on skin cells that are broken down by an enzyme they secrete from their mouthparts, but they do not lay eggs in the host as T. penetrans does. Moreover, in mites, the adult and the larval forms both feed on other animals. This is not the case with T. penetrans, as only the adults feed on mammals and it is only the female that stays attached to the host. Tunga penetrans is also known by the following names: chigoe flea, sand flea, nigua, chigger flea, jigger flea, bicho de pé, pico, sikka, kuti, and piqui, among many others. Signs and symptoms The symptoms of this disease include severe pruritus, pain, inflammation and swelling, and lesions and ulcerations with black dots in the center. If a Tunga infection is left untreated, secondary infections, such as bacteremia, tetanus, necrosis, and gangrene, may be expected. In all cases, tungiasis by itself causes only morbidity, though secondary infection may lead to mortality. The life cycle section presents the Fortaleza stages from the flea's developmental perspective; the discussion here is specific to the symptoms of human infection. The clinical presentation in humans follows the Fortaleza Classification, as the stage of infection determines the symptoms present. The following discussion gives an overview of the symptoms beginning in stage 2, because patients are not likely to present at the early stages of infection, mostly because the flea's burrowing is usually not felt. This may be due to a keratolytic enzyme secreted during stage 1. The patient with a single flea may present as early as stage 2 when, though the erythema is barely perceptible, a boring pain and the curious sensation of pleasant itching occur. This inflammatory reaction is the initial immunological response to the infestation. Heavily infested patients may not notice a stage 2 infection due to the other fleas' causing irritation as well. Feces may be seen, but this is more common in the 3rd stage.
Around the third day after penetration, erythema and skin tenderness are felt, accompanied by pruritus (severe itching) and a black furuncular nodule surrounded by a white halo of stretched skin caused by the expansion of the flea. Fecal coils may protrude from the center of the nodule, where the flea's anus is facing upward. They should be washed off quickly, as the feces may otherwise remain in the skin. During this 3a substage, pain can be severe, especially at night or, if the nodule is on the foot, while walking. Eggs will also begin to be released and a watery secretion can be observed. The radical metamorphosis during the 3rd to 6th day after penetration, or neosomy, precedes the formation of a small caldera-like rim (rampart) as a result of the increased thickness of the flea's chitin exoskeleton. During the caldera formation, the nodule shrinks a bit and looks as if it is beginning to dry out; this takes 2 weeks and comprises substage 3b. At the third week after penetration and substage 4a, the release of eggs will have stopped and the lesion will become smaller and more wrinkled. As the flea nears death, fecal and water secretion will stop altogether. Pain, tenderness, and skin inflammation will still be present. Around the 25th day after penetration, the lesion looks like a black crust; the flea's carcass is removed by host repair mechanisms and the skin begins to heal. With the flea gone, inflammation may still persist for a while. Although patients would not present within the 5th stage of tungiasis, as the flea would be dead and no longer in the body, this stage is characterized by the reorganization of the skin (1–4 weeks) and a circular residue of 5–10 mm in diameter around the site of penetration. An intraepithelial abscess, which developed due to the presence of the flea, will drain and later heal. Although these disease residues would persist for a few months, tungiasis is no longer present. In severe cases, ulcers are common, as well as complete tissue and nail deformation. A patient may be unable to walk due to severe pain if too many of the lesions are present on the feet. Suppuration (pus formation), tissue death, auto-amputation of digits (via ainhum), and chronic lymphedema may also be seen. If the patient is not vaccinated, tetanus is often a complication due to secondary infection. Necrosis and gangrene are other common complications of severe infestation and superinfection. Staphylococcus aureus and Wolbachia endobacteria can be transmitted by the chigoe flea, as well as nearly 150 other different pathogens. For these reasons, the chigoe flea should be removed as soon as possible. Incubation Because of the relatively rapid onset of tungiasis, the incubation period tends to be short. Although some reddening around the site of penetration occurs, the first symptoms are perceived in stage 2 as itching and severe pain, usually a day after penetration. Cause Tungiasis is strictly caused by chigoe fleas (the term transmission does not apply, because Tunga penetrans is itself responsible for the disease). The preponderance of tungiasis lesions on the toes may be because the chigoe flea is a poor jumper, attaining a height of only 20 cm. But the reality is more complex; for example, jumping ability cannot explain why hands are the second-most affected body part. Lesions on the hands are better explained by playing in the sand and by noting that hands are often used to remove sand from other parts of the body.
The occurrence of tungiasis lesions on the toes, between them, and on the soles can be easily explained, because most of the victims are poor, walk barefoot, and live in places where the sand (home to chigoe fleas) constitutes the floor. The rate of incidence is therefore greatly increased in poor communities and populations because of the lack of adequate housing. This occurs in significantly higher proportions during the peak of the dry season in local communities. Reservoirs and transmission T. penetrans has been documented to use various warm-blooded animals as reservoir hosts, including humans, pigs, dogs, cats, rats, sheep, cattle, donkeys, monkeys, birds, and elephants. These hosts directly propagate the disease by being the origin of the next generation of fleas. Once the female flea expels 100–200 eggs, the cycle of transmission begins again. Lifecycle T. penetrans eggs are, on average, 604 μm long. The larva will hatch from the egg within one to six days, assuming the environmental conditions (e.g., moisture, humidity, etc.) are favorable. After hatching, the flea will progress through two instar phases. This is unique in that most fleas go through three instead. Over the course of that development, the flea will first decrease in size from its just-hatched size of 1,500 μm to 1,150 μm (first instar) before growing to 2,900 μm (second instar). About 6 to 8 days after hatching, the larva pupates and builds a cocoon around itself. Because it lives mostly on and below the surface of the sand, sand is used to stabilize the cocoon and help promote its development. Environmental disturbances such as rain or a lack of sand have been shown to decrease incidence, most likely by reducing the environmental factors (i.e., sand) on which the flea depends for overall growth. Barring any disturbance to the cocoon, an adult flea will emerge from the puparium after 9–15 days. In the adult phase, the flea will occasionally feed on unsuspecting animals. Only once the female burrows into the skin can reproduction occur, as the male and female show no interest in each other in the wild. The male flea dies after copulation. The female flea continues its in vivo ectodevelopment, described in stages by the Fortaleza classification of tungiasis. Fortaleza classification In a seminal paper on the biology and pathology of Tunga penetrans, Eisele et al. (2003) provided and detailed the five stages of tungiasis, thereby describing the in vivo development of the female chigoe flea for the first time. In dividing the natural history of the disease, the Fortaleza Classification formally describes the last part of the female flea's life cycle, in which it burrows into its host's skin, expels eggs, and dies. Due to the nature of the discussion, overlap with other sections, particularly the one on symptoms, is unavoidable. Stage 1 is characterized by the penetration of the skin by the female chigoe flea. Running along the body, the female uses its posterior legs to push its body upward by an angle of 45–90 degrees. Penetration then starts, beginning with the proboscis going through the epidermis. By stage 2 (day 1–2), penetration is complete and the flea has burrowed most of its body into the skin. Only the anus, the copulatory organs, and four rear air holes, called stigmata in fleas, remain outside the epidermis. The anus excretes feces that are thought to attract male fleas for mating, as described in a later section.
The hypertrophic zone between tergites 2 and 3 in the abdominal region begins to expand a day or two after penetration and takes on the appearance of a life belt. During this time, the flea begins to feed on the host's blood. Stage 3 is divided into two substages, 3a and 3b. Stage 3a occurs 2–3 days after penetration is complete. In substage 3a, the flea's midsection swells, balloon-like, to the size of a pea. This expansion of the flea causes its integument to be stretched thin, a process called physogastrism. The swelling ends in the condition of physogastry and results in the appearance of a white halo around a black dot at the center of the lesion. That dot is the rear end of the flea. In substage 3b, the chitin exoskeleton of tergites 2 and 3 increases in thickness, which gives the structure the look of a miniature caldera. Egg release is common in substage 3b, as are fecal coils. The eggs tend to stick to the skin. At about the 3rd week after penetration, stage 4 begins, which is also divided into two substages. In 4a, the flea loses its signs of vitality and appears near death. As a result, the lesion shrinks in size, turns brown, and appears wrinkled. The death of the flea marks the beginning of substage 4b (around day 25 post-penetration), as the body begins to eliminate the parasite through skin repair mechanisms (e.g. shedding and subsequent skin repair). At this phase, the lesion is seen as brown or black. By the 5th stage of tungiasis, the carcass of the T. penetrans flea has been expelled and circular skin residues of the infection remain. There are only lingering symptoms at this time, described in the next section. Morphology In a study of 1000 freshly ejected T. penetrans eggs, it was found that females are generally smaller than males for all criteria. In some cases, though, females had a bigger epipharynx and maxillar palpus. Due to its burrowing activity, the chigoe flea has developed a well-developed lacinia and epipharynx that are used to penetrate the skin. Overall, the flea's head is relatively flattened, which again aids in burrowing through the epidermal and dermal layers. Investigators have also found that adult T. penetrans have different morphologies with respect to the shape of their head. Some have a rounded head, others have head shapes that resemble ski ramps more than anything else; still others demonstrate head shapes that are very linear with a slight bulge at the nose. These morphologies were seen to be host-specific, as only fleas of some head types were found in specific hosts. This, along with genetic differences among the T. penetrans fleas that infect different host animals, may suggest that several closely related species have been grouped taxonomically under one binomial name. Though the chigoe flea resembles most others in morphology, it has a hypertrophic region between tergites 2 and 3. As stated in Eisele et al. (2003), tergites 2 and 3, as well as the abdominal sternites, stretch considerably and are bent apart. Chitinous clasps that are built for the abdominal enlargement surround these regions and hold onto the hypertrophic zone, giving them the appearance of a three-leafed clover. (See image 7 of the life cycle diagram.) Surprisingly, the rest of the flea, including the head and the thorax, does not change in shape. With the rapid expansion of the flea, its morphology is now vastly different. It has gone from the smallest flea in the world to a bulging mass that measures 5–10 mm in diameter.
This results in a volume that is 2,000 to 3,000 times what it used to be. Reproduction Females have a depression or groove at their abdominal end, whereas the males have their protrusive copulatory organs in that same region. These morphological differences reflect the way the male and female copulate. In the first step toward copulation, the female penetrates a host organism while still ungravid. It is only there that the male will find her and copulate. Copulation of adults has not been observed in the wild. With the female reproductive organs pointing outward, the male will place his reproductive organs "in direction to the upright abdominal end of the female" to copulate. Having copulated for only a few seconds to 2 minutes, the male will then begin to search for another female. After copulation is complete, the male will die, although sometimes he will take a blood meal before doing so. Eggs will be expelled whether or not they have been fertilized. The chigoe flea eggs' average length is 604 μm, and the just-hatched larvae, in their first instar, have an average length of 1,500 μm. At the second and last instar (T. penetrans is unique among fleas in that it has only two, instead of three, instars), the larvae decrease in size to 1,150 μm after growing to at least 2,900 μm. The development from instar 1 to instar 2 lasts less than one day. On the whole, Tunga penetrans does not do very well in terms of its Darwinian fitness. In a laboratory setting in which different media were provided for larval growth, the rate of survival from egg to adult in the best medium was 1.05%. Only 15% of the eggs were found to develop into larvae, and of those, only 14% formed a cocoon. Moreover, only half of the pupae reached the adult phase (0.15 × 0.14 × 0.5 ≈ 1.05%), resulting in a sex-ratio disequilibrium. Although these results reflect a laboratory setting, the general lack of success of T. penetrans's reproductive (opportunistic) r-strategy is surprising given the number of fleas that a single person can attract. The low survival rate suggests that a concentrated public health effort directed at any point in the flea's life cycle is likely to deal a crippling blow to the overall population of the flea in the area. Diagnosis There are no diagnostic tests for tungiasis, most likely because the parasite is ectoparasitic with visible symptoms. Identification of the parasite through removal, and a patient's travel history, should suffice for diagnosis, though the latter is clearly more useful than the former. Localization of the lesion may be a useful diagnostic method for the clinician. A biopsy may be done, though again, it is not required for diagnosis. Prevention Due to the high number of hosts, eradication of tungiasis is not feasible, at least not easily so. Public health and prevention strategies should therefore be pursued with elimination as the target. Better household hygiene, including having a cemented rather than a sand floor, and washing it often, would lower the rates of tungiasis significantly. Though vaccines would be useful, due to the ectoparasitic nature of the chigoe flea, they are neither a feasible nor an effective tool against tungiasis. Nevertheless, due to the high incidence of secondary infection, those at risk of tungiasis should get vaccinated against tetanus. A better approach is to use repellents that specifically target the chigoe flea. One very successful repellent is called Zanzarin, a derivative of coconut oil, jojoba oil, and aloe vera.
In a recent study involving two cohorts, the infestation rates dropped by 92% on average for the first cohort and 90% for the other. Likewise, the infestation intensity in the two cohorts dropped by 86% and 87%, respectively. The non-toxic nature of Zanzarin, combined with its "remarkable regression of the clinical pathology", makes this a tenable public health tool against tungiasis. The use of pesticides such as DDT has also led to the elimination of Tunga penetrans, but this control/prevention strategy should be utilized very carefully, if at all, because of the possible side effects such pesticides can have on the greater biosphere. In the 1950s, there was a worldwide effort to eradicate malaria. As part of that effort, Mexico launched the Campaña Nacional para la Erradicación de Paludismo, or the National Campaign for the Eradication of Malaria. By spraying DDT in homes, Anopheles, a genus of mosquitoes known to carry the deadly Plasmodium falciparum, was mostly eliminated. As a consequence of this national campaign, other arthropods were either eliminated or significantly reduced in number, including the reduviid bug responsible for Chagas disease (American trypanosomiasis) and T. penetrans. Controlled, in-home spraying of DDT is effective, as it protects the home against arthropods while not contaminating the local water supplies or doing as much ecological damage as was once the case when DDT was first introduced. While other species gradually gained resistance to DDT and other insecticides that were used, T. penetrans did not; as a result, the incidence of tungiasis in Mexico is very low when compared to the rest of Latin America, especially Brazil, where rates in poor areas have been known to be as high as or higher than 50%. There was a 40-year period with no tungiasis cases in Mexico. It was not until August 1989 that three Mexican patients presented with the disease. Though there were other cases of tungiasis reported thereafter, all were acquired in Africa. Treatment As the disease is self-limiting, at least when exposure to the parasite is limited, management is mostly confined to treatment. Due to the secondary infections that can cause serious medical issues, the recommended course of action upon diagnosis is surgical extraction of the fleas followed by the application of a topical antibiotic. Care should be taken to avoid tearing the flea during the extraction procedure, as severe inflammation will result. The same will occur if part of the flea is left behind. Sterile equipment should always be used, as contaminated instruments could act as mechanical vectors for pathogens to enter the body. No drug has proven to be effective against embedded fleas. Oral niridazole was once considered a therapeutic drug, but well-designed studies are lacking and, given the severe adverse effects, it is one drug that is likely to cause more harm than good; however, there is some anecdotal evidence of it lysing the fleas altogether. Oral ivermectin is considered by some in endemic areas to be a panacea against the fleas, but studies using high doses have failed to validate this hypothesis. Other drugs such as topical ivermectin and metrifonate have been somewhat successful, but not enough to be significant. [2,5] For superinfections, trimethoprim, sulfamethoxazole, metronidazole, and amoxicillin (with or without clavulanate) have been used successfully, though these treat only the secondary infections. Successful topical treatments also include cryotherapy and electrodesiccation of the lesion.
If formaldehyde, chloroform, or DDT are used topically, care should be taken when dealing with the resulting morbidity. The T. penetrans flea can also be suffocated using occlusive petrolatum; Vaseline will kill the organism as well, most likely due to suffocation, as the stigmata would be covered. The gum of the mammee apple (Mammea americana), a fruit that also goes by the name Saint Domingo apricot, has also been used to kill the chigoe flea, though this has not been reported in the main T. penetrans literature. Even without treatment, the burrowed fleas will die within five weeks and are naturally sloughed off as the skin sheds. Epidemiology For the most part, the chigoe flea lives 2–5 cm below the sand, an observation which helps explain its overall distribution. The temperature is generally too hot for the larvae to develop on the surface of the sand, and the deeper sand does not have enough oxygen. This preferred ecological niche offers a way to decrease transmission among humans by investing in concrete floors, as opposed to the sand that is usually used in shacks and some favelas. Indeed, Nany et al. (2007) report that "In shacks with concreted ground being cleaned every day with water, Tunga [penetrans] larvae were hardly found." In a longitudinal study conducted from March 2001 to January 2002, the incidence of tungiasis was found to vary significantly with the local seasons of an endemic community in Brazil. In particular, the study found that "occurrence of tungiasis varies throughout the year and seems to follow local precipitation patterns. Maximum and minimum prevalence rates differed by more than a factor of three." The authors suggest that the correlation is due to the high humidity in the soil impairing larval development during the rainy season, as well as the more obvious reason that rain may simply wash away all stages of T. penetrans due to its small size of 1 mm. Acting as both biological vectors and definitive hosts, humans have spread Tunga penetrans from its isolated existence in the West Indies to all of Latin America and most of Africa via sea travel. Since the chigoe flea technically has no reservoir species and the female will cause tungiasis in any mammalian organism it can penetrate, the flea has a relatively large number of hosts and victims. Epidemiologically, this is important, as tungiasis often causes secondary infections. History Tungiasis had been endemic in pre-Columbian Andean society for centuries before the discovery of T. penetrans as native to the West Indies. The first case of tungiasis was described in 1526 by Gonzalo Fernández de Oviedo y Valdés, who discussed the skin infection and its symptoms on crew members from Columbus's Santa Maria after they were shipwrecked on Haiti. Through ship routes and further expeditions, the chigoe flea was spread to the rest of the world, particularly to the rest of Latin America and Africa. The spread to greater Africa occurred throughout the 17th to 19th centuries, specifically in 1872 when the infected crewmen of the ship Thomas Mitchell introduced it into Angola by illegally dumping sand ballast, having sailed from Brazil. References External links Muehlstaedt, Michael (2008). "Periungual Tungiasis". New England Journal of Medicine. 359 (24): e30. doi:10.1056/NEJMicm074290. PMID 19073971.
II
II is the Roman numeral for 2. II may also refer to: Biology and medicine Image intensifier, medical imaging equipment Invariant chain, a polypeptide involved in the formation and transport of MHC class II protein Optic nerve, the second cranial nerve Economics Income inequality, or the wealth gap, in economics Internationalization Index, used by the UN to rank nations and companies in evaluating their degree of integration with the world economy Institutional Investor (magazine), an American finance magazine Music Supertonic, in music ii, a 2018 song by CHVRCHES Albums II (2 Unlimited album), 1998 II (Aquilo album), 2018 II (Bad Books album), 2012 II (Boyz II Men album), 1994 II (Capital Kings album), 2015 II (Charade album), 2004 II (The Common Linnets album), 2015 II (Compact Disco album), 2011 II (Cursed album), 2005 II (Darna album), 2003 II (Espers album), 2006 II (Fuzz album), 2015 II (Hardline album), 2002 II (High Rise album), 1986 II (Khun Narin album), 2016 II (Kingston Wall album), 1993 II (The Kinleys album), 2000 II (Kurious album), 2009 II (Last in Line album), 2019 II (Lords of Black album), 2016 II (Maylene and the Sons of Disaster album), 2007 II (METZ album), 2015 II (Moderat album), 2013 II (The Presidents of the United States of America album), 1996 II (Sahg album), 2008 II (Seven Thorns album), 2013 II (Unknown Mortal Orchestra album), 2013 II (Xerath album), 2011 II, by Krux, 2006 II, by Majical Cloudz, 2011 II, by Viva Brother, 2017 II, an EP by TNGHT, 2019 Crystal Castles II, 2010 Led Zeppelin II, 1969 Meat Puppets II, 1984 Soul Assassins II, 2000 Viva Koenji!, Koenji Hyakkei album, also known as 弐(II) Iris II, by Iris, 1987 People and social groups Ii is a Japanese surname, daimyō of Hikone: Ii clan, Japanese clan (Sengoku period) Ii Naomasa, one of four Guardians of the Tokugawa clan Ii Naotora, female daimyō and foster mother of Naomasa Ii Naoyuki, a Japanese author John Papa ʻĪʻī, a Hawaiian noble Other uses ii (digraph), a digraph in certain romanized alphabets ii (IRC client), short for IRC It, an Internet Relay Chat client for Unix-like operating systems Ii, Finland, a municipality of Finland Illegal immigrant (especially in Hong Kong vernacular) Index Islamicus, a bibliography database of publications about Islam and the Muslim world Internet Infidels, a discussion forum IBC Airways (IATA code: II) Yi language (ISO 639-1: ii)
Multiple abnormalities
When a patient has multiple abnormalities (multiple anomaly, multiple deformity), they have a congenital abnormality that cannot be primarily identified with a single system of the body or a single disease process. Most medical conditions can have systemic sequelae, but multiple abnormalities occur when the effects on multiple systems are immediately obvious. References External links
Arm
In human anatomy, the arm refers to the upper limb in common usage, although academically the term specifically means the upper arm between the glenohumeral joint (shoulder joint) and the elbow joint. The distal part of the upper limb between the elbow and the radiocarpal joint (wrist joint) is known as the forearm or "lower" arm, and the extremity beyond the wrist is the hand. By anatomical definitions, the bones, ligaments and skeletal muscles of the shoulder girdle, as well as the axilla between them, are considered parts of the upper limb, and thus also components of the arm. The Latin term brachium, which serves as a root word for naming many anatomical structures, may refer to either the upper limb as a whole or to the upper arm on its own. Anatomy Bones The humerus is one of the three long bones of the arm. It joins with the scapula at the shoulder joint and with the other long bones of the arm, the ulna and radius, at the elbow joint. The elbow is a complex hinge joint between the end of the humerus and the ends of the radius and ulna. Muscles The arm is divided by a fascial layer (known as the lateral and medial intermuscular septa) separating the muscles into two osteofascial compartments: the anterior and the posterior compartments of the arm. The fascia merges with the periosteum (outer bone layer) of the humerus. The anterior compartment contains three muscles: the biceps brachii, brachialis and coracobrachialis muscles. They are all innervated by the musculocutaneous nerve. The posterior compartment contains only the triceps brachii muscle, supplied by the radial nerve. Nerve supply The musculocutaneous nerve, from C5, C6 and C7, is the main supplier of the muscles of the anterior compartment. It originates from the lateral cord of the brachial plexus of nerves. It pierces the coracobrachialis muscle and gives off branches to that muscle, as well as to the brachialis and biceps brachii. It terminates as the lateral cutaneous nerve of the forearm. The radial nerve, which is from the fifth cervical spinal nerve to the first thoracic spinal nerve, originates as the continuation of the posterior cord of the brachial plexus. This nerve enters the lower triangular space (an imaginary space bounded by, amongst others, the shaft of the humerus and the triceps brachii) of the arm and lies deep to the triceps brachii. Here it travels with the deep artery of the arm, which sits in the radial groove of the humerus. This fact is very important clinically, as a fracture of the shaft of the bone here can cause lesions or even transections of the nerve. Other nerves passing through give no supply to the arm. These include: The median nerve, nerve origin C5–T1, which is a branch of the lateral and medial cords of the brachial plexus. This nerve continues in the arm, travelling in a plane between the biceps and triceps muscles. At the cubital fossa, this nerve is deep to the pronator teres muscle and is the most medial structure in the fossa. The nerve passes into the forearm. The ulnar nerve, origin C8–T1, is a continuation of the medial cord of the brachial plexus. This nerve passes in the same plane as the median nerve, between the biceps and triceps muscles. At the elbow, this nerve travels posterior to the medial epicondyle of the humerus. This means that condylar fractures can cause lesions of this nerve. Blood supply The main artery in the arm is the brachial artery. This artery is a continuation of the axillary artery.
The point at which the axillary artery becomes the brachial artery is distal to the lower border of teres major. The brachial artery gives off an unimportant branch, the deep artery of the arm. This branching occurs just below the lower border of teres major. The brachial artery continues to the cubital fossa in the anterior compartment of the arm. It travels in a plane between the biceps and triceps muscles, the same as the median nerve and basilic vein. It is accompanied by venae comitantes (accompanying veins). It gives branches to the muscles of the anterior compartment. The artery lies between the median nerve and the tendon of the biceps muscle in the cubital fossa. It then continues into the forearm. The deep artery of the arm travels through the lower triangular space with the radial nerve. From here onwards it has an intimate relationship with the radial nerve. They are both found deep to the triceps muscle and are located on the spiral groove of the humerus. Therefore, fracture of the bone may not only lead to lesion of the radial nerve, but also to haematoma of the internal structures of the arm. The artery then continues on to anastomose with the recurrent radial branch of the brachial artery, providing a diffuse blood supply for the elbow joint. Veins The veins of the arm carry blood from the extremities of the limb, as well as drain the arm itself. The two main veins are the basilic and the cephalic veins. There is a connecting vein between the two, the median cubital vein, which passes through the cubital fossa and is clinically important for venepuncture (withdrawing blood). The basilic vein travels on the medial side of the arm and terminates at the level of the seventh rib. The cephalic vein travels on the lateral side of the arm and terminates as the axillary vein. It passes through the deltopectoral triangle, a space between the deltoid and the pectoralis major muscles. Society and culture In Hindu, Buddhist and Egyptian iconography the symbol of the arm is used to illustrate the power of the sovereign. In Hindu tradition gods are depicted with several arms which carry specific symbols of their powers. It is believed that several arms depict the omnipotence of gods. In popular culture, the character Thakur did not have arms in the movie Sholay. In West Africa, the Bambara use the forearm to symbolize the spirit, which is a link between God and man. Symbolic gestures of raising both hands signal surrender, appeals for mercy, and justice. Clinical significance The cubital fossa is clinically important for venepuncture and for blood pressure measurement. When the arm is fractured, this may refer to a fracture of the humerus bone. Veins on the arm may be harvested when a coronary artery bypass graft is needed. Other animals In other animals, the term arm can also be used for homologous or analogous structures (such as one of the paired forelimbs of a four-legged animal or the arms of cephalopods, respectively). In anatomical usage, the term arm may sometimes refer specifically to the segment between the shoulder and the elbow, while the segment between the elbow and wrist is the forearm. However, in common, literary, and historical usage, arm refers to the entire upper limb from shoulder to wrist. This article uses the former definition; see upper limb for the wider definition. In primates the arm is adapted for precise positioning of the hand and thus assists in the hand's manipulative tasks.
The ball-and-socket shoulder joint allows for movement of the arms in a wide circular plane, while the structure of the two forearm bones, which can rotate around each other, allows for additional range of motion at that level. See also Axilla – also known as armpit, underarm or oxter Common flexor tendon References
Borderline intellectual functioning
Borderline intellectual functioning, also called borderline mental retardation (in the ICD-8), is a categorization of intelligence wherein a person has below-average cognitive ability (generally an IQ of 70–85), but the deficit is not as severe as intellectual disability (below 70). It is sometimes called below average IQ (BAIQ). This is technically a cognitive impairment; however, this group may not be sufficiently mentally disabled to be eligible for specialized services. Codes The DSM-IV-TR code of borderline intellectual functioning is V62.89. DSM-5 diagnosis codes are V62.89 and R41.83. Learning skills During the school years, individuals with borderline intellectual functioning are often "slow learners". Although a large percentage of this group fails to complete high school and can often achieve only a low socioeconomic status, most adults in this group blend in with the rest of the population. Differential diagnosis According to the DSM-5, differentiating borderline intellectual functioning from mild intellectual disability requires careful assessment of adaptive and intellectual functions and their variations, especially in the presence of co-morbid psychiatric disorders that may affect patient compliance with standardized testing (for example, attention deficit hyperactivity disorder (ADHD) with severe impulsivity, or schizophrenia). See also IQ classification Special education References Further reading Gillberg, Christopher (1995). Clinical child neuropsychiatry. Cambridge: Cambridge University Press. pp. 47–48. ISBN 0-521-54335-5. Harris, James C. (2006). Intellectual disability: understanding its development, causes, classification, evaluation, and treatment. New York: Oxford University Press. ISBN 0-19-517885-8.
Delusion
A delusion is a false fixed belief that is not amenable to change in light of conflicting evidence. As a pathology, it is distinct from a belief based on false or incomplete information, confabulation, dogma, illusion, hallucination, or some other misleading effects of perception, as individuals with those beliefs are able to change or readjust their beliefs upon reviewing the evidence. However: "The distinction between a delusion and a strongly held idea is sometimes difficult to make and depends in part on the degree of conviction with which the belief is held despite clear or reasonable contradictory evidence regarding its veracity." Delusions have been found to occur in the context of many pathological states (both general physical and mental) and are of particular diagnostic importance in psychotic disorders including schizophrenia, paraphrenia, manic episodes of bipolar disorder, and psychotic depression. Types Delusions are categorized into four different groups: Bizarre delusion: Delusions are deemed bizarre if they are clearly implausible, not understandable to same-culture peers, and do not derive from ordinary life experiences. An example named by the DSM-5 is the belief that someone replaced all of one's internal organs with someone else's without leaving a scar. Non-bizarre delusion: A delusion that, though false, is at least technically possible, e.g., the affected person mistakenly believes that they are under constant police surveillance. Mood-congruent delusion: Any delusion with content consistent with either a depressive or manic state, e.g., a depressed person believes that news anchors on television highly disapprove of them, or a person in a manic state might believe they are a powerful deity. Mood-neutral delusion: A delusion that does not relate to the patient's emotional state; for example, a belief that an extra limb is growing out of the back of one's head is neutral to either depression or mania. Themes In addition to these categories, delusions often manifest according to a consistent theme. Although delusions can have any theme, certain themes are more common. Some of the more common delusion themes are: Delusion of control: False belief that another person, group of people, or external force controls one's general thoughts, feelings, impulses, or behaviors. Cotard delusion: False belief that one does not exist or that one has died. Some cases also include the belief that one is immortal or that one has lost their internal organs, blood, or other body parts. Delusional jealousy: False belief that a spouse or lover is having an affair, with no proof to back up the claim. Delusion of guilt or sin (or delusion of self-accusation): Ungrounded feeling of remorse or guilt of delusional intensity. Delusion of mind being read: False belief that other people can know one's thoughts. Delusion of thought insertion: Belief that another thinks through the mind of the person. Delusion of reference: False belief that insignificant remarks, events, or objects in one's environment have personal meaning or significance. "Usually the meaning assigned to these events is negative, but the messages can also have a grandiose quality." Erotomania: False belief that another person is in love with them. Religious delusion: Belief that the affected person is a god or chosen to act as a god. Somatic delusion: A delusion whose content pertains to bodily functioning, bodily sensations, or physical appearance.
Usually the false belief is that the body is somehow diseased, abnormal, or changed. A specific example of this delusion is delusional parasitosis: a delusion in which one feels infested with insects, bacteria, mites, spiders, lice, fleas, worms, or other organisms. Delusion of poverty: The person strongly believes they are financially incapacitated. Although this type of delusion is less common now, it was particularly widespread in the days preceding state support. Grandiose delusions Grandiose delusions or delusions of grandeur are principally a subtype of delusional disorder but could possibly feature as a symptom of schizophrenia and manic episodes of bipolar disorder. Grandiose delusions are characterized by fantastical beliefs that one is famous, omnipotent, or otherwise very powerful. The delusions are generally fantastic, often with a supernatural, science-fictional, or religious bent. In colloquial usage, one who overestimates one's own abilities, talents, stature, or situation is sometimes said to have "delusions of grandeur". This is generally due to excessive pride, rather than any actual delusions. Grandiose delusions or delusions of grandeur can also be associated with megalomania. Persecutory delusions Persecutory delusions are the most common type of delusions and involve the theme of being followed, harassed, cheated, poisoned or drugged, conspired against, spied on, attacked, or otherwise obstructed in the pursuit of goals. Persecutory delusions are a condition in which the affected person wrongly believes that they are being persecuted. Specifically, they have been defined as containing two central elements: the individual thinks that harm is occurring, or is going to occur, and that the persecutors have the intention to cause harm. According to the DSM-IV-TR, persecutory delusions are the most common form of delusions in schizophrenia, where the person believes they are "being tormented, followed, sabotaged, tricked, spied on, or ridiculed". In the DSM-IV-TR, persecutory delusions are the main feature of the persecutory type of delusional disorder. When the focus is to remedy some injustice by legal action, they are sometimes called "querulous paranoia". Causes Explaining the causes of delusions continues to be challenging, and several theories have been developed. One is the genetic or biological theory, which states that close relatives of people with delusional disorder are at increased risk of delusional traits. Another theory is dysfunctional cognitive processing, which states that delusions may arise from distorted ways people have of explaining life to themselves. A third theory is called motivated or defensive delusions. This one states that some of those persons who are predisposed might experience the onset of delusional disorder in those moments when coping with life and maintaining high self-esteem becomes a significant challenge. In this case, the person views others as the cause of their personal difficulties in order to preserve a positive self-view. This condition is more common among people who have poor hearing or sight. Also, ongoing stressors have been associated with a higher possibility of developing delusions. Examples of such stressors are immigration, low socioeconomic status, and even possibly the accumulation of smaller daily hassles.
Specific delusions The two factors mainly implicated in the development of delusions are disordered brain functioning and background influences of temperament and personality. Higher levels of dopamine qualify as a symptom of disordered brain function. Whether they are needed to sustain certain delusions was examined by a preliminary study of delusional disorder (a psychotic syndrome) undertaken to clarify whether schizophrenia involves a dopamine psychosis. There were positive results: delusions of jealousy and persecution were associated with different levels of the dopamine metabolite HVA and of homovanillyl alcohol (differences which may have been genetic). These can only be regarded as tentative results; the study called for future research with a larger population. It is simplistic to say that a certain measure of dopamine will bring about a specific delusion. Studies show age and gender to be influential, and it is most likely that HVA levels change during the life course of some syndromes. On the influence of personality, it has been said: "Jaspers considered there is a subtle change in personality due to the illness itself; and this creates the condition for the development of the delusional atmosphere in which the delusional intuition arises." Cultural factors have "a decisive influence in shaping delusions". For example, delusions of guilt and punishment are frequent in a Western, Christian country like Austria, but not in Pakistan, where delusions of persecution are more likely. Similarly, in a series of case studies, delusions of guilt and punishment were found in Austrian patients with Parkinson's disease being treated with L-dopa, a dopamine precursor. Pathophysiology The two-factor model of delusions posits that dysfunction in both belief-formation systems and belief-evaluation systems is necessary for delusions. Dysfunction in evaluation systems localized to the right lateral prefrontal cortex, regardless of delusion content, is supported by neuroimaging studies and is congruent with its role in conflict monitoring in healthy persons. Abnormal activation and reduced volume are seen in people with delusions, as well as in disorders associated with delusions such as frontotemporal dementia, psychosis, and Lewy body dementia. Furthermore, lesions to this region are associated with "jumping to conclusions", damage to this region is associated with post-stroke delusions, and hypometabolism in this region is associated with caudate strokes presenting with delusions. The aberrant salience model suggests that delusions are a result of people assigning excessive importance to irrelevant stimuli. In support of this hypothesis, regions normally associated with the salience network demonstrate reduced grey matter in people with delusions, and the neurotransmitter dopamine, which is widely implicated in salience processing, is also widely implicated in psychotic disorders. Specific regions have been associated with specific types of delusions. The volume of the hippocampus and parahippocampus is related to paranoid delusions in Alzheimer's disease, and has been reported to be abnormal post mortem in one person with delusions. Capgras delusions have been associated with occipito-temporal damage and may be related to a failure to elicit normal emotions or memories in response to faces. Diagnosis The modern definition and Jaspers' original criteria have been criticised, as counter-examples can be shown for every defining feature.
Studies on psychiatric patients show that delusions vary in intensity and conviction over time, which suggests that certainty and incorrigibility are not necessary components of a delusional belief. Delusions do not necessarily have to be false or incorrect inferences about external reality. Some religious or spiritual beliefs by their nature may not be falsifiable, and hence cannot be described as false or incorrect, no matter whether the person holding these beliefs was diagnosed as delusional or not. In other situations the delusion may turn out to be a true belief. For example, in delusional jealousy, where a person believes that their partner is being unfaithful (and may even follow them into the bathroom believing them to be seeing their lover even during the briefest of partings), it may actually be true that the partner is having sexual relations with another person. In this case, the delusion does not cease to be a delusion because the content later turns out to be verified as true or because the partner actually chose to engage in the behavior of which they were being accused. In other cases, the belief may be mistakenly assumed to be false by a doctor or psychiatrist assessing it, just because it seems to be unlikely, bizarre or held with excessive conviction. Psychiatrists rarely have the time or resources to check the validity of a person's claims, leading to some true beliefs being erroneously classified as delusional. This is known as the Martha Mitchell effect, after the wife of the attorney general who alleged that illegal activity was taking place in the White House. At the time, her claims were thought to be signs of mental illness, and only after the Watergate scandal broke was she proved right (and hence sane). Similar factors have led to criticisms of Jaspers' definition of true delusions as being ultimately un-understandable. Critics (such as R. D. Laing) have argued that this leads to the diagnosis of delusions being based on the subjective understanding of a particular psychiatrist, who may not have access to all the information that might make a belief otherwise interpretable. R. D. Laing's hypothesis has been applied to some forms of projective therapy to "fix" a delusional system so that it cannot be altered by the patient. Psychiatric researchers at Yale University, Ohio State University, and the Community Mental Health Center of Middle Georgia have used novels and motion picture films as the focus. Texts, plots, and cinematography are discussed and the delusions are approached tangentially. This use of fiction to decrease the malleability of a delusion was employed in a joint project by science-fiction author Philip José Farmer and Yale psychiatrist A. James Giannini. They wrote the novel Red Orc's Rage, which, recursively, deals with delusional adolescents who are treated with a form of projective therapy. In this novel's fictional setting, other novels written by Farmer are discussed and the characters are symbolically integrated into the delusions of fictional patients. This particular novel was then applied to real-life clinical settings. Another difficulty with the diagnosis of delusions is that almost all of these features can be found in "normal" beliefs. Many religious beliefs hold exactly the same features, yet are not universally considered delusional. For instance, if a person holds a true belief, then they will of course persist with it. This can cause the disorder to be misdiagnosed by psychiatrists.
These factors have led the psychiatrist Anthony David to note that "there is no acceptable (rather than accepted) definition of a delusion." In practice, psychiatrists tend to diagnose a belief as delusional if it is either patently bizarre, causing significant distress, or excessively preoccupying the patient, especially if the person is subsequently unswayed in belief by counter-evidence or reasonable arguments. Joseph Pierre, M.D., states that one factor that helps differentiate delusions from other kinds of beliefs is that anomalous subjective experiences are often used to justify delusional beliefs. While idiosyncratic and self-referential content often makes delusions impossible to share with others, Dr. Pierre suggests that it may be more helpful to emphasize the level of conviction, preoccupation, and extension of a belief rather than the content of the belief when considering whether a belief is delusional. It is important to distinguish true delusions from other symptoms such as anxiety, fear, or paranoia. To diagnose delusions, a mental state examination may be used. This test includes appearance, mood, affect, behavior, rate and continuity of speech, evidence of hallucinations or abnormal beliefs, thought content, orientation to time, place and person, attention and concentration, insight and judgment, as well as short-term memory. Johnson-Laird suggests that delusions may be viewed as the natural consequence of failure to distinguish conceptual relevance. That is, irrelevant information is framed as disconnected experiences and then taken to be relevant in a manner that suggests false causal connections, while relevant information, such as counterexamples, is ignored. Definition Although non-specific concepts of madness have been around for several thousand years, the psychiatrist and philosopher Karl Jaspers was the first to define the four main criteria for a belief to be considered delusional in his 1913 book General Psychopathology. These criteria are: certainty (held with absolute conviction); incorrigibility (not changeable by compelling counterargument or proof to the contrary); impossibility or falsity of content (implausible, bizarre, or patently untrue); and not being amenable to understanding (i.e., the belief cannot be explained psychologically). Furthermore, when beliefs involve value judgments, only those which cannot be proven true are considered delusions. For example, a man claiming that he flew into the Sun and flew back home would be considered delusional, unless he were speaking figuratively, or if the belief had a cultural or religious source. Only the first three criteria remain cornerstones of the current definition of a delusion in the DSM-5. Robert Trivers writes that delusion is a discrepancy in relation to objective reality, but with a firm conviction in the reality of delusional ideas, which is manifested in the "affective basis of delusion". Treatment Delusions and other positive symptoms of psychosis are often treated with antipsychotic medication, which exerts a medium effect size according to meta-analytic evidence. Cognitive behavioral therapy (CBT) improves delusions relative to control conditions according to a meta-analysis. A meta-analysis of 43 studies reported that metacognitive training (MCT) reduces delusions at a medium to large effect size relative to control conditions.
Criticism Some psychiatrists criticize the practice of defining one and the same belief as normal in one culture and pathological in another as a form of cultural essentialism. They argue that it is not justified to assume that culture can be simplified to a few traceable, distinguishable and statistically quantifiable factors, and that everything outside those factors must be biological, since cultural influences are mixed, including not only parents and teachers but also peers, friends, and media, and the same cultural influence can have different effects depending on earlier cultural influences. Other critical psychiatrists argue that just because a person's belief is unshaken by one influence does not prove that it would remain unshaken by another. For example, a person whose beliefs are not changed by verbal correction from a psychiatrist, which is how delusion is usually diagnosed, may still change his or her mind when observing empirical evidence; psychiatrists, however, rarely if ever present patients with such situations. The anthropologist David Graeber has criticized psychiatry's assumption that an absurd belief goes from being delusional to "being there for a reason" merely because it is shared by many people, arguing that just as genetic pathogens such as viruses can take advantage of an organism without benefitting it, memetic phenomena can spread while being harmful to societies, implying that entire societies can become ill. Graeber argued that if somatic medicine did not have higher scientific standards than psychiatry's way of defining delusion, pandemics like the plague would have been considered to transubstantiate from an illness into "a phenomenon that benefits the people" as soon as they had spread to a sufficiently large portion of the population. Graeber also argued that since deinstitutionalisation made sales of psychiatric medication profitable by removing the cost of keeping patients in mental hospitals, corrupt incentives for psychiatry to allege "needs" for treatment have increased (particularly with regard to medicines said to be needed in daily doses, less so for devices that can be kept for longer periods of time). This, he argued, may itself be a harmful memetic pandemic in society, leading to the diagnosis and medication of criticisms of widespread beliefs that are actually absurd and harmful, and making the absurd belief that is not labelled as an illness profitable anyway by attracting criticisms that are labelled as illnesses. See also References Cited text: Jaspers K (1997). General Psychopathology. Vol. 1. Baltimore: Johns Hopkins University Press. ISBN 0-8018-5775-9. Further reading External links
Hypersomnia
Hypersomnia is a neurological disorder of excessive time spent sleeping or excessive sleepiness. It can have many possible causes (such as seasonal affective disorder) and can cause distress and problems with functioning. In the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), hypersomnolence, of which there are several subtypes, appears under sleep-wake disorders. Hypersomnia is a pathological state characterized by a lack of alertness during the waking episodes of the day. It is not to be confused with fatigue, which is a normal physiological state. Daytime sleepiness appears most commonly during situations where little interaction is needed. Since the patients' attention levels (wakefulness) are impaired, their quality of life may be impacted as well. This is especially true for people whose jobs require high levels of attention, such as in the healthcare field. Symptoms The main symptom of hypersomnia is excessive daytime sleepiness (EDS), or prolonged nighttime sleep, which has occurred for at least 3 months prior to diagnosis. Sleep drunkenness is also a symptom found in hypersomniac patients. It is a difficulty transitioning from sleep to wakefulness. Individuals experiencing sleep drunkenness report waking with confusion, disorientation, slowness and repeated returns to sleep. It also appears in non-hypersomniac persons, for example after a night of insufficient sleep. Fatigue and consumption of alcohol or hypnotics can cause sleep drunkenness as well. It is also associated with irritability: people who get angry shortly before sleeping tend to experience sleep drunkenness. According to the American Academy of Sleep Medicine, hypersomniac patients often take long naps during the day that are mostly unrefreshing. Researchers found that naps are usually more frequent and longer in patients than in controls. Furthermore, 75% of the patients report that short naps are not refreshing, compared with controls. Diagnosis "The severity of daytime sleepiness needs to be quantified by subjective scales (at least the Epworth Sleepiness Scale) and objective tests such as the multiple sleep latency test (MSLT)." The Stanford sleepiness scale (SSS) is another frequently used subjective measurement of sleepiness. After it is determined that excessive daytime sleepiness is present, a complete medical examination and full evaluation of potential disorders in the differential diagnosis (which can be tedious, expensive and time-consuming) should be undertaken. Differential diagnosis Hypersomnia can be primary (of central/brain origin), or it can be secondary to any of numerous medical conditions. More than one type of hypersomnia can coexist in a single patient. Even in the presence of a known cause of hypersomnia, the contribution of this cause to the complaint of excessive daytime sleepiness needs to be assessed. When specific treatments of the known condition do not fully suppress excessive daytime sleepiness, additional causes of hypersomnia should be sought. For example, if a patient with sleep apnea is treated with CPAP (continuous positive airway pressure), which resolves their apneas but not their excessive daytime sleepiness, it is necessary to seek other causes for the excessive daytime sleepiness. Obstructive sleep apnea "occurs frequently in narcolepsy and may delay the diagnosis of narcolepsy by several years and interfere with its proper management."
Primary hypersomnias The true primary hypersomnias include narcolepsy (with and without cataplexy), idiopathic hypersomnia, and recurrent hypersomnias (such as Kleine-Levin syndrome). Primary hypersomnia mimics There are also several genetic disorders that may be associated with primary/central hypersomnia. These include the following: Prader-Willi syndrome; Norrie disease; Niemann–Pick disease, type C; and myotonic dystrophy. However, hypersomnia in these syndromes may also be associated with other secondary causes, so it is important to complete a full evaluation. Myotonic dystrophy is often associated with SOREMPs (sleep onset REM periods, such as occur in narcolepsy). There are many neurological disorders that may mimic the primary hypersomnias, narcolepsy and idiopathic hypersomnia: brain tumors; stroke-provoking lesions; and dysfunction in the thalamus, hypothalamus, or brainstem. Also, neurodegenerative conditions such as Alzheimer's disease, Parkinson's disease, or multiple system atrophy are frequently associated with primary hypersomnia. However, in these cases, one must still rule out other secondary causes. Early hydrocephalus can also cause severe excessive daytime sleepiness. Additionally, head trauma can be associated with a primary/central hypersomnia, and symptoms similar to those of idiopathic hypersomnia can be seen within 6–18 months following the trauma. However, the associated symptoms of headaches, memory loss, and lack of concentration may be more frequent in head trauma than in idiopathic hypersomnia. "The possibility of secondary narcolepsy following head injury in previously asymptomatic individuals has also been reported." Secondary hypersomnias Secondary hypersomnias are extremely numerous. Hypersomnia can be secondary to disorders such as clinical depression, multiple sclerosis, encephalitis, epilepsy, or obesity. Hypersomnia can also be a symptom of other sleep disorders, like sleep apnea. It may occur as an adverse effect of taking certain medications, of withdrawal from some medications, or of substance use. A genetic predisposition may also be a factor. In some cases it results from a physical problem, such as a tumor, head trauma, or dysfunction of the autonomic or central nervous system. Sleep apnea is the second most frequent cause of secondary hypersomnia, affecting up to 4% of middle-aged adults, mostly men. Upper airway resistance syndrome (UARS) is a clinical variant of sleep apnea that can also cause hypersomnia. Just as other sleep disorders (like narcolepsy) can coexist with sleep apnea, the same is true for UARS. There are many cases of UARS in which excessive daytime sleepiness persists after CPAP treatment, indicating an additional cause, or causes, of the hypersomnia and requiring further evaluation. Sleep movement disorders, such as restless legs syndrome (RLS) and periodic limb movement disorder (PLMD or PLMS), can also cause secondary hypersomnia. Although RLS does commonly cause excessive daytime sleepiness, PLMS does not. There is no evidence that PLMS plays "a role in the etiology of daytime sleepiness. In fact, two studies showed no correlation between PLMS and objective measures of excessive daytime sleepiness. In addition, EDS in these patients is best treated with psychostimulants—and not with dopaminergic agents known to suppress PLMS." Neuromuscular diseases and spinal cord diseases often lead to sleep disturbances due to respiratory dysfunction causing sleep apnea, and they may also cause insomnia related to pain.
"Other sleep alterations, such as periodic limb movement disorders in patients with spinal cord disease, have also been uncovered with the widespread use of polysomnography."Primary hypersomnia in diabetes, hepatic encephalopathy, and acromegaly is rarely reported, but these medical conditions may also be associated with hypersomnia secondary to sleep apnea and periodic limb movement disorder (PLMD).Chronic fatigue syndrome and fibromyalgia can also be associated with hypersomnia. Chronic fatigue syndrome is "characterized by persistent or relapsing fatigue that does not resolve with sleep or rest. Polysomnography shows reduced sleep efficiency and may include alpha intrusion into sleep EEG. It is likely that a number of cases labeled as chronic fatigue syndrome are unrecognized cases of upper airway resistance syndrome" or other sleep disorders, such as narcolepsy, sleep apnea, PLMD, etc.As with chronic fatigue syndrome, fibromyalgia may be associated with anomalous alpha wave activity (typically associated with arousal states) during NREM sleep. Also, researchers have shown that disrupting stage IV sleep consistently in young, healthy subjects causes a significant increase in muscle tenderness—similar to that experienced in "neurasthenic musculoskeletal pain syndrome". This pain resolved when the subjects were able to resume their normal sleep patterns.Chronic kidney disease is commonly associated with sleep symptoms and excessive daytime sleepiness. 80% of those on dialysis have sleep disturbances. Sleep apnea can occur 10 times as often in uremic patients than in the general population and can affect up to 30-80% of patients on dialysis, though nighttime dialysis can improve this. About 50% of dialysis patients have hypersomnia, as severe kidney disease can cause uremic encephalopathy, increased sleep-inducing cytokines, and impaired sleep efficiency. About 70% of dialysis patients are affected by insomnia, and RLS and PLMD affect 30%, though these may improve after dialysis or kidney transplant.Most forms of cancer and their therapies can cause fatigue and disturbed sleep, affecting 25-99% of patients and often lasting for years after treatment completion. "Insomnia is common and a predictor of fatigue in cancer patients, and polysomnography demonstrates reduced sleep efficiency, prolonged initial sleep latency, and increased wake time during the night." Paraneoplastic syndromes can also cause insomnia, hypersomnia, and parasomnias.Autoimmune diseases, especially lupus and rheumatoid arthritis, are often associated with hypersomnia. Morvans syndrome is an example of a rarer autoimmune illness that can also lead to hypersomnia. Celiac disease is another autoimmune disease associated with poor sleep quality (which may lead to hypersomnia), "not only at diagnosis but also during treatment with a gluten-free diet." There are also some case reports of central hypersomnia in celiac disease. And RLS "has been shown to be frequent in celiac disease," presumably due to its associated iron deficiency.Hypothyroidism and iron deficiency with or without (iron-deficiency anemia) can also cause secondary hypersomnia. Various tests for these disorders are done so they can be treated. Hypersomnia can also develop within months after viral infections such as Whipples disease, mononucleosis, HIV, and Guillain–Barré syndrome.Behaviorally induced insufficient sleep syndrome must be considered in the differential diagnosis of secondary hypersomnia. 
This disorder occurs in individuals who fail to get sufficient sleep for at least three months. In this case, the patient has chronic sleep deprivation, although they may not necessarily be aware of it. This situation is becoming more prevalent in Western society due to the modern demands and expectations placed upon the individual. Many medications can lead to secondary hypersomnia. Therefore, a patient's complete medication list should be carefully reviewed for sleepiness or fatigue as side effects. In these cases, careful withdrawal from the possibly offending medication(s) is needed; then, medication substitution can be undertaken. Mood disorders, like depression, anxiety disorder and bipolar disorder, can also be associated with hypersomnia. The complaint of excessive daytime sleepiness in these conditions is often associated with poor sleep at night. "In that sense, insomnia and EDS are frequently associated, especially in cases of depression." Hypersomnia in mood disorders seems to be primarily related to "lack of interest and decreased energy inherent in the depressed condition rather than an increase in sleep or REM sleep propensity". In all cases with these mood disorders, the MSLT is normal (not too short and no SOREMPs). Posttraumatic hypersomnias In some cases, hypersomnia can be caused by a brain injury. Researchers found that the level of sleepiness is correlated with the severity of the injury. Even when patients reported an improvement, sleepiness remained present for a year in about a quarter of patients with traumatic brain injury. Recurrent hypersomnias Recurrent hypersomnias are defined by several episodes of hypersomnia persisting from a few days to weeks. These episodes can occur weeks or months apart from each other. There are two subtypes of recurrent hypersomnias: Kleine-Levin syndrome and menstrual-related hypersomnia. Kleine-Levin syndrome is characterized by the association of episodes of hypersomnia with behavioral, cognitive and mood abnormalities. The behavioral disturbances can include hyperphagia, irritability, or sexual disinhibition. The cognitive disorders consist of confusion, hallucinations or delusions. Mood symptoms are characterized by anxiety or depression. Menstrual-related hypersomnia is characterized by episodes of excessive sleepiness associated with the menstrual cycle. Researchers found that the degree of premenstrual symptoms was correlated with daytime sleepiness. Unlike Kleine-Levin syndrome, hyperphagia and hypersexuality are not reported in people with menstrual-related hypersomnia, but hypophagia could be present. Ordinarily, these episodes appear two weeks before menstruation. A few studies have suggested that hormones such as prolactin and progesterone could be responsible for menstrual-related hypersomnia; therefore, different contraceptive pills could improve the symptoms. Sleep architecture also changes, with a decrease in slow-wave sleep and an increase in slow theta-wave activity. Assessment tools Polysomnography Polysomnography is an objective sleep assessment method. It uses a number of electrodes to measure physiological variables related to sleep. Polysomnography often includes electroencephalography, electromyography, electrocardiography, and measures of muscle activity and respiratory function. Polysomnography helps identify the very short sleep onset latency, the high sleep efficiency (more than 90%), the increased slow-wave sleep, and sometimes an elevated number of sleep spindles seen in idiopathic hypersomnia patients.
Multiple sleep latency test (MSLT) The multiple sleep latency test (MSLT) is an objective tool which indicates the degree of sleepiness by measuring the sleep latency (i.e. the speed of falling asleep). It also gives information regarding the presence of abnormal REM sleep onset episodes. During this test, patients have a series of opportunities to sleep at 2-hour intervals across the day in a darkened room and with no external alerting influences. The MSLT is often administered the day after recording the polysomnography, and the mean sleep latency score is often found to be around (or less than) 8 minutes in idiopathic hypersomnia patients. Some patients might even have a sleep onset latency of 5 minutes or less. These patients are often even more aware of sleeping during naps than narcolepsy patients. Actigraphy Actigraphy, which operates by analyzing the patient's limb movements, is used to record the sleep and wake cycles. In order to record them, the patient must continuously wear a device on the wrist that looks like a watch and does not contain any electrodes. The advantage actigraphy has over polysomnography is that it can record 24 hours a day for weeks. Furthermore, it is less expensive and non-invasive compared with polysomnography. An actigraphy recording over several days can show longer sleep periods, which are characteristic of idiopathic hypersomnia. Actigraphy is also helpful in ruling out other sleep disorders, especially circadian rhythm disorders, which can likewise lead to an excess of sleepiness during the day. The maintenance of wakefulness test (MWT) The maintenance of wakefulness test (MWT) is a test that measures the ability to stay awake. It is used to diagnose disorders of excessive somnolence, such as hypersomnia, narcolepsy or obstructive sleep apnea. During this test, patients sit comfortably and are instructed to try to stay awake. The Stanford sleepiness scale (SSS) The Stanford sleepiness scale (SSS) is a self-report scale that measures different degrees of sleepiness. For each statement, patients report their level of sleepiness using a 7-point scale, going from very alert to excessively sleepy. Researchers found that the SSS was highly correlated with performance on monotonous and boring tasks, which are very sensitive to sleepiness. These results suggest that the SSS is a good tool to assess sleepiness in patients. The Epworth sleepiness scale (ESS) The Epworth sleepiness scale (ESS) is also a self-reported questionnaire that measures the general level of sleepiness over a day. Patients rate specific daily situations on a scale from 0 (would never doze) to 3 (high chance of dozing). The results of the ESS correlate with the sleep latency indicated by the multiple sleep latency test. (A minimal scoring sketch appears at the end of this article.) Treatment Although there is no cure for chronic hypersomnia, there are several treatments that may improve patients' quality of life—depending on the specific cause or causes of hypersomnia that are diagnosed. Because the causes of hypersomnia are unknown, it is often only possible to treat symptoms and not the underlying cause of the disorder. Behavioral treatments, as well as sleep hygiene, have to be discussed with the patient and are recommended. Several pharmacological agents have been prescribed to patients with hypersomnia, but few have been found to be effective. Modafinil has been found to be the most effective drug against excessive sleepiness, and has even been shown to be helpful in children with hypersomnia.
The dosage is started at 100 mg per day, and then slowly increased to 400 mg per day. In general, patients with hypersomnia or excessive sleepiness should only go to bed to sleep or for sexual activity. All other activities, such as eating or watching television, should be done elsewhere. For those patients, it is also important to go to bed only when they feel tired, rather than trying to fall asleep for hours. If they cannot fall asleep, they should get out of bed and read or watch television until they become sleepy. Epidemiology Hypersomnia affects approximately 5% to 10% of the general population, "with a higher prevalence for men due to the sleep apnea syndromes". See also Encephalitis lethargica Reticular formation Sleep medicine Somnolence References External links Help: I can't stay awake! - Public Radio Interview with Dr. David Rye med/3129 at eMedicine - "Primary Hypersomnia"
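Because the Epworth sleepiness scale described above is a simple additive questionnaire, its scoring can be illustrated in a few lines of code. The following is a minimal, hypothetical sketch assuming the standard eight-item form of the scale and the commonly cited cut-off of about 10; the item wording, function name and threshold are illustrative assumptions rather than details taken from this article.

```python
# Hypothetical sketch of Epworth Sleepiness Scale (ESS) scoring, for illustration only.
# Respondents rate their chance of dozing in eight everyday situations on the
# 0 (would never doze) to 3 (high chance of dozing) scale described above,
# so the total ranges from 0 to 24. Item wording and cut-off are assumptions.

ESS_SITUATIONS = [
    "Sitting and reading",
    "Watching TV",
    "Sitting inactive in a public place",
    "Being a passenger in a car for an hour",
    "Lying down to rest in the afternoon",
    "Sitting and talking to someone",
    "Sitting quietly after lunch (no alcohol)",
    "In a car, stopped for a few minutes in traffic",
]

def ess_total(ratings):
    """Sum eight 0-3 ratings into a 0-24 ESS score."""
    if len(ratings) != len(ESS_SITUATIONS):
        raise ValueError("The ESS expects one rating per situation (eight in total).")
    if any(r not in (0, 1, 2, 3) for r in ratings):
        raise ValueError("Each rating must be 0, 1, 2, or 3.")
    return sum(ratings)

if __name__ == "__main__":
    example = [2, 1, 0, 1, 2, 0, 2, 1]   # one respondent's hypothetical answers
    score = ess_total(example)            # 9 in this example
    # Scores above roughly 10 are commonly interpreted as excessive daytime sleepiness.
    print(score, "suggests EDS" if score > 10 else "within the usual range")
```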
Pectus carinatum
Pectus carinatum, also called pigeon chest, is a malformation of the chest characterized by a protrusion of the sternum and ribs. It is distinct from the related malformation pectus excavatum. Signs and symptoms People with pectus carinatum usually develop normal hearts and lungs, but the malformation may prevent these from functioning optimally. In moderate to severe cases of pectus carinatum, the chest wall is rigidly held in an outward position. Thus, respirations are inefficient and the individual needs to use the accessory muscles for respiration, rather than normal chest muscles, during strenuous exercise. This negatively affects gas exchange and causes a decrease in stamina. Children with pectus malformations often tire sooner than their peers due to shortness of breath and fatigue. Mild to moderate asthma is commonly concurrent. Some children with pectus carinatum also have scoliosis (i.e., curvature of the spine). Some have mitral valve prolapse, a condition in which the heart's mitral valve functions abnormally. Connective tissue disorders involving structural abnormalities of the major blood vessels and heart valves are also seen. Although rarely seen, some children have other connective tissue disorders, including arthritis, visual impairment and healing impairment. Apart from the possible physiologic consequences, pectus malformations can have a significant psychologic impact. Some people, especially those with milder cases, live happily with pectus carinatum. For others, though, the shape of the chest can damage their self-image and confidence, possibly disrupting social connections and causing them to feel uncomfortable throughout adolescence and adulthood. As the child grows older, bodybuilding techniques may be useful for balancing the visual impact. A less common variant of pectus carinatum is pectus arcuatum (also called type 2 pectus excavatum, chondromanubrial malformation, Currarino–Silverman syndrome, or pouter pigeon malformation), which produces a manubrial and upper sternal protrusion, particularly also at the sternal angle. Pectus arcuatum is often confused with a combination of pectus carinatum and pectus excavatum, but in pectus arcuatum the visual appearance is characterized by a protrusion of the costal cartilages and there is no depression of the sternum. Causes Pectus carinatum is an overgrowth of costal cartilage causing the sternum to protrude forward. It primarily occurs among four different patient groups, and males are more frequently affected than females. Most commonly, pectus carinatum develops in 11-to-14-year-old pubertal males undergoing a growth spurt. Some parents report that their child's pectus carinatum seemingly popped up overnight. Second most common is the presence of pectus carinatum at or shortly after birth. The condition may be evident in newborns as a rounded anterior chest wall. As the child reaches 2 or 3 years of age, the outward sternal protrusion becomes more pronounced. Pectus carinatum can also be caused by vitamin D deficiency in children (rickets), due to deposition of unmineralized osteoid. Least common is a pectus carinatum malformation following open-heart surgery or in children with poorly controlled bronchial asthma. Pectus carinatum is generally a solitary, non-syndromic abnormality.
However, the condition may be present in association with other syndromes: Turner syndrome, Noonan syndrome, Loeys–Dietz syndrome, Marfan syndrome, Ehlers–Danlos syndrome, Morquio syndrome, trisomy 18, trisomy 21, homocystinuria, osteogenesis imperfecta, multiple lentigines syndrome (LEOPARD syndrome), Sly syndrome (mucopolysaccharidosis type VII), and scoliosis. In about 25% of cases of pectus carinatum, the patient has a family member with the condition. Diagnosis Pectus carinatum can be readily diagnosed with imaging such as a CT scan (2D and 3D), which typically shows a rib cage of otherwise normal structure with overgrowth of the sternum producing the protrusion. The malformation may be symmetrical or asymmetrical, and further treatment is planned on that basis. Treatment External bracing technique The use of orthotic bracing, pioneered by Sydney Haje beginning in 1977, is finding increasing acceptance as an alternative to surgery in select cases of pectus carinatum. In children, teenagers, and young adults who have pectus carinatum and are motivated to avoid surgery, the use of a customized chest-wall brace that applies direct pressure on the protruding area of the chest produces excellent outcomes. Willingness to wear the brace as required is essential for the success of this treatment approach. The brace works in much the same way as orthodontics (braces that correct the alignment of teeth). The brace consists of front and back compression plates that are anchored to aluminum bars. These bars are bound together by a tightening mechanism which varies from brace to brace. This device is easily hidden under clothing and must be worn from 14 to 24 hours a day. The wearing time varies with each brace manufacturer and the managing physician's protocol, which may be based on the severity of the carinatum malformation (mild, moderate, or severe) and whether it is symmetric or asymmetric. Depending on the manufacturer and/or the patient's preference, the brace may be worn on the skin or over a body sock or sleeve called a Bracemate, specifically designed to be worn under braces. A physician, orthotist, or brace manufacturer's representative can show how to check that the brace is in the correct position on the chest. Bracing is becoming more popular than surgery for pectus carinatum, mostly because it eliminates the risks that accompany surgery. The prescribing of bracing as a treatment for pectus carinatum has spread from pediatric and thoracic surgeons to family physicians and pediatricians, again due to its lower risks and well-documented high success rates. The pectus carinatum guideline of 2012 of the American Pediatric Surgical Association has stated: "As reconstructive therapy for the compliant pectus [carinatum] malformation, nonoperative compressive orthotic bracing is usually an appropriate first line of therapy as it does not preclude the operative option. For appropriate candidates, orthotic bracing of chest wall malformations can reasonably be expected to prevent worsening of the malformation and often results in a lasting correction of the malformation. Orthotic bracing is often successful in prepubertal children whose chest wall is compliant.
Expert opinion suggests that the noncompliant chest wall malformation or significant asymmetry of the pectus carinatum malformation caused by a concomitant excavatum-type malformation may not respond to orthotic bracing." Regular supervision during the bracing period is required for optimal results. Adjustments may be needed to the brace as the child grows and the pectus improves. Surgical For patients with severe pectus carinatum, surgery may be necessary, although bracing may still be the first line of treatment. Some severe cases treated with bracing may improve just enough that the patient is happy with the outcome and may not want surgery afterwards. If bracing fails for whatever reason, surgery is the next step. The two most common procedures are the Ravitch technique and the reverse Nuss procedure. A modified Ravitch technique uses bioabsorbable material and postoperative bracing, and in some cases a diced rib cartilage graft technique. The Nuss procedure was developed by Donald Nuss at the Children's Hospital of The King's Daughters in Norfolk, Va. The Nuss procedure is primarily used for pectus excavatum, but has recently been revised for use in some cases of pectus carinatum, primarily when the malformation is symmetrical. Other options After adolescence, some men and women use bodybuilding as a means to hide their malformation. Some women find that their breasts, if large enough, serve the same purpose. Some plastic surgeons perform breast augmentation to disguise mild to moderate cases in women. Bodybuilding is suggested for people with symmetrical pectus carinatum. Prognosis Pectus malformations usually become more severe during adolescent growth years and may worsen throughout adult life. The secondary effects, such as scoliosis and cardiovascular and pulmonary conditions, may worsen with advancing age. Bodybuilding exercises (often attempted to cover the defect with pectoral muscles) will not alter the ribs and cartilage of the chest wall, and are generally considered not harmful. Most insurance companies no longer consider chest wall malformations like pectus carinatum to be purely cosmetic conditions. While the psychologic impact of any malformation is real and must be addressed, the physiological concerns must take precedence. The possibility of lifelong cardiopulmonary difficulties is serious enough to warrant a visit to a thoracic surgeon. Epidemiology Pectus malformations are rare; about 1 in 400 people have a pectus disorder. Pectus carinatum is rarer than pectus excavatum, another pectus disorder, occurring in only about 20% of people with pectus malformations. About four out of five patients are males. See also Pectus excavatum References External links
Transsexual
Transsexual people experience a gender identity that is inconsistent with their assigned sex, and desire to permanently transition to the sex or gender with which they identify, usually seeking medical assistance (including sex reassignment therapies, such as hormone replacement therapy and sex reassignment surgery) to help them align their body with their identified sex or gender. The term transsexual is a subset of transgender, but some transsexual people reject the label of transgender. A medical diagnosis of gender dysphoria can be made if a person experiences marked and persistent incongruence between their experienced gender — their personal sense of their own gender — and their assigned sex. Understanding of transsexuality has changed very quickly in the 21st century. Many 20th century medical beliefs and practices around transsexuality are now considered deeply outdated. It was once classified as a mental disorder and subject to extensive gatekeeping by the medical establishment, and remains so in much of the world. Terminology Transsexual has had different meanings throughout time. In modern usage, it refers to "a person who desires to or who has modified their body to transition from one gender or sex to another through the use of medical technologies such as hormones or surgeries." It is considered an antiquated term, and is sometimes pejorative. The general shift is to instead use the term transgender or the abbreviated form trans, but due to its historical usage, transsexual remains in the modern vernacular.: 742–744  In understanding the subject, it is noted that there is a difference between gender and sex. Gender is defined as a "set of social, cultural, and linguistic norms that can be attributed to someone's identity, expression, or role as masculine, feminine, androgynous, or nonbinary." Sex is defined as being "assigned at birth by medical professionals based on the appearance of genitalia, and related assumptions about chromosomal makeup, gender identity, expressions, and roles emerge over the life span, sometimes changing over time.": 277–278 Origins Norman Haire reported that in 1921 Dora R of Germany began a surgical transition, under the care of Magnus Hirschfeld, which ended in 1930 with a successful genital reassignment surgery (GRS). In 1930, Hirschfeld supervised the second genital reassignment surgery to be reported in detail in a peer-reviewed journal, that of Lili Elbe of Denmark. In 1923, Hirschfeld introduced the (German) term "Transsexualismus", after which David Oliver Cauldwell introduced "transsexualism" and "transsexual" to English in 1949 and 1950. Cauldwell appears to be the first to use the term to refer to those who desired a change of physiological sex. In 1969, Harry Benjamin claimed to have been the first to use the term "transsexual" in a public lecture, which he gave in December 1953. Benjamin went on to popularize the term in his 1966 book, The Transsexual Phenomenon, in which he described transsexual people on a scale (later called the "Benjamin scale") of three levels of intensity: "Transsexual (nonsurgical)", "Transsexual (moderate intensity)", and "Transsexual (high intensity)". In his book, Benjamin described "true" transsexualism as the following: True transsexuals feel that they belong to the other sex, they want to be and function as members of the opposite sex, not only to appear as such.
For them, their sex organs, the primary (testes) as well as the secondary (penis and others) are disgusting deformities that must be changed by the surgeon's knife. Relationship to transgender The term transgender was coined by John Oliven in 1965. By the 1990s, transsexual had come to be considered a subset of the umbrella term transgender. The term transgender is now more common, and many transgender people prefer the designation transgender and reject transsexual. Some people who pursue medical assistance (for example, sex reassignment surgery) to change their sexual characteristics to match their gender identity prefer the designation transsexual and reject transgender. One perspective offered by transsexual people who reject a transgender label in favor of transsexed is that, for people who have gone through sexual reassignment surgery, their anatomical sex has been altered, whilst their gender remains constant. Historically, one reason some people preferred transsexual to transgender is that the medical community in the 1950s through the 1980s encouraged a distinction between the terms that would only allow the former access to medical treatment. Other self-identified transsexual people state that those who do not seek sex reassignment surgery (SRS) are fundamentally different from those who do, and that the two have different concerns, but this view is controversial, and others argue that merely having some medical procedures does not have such far-reaching consequences as to put those who have them and those who have not (e.g. because they cannot afford them) into such distinctive categories. Some have objected to the term transsexual on the basis that it describes a condition related to gender identity rather than sexuality. For example, Christine Jorgensen, the first person widely known to have sex reassignment surgery (in this case, male-to-female), rejected transsexual and instead identified herself in newsprint as trans-gender, on this basis. The trans community has generally taken issue with the term transsexual because it over-medicalizes the trans experience and focuses too much on diagnosis.: 743  The term transgender emerged in part in an attempt to break the "medical monopoly" on transitioning that transsexual implied. GLAAD's media reference guide offers the following distinction on the use of transsexual: An older term that originated in the medical and psychological communities. As the gay and lesbian community rejected homosexual and replaced it with gay and lesbian, the transgender community rejected transsexual and replaced it with transgender. Some people within the trans community may still call themselves transsexual. Do not use transsexual to describe a person unless it is a word they use to describe themself. If the subject of your news article uses the word transsexual to describe themself, use it as an adjective: transsexual woman or transsexual man. Terminological variance The word transsexual is most often used as an adjective rather than a noun – a "transsexual person" rather than simply "a transsexual". As of 2018, use of the noun form (e.g. referring to people as transsexuals) is often deprecated by those in the transsexual community. Like other trans people, transsexual people prefer to be referred to by the gender pronouns and terms associated with their gender identity.
For example, a trans man is a person who was assigned the female sex at birth on the basis of his genitals, but despite that assignment, identifies as a man and is transitioning or has transitioned to a male gender role; in the case of a transsexual man, he furthermore has or will have a masculine body. Transsexual people are sometimes referred to with directional terms, such as "female-to-male" for a transsexual man, abbreviated to "F2M", "FTM", and "F to M", or "male-to-female" for a transsexual woman, abbreviated "M2F", "MTF" and "M to F". Individuals who have undergone and completed sex reassignment surgery are sometimes referred to as transsexed individuals; however, the term transsexed is not to be confused with the term transsexual, which can also refer to individuals who have not yet undergone SRS, and whose anatomical sex (still) does not match their psychological sense of personal gender identity. The terms gender dysphoria and gender identity disorder were not used until the 1970s, when Laub and Fisk published several works on transsexualism using these terms. "Transsexualism" was replaced in the DSM-IV by "gender identity disorder in adolescents and adults". Male-to-female transsexualism has sometimes been called "Harry Benjamin's syndrome" after the endocrinologist who pioneered the study of dysphoria. As the present-day medical study of gender variance is much broader than Benjamin's early description, there is greater understanding of its aspects, and use of the term Harry Benjamin's syndrome has been criticized for delegitimizing gender-variant people with different experiences. Sexual orientation Since the middle of the 20th century, homosexual transsexual and related terms were used to label individuals' sexual orientation based on their birth sex. Many sources criticize this choice of wording as confusing, "heterosexist", "archaic", and demeaning because it labels people by sex assigned at birth instead of their gender identity. Sexologist John Bancroft also recently expressed regret for having used this terminology, which was standard when he used it, to refer to transsexual women. He says that he now tries to choose his words more sensitively. Sexologist Charles Allen Moser is likewise critical of the terminology. Sociomedical scientist Rebecca Jordan-Young challenges researchers like Simon LeVay, J. Michael Bailey, and Martin Lalumiere, who she says "have completely failed to appreciate the implications of alternative ways of framing sexual orientation." The terms androphilia and gynephilia, which describe a person's sexual orientation without reference to their gender identity, were proposed and popularized by psychologist Ron Langevin in the 1980s. The similar specifiers attracted to men, attracted to women, attracted to both or attracted to neither were used in the DSM-IV. Many transsexual people choose the language of how they refer to their sexual orientation based on their gender identity, not their birth assigned sex. Surgical status Several terms are in common use, especially within the community itself, relating to the surgical or operative status of someone who is transsexual, depending on whether they have already had sex reassignment surgery (SRS), have not had SRS but still intend to, or do not intend to have SRS. They are post-op, pre-op, and non-op, respectively. Pre-operative A pre-operative transsexual person, or simply pre-op for short, is someone who intends to have SRS at some point, but has not yet had it.
Post-operative A post-operative transsexual person, or post-op for short, is someone who has had SRS. Non-operative A non-operative transsexual person, or non-op, is someone who has not had SRS, and does not intend to have it in the future. There can be various reasons for this, from personal to financial. Having SRS is not a requirement of being transsexual. Evolutionary biologist and trans woman Julia Serano criticizes the societal preoccupation with SRS as phallocentric, objectifying of transsexuals, and an invasion of privacy.: 229–231 Historical understanding Transgender people are known to have existed since ancient times. A wide range of societies had traditional third gender roles, or otherwise accepted trans people in some form. However, a precise history is difficult because the modern concept of being transgender, and gender in general, did not develop until the mid-1900s. Historical understandings are thus inherently filtered through modern principles, and were largely viewed through a medical lens until the late 1900s. The ancient Greek physician Hippocrates (interpreting the writing of Herodotus) discusses transgender individuals briefly. He describes the "disease of the Scythians" (regarding the Enaree), which he attributes to impotency due to riding on a horse without stirrups. Hippocrates' reference was well discussed in medical writings of the 1500s–1700s. Pierre Petit, writing in 1596, viewed the "Scythian disease" as natural variation, but by the 1700s writers viewed it as a "melancholy" or "hysterical" psychiatric disease. By the early 1800s, being transgender, separate from Hippocrates' idea of it, was claimed to be widely known, but remained poorly documented. Both MtF and FtM individuals were cited in European insane asylums of the early 1800s. The most complete account of the time came from the life of the Chevalier d'Éon (1728–1810). As cross-dressing became more widespread in the late 1800s, discussion of transgender people increased greatly and writers attempted to explain the origins of being transgender. Much study came out of Germany, and was exported to other Western audiences. Cross-dressing was seen in a pragmatic light until the late 1800s; it had previously served a satirical or disguising purpose. But in the latter half of the 1800s, cross-dressing and being transgender became viewed as an increasing societal danger. William A. Hammond wrote an 1882 account of transgender Pueblo shamans (mujerados), comparing them to the Scythian disease. Other writers of the late 1700s and 1800s (including Hammond's associates in the American Neurological Association) had noted the widespread nature of transgender cultural practices among native peoples. Explanations varied, but authors generally did not ascribe native transgender practices to psychiatric causes, instead condemning the practices in a religious and moral sense. Native groups provided much study on the subject, and perhaps the majority of all study until after WWII. Critical studies first began to emerge in the late 1800s in Germany, with the works of Magnus Hirschfeld. Hirschfeld coined the term "transvestite" in 1910 as the scope of transgender study grew. His work would lead to the 1919 founding of the Institut für Sexualwissenschaft in Berlin. Though Hirschfeld's legacy is disputed, he revolutionized the field of study. The Institut was destroyed when the Nazis seized power in 1933, and its research was infamously burned in the May 1933 Nazi book burnings.
Transgender issues went largely out of the public eye until after World War II. Even when they re-emerged, they reflected a forensic psychology approach, unlike the more sexological approach that had been employed in the lost German research. 20th century medical understanding Benjamin suggested that moderate intensity male to female transsexual people may benefit from estrogen medication as a "substitute for or preliminary to operation." Some people have had sex reassignment surgery (SRS) but do not meet the above definition of transsexual. Other people do not desire SRS although they meet the other elements of Benjamin's definition of a "true transsexual". Transsexuality was included for the first time in the DSM-III in 1980 and again in the DSM-III-R in 1987, where it was located under Disorders Usually First Evident in Infancy, Childhood or Adolescence. Beyond Benjamin's work, which focused on male-to-female (MTF) transsexual people, there are cases of the female to male transsexual, for whom genital surgery may not be practical. Benjamin gave certifying letters to his MTF transsexual patients that stated "Their anatomical sex, that is to say, the body, is male. Their psychological sex, that is to say, the mind, is female." Starting in 1968 Benjamin abandoned his early terminology and adopted that of "gender identity." Medical diagnosis Transsexualism is no longer classified as a mental disorder in the International Statistical Classification of Diseases and Related Health Problems (ICD). The World Professional Association for Transgender Health (WPATH) and many transsexual people had recommended this removal,: 743  arguing that at least some mental health professionals are being insensitive by labelling transsexualism as a "disease" rather than as an inborn trait, as many transsexuals believe it to be. Now, instead, it is classified as a sexual health condition; this classification continues to enable healthcare systems to provide healthcare needs related to gender. The eleventh edition was released in June 2018. The previous version, ICD-10, had incorporated transsexualism, dual role transvestism, and gender identity disorder of childhood into its gender identity disorder category. It defined transsexualism as "[a] desire to live and be accepted as a member of the opposite sex, usually accompanied by a sense of discomfort with, or inappropriateness of, one's anatomic sex, and a wish to have surgery and hormonal treatment to make one's body as congruent as possible with one's preferred sex." ICD-11 renamed transsexualism as Gender incongruence of adolescence or adulthood (HA60), and Gender identity disorder of childhood was renamed Gender incongruence of childhood (HA61). HA60 of the ICD-11 reads: Gender Incongruence of Adolescence and Adulthood is characterised by a marked and persistent incongruence between an individual's experienced gender and the assigned sex, which often leads to a desire to 'transition', in order to live and be accepted as a person of the experienced gender, through hormonal treatment, surgery or other health care services to make the individual's body align, as much as desired and to the extent possible, with the experienced gender. The diagnosis cannot be assigned prior to the onset of puberty. [HA61 applies before puberty] Gender variant behaviour and preferences alone are not a basis for assigning the diagnosis. Historically, transsexualism has also been included in the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM).
With the DSM-5, transsexualism was removed as a diagnosis, and a diagnosis of gender dysphoria was created in its place. This change was made to reflect the consensus view by members of the APA that transsexuality is not in and of itself a disorder and that transsexual people should not be stigmatized unnecessarily. By including a diagnosis for gender dysphoria, transsexual people are still able to access medical care through the process of transition. The current diagnosis for transsexual people who present themselves for medical treatment is gender dysphoria (leaving out those who have sexual identity disorders without gender concerns). According to the Standards of Care formulated by WPATH, formerly the Harry Benjamin International Gender Dysphoria Association, this diagnostic label is often necessary to obtain sex reassignment therapy with health insurance coverage, and the designation of gender identity disorders as mental disorders is not a license for stigmatization or for the deprivation of gender patients' civil rights. Causes, studies, and theories Causes Studies conducted on twins suggest that there are likely genetic causes of gender incongruence, although the precise genes involved are not known or fully understood. One study published in the International Journal of Transgender Health found that in 20% of identical twin pairs, if one twin was trans, the other was as well, compared to only 2.6% of non-identical twins where this was the case; researchers attribute this to their shared genetics. Sexologist Ray Blanchard created a taxonomy of male-to-female transsexualism that proposes two distinct etiologies for androphilic and gynephilic individuals; this taxonomy has become controversial, supported by J. Michael Bailey, Anne Lawrence, James Cantor and others, but opposed by Charles Allen Moser, Julia Serano, and the World Professional Association for Transgender Health. Focus on trans women over trans men Historically, formal efforts by the medical community to provide transsexual healthcare were extremely focused on transsexual women, with little thought for transsexual men. Julia Serano suggests that effemimania (the idea that male femininity is more psychopathological than female masculinity) was the driving factor. She sees this as a kind of transmisogyny (hatred of trans women as an extension of sexism).: 126–127  This effemimania conflates male homosexuality, MTF transsexuality, and feminine gender expression, while treating them all as a disease.: 129  She points to the medical community's long love of now outdated theories such as autogynephilia.: 131  Medical assistance Individuals make different choices regarding sex reassignment therapy, which may include hormones, minor to extensive surgery, social changes, and psychological interventions. The extent of medical intervention is a highly personal decision: there is no one-size-fits-all solution. Hormone replacement therapy Transsexual individuals frequently opt for masculinizing or feminizing hormone replacement therapy (HRT) to modify secondary sex characteristics. Sex reassignment therapy Sex reassignment therapy (SRT) is an umbrella term for all medical treatments related to sex reassignment of both transgender and intersex people. Sex reassignment surgery (such as orchiectomy) alters primary sex characteristics; other procedures include chest surgery, such as top surgery or breast augmentation, or, in the case of trans women, a trachea shave, facial feminization surgery or permanent hair removal.
To obtain sex reassignment therapy, transsexual people are generally required to undergo a psychological evaluation and receive a diagnosis of gender identity disorder in accordance with the Standards of Care (SOC) as published by the World Professional Association for Transgender Health. This assessment is usually accompanied by counseling on issues of adjustment to the desired gender role, effects and risks of medical treatments, and sometimes also by psychological therapy. The SOC are intended as guidelines, not inflexible rules, and are intended to ensure that clients are properly informed and in sound psychological health, and to discourage people from transitioning based on unrealistic expectations. Gender roles and transitioning After an initial psychological evaluation, trans men and trans women may begin medical treatment, starting with hormone replacement therapy or hormone blockers. In these cases, people who change their gender are usually required to live as members of their target gender for at least one year prior to genital surgery, gaining real-life experience, which is sometimes called the "real-life test" (RLT). Transsexual individuals may undergo some, all, or none of the medical procedures available, depending on personal feelings, health, income, and other considerations. Some people posit that transsexualism is a physical condition, not a psychological issue, and assert that sex reassignment therapy should be given on request. (Brown 103) Like other trans people, transsexual people may refer to themselves as trans men or trans women. Transsexual people desire to establish a permanent gender role as a member of the gender with which they identify, and many transsexual people pursue medical interventions as part of the process of expressing their gender. The entire process of switching from one physical sex and social gender presentation to another is often referred to as transitioning, and usually takes several years. Transsexual people who transition usually change their social gender roles, legal names and legal sex designation. Not all transsexual people undergo a physical transition. Some have obstacles or concerns preventing them from doing so, such as the expense of surgery, the risk of medical complications, or medical conditions which make the use of hormones or surgery dangerous. Others may not identify strongly with another binary gender role. Still others may find balance at a midpoint during the process, regardless of whether or not they are binary-identified. Many transsexual people, including binary-identified transsexual people, do not undergo genital surgery, because they are comfortable with their own genitals, or because they are concerned about nerve damage and the potential loss of sexual pleasure, including orgasm. This is especially so in the case of trans men, many of whom are dissatisfied with the current state of phalloplasty, which is typically very expensive, not covered by health insurance, and commonly does not achieve desired results. For example, not only does phalloplasty not result in a completely natural erection, it may not allow for an erection at all, and its results commonly lack penile sexual sensitivity; in other cases, however, phalloplasty results are satisfying for trans men.
By contrast, metoidioplasty, which is more popular, is significantly less expensive and has far better sexual results. Transsexual people can be heterosexual, gay, lesbian, or bisexual; many choose the language of how they refer to their sexual orientation based on their gender identity, not their birth assigned sex. Psychological treatment Psychological techniques that attempt to alter gender identity to one considered appropriate for the person's assigned sex, also known as conversion therapy, are ineffective. The widely recognized Standards of Care note that sometimes the only reasonable and effective course of treatment for transsexual people is to go through sex reassignment therapy. The need for treatment of transsexual people is emphasized by the high rate of mental health problems, including depression, anxiety, and various addictions, as well as a higher suicide rate among untreated transsexual people than in the general population. These problems are alleviated by a change of gender role and/or physical characteristics. Many transgender and transsexual activists, and many caregivers, note that these problems are not usually related to the gender identity issues themselves, but to the social and cultural responses to gender-variant individuals. Some transsexual people reject the counseling that is recommended by the Standards of Care because they do not consider their gender identity to be a cause of psychological problems. Brown and Rounsley noted that "some transsexual people acquiesce to legal and medical expectations in order to gain rights granted through the medical/psychological hierarchy." Legal needs, such as a change of sex on legal documents, and medical needs, such as sex reassignment surgery, are usually difficult to obtain without a doctor's or therapist's approval. Because of this, some transsexual people feel coerced into affirming outdated concepts of gender to overcome simple legal and medical hurdles. Regrets and detransitions People who undergo sex reassignment surgery can develop regret for the procedure later in life, largely predicted by a lack of support from family or peers, with data from the 1990s suggesting a rate of 3.8%. In a 2001 study of 232 MTF patients who underwent GRS, none of the patients reported complete regret and only 6% reported partial or occasional regrets. A 2009 review of Medline literature suggests the total rate of patients expressing feelings of doubt or regret is estimated to be as high as 8%. A 2010 meta-study, based on 28 previous long-term studies of transsexual men and women, found that the overall psychological functioning of transsexual people after transition was similar to that of the general population and significantly better than that of untreated transsexual people. Prevalence Estimates of the prevalence of transsexual people are highly dependent on the specific case definitions used in the studies, with prevalence rates varying by orders of magnitude. In the United States, the Diagnostic and Statistical Manual of Mental Disorders (DSM-5, 2013) gives the following estimates: "For natal adult males [MTF], prevalence ranges from 0.005% to 0.014%, and for natal females [FTM], from 0.002% to 0.003%."
It states, however, that these are likely underestimates since the figures are based on referrals to specialty clinics. The Amsterdam Gender Dysphoria Clinic has, over four decades, treated roughly 95% of Dutch transsexual clients, and it suggests (1997) a prevalence of 1:10,000 among assigned males and 1:30,000 among assigned females. Olyslager and Conway presented a paper at the WPATH 20th International Symposium (2007) arguing that the data from their own and other studies actually imply much higher prevalence, with minimum lower bounds of 1:4,500 male-to-female transsexual people and 1:8,000 female-to-male transsexual people for a number of countries worldwide. They estimate the number of post-op women in the US to be 32,000 and obtain a figure of 1:2,500 male-to-female transsexual people. They further compare the annual instances of sex reassignment surgery (SRS) and male births in the U.S. to obtain a figure of 1:1,000 MTF transsexual people and suggest a prevalence of 1:500 extrapolated from the rising rates of SRS in the US and a "common sense" estimate of the number of undiagnosed transsexual people. Olyslager and Conway also argue that the US population of assigned males having already undergone reassignment surgery by the top three US SRS surgeons alone is enough to account for the entire transsexual population implied by the 1:10,000 prevalence number, yet this excludes all other US SRS surgeons, surgeons in countries such as Thailand, Canada, and others, and the high proportion of transsexual people who have not yet sought treatment, suggesting that a prevalence of 1:10,000 is too low. A 2008 study of the number of New Zealand passport holders who changed the sex on their passport estimated that 1:3,639 birth-assigned males and 1:22,714 birth-assigned females were transsexual. A 2008 presentation at the LGBT Health Summit in Bristol, UK, showed that the prevalence of transsexual people in the UK was increasing (14% per year) and that the mean age of transition was rising.
Though no direct studies on the prevalence of gender identity disorder (GID) have been done, a variety of clinical papers published in the past 20 years provide estimates ranging from 1:7,400 to 1:42,000 in assigned males and 1:30,040 to 1:104,000 in assigned females. In 2015, the National Center for Transgender Equality conducted a National Transgender Discrimination Survey. Of the 27,715 transgender and genderqueer people who took the survey, 35% identified as "non-binary", 33% identified as transgender women, 29% identified as transgender men, and 3% said that "crossdresser" best described their gender identity. A 2016 systematic review and meta-analysis of "how various definitions of transgender affect prevalence estimates" in 27 studies found meta-prevalence (mP) estimates per 100,000 population of 9.2 (95% CI = 4.9–13.6), equal to 1:11,000, for surgical or hormonal gender affirmation therapy and 6.8 (95% CI = 4.6–9.1), equal to 1:15,000, for transgender-related medical condition diagnoses. Of studies assessing self-reported transgender identity, prevalence was 355 (95% CI = 144–566), equal to 1 in 282. However, a single outlier study would have influenced the result to 871 (95% CI = 519–1,224), equal to 1 in 115; this study was removed. "Significant heterogeneity was observed in most analyses." Society and culture A number of Native American and First Nations cultures have traditional social and ceremonial roles for individuals who do not fit into the usual roles for males and females in that culture. These roles can vary widely between tribes, because gender roles, when they exist at all, also vary considerably among different Native cultures. However, a modern, pan-Indian status known as Two-Spirit has emerged among LGBT Natives in recent years. Legal and social aspects Laws regarding changes to the legal status of transsexual people are different from country to country. Some jurisdictions allow an individual to change their name, and sometimes, their legal gender, to reflect their gender identity. Within the US, some states allow amendments or complete replacement of the original birth certificates. Some states seal earlier records against all but court orders in order to protect the transsexual person's privacy. In many places, it is not possible to change birth records or other legal designations of sex, although changes are occurring. Estelle Asmodelle's book documented her struggle to change the Australian birth certificate and passport laws, although there are other individuals who have been instrumental in changing laws and thus attaining more acceptance for transsexual people in general. Medical treatment for transsexual and transgender people is available in most Western countries. However, transsexual and transgender people challenge the "normative" gender roles of many cultures and often face considerable hatred and prejudice. The film Boys Don't Cry chronicles the case of Brandon Teena, a transsexual man who was raped and murdered after his status was discovered. In 1999 Brandon was memorialised in the first Transgender Day of Remembrance. The Transgender Day of Remembrance is observed annually on November 20 by members of the transgender community and LGBT+ organisations across the world. Jurisdictions allowing changes to birth records generally allow trans people to marry members of the opposite sex to their gender identity and to adopt children. 
Jurisdictions which prohibit same-sex marriage often require pre-transition marriages to be ended before they will issue an amended birth certificate. Health-practitioner manuals, professional journalistic style guides, and LGBT advocacy groups advise the adoption by others of the name and pronouns identified by the person in question, including present references to the transgender or transsexual person's past. Family members and friends who may be confused about pronoun usage or the definitions of sex are commonly instructed in proper pronoun usage, either by the transsexual person or by professionals or other persons familiar with pronoun usage as it relates to transsexual people. Sometimes transsexual people have to correct their friends and family members many times before they begin to use the transsexual person's desired pronouns consistently. According to Julia Serano, deliberate mis-gendering of transsexual people is "an arrogant attempt to belittle and humiliate trans people." Both "transsexualism" and "gender identity disorders not resulting from physical impairments" are specifically excluded from coverage under the Americans with Disabilities Act Section 12211. Gender dysphoria is not excluded. Employment issues Openly transsexual people can have difficulty maintaining employment. Most find it necessary to remain employed during transition in order to cover the costs of living and transition. However, employment discrimination against trans people is rampant and many of them are fired when they come out or are involuntarily outed at work. Transsexual people must decide whether to transition on the job, or to find a new job when they make their social transition. Other stresses that transsexual people face in the workplace are being fearful of coworkers responding negatively to their transition, and losing job experience under a previous name—even deciding which restroom to use can prove challenging. Finding employment can be especially challenging for those in mid-transition. Laws regarding name and gender changes in many countries make it difficult for transsexual people to conceal their trans status from their employers. Because the Harry Benjamin Standards of Care require one year of real-life experience prior to SRS, some feel this creates a Catch-22 situation which makes it difficult for trans people to remain employed or obtain SRS. In many countries, laws provide protection from workplace discrimination based on gender identity or gender expression, including masculine women and feminine men. An increasing number of companies are including "gender identity and expression" in their non-discrimination policies. Often these laws and policies do not cover all situations and are not strictly enforced. California's anti-discrimination laws protect transsexual persons in the workplace and specifically prohibit employers from terminating or refusing to hire a person based on their transsexuality. The European Union provides employment protection as part of gender discrimination protections following the European Court of Justice decision in P v S and Cornwall County Council. In the United States National Transgender Discrimination Survey, 44% of respondents reported not getting a job they applied for because of being transgender. 36% of trans women reported losing a job due to discrimination compared to 19% of trans men. 54% of trans women and 50% of trans men report having been harassed in the workplace. 
Transgender people who have been fired due to bias are more than 34 times as likely as members of the general population to attempt suicide. Stealth Many transsexual men and women choose to live completely as members of their gender without disclosing details of their birth-assigned sex. This approach is sometimes called stealth. Stealth transsexuals choose not to disclose their past for numerous reasons, including fear of discrimination and fear of physical violence. There are examples of people having been denied medical treatment upon discovery of their trans status, whether it was revealed by the patient or inadvertently discovered by the doctors. In the media Before transsexual people were depicted in popular movies and television shows, Aleshia Brevard—a transsexual woman whose surgery took place in 1962—was actively working as an actress and model in Hollywood and New York throughout the 1960s and 70s. Aleshia never portrayed a transsexual person, though she appeared in eight Hollywood-produced films, on most of the popular variety shows of the day, including The Dean Martin Show, and was a regular on The Red Skelton Show and One Life to Live before returning to university to teach drama and acting. In pageantry Since 2004, with the goal of crowning the top transsexual of the world, a beauty pageant by the name of The World's Most Beautiful Transsexual Contest was held in Las Vegas, Nevada. The pageant accepted pre-operation and post-operation trans women, but required proof of their gender at birth. The winner of the 2004 pageant was a woman named Mimi Marks. Jenna Talackova, the 23-year-old woman who forced Donald Trump and his Miss Universe Canada pageant to end its ban on transgender contestants, competed in the pageant on May 19, 2012, in Toronto. On January 12, 2013, Kylan Arianna Wenzel was the first transgender woman allowed to compete in a Miss Universe Organization pageant since Donald Trump changed the rules to allow women like Wenzel to enter officially. Wenzel was the first transgender woman to compete in a Miss Universe Organization pageant since officials disqualified 23-year-old Miss Canada Jenna Talackova the previous year after learning she was transgender. See also List of transgender-related topics List of transgender-rights organizations List of LGBT-related organizations List of transgender people Transgender References Bibliography Benjamin, Harry (1966). The Transsexual Phenomenon. Julian Press, Incorporated Publishers. OCLC 1138665289. Brown, Mildred L.; Chloe Ann Rounsley (1996). True Selves: Understanding Transsexualism – For Families, Friends, Coworkers, and Helping Professionals. Jossey-Bass. ISBN 978-0-7879-6702-4. OCLC 51437864. Feinberg, Leslie (1999). Trans Liberation: Beyond Pink or Blue. Beacon Press. ISBN 978-0-8070-7951-5. OCLC 38732343. Standards of Care for the Health of Transsexual, Transgender, and Gender-Nonconforming People (PDF) (Report). 7. World Professional Association for Transgender Health. 2012. Archived (PDF) from the original on 11 May 2022. Kruijver, Frank P. M.; Zhou, Jiang-Ning; Pool, Chris W.; Hofman, Michel A.; Gooren, Louis J. G.; Swaab, Dick F. (1 May 2000). "Male-to-Female Transsexuals Have Female Neuron Numbers in a Limbic Nucleus". The Journal of Clinical Endocrinology and Metabolism. The Endocrine Society. 85 (5): 2034–41. doi:10.1210/jcem.85.5.6564. ISSN 0021-972X. PMID 10843193. Archived from the original on 6 February 2007. Retrieved 25 February 2007. Rathus, Spencer A.; Jeffery S. 
Nevid, Lois Fichner-Rathus (2002). Human Sexuality in a World of Diversity. Allyn & Bacon. ISBN 978-0-205-40615-9. OCLC 55502508. Schreiber, Gerhard (2016). Transsexuality in Theology and Neuroscience. Findings, Controversies, and Perspectives (in German). Walter de Gruyter. ISBN 978-3-11-044080-5. OCLC 962412457. Pepper, Shanti M.; Lorah, Peggy (2008). "Career Issues and Workplace Considerations for the Transsexual Community: Bridging a Gap of Knowledge for Career Counselors and Mental Health Care Providers". The Career Development Quarterly. Wiley. 56 (4): 330–343. doi:10.1002/j.2161-0045.2008.tb00098.x. ISSN 0889-4019. ProQuest 219546491. External links The International Journal of Transgenderism – The Official Journal of the World Professional Association for Transgender Health (formerly HBIGDA). An archive of IJT Volumes I through V is available, as are several books on transsexualism, including Harry Benjamin's The Transsexual Phenomenon
Basophilia
Basophilia is the condition of having greater than 200 basophils/μL in the venous blood. Basophils are the least numerous of the myelogenous cells, and it is rare for their numbers to be abnormally high without changes to other blood components. Rather, basophilia is most often coupled with other white blood cell conditions such as eosinophilia (high levels of eosinophils in the blood). Basophils are easily identifiable by a blue coloration of the granules within each cell, marking them as granulocytes, in addition to segmented nuclei. Causes Basophilia can be attributed to many causes and is typically not sufficient evidence alone to signify a specific condition when isolated as a finding under microscopic examination. Coupled with other findings, such as abnormal levels of neutrophils, it may suggest the need for additional workup. As an example, additional evidence of left-shifted neutrophilia alongside basophilia most often points to chronic myeloid leukemia (CML) or another myeloproliferative neoplasm. Additionally, basophilia in the presence of numerous circulating blasts suggests the possibility of acute myeloid leukemia. Elevation of basophils may also reflect other underlying neoplasms such as polycythemia vera (PV), myelofibrosis, thrombocythemia, or, in rare cases, solid tumors. Root causes other than these neoplastic conditions are most commonly allergic reactions or chronic inflammation related to infections such as tuberculosis, influenza, inflammatory bowel disease, or an inflammatory autoimmune disease. Chronic hemolytic anemia and infectious diseases such as smallpox also demonstrate elevated basophil levels. Use of certain drugs and ingestion of certain foods can also correlate with symptoms of basophilia. Diagnosis Basophilia can be detected through a complete blood count (CBC). The root cause of basophilia can be determined through a bone marrow biopsy, genetic testing to look for genetic mutations, or ultrasound to determine enlargement of the spleen. A bone marrow aspirate may be utilized to confirm an increase in basophils or significantly high numbers of precursors to the granulocytes. Since basophilia is present in a vast range of clinical conditions, depending on a variety of underlying causes, supplemental signs and symptoms must be investigated for a diagnosis. If splenomegaly is detected, a myeloproliferative syndrome may be suspected. Intrinsically related symptoms such as fever, malaise, pruritus (itching) due to the release of histamine, fatigue, and right upper quadrant pain may be present in the afflicted patient. With some conditions, such as polycythemia vera, erythromelalgia (burning of the palms and soles) coupled with thrombocytosis is common. This severe symptomatology may require urgent attention. If basophilia and the aforementioned symptoms are present with concurrent eosinophilia greater than 1500 cells/μL, hypereosinophilic syndrome may be considered. In cases of underlying allergic reactions or adverse sensitivity, skin rashes may be present. After symptomatic evaluation, a peripheral blood smear is examined in order to determine cell counts. In cases of a suspected myeloid neoplasm, a bone marrow biopsy will be performed utilizing cytogenetic analysis. This type of testing examines the karyotype of each type of leukocyte and looks for significant abnormalities in the conventional karyotypes that could support the diagnosis of a neoplastic process. 
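As a simple illustration of the numeric thresholds mentioned above, the following Python sketch flags basophilia and concurrent marked eosinophilia from absolute counts in a CBC. The function name and input format are hypothetical, and the cutoffs (more than 200 basophils/μL; eosinophils above 1500 cells/μL) are taken from the figures quoted in this article rather than from any particular laboratory's reference ranges.

```python
# Hypothetical helper: interpret absolute basophil and eosinophil counts
# using the thresholds quoted in this article (not laboratory reference ranges).

def flag_granulocyte_elevations(basophils_per_ul: float, eosinophils_per_ul: float) -> list[str]:
    """Return a list of findings suggested by the absolute counts."""
    findings = []
    if basophils_per_ul > 200:          # basophilia: > 200 basophils/uL of venous blood
        findings.append("basophilia")
    if eosinophils_per_ul > 1500:       # marked eosinophilia; with compatible symptoms,
        findings.append("eosinophilia > 1500/uL")  # hypereosinophilic syndrome may be considered
    return findings

# Example: a CBC with 350 basophils/uL and 1800 eosinophils/uL
print(flag_granulocyte_elevations(350, 1800))  # ['basophilia', 'eosinophilia > 1500/uL']
```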
Basophilia on its own causes few complications other than those related to the primary causative condition. However, basophils can degranulate and cause tissue damage; this can be avoided with early detection and intervention. Treatment Basophilia, as it is primarily a secondary condition, is treated by addressing the causative disease or disorder. The underlying condition will determine what treatment is appropriate. In cases of allergic reactions or chronic inflammation in particular, treating the underlying cause is critical to avoid further, potentially irreparable damage to the body's organ systems. Common treatments for allergic reactions include discontinuing the offending agent and administering antihistamines. Infection-related basophilia can be remedied by using antibiotics to treat the underlying causative infection, whereas neoplasm-related basophilia may have a more complicated clinical course including chemotherapy and periodic phlebotomy. References
Egg donation
Egg donation is the process by which a woman donates eggs to enable another woman to conceive as part of an assisted reproduction treatment or for biomedical research. For assisted reproduction purposes, egg donation typically involves in vitro fertilization technology, with the eggs being fertilized in the laboratory; more rarely, unfertilized eggs may be frozen and stored for later use. Egg donation is a form of third-party reproduction within assisted reproductive technology. In the United States, the American Society for Reproductive Medicine has issued guidelines for these procedures, and the Food and Drug Administration has a number of guidelines as well. Boards in countries outside the US have similar regulations. However, egg donation agencies in the U.S. can choose whether to abide by the society's regulations or not. History The first child born from egg donation was reported in Australia in 1983. In July 1983, a clinic in Southern California reported a pregnancy using egg donation, which led to the birth of the first American child born from egg donation on 3 February 1984. This procedure was performed at the Harbor UCLA Medical Center and the University of California at Los Angeles School of Medicine. In the procedure, which is no longer used today, a fertilized egg that was just beginning to develop was transferred from one woman in whom it had been conceived by artificial insemination to another woman who gave birth to the infant 38 weeks later. The sperm used in the artificial insemination came from the husband of the woman who bore the baby. Prior to this, thousands of infertile women, single men and same-sex male couples had adoption as the only path to parenthood (those who don't accept sexual contact with a person who is not their constant partner). Advances in IVF and egg donation set the stage to allow open and candid discussion of oocyte and embryo donation as a common practice. This breakthrough has given way to the donation of human oocytes and embryos as a common practice similar to other donations such as blood and major organ donations. At the time of this announcement the event was captured by major news carriers and fueled healthy debate and discussion on this practice, which affected the future of reproductive medicine by creating a platform for further advancements in women's health. This scientific breakthrough changed the outlook for those who were unable to have children due to female infertility and for women who are at high risk for passing on genetic disorders. As IVF developed, the procedures used in egg donation paralleled that development: the egg donor's eggs are now harvested from her ovaries in an outpatient surgical procedure and fertilized in the laboratory, the same procedure used on IVF patients, but the resulting embryo or embryos are then transferred into the intended mother instead of into the woman who provided the egg. Donor oocytes thus give women a mechanism to become pregnant and give birth to a child that will be their biological child (assuming that the recipient woman carries the baby), but not their genetic child. In cases where the recipient's womb is absent or unable to carry a pregnancy, or in cases involving gay male couples, a gestational surrogate is used and the embryos are implanted into her per an agreement with the recipients. The combination of egg donation and surrogacy has enabled gay men, including singer Elton John and his partner, to have biological children. 
Oocyte and embryo donation now account for approximately 18% of recorded in vitro fertilization births in the US. This work established the technical foundation and legal-ethical framework surrounding the clinical use of human oocyte and embryo donation, a mainstream clinical practice, which has evolved over the past 25 years. Building upon this groundbreaking research and since the initial birth announcement in 1984, well over 47,000 live births resulting from donor oocyte embryo transfer have been and continue to be recorded by the Centers for Disease Control (CDC) in the United States to infertile women, who otherwise would not have had children by any other existing method. The legal status and cost/compensation models of egg donation vary significantly by country. It may be totally illegal (e.g., Italy, Germany, Austria); legal only if anonymous and gratuitous—that is, without any compensation for the egg donor (e.g., France); legal only if non-anonymous and gratuitous (e.g., Canada); legal only if anonymous, but egg donors may be compensated (the compensation is often described as being to offset her inconvenience and expenses) (e.g., Spain, Czech Republic, South Africa, Greece); legal only if non-anonymous, but egg donors may be compensated (e.g., the UK); or legal whether or not it is anonymous, and egg donors may be compensated (e.g., the US). Indication A need for egg donation may arise for a number of reasons. Infertile couples may resort to egg donation when the female partner cannot have genetic children because her own eggs cannot generate a viable pregnancy, or because they could generate a viable pregnancy but the chances are so low that it is not advisable or not financially feasible to do IVF with her own eggs. This situation is often, but not always, based on advanced reproductive age. It can also be due to early onset of menopause, which can occur as early as a woman's 20s. In addition, some women are born without ovaries, while some women's reproductive organs have been damaged or surgically removed due to disease or other circumstances. Another indication would be a genetic disorder on the part of the woman that either renders her infertile or would be dangerous for any offspring, problems that can be circumvented by using eggs from another woman. Many women have none of these issues, but continue to be unsuccessful using their own eggs—in other words, they have undiagnosed infertility—and thus turn to donor eggs or donor embryos. As stated above, egg donation is also helpful for gay male couples using surrogacy (see LGBT parenting). In the US and UK, if desired (and if the egg donor agrees), the couple can meet and get acquainted with the egg donor, her children and family members. More often, egg donations are anonymous or semi-anonymous (i.e. the egg donor may provide personal and medical information, photographs of herself and/or family members, and an email or third party willing to convey communications between the donor and recipients). In some countries, the law requires non-anonymity (e.g., the UK). In other countries, the law requires anonymity (e.g., France, Spain, the Czech Republic, South Africa). In the US the choice between anonymity, semi-anonymity and non-anonymity is made by the donor and recipient, although some IVF clinics that maintain their own databases of egg donors strongly encourage or require anonymity. 
Congenital absence of eggs: Turner syndrome, gonadal dysgenesis. Acquired reduced egg quantity/quality: oophorectomy, premature menopause, chemotherapy, radiation therapy, autoimmunity, advanced maternal age, compromised ovarian reserve. Other: diseases of X-sex linkage, repetitive fertilization or pregnancy failure, ovaries inaccessible for egg retrieval. Types of donors Donors include the following types: Donors unrelated to the recipients who do it for altruistic and/or monetary reasons. In the US they are anonymous donors or semi-anonymous donors recruited by egg donor agencies or IVF clinics. Such donors may also be non-anonymous donors, i.e., they may exchange identifying and contact information with the recipients. In most countries other than the US and UK, the law requires such donors to remain anonymous. US donors are often recruited by agencies who act as intermediaries, typically with promises of money and altruistic rewards. Designated donors, e.g. a friend or relative brought by the patients to serve as a donor specifically to help them. In Sweden and France, couples who can bring such a donor still get another person as a donor, but instead are advanced on the waiting list for the procedure, and that donor rather becomes a "cross donor". In other words, the couple brings a designated donor, she donates anonymously to another couple, and the couple that brought her receives eggs from another anonymous donor much more quickly than they would have if they had not been able to provide a designated donor. Patients taking part in shared oocyte programmes. Women who go through in vitro fertilization may be willing to donate unused eggs to such a program, where the egg recipients together help pay the cost of the in vitro fertilisation (IVF) procedure. It is very cost-effective compared to other alternatives. The pregnancy rate with use of shared oocytes is similar to that with altruistic donors. Procedure Egg donors are first recruited, screened, and give consent before participating in the IVF process. Once the egg donor is recruited, she undergoes IVF stimulation therapy, followed by the egg retrieval procedure. After retrieval, the ova are fertilized by the sperm of the male partner (or sperm donor) in the laboratory, and, after several days, the best resulting embryo(s) is/are placed in the uterus of the recipient, whose uterine lining has been appropriately prepared for embryo transfer beforehand. The recipient is usually, but not always, the person who requested the service and then will carry and deliver the pregnancy and keep the baby. The egg donor's process in detail Before any intensive medical, psychological, or genetic testing is done on a donor, they must first be chosen by a recipient from the profiles on agency or clinic databases (or, in countries where donors are required to remain anonymous, they are chosen by the recipient's doctor based on their physical and temperamental resemblance to the recipient woman). This is because all of the mentioned examinations are expensive and the agencies must first confirm that a match is possible or guaranteed before investing in the process. Each egg donor is first referred to a psychologist who will evaluate if she is mentally prepared to undertake and complete the donation process. These evaluations are necessary to ensure that the donor is fully prepared and capable of completing the donation cycle safely and successfully. 
The donor is then required to undergo a thorough medical examination, including a pelvic exam, an AMH blood test to check hormone levels, testing for infectious diseases, Rh factor, blood type and drug use, and an ultrasound to examine her ovaries, uterus and other pelvic organs. A family history of approximately the past three generations is also required, meaning that adoptees are usually not accepted because of the lack of past health knowledge. Genetic testing is also usually done on donors to ensure that they do not carry mutations (e.g., cystic fibrosis) that could harm the resulting children; however, not all clinics automatically perform such testing and thus recipients must clarify with their clinics whether such testing will be done. Once the screening is complete and a legal contract signed, the donor will begin the donation cycle, which typically takes between three and six weeks. An egg retrieval procedure comprises both the egg donor's cycle and the recipient's cycle. Birth control pills are administered during the first few weeks of the egg donation process to synchronize the donor's cycle with the recipient's, followed by a series of injections which halt the normal functioning of the donor's ovaries. These injections may be self-administered on a daily basis for a period of one to three weeks. Next, follicle-stimulating hormones (FSH) are given to the donor to stimulate egg production and increase the number of mature eggs produced by the ovaries. Throughout the cycle the donor is monitored frequently by a physician using blood tests and ultrasound exams to determine the donor's reaction to the hormones and the progress of follicle growth. Once the doctor decides the follicles are mature, they will establish the date and time for the egg retrieval procedure. Approximately 36 hours before retrieval, the donor must administer one last injection of the hCG hormone to ensure that her eggs are ready to be harvested. The egg retrieval itself is a minimally invasive surgical procedure lasting 20–30 minutes, performed under sedation by an anaesthetist, to ensure the donor is kept completely pain-free. Egg donors may also be advised to take a pain-relieving medicine one hour before egg collection, to ensure minimum discomfort after the procedure. A small ultrasound-guided needle is inserted through the vagina to aspirate the follicles in both ovaries, which extracts the eggs. After resting in a recovery room for an hour or two, the donor is released. Most donors resume regular activities by the next day. Results In the United States, egg donor cycles have a success rate of over 60%. (See statistics at http://www.sart.org.) When a "fresh cycle" is followed by a "frozen cycle", the success rate with donor eggs is approximately 80%. With egg donation, women who are past their reproductive years or menopause can still become pregnant. Adriana Iliescu held the record as the oldest woman to give birth using IVF and a donated egg, when she gave birth in 2004 at the age of 66, a record passed in 2006. According to a 2002 study, egg donations had a 38% success rate in cases of women past their reproductive years. Recipient and donor motivation Intended parent motivation Women may resort to egg donation because their ovaries may not be able to produce a substantial number of viable eggs. Women may experience premature ovarian failure and stop producing viable eggs during their reproductive years. Some women may be born without ovaries. 
Ovaries damaged by chemotherapy or radiotherapy may also no longer produce healthy eggs. Older women with diminished ovarian reserves or older women who are going through menopause could also become pregnant with egg donation. Women who produce healthy eggs may also elect to use a donor egg so they will not pass on genetic diseases. Donor motivation An egg donor may be motivated to donate eggs for altruistic reasons. A survey of 80 American women showed that 30% were motivated by altruism alone, another 20% were attracted only by monetary compensation, while 40% of donors were motivated by both reasons. The same study found that 45% of egg donors were students the first time they donated and averaged $4,000 for each donation. Although donors may be motivated by both monetary and altruistic reasons, egg agencies prefer donors who provide eggs strictly for altruistic reasons. The European Union limits any financial compensation for donors to at most $1,500. In some countries, most notably Spain and Cyprus, this has limited donors to the poorest segments of society. In the United States, a donor is paid regardless of how many eggs she produces. A donor's compensation may increase for each additional time she provides eggs, especially if the donor's eggs have a history of reliably resulting in the recipient becoming pregnant. In the United States, egg-broker agencies are known for advertising to college students, who are more likely to be in financial situations that motivate them to participate for the financial compensation. It is not unusual for one student to donate many times. Often, this is done without consideration of potential long-term health consequences. Such a student is arguably not making the decision to donate her eggs autonomously due to her unfavorable financial situation. Risks Egg donor The procedures for the donor and the medication given to her are identical to the procedures and medications used in autologous IVF (i.e., IVF on patients who are using their own eggs). The egg donor thus has the same low risk of complications from IVF as an autologous IVF patient would, such as bleeding from the oocyte recovery procedure and reaction to the hormones used to induce hyperovulation (producing more than one egg), including ovarian hyperstimulation syndrome (OHSS) and, rarely, liver failure. According to Jansen and Tucker, writing in the same assisted reproductive technologies textbook referenced above, the risk of OHSS varies with the clinic administering the hormones, from 6.6 to 8.4% of cycles, half of them "severe". The most severe form of OHSS is life-threatening. Recent studies have found that donors were at less risk of OHSS when the final maturation of oocytes was induced by a GnRH agonist than with recombinant hCG. Both hormones were comparable in the number of mature oocytes produced and fertilization rates. A larger study in the Netherlands found 10 documented cases of deaths from IVF, with a rate of 1:10,000. "All of these patients were treated with GnRH agonists and none of these cases have been published in the scientific literature." The long-term effect of egg donation on donors has not been well studied, but because the same medications and procedures are used, it should be essentially the same as the long-term effect (if any) of IVF on patients using their own eggs. 
The evidence of increased cancer risk is equivocal; some studies have pointed to a slightly increased risk while other studies have found no such risk or even a slightly reduced risk in most patients (women with a family history of breast cancer, however, may have a higher risk). One in five women report psychological effects—which may be positive or negative—from donating their eggs, and two-thirds of egg donors were happy with the decision to donate their eggs. The same study found that 20% of women did not recall being aware of any physical risks. In accordance with the American Society for Reproductive Medicine guidelines, female donors are given a limit of 6 cycles that they may donate in order to minimize the possible health risks. However, it appears that repetitive oocyte donation cycles do not cause accelerated ovarian aging, as evidenced by the absence of decreased anti-Müllerian hormone (AMH) in such women. Intended parent The recipient has a minimal risk of contracting a transmittable disease. While the donor may test negative for HIV, such testing does not exclude the possibility that the donor has contracted HIV very recently, so the recipient faces a residual risk of exposure. However, the FDA governs this and requires full infectious disease testing no more than 30 days prior to retrieval and/or transfer. Most clinics now require, however, that donors be retested a few days prior to retrieval so the risk to the recipient is minimal. Intimate partners of both the egg donor and the recipient are also tested. The recipient also trusts that the medical history of the donor and her family is accurate. The importance of this trust should not be underestimated. Donors in the US are paid thousands of dollars; such compensation may attract unscrupulous individuals inclined to conceal their true motivations. However, a full psychological evaluation is required by most IVF clinics, giving an indication of whether the donor is trustworthy. More often than not, there is no ongoing relationship between the donor and recipient following the cycle. Both the donor and recipient agree in formal legal documents that the donation of the eggs is final at the time of retrieval, and typically both parties would like any "relationship" to conclude at that point; if they prefer continued contact, they may provide for that in the contract. Even if they prefer anonymity, however, it remains theoretically possible that in the future, some children may be able to identify their donor(s) using DNA databanks and/or registries (e.g., if the donor submits her DNA to a genealogy site and a child born from her donation later submits its DNA to the same site). Multiple birth is a common complication. Incidence of twin births is very high. At the present time, the American Society for Reproductive Medicine recommends that no more than 1 or 2 embryos be transferred in any given cycle. Remaining embryos are frozen, whether for future transfers if the first one fails, for siblings, or for eventual embryo donation. There appears to be a slightly higher risk of pregnancy-induced hypertension in egg donation pregnancies. Fetus Pregnancies with egg donation are associated with a slightly increased risk of placental pathology. 
The local and systemic immunologic changes are also more pronounced than in natural pregnancies, so it has been suggested that the association is caused by reduced maternal immune tolerance towards the fetus, as the genetic similarity between the carrier and fetus from an egg donation is less than in a natural pregnancy. In contrast, the incidence of other perinatal complications, such as intrauterine growth restriction, preterm birth and congenital malformations, is comparable to conventional IVF without egg donation. Custody Generally, legal documents are signed renouncing rights and responsibilities of custody on the part of the donor. Most IVF doctors will not proceed with administering medication to any donor until these documents are in place and a legal "clearance letter" confirming this understanding is provided to the doctor. Legality and financial issues The legal status and cost/compensation models of egg donation vary significantly by country. It may be totally illegal (e.g., Italy, Germany); legal only if anonymous and gratuitous—that is, without any compensation for the egg donor (e.g., France); legal only if non-anonymous and gratuitous (e.g., Canada); legal only if anonymous, but egg donors may be compensated (the compensation is often described as being to offset her inconvenience and expenses) (e.g., Spain, Czech Republic, South Africa); legal only if non-anonymous, but egg donors may be compensated (e.g., the UK); or legal whether or not it is anonymous, and egg donors may be compensated (e.g., the US). Because most countries prohibit the sale of body parts, egg donors generally are paid for undergoing the necessary medical procedures rather than for their eggs. In other words, if they complete the cycle, they will be paid the agreed price regardless of how many (or how few) eggs are retrieved. In countries that prohibit compensation there is an extreme dearth of young women willing to go through this procedure. Additionally, in most countries where it is legal and compensated, the law places a cap on the compensation and that cap tends to be in the vicinity of $1,000–$2,000. In the US, no law caps the compensation, but the American Society for Reproductive Medicine requires member clinics to abide by their standards, which provide that "sums of $5,000 or more require justification and sums above $10,000 are not appropriate." The "justification" for payments over $5,000 may include previous successful donations, unusually good family health history, or membership in minority ethnicities for which it is more difficult to find donors. As a result of these legal and financial differences around the world, egg donation in the US is much more expensive than it is in other countries. For instance, at one top US clinic it costs more than $26,000 plus the donor's medications (another several thousand dollars). Having an attorney draft a contract is recommended in order to ensure that the donor has no possible legal rights or responsibilities over the child or any frozen embryos. Hiring an attorney who specializes in reproductive law is thus strongly recommended, at least in the United States; other countries may have other procedures for clarifying the parties' rights, or may simply have legislation that defines the parties' rights. In the US, before the egg donor's IVF cycle begins she typically must sign the Egg Donor Contract, which specifies the rights of the donor and the recipient(s) with respect to the retrieved eggs, the embryos, and any children conceived from the donation. 
Such contracts should specify that the recipients are the legal parents of the child and the legal owners of any eggs or embryos resulting from the cycle; in other words, while the donor has the right to cancel the cycle at any time prior to egg donation (although if she does so the contract generally provides that she will not be paid), once the eggs are retrieved they belong to the recipient(s). In individual cases the donors and parents may also wish to negotiate terms relating to any unused embryos (e.g., some donors would prefer that unused embryos be destroyed or donated to science, while others would prefer or allow them to be donated to another infertile couple). Some states have also adopted the Uniform Parentage Act, which provides that the recipient or recipients have complete parental responsibility for the conceived child. In Buzzanca v. Buzzanca, 72 Cal. Rptr.2d 280 (Cal. Ct. App. 1998), the court held that both the recipient and the father of a child conceived through anonymous sperm and egg donation and carried by a surrogate were the legal parents of the child by virtue of their procreative intent. Therefore, the father was required to pay child support even though he sought a divorce before the child was born. Donor registries A donor registry is a registry that helps donor conceived people, sperm donors and egg donors establish contact with genetic kindred. They are mostly used by donor conceived people to find genetic half-siblings from the same egg or sperm donor. Some donors are non-anonymous, but most are anonymous, i.e. the donor conceived person does not know the true identity of the donor. Still, they may get the donor number from the fertility clinic. If that donor had donated before, then other donor conceived people with the same donor number are thus genetic half-siblings. In short, donor registries match people who type in the same donor number. Alternatively, if the donor number is not available, then known donor characteristics, e.g. hair, eye and skin color, may be used in matching. Donors may also register, and therefore donor registries may also match donors with their genetic children. The largest registry is the Donor Sibling Registry; with more than 25,000 members, the DSR has matched almost 7,000 donor conceived people with their egg and sperm donors, as well as with their half-siblings. Alternate methods of providing an information link between the donor and recipient (both agreeing to stay registered on the DSR) are often provided for in the legal document (referred to as the "Egg Donor Agreement"). Embryo donation An alternative to egg donation in some couples, especially those in whom the male partner cannot provide viable sperm, is embryo donation. Embryo donation is the use of embryos remaining after a couple's IVF treatments have been completed, to another individual or couple, followed by the placement of those embryos into the recipient woman's uterus, to facilitate pregnancy and childbirth. Embryo donation is more cost-effective than egg donation on a "per live birth" basis. Another study has found that embryos created for one couple, using an egg donor, are often made available for donation to another couple if the first couple chooses not to use them. Psychological and social issues Quality of Parent-Infant Relationships Quality of parent-child attachment in early infancy has been recognized as a crucial influence on a child's socioemotional development. 
The formation of a quality and secure attachment is largely influenced by parental representations of the parent-child relationship. Concern regarding relationship quality and attachment security in egg donor families is understandable and typically stems from the absence of genetic material shared between the mother and child. In recent years, researchers have begun to question if lack of genetic commonality between mother and child inhibits the ability to form a quality attachment. In a recent study, quality of infant-parent relationships was examined among egg donor families in comparison to in vitro fertilization families. Infants were between 6 and 18 months of age. Through use of the Parent Development Interview (PDI) and observational assessment, the study found few differences between family types on the representational level, yet significant differences between family types on the observational level. Egg donation mothers were less sensitive and structuring than IVF mothers, and egg donation infants were less emotionally responsive and involving than IVF infants. No differences were found in relationship quality between egg donor fathers and IVF fathers representationally or observationally. Due to the developmental implications of forming healthy parent-child relationships in early infancy, the finding that egg donor mothers were less sensitive and structuring towards their infants raises concern about attachment styles among egg donor families, and the impact this may have on infants' future socioemotional development. Telling the child Most psychologists recommend being open and honest with children from an early age. Groups for donor conceived children make a strong case for the rights of children to have access to information about their genetic background. For donor conceived children who find out after a long period of secrecy, their main grief is usually not the fact that they are not the genetic child of the mother who raised (and, usually, gave birth to) them, but the fact that their parents lied to them, causing loss of trust. Furthermore, assuming that egg-donor conceived children have essentially the same reaction as sperm-donor conceived children, the overturning of one's lifelong understanding of who one's genetic parents were may cause a lasting sense of imbalance and loss of control. Telling the children that they were donor conceived is recommended, based on decades of experience with adoption (and more recent feedback from donor-conceived children) showing that not telling children is harmful to the parent-child relationship and to the child psychologically. Even parents who would normally be extremely reluctant to tell the child should consider telling if any of the following scenarios applies: When anyone other than the parents knows about the donation, such that the child might find it out from somebody else. When the recipient carries a significant genetic disease, since telling the child will reassure the child that they do not carry the disease. Where the child is found to have a genetically transmitted disorder and it is necessary to take legal action which then identifies the donor. 
Conversely, when the child is being raised in a religion or a culture that strongly disapproves of donor conception (e.g., a Catholic country where egg donation is illegal), that may counsel against telling the child, at least until the child is much older and clearly capable of understanding why they were not told earlier and of keeping that information to themself. A systematic review of factors contributing to parental decision-making in disclosing donor conception has shown that parents cite the child's best interest as the main factor they use to make the decision. Parents who disclose donor conception to the child emphasize the importance of an honest parent-child relationship, while parents who do not disclose express their desire to protect the child from social stigma or other trauma. Health care staff and support groups have been demonstrated to affect the decision to disclose the procedure. It is generally recommended that parents who disclose should do so in age-appropriate ways, ideally starting well before the age of five with a discussion of the fact that their parents needed help to have a child because certain things are needed to make a child—namely, sperm and eggs—and because the parents did not have one of those things, a nice woman gave it to them. Families sharing the same donor Contact and meetings among families sharing the same donor generally have positive effects. It gives the child an additional extended family and may help give the child a sense of identity by answering questions about the donor. It is more common among open-identity families headed by single men/women. Less than 1% of those seeking donor-siblings find it a negative experience, and in such cases it is mostly where the parents disagree with each other about how the relationship should proceed. Other family members Parents of donors may regard the donated eggs as a family asset and may regard the donor conceived people as grandchildren. Donor marketing For a donor to be accepted by an agency and repeatedly used she must be marketable and appealing to the recipients. Although egg donation is a significant, life-giving act, the companies participating in this industry still have to operate with an economic mindset. Matches between egg recipients and egg donors are what generate profit for the company and make it possible to continue these processes for others. The most sought-after donors tend to be those who are (1) proven (i.e., have donated before and produced a pregnancy from it, proving themselves both fertile and reliable); (2) conventionally attractive; (3) healthy, with good family health histories; and (4) smart and well educated. Donor profiles presented on agency websites are their primary marketing tool to find recipients and learn what these future consumers want. On the donor profiles listed on the agency website for recipients, or "clients", to peruse for their desired egg match, "physical characteristics, family health history, educational attainment (in some cases, standardized test scores, GPA, and IQ scores are requested), as well as open-ended questions about hobbies, likes and dislikes, and motivations for donating" are included. Donors are encouraged to submit attractive photos and are advised of what recipients find desirable. Profiles that are at some point deemed unacceptable are deleted, whether it be because their personalities did not stand out or their portrayals were viewed as negative in some way. 
Overweight volunteers for donation are also most often not accepted, not just because of conventional views on physical attractiveness but also because women with a higher body-mass index tend to respond differently (less well) to ovarian stimulation drugs, and IVF clinics thus generally recommend that patients not use donors with higher BMIs. Egg donors also have a higher standard of physical appearance than sperm donors; many sperm donors are not required to provide adult photographs of themselves, or in some cases, any photographs. Religious views Some Christian leaders indicate that IVF is acceptable (provided that no fertilized embryos are discarded in the process). Many Christian couples who cannot have children thus can go for IVF, with both the husband's sperm and the wife's egg, and this is in line with the church's teaching. However, the issue is more problematic with donor eggs. There are also some Christian leaders (especially Catholic) who are concerned about all in vitro fertility therapies because they disrupt the natural act of conceiving a child, where gamete donations, both egg and sperm donations, are seen to "compromise the marital bond and family integrity", and they encourage infertile couples to consider adoption instead. In the Orthodox Jewish community there is no consensus as to whether an egg donor needs to be Jewish in order for the child to be considered Jewish from birth. In the 1990s religious authorities said that if the birth mother was Jewish, the child would be Jewish as well, but in the past few years rabbis in Israel have begun to reconsider, which in turn is causing more debate around the world. Conservative Rabbi Elliot Dorff has suggested that there are arguments for both sides (birth mother or genetic mother) in religious scripture. The dean of the Center for the Jewish Future at Yeshiva University believes that any child whose birth mother or genetic mother isn't Jewish should go through a conversion process in infancy, to be sure that their Judaism isn't questioned later in life. This is not an issue in the Reform community for two reasons. First, only one parent must be Jewish for the child to be considered Jewish; thus, if the father is Jewish, the mother's religion is irrelevant. Second, if the mother who carries the pregnancy and gives birth is Jewish, Reform Jews will generally consider that child to be Jewish from birth because it was born of a Jewish mother. See also Donor conceived person Sperm donation Surrogacy Third-party reproduction References External links Egg Donation at Curlie
Gestational age
In obstetrics, gestational age is a measure of the age of a pregnancy, taken from the beginning of the woman's last menstrual period (LMP), or the corresponding age of the gestation as estimated by a more accurate method if available. Such methods include adding 14 days to a known duration since fertilization (as is possible with in vitro fertilization), or using obstetric ultrasonography. This definition of gestational age is popular because menstrual periods are essentially always noticed, while there is usually no convenient way to discern when fertilization occurred. Gestational age is contrasted with fertilization age, which takes the date of fertilization as the start date of gestation. The initiation of pregnancy for the calculation of gestational age can differ from definitions of initiation of pregnancy in the context of the abortion debate or beginning of human personhood. Methods According to the American College of Obstetricians and Gynecologists, the main methods to calculate gestational age are: Directly calculating the days since the beginning of the last menstrual period. Early obstetric ultrasound, comparing the size of an embryo or fetus to that of a reference group of pregnancies of known gestational age (such as calculated from last menstrual periods) and using the mean gestational age of other embryos or fetuses of the same size. If the gestational age as calculated from an early ultrasound is contradictory to the one calculated directly from the last menstrual period, it is still the one from the early ultrasound that is used for the rest of the pregnancy. In the case of in vitro fertilization, calculating days since oocyte retrieval or co-incubation and adding 14 days. Gestational age can also be estimated by calculating days from ovulation if it was estimated from related signs or ovulation tests, and adding 14 days by convention. A more complete listing of methods is given in the following table: As a general rule, the official gestational age should be based on the actual beginning of the last menstrual period, unless any of the above methods gives an estimated date that differs by more than the variability for the method, in which case the difference probably cannot be explained by that variability alone. For example, if there is a gestational age based on the beginning of the last menstrual period of 9.0 weeks, and a first-trimester obstetric ultrasonography gives an estimated gestational age of 10.0 weeks (with a 2 SD variability of ±8% of the estimate, thereby giving a variability of ±0.8 weeks), the difference of 1.0 weeks between the tests is larger than the 2 SD variability of the ultrasonography estimate, indicating that the gestational age estimated by ultrasonography should be used as the official gestational age. Once the estimated due date (EDD) is established, it should rarely be changed, as the determination of gestational age is most accurate earlier in the pregnancy. Following are diagrams for estimating gestational age from obstetric ultrasound, by various target parameters: Comparison to fertilization age The fertilization or conceptional age (also called embryonic age and later fetal age) is the time from fertilization. Fertilization usually occurs within a day of ovulation, which, in turn, occurs on average 14.6 days after the beginning of the preceding menstruation (LMP). 
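The decision rule in the worked example above can be sketched in a few lines of Python. This is only an illustration of the arithmetic described here: the function name is made up, and the ±8% (2 SD) variability figure applies to the first-trimester ultrasound example quoted above, not to every dating method.

```python
# Illustrative only: choose the official gestational age (GA) between an
# LMP-based estimate and an early ultrasound estimate, per the rule above.

def official_gestational_age(lmp_ga_weeks: float,
                             ultrasound_ga_weeks: float,
                             ultrasound_2sd_fraction: float = 0.08) -> float:
    """Return the GA to use: keep the LMP-based value unless the ultrasound
    estimate differs from it by more than the ultrasound's 2 SD variability."""
    two_sd = ultrasound_2sd_fraction * ultrasound_ga_weeks  # e.g. 8% of 10.0 weeks = 0.8 weeks
    if abs(ultrasound_ga_weeks - lmp_ga_weeks) > two_sd:
        return ultrasound_ga_weeks  # discrepancy too large to blame on measurement variability
    return lmp_ga_weeks

# Worked example from the text: 9.0 weeks by LMP vs 10.0 weeks by ultrasound
print(official_gestational_age(9.0, 10.0))  # 10.0 -> the ultrasound estimate becomes official
```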
There is also considerable variability in this interval, with a 95% prediction interval of the ovulation of 9 to 20 days after menstruation even for an average woman who has a mean LMP-to-ovulation time of 14.6. In a reference group representing all women, the 95% prediction interval of the LMP-to-ovulation is 8.2 to 20.5 days. The actual variability between gestational age as estimated from the beginning of the last menstrual period (without the use of any additional method mentioned in previous section) is substantially larger because of uncertainty which menstrual cycle gave rise to the pregnancy. For example, the menstruation may be scarce enough to give the false appearance that an earlier menstruation gave rise to the pregnancy, potentially giving an estimated gestational age that is approximately one month too large. Also, vaginal bleeding occurs during 15-25% of first trimester pregnancies, and may be mistaken as menstruation, potentially giving an estimated gestational age that is too low. Uses Gestational age is used for example for: The events of prenatal development, which usually occur at specific gestational ages. Hence, the gestational timing of a fetal toxin exposure, fetal drug exposure or vertically transmitted infection can be used to predict the potential consequences to the fetus. Estimated date of delivery Scheduling prenatal care Estimation of fetal viability Calculating the results of various prenatal tests, (for example, in the triple test). Birth classification into for example preterm, term or postterm. Classification of infant deaths and stillbirths Postnatally (after birth) to estimate various risk factors Estimation of due date The mean pregnancy length has been estimated to be 283.4 days of gestational age as timed from the first day of the last menstrual period and 280.6 days when retrospectively estimated by obstetric ultrasound measurement of the fetal biparietal diameter (BPD) in the second trimester. Other algorithms take into account other variables, such as whether this is the first or subsequent child, the mothers race, age, length of menstrual cycle, and menstrual regularity. In order to have a standard reference point, the normal pregnancy duration is assumed by medical professionals to be 280 days (or 40 weeks) of gestational age. Furthermore, actual childbirth has only a certain probability of occurring within the limits of the estimated due date. A study of singleton live births came to the result that childbirth has a standard deviation of 14 days when gestational age is estimated by first-trimester ultrasound and 16 days when estimated directly by last menstrual period.The most common system used among healthcare professionals is Naegeles rule, which estimates the expected date of delivery (EDD) by adding a year, subtracting three months, and adding seven days to the first day of a womans last menstrual period (LMP) or corresponding date as estimated from other means. Medical fetal viability There is no sharp limit of development, gestational age, or weight at which a human fetus automatically becomes viable. According to studies between 2003 and 2005, 20 to 35 percent of babies born at 23 weeks of gestation survive, while 50 to 70 percent of babies born at 24 to 25 weeks, and more than 90 percent born at 26 to 27 weeks, survive. It is rare for a baby weighing less than 500 g (17.6 ounces) to survive. 
A baby's chances of survival increase by 3–4% per day between 23 and 24 weeks of gestation and by about 2–3% per day between 24 and 26 weeks of gestation. After 26 weeks the rate of survival increases at a much slower rate because survival is already high. Prognosis also depends on medical protocols regarding whether to resuscitate and aggressively treat a very premature newborn, or whether to provide only palliative care, in view of the high risk of severe disability in very preterm babies. Birth classification Using gestational age, births can be classified into broad categories: Using the LMP (last menstrual period) method, a full-term human pregnancy is considered to be 40 weeks (280 days), though pregnancy lengths between 38 and 42 weeks are considered normal. A fetus born prior to the 37th week of gestation is considered to be preterm. A preterm baby is likely to be premature and consequently faces increased risk of morbidity and mortality. An estimated due date is given by Naegele's rule. According to the WHO, a preterm birth is defined as "babies born alive before 37 weeks of pregnancy are completed." According to this classification, there are three sub-categories of preterm birth, based on gestational age: extremely preterm (fewer than 28 weeks), very preterm (28 to 32 weeks), and moderate to late preterm (32 to 37 weeks). Various jurisdictions may use different classifications. In classifying perinatal deaths, stillbirths and infant deaths For most of the 20th century, official definitions of a live birth and infant death in the Soviet Union and Russia differed from common international standards, such as those established by the World Health Organization in the latter part of the century. Babies who were fewer than 28 weeks of gestational age, weighed less than 1000 grams, or were less than 35 cm in length – even if they showed some sign of life (breathing, heartbeat, voluntary muscle movement) – were classified as "live fetuses" rather than "live births." Only if such newborns survived seven days (168 hours) were they then classified as live births. If, however, they died within that interval, they were classified as stillbirths. If they survived that interval but died within the first 365 days, they were classified as infant deaths. More recently, thresholds for "fetal death" continue to vary widely internationally, sometimes incorporating weight as well as gestational age. The gestational age for statistical recording of fetal deaths ranges from 16 weeks in Norway, to 20 weeks in the US and Australia, 24 weeks in the UK, and 26 weeks in Italy and Spain. The WHO defines the perinatal period as follows: "The perinatal period commences at 22 completed weeks (154 days) of gestation and ends seven completed days after birth." Perinatal mortality is the death of fetuses or neonates during the perinatal period. A 2013 study found that "While only a small proportion of births occur before 24 completed weeks of gestation (about 1 per 1000), survival is rare and most of them are either fetal deaths or live births followed by a neonatal death." Postnatal use Gestational age (as well as fertilization age) is sometimes used postnatally (after birth) to estimate various risk factors. For example, it is a better predictor than postnatal age for the risk of intraventricular hemorrhage in premature babies treated with extracorporeal membrane oxygenation. 
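The dating and classification conventions described above are simple enough to express directly. The following Python sketch is illustrative only: the function names are invented for this example, Naegele's add-a-year, subtract-three-months, add-seven-days procedure is approximated as LMP + 280 days to avoid month-boundary edge cases, and the 42-week cut-off for post-term is an assumption drawn from common clinical convention rather than from the text above.

```python
from datetime import date, timedelta

def gestational_age_weeks(lmp: date, on_date: date) -> float:
    # Gestational age counted from the first day of the last menstrual period (LMP).
    return (on_date - lmp).days / 7.0

def naegele_edd(lmp: date) -> date:
    # Naegele's rule (add a year, subtract three months, add seven days)
    # works out to roughly 280 days after the LMP; computed that way here
    # to sidestep month-length edge cases.
    return lmp + timedelta(days=280)

def official_ga(lmp_ga_weeks: float, ultrasound_ga_weeks: float,
                rel_2sd: float = 0.08) -> float:
    # Prefer the LMP-based estimate unless it differs from the early-ultrasound
    # estimate by more than the ultrasound method's ~2 SD variability (about ±8%).
    if abs(lmp_ga_weeks - ultrasound_ga_weeks) > rel_2sd * ultrasound_ga_weeks:
        return ultrasound_ga_weeks
    return lmp_ga_weeks

def birth_category(ga_weeks: float) -> str:
    # WHO sub-categories of preterm birth, plus term/post-term (42-week cut-off assumed).
    if ga_weeks < 28:
        return "extremely preterm"
    if ga_weeks < 32:
        return "very preterm"
    if ga_weeks < 37:
        return "moderate to late preterm"
    if ga_weeks < 42:
        return "term"
    return "post-term"

if __name__ == "__main__":
    lmp = date(2023, 1, 1)                                          # hypothetical LMP
    print(naegele_edd(lmp))                                         # 2023-10-08, the estimated due date
    print(round(gestational_age_weeks(lmp, date(2023, 3, 5)), 1))   # 9.0 weeks
    print(official_ga(9.0, 10.0))                                   # 10.0 – ultrasound wins, as in the worked example
    print(birth_category(26.5))                                     # "extremely preterm"
```

Run as a script, the example reproduces the 9.0- versus 10.0-week discrepancy discussed earlier, where the difference exceeds the ±0.8-week variability and the ultrasound estimate becomes the official gestational age.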
Factors affecting pregnancy length A child's gestational age at birth (pregnancy length) is associated with various likely causal, non-genetic maternal factors: stress during pregnancy, age, parity, smoking, infection and inflammation, and BMI. It is also associated with preexisting maternal medical conditions that have a genetic component, e.g., diabetes mellitus type 1, systemic lupus erythematosus, and anaemia. Parental ancestral background (race) also plays a role in pregnancy duration. Gestational age at birth is on average shortened by various aspects of the pregnancy: twin pregnancy, prelabor rupture of the (fetal) membranes, pre-eclampsia, eclampsia, and intrauterine growth restriction. The ratio between fetal growth rate and uterine size (reflecting uterine distension) is suspected to partially determine pregnancy length. Heritability of pregnancy length Family-based studies have shown that gestational age at birth is partially (from 25% to 40%) determined by genetic factors. See also Pregnancy Maternity Prenatal development Gestation periods in mammals Abortion law Reproductive rights Fetal rights == References ==
Tenderness (medicine)
In medicine, tenderness is pain or discomfort when an affected area is touched. It should not be confused with pain that a patient perceives without being touched. Pain is the patient's perception, while tenderness is a sign that a clinician elicits. See also Rebound tenderness, an indication of peritonitis. == References ==
Mantle cell lymphoma
Mantle cell lymphoma (MCL) is a type of non-Hodgkin lymphoma (NHL), comprising about 6% of NHL cases. There are only about 15,000 patients presently in the United States with mantle cell lymphoma. It is named for the mantle zone of the lymph nodes. MCL is a subtype of B-cell lymphoma, arising from CD5-positive, antigen-naïve, pre-germinal center B-cells within the mantle zone that surrounds normal germinal center follicles. MCL cells generally over-express cyclin D1 due to the t(11;14) translocation, a chromosomal translocation in the DNA. Signs and symptoms At diagnosis, patients typically are in their 60s and present to their physician with advanced disease. About half have B symptoms such as fever, night sweats, or unexplained weight loss (over 10% of body weight). Enlarged lymph nodes (for example, a "bump" on the neck, armpits or groin) or enlargement of the spleen are usually present. Bone marrow, liver and gastrointestinal tract involvement occurs relatively early in the course of the disease. Mantle cell lymphoma has been reported in rare cases to be associated with severe allergic reactions to mosquito bites. These reactions range from greatly enlarged bite sites that may be painful and involve necrosis, to systemic symptoms (e.g. fever, swollen lymph nodes, abdominal pain, and diarrhea), or, in extremely rare cases, to life-threatening anaphylaxis. In several of these cases, the mosquito bite allergy (MBA) reaction occurred prior to the diagnosis of MCL, suggesting that MBA can be a manifestation of early-developing mantle cell lymphoma. Pathogenesis MCL, like most cancers, results from the acquisition of a combination of (non-inherited) genetic mutations in somatic cells. This leads to a clonal expansion of malignant B lymphocytes. The factors that initiate the genetic alterations are typically not identifiable, and the disease usually occurs in people with no particular risk factors for lymphoma development. Because it is an acquired genetic disorder, MCL is neither communicable nor inheritable. A defining characteristic of MCL is mutation and overexpression of cyclin D1, a cell cycle gene, which contributes to the abnormal proliferation of the malignant cells. MCL cells may also be resistant to drug-induced apoptosis, making them harder to cure with chemotherapy or radiation. Cells affected by MCL proliferate in a nodular or diffuse pattern with two main cytologic variants, typical or blastic. Typical cases are small to intermediate-sized cells with irregular nuclei. Blastic (aka blastoid) variants have intermediate to large-sized cells with finely dispersed chromatin, and are more aggressive in nature. The tumor cells accumulate in the lymphoid system, including lymph nodes and the spleen, with non-useful cells eventually rendering the system dysfunctional. MCL may also replace normal cells in the bone marrow, which impairs normal blood cell production. Diagnosis Diagnosis generally requires stained slides of a surgically removed part of a lymph node. Other methods are also commonly used, including cytogenetics and fluorescence in situ hybridization (FISH). Polymerase chain reaction (PCR) and CER3 clonotypic primers are additional methods, but are less often used. The typical immunophenotype profile is CD5+ (in about 80% of cases), CD10−/+ (usually CD5+ and CD10−), CD20+, and CD23−/+ (positive only in rare cases). Generally, cyclin D1 is expressed. Cyclin D1-negative mantle cell lymphoma can be diagnosed by detecting the SOX11 marker. 
The workup for mantle cell lymphoma is similar to the workup for many indolent lymphomas and certain aggressive lymphomas. Mantle cell lymphoma is a systemic disease with frequent involvement of the bone marrow and gastrointestinal tract (generally showing polyposis in the lining). There is also a not-uncommon leukemic phase, marked by the presence of malignant cells in the blood. For this reason, both the peripheral blood and bone marrow are evaluated for the presence of malignant cells. Chest, abdominal, and pelvic CT scans are routinely performed. Since mantle cell lymphoma may present as a lymphomatous polyposis coli and colon involvement is common, colonoscopy is considered a routine part of the evaluation. Upper endoscopy and neck CT scan may be helpful in selected cases. In some patients with the blastic variant, lumbar puncture is done to evaluate the spinal fluid for involvement. Treatments There are no proven standards of treatment for MCL, and there is no consensus among specialists on how to treat it optimally. Many regimens are available and often achieve good response rates, but patients almost always experience disease progression after chemotherapy. Each relapse is typically more difficult to treat, and relapse generally comes faster. Regimens are available that treat relapses, and new approaches are under test. Because of the aforementioned factors, many MCL patients enroll in clinical trials to get the latest treatments. There are four classes of treatments in general use: chemotherapy, immunotherapy, radioimmunotherapy and biologic agents. The phases of treatment are generally: frontline, following diagnosis; consolidation, after frontline response (to prolong remissions); and relapse. Relapse is usually experienced multiple times. Chemotherapy Chemotherapy is widely used as frontline treatment, and often is not repeated in relapse due to side effects. Alternate chemotherapy is sometimes used at first relapse. For frontline treatment, CHOP with rituximab is the most common chemotherapy, often given as an outpatient by IV. A stronger chemotherapy with greater side effects (mostly hematologic) is HyperCVAD, often given in the hospital setting, with rituximab and generally to fitter patients (some of whom are over 65). HyperCVAD is becoming popular and showing promising results, especially with rituximab. It can be used in some elderly (over 65) patients, but seems beneficial only when the baseline Beta-2-MG blood test is normal. It is showing better complete remissions (CR) and progression-free survival (PFS) than CHOP regimens. A less intensive option is bendamustine with rituximab. Second-line treatment may include fludarabine, combined with cyclophosphamide and/or mitoxantrone, usually with rituximab. Cladribine and clofarabine are two other medications being investigated in MCL. A relatively new regimen that uses older medications is PEP-C, which includes relatively small daily doses of prednisone, etoposide, procarbazine, and cyclophosphamide, taken orally; it has proven effective for relapsed patients. According to Dr. John Leonard, PEP-C may have anti-angiogenic properties, something that he and his colleagues are testing through an ongoing drug trial. Another approach involves using very high doses of chemotherapy, sometimes combined with total body irradiation (TBI), in an attempt to destroy all evidence of the disease. 
The downside to this is the destruction of the patient's entire immune system as well, requiring rescue by transplantation of a new immune system (hematopoietic stem cell transplantation), using either autologous stem cells or those from a matched donor (an allogeneic stem cell transplant). A presentation at the December 2007 American Society of Hematology (ASH) conference by Christian Geisler, chairman of the Nordic Lymphoma Group, claimed that, according to trial results, mantle cell lymphoma is potentially curable with very intensive chemo-immunotherapy followed by a stem cell transplant, when treated upon first presentation of the disease. These results seem to be confirmed by a large trial of the European Mantle Cell Lymphoma Network indicating that induction regimens containing monoclonal antibodies and high-dose ARA-C (cytarabine) followed by ASCT should become the new standard of care for MCL patients up to approximately 65 years of age. A study released in April 2013 showed that, in patients with previously untreated indolent lymphoma, bendamustine plus rituximab can be considered a preferable first-line treatment approach to R-CHOP because of increased progression-free survival and fewer toxic effects. Immunotherapy Immune-based therapy is dominated by the use of the rituximab monoclonal antibody, sold under the trade name Rituxan (or as Mabthera in Europe and Australia). Rituximab may have good activity against MCL as a single agent, but it is typically given in combination with chemotherapies, which prolongs response duration. There are newer variations on monoclonal antibodies combined with radioactive molecules known as radioimmunotherapy (RIT). These include Zevalin and Bexxar. Rituximab has also been used in small numbers of patients in combination with thalidomide with some effect. In contrast to these antibody-based passive immunotherapies, the field of active immunotherapy tries to activate a patient's immune system to specifically eliminate their own tumor cells. Examples of active immunotherapy include cancer vaccines, adoptive cell transfer, and immunotransplant, which combines vaccination and autologous stem cell transplant. Though no active immunotherapies are currently a standard of care, numerous clinical trials are ongoing. Targeted therapy Two Bruton tyrosine kinase inhibitors (BTKi) have been approved in the United States for treating MCL: ibrutinib (trade name Imbruvica, Pharmacyclics LLC) in November 2013, and acalabrutinib (trade name Calquence, AstraZeneca Pharmaceuticals LP) in October 2017. Other targeted agents include the proteasome inhibitor bortezomib, mTOR inhibitors such as temsirolimus, and the P110δ inhibitor GS-1101. In November 2019, zanubrutinib (Brukinsa) was approved in the United States with an indication for the treatment of adults with mantle cell lymphoma who have received at least one prior therapy. Gene therapy Brexucabtagene autoleucel (Tecartus) was approved for medical use in the United States in July 2020, with an indication for the treatment of adults with relapsed or refractory mantle cell lymphoma. It was approved for medical use in the European Union in December 2020. Each dose of brexucabtagene autoleucel is a customized treatment created using the recipient's own immune system to help fight the lymphoma. The recipient's T cells, a type of white blood cell, are collected and genetically modified to include a new gene that facilitates the targeting and killing of the lymphoma cells. 
These modified T cells are then infused back into the recipient. Prognosis Recent clinical advances in mantle cell lymphoma (MCL) have seen standard‐of‐care treatment algorithms transformed. Frontline rituximab combination therapy, high dose cytarabine‐based induction in younger patients and, more recently, Bruton Tyrosine Kinase (BTK) inhibitors in the relapse setting have all demonstrated survival advantage in clinical trials (Wang et al., 2013; Eskelund et al., 2016; Rule et al., 2016). Over the last 15 years these practices have gradually become embedded in clinical practice and real‐world data has observed corresponding improvements in patient survival (Abrahamsson et al., 2014; Leux et al., 2014).The overall 5-year survival rate for MCL is generally 50% (advanced stage MCL) to 70% (for limited-stage MCL). Prognosis for individuals with MCL is problematic and indexes do not work as well due to patients presenting with advanced stage disease. Staging is used but is not very informative, since the malignant B-cells can travel freely though the lymphatic system and therefore most patients are at stage III or IV at diagnosis. Prognosis is not strongly affected by staging in MCL and the concept of metastasis does not really apply.The Mantle Cell Lymphoma International Prognostic Index (MIPI) was derived from a data set of 455 advanced stage MCL patients treated in series of clinical trials in Germany/Europe. Of the evaluable population, approximately 18% were treated with high-dose therapy and stem cell transplantation in first remission. The MIPI is able to classify patients into three risk groups: low risk (median survival not reached after median 32 months follow-up and 5-year OS rate of 60%), intermediate risk (median survival 51 months) and high risk (median survival 29 months). In addition to the 4 independent prognostic factors included in the model, the cell proliferation index (Ki-67) was also shown to have additional prognostic relevance. When the Ki67 is available, a biologic MIPI can be calculated.MCL is one of the few NHLs that can cross the boundary into the brain, yet it can be treated in that event.There are a number of prognostic indicators that have been studied. There is not universal agreement on their importance or usefulness in prognosis.Ki-67 is an indicator of how fast cells mature and is expressed in a range from about 10% to 90%. The lower the percentage, the lower the speed of maturity, and the more indolent the disease. Katzenberger et al. Blood 2006;107:3407 graphs survival versus time for subsets of patients with varying Ki-67 indices. He shows median survival times of about one year for 61-90% Ki-67 and nearly 4 years for 5-20% Ki-67 index. MCL cell types can aid in prognosis in a subjective way. Blastic is a larger cell type. Diffuse is spread through the node. Nodular are small groups of collected cells spread through the node. Diffuse and nodular are similar in behavior. Blastic is faster growing and it is harder to get long remissions. Some thought is that given a long time, some non-blastic MCL transforms to blastic. Although survival of most blastic patients is shorter, some data shows that 25% of blastic MCL patients survive to 5 years. That is longer than diffuse type and almost as long as nodular (almost 7 yrs).Beta-2 microglobulin is another risk factor in MCL used primarily for transplant patients. 
Values less than three have yielded 95% overall survival at six years for auto SCT, whereas values over three yield a median overall survival of 44 months for auto SCT (Khouri 03). This is not yet fully validated. Testing for high levels of lactate dehydrogenase (LDH) in NHL patients is useful because LDH is released when body tissues break down for any reason. While it cannot be used as a sole means of diagnosing NHL, it is a surrogate for tracking tumor burden in those diagnosed by other means. The normal range is approximately 100–190. Epidemiology 6% of non-Hodgkin lymphoma cases are mantle cell lymphoma. As of 2015, the ratio of males to females affected is about 4:1. See also In situ mantle cell lymphoma List of hematologic conditions References Further reading Cohen JB, Zain JM, Kahl BS (2017). "Current Approaches to Mantle Cell Lymphoma: Diagnosis, Prognosis, and Therapies". Am Soc Clin Oncol Educ Book. 37 (37): 512–25. doi:10.1200/EDBK_175448. PMID 28561694. Dreyling M, Ferrero S, Hermine O (November 2014). "How to manage mantle cell lymphoma". Leukemia. 28 (11): 2117–30. doi:10.1038/leu.2014.171. PMID 24854989. S2CID 22105743. Schieber M, Gordon LI, Karmali R (2018). "Current overview and treatment of mantle cell lymphoma". F1000Res. 7: 1136. doi:10.12688/f1000research.14122.1. PMC 6069726. PMID 30109020. == External links ==
Surgery
Surgery is a medical specialty that uses operative manual and instrumental techniques on a person to investigate or treat a pathological condition such as a disease or injury, to help improve bodily function, appearance, or to repair unwanted ruptured areas. The act of performing surgery may be called a surgical procedure, operation, or simply "surgery". In this context, the verb "operate" means to perform surgery. The adjective surgical means pertaining to surgery; e.g. surgical instruments or surgical nurse. The person or subject on which the surgery is performed can be a person or an animal. A surgeon is a person who practices surgery and a surgeons assistant is a person who practices surgical assistance. A surgical team is made up of the surgeon, the surgeons assistant, an anaesthetist, a circulating nurse and a surgical technologist. Surgery usually spans from minutes to hours, but it is typically not an ongoing or periodic type of treatment. The term "surgery" can also refer to the place where surgery is performed, or, in British English, simply the office of a physician, dentist, or veterinarian. Definitions Surgery is an invasive technique with the fundamental principle of physical intervention on organs/organ systems/tissues for diagnostic or therapeutic reasons. As a general rule, a procedure is considered surgical when it involves cutting of a persons tissues or closure of a previously sustained wound. Other procedures that do not necessarily fall under this rubric, such as angioplasty or endoscopy, may be considered surgery if they involve "common" surgical procedure or settings, such as use of a sterile environment, anesthesia, antiseptic conditions, typical surgical instruments, and suturing or stapling. All forms of surgery are considered invasive procedures; so-called "noninvasive surgery" usually refers to an excision that does not penetrate the structure being excised (e.g. laser ablation of the cornea) or to a radiosurgical procedure (e.g. irradiation of a tumor). Types of surgery Surgical procedures are commonly categorized by urgency, type of procedure, body system involved, the degree of invasiveness, and special instrumentation. Based on timing: Elective surgery is done to correct a non-life-threatening condition, and is carried out at the persons request, subject to the surgeons and the surgical facilitys availability. A semi-elective surgery is one that must be done to avoid permanent disability or death, but can be postponed for a short time. Emergency surgery is surgery which must be done without any delay to prevent death or serious disabilities and/or loss of limbs and functions. Based on purpose: Exploratory surgery is performed to aid or confirm a diagnosis. Therapeutic surgery treats a previously diagnosed condition. Cosmetic surgery is done to subjectively improve the appearance of an otherwise normal structure. By type of procedure: Amputation involves cutting off a body part, usually a limb or digit; castration is also an example. Resection is the removal of all of an internal organ or body part, or a key part (lung lobe; liver quadrant) of such an organ or body part that has its own name or code designation. A segmental resection can be of a smaller region of an organ such as a hepatic segment or a bronchopulmonary segment. Excision is the cutting out or removal of only part of an organ, tissue, or other body part from the person. Extirpation is the complete excision or surgical destruction of a body part. 
Replantation involves reattaching a severed body part. Reconstructive surgery involves reconstruction of an injured, mutilated, or deformed part of the body. Transplant surgery is the replacement of an organ or body part by insertion of another from different human (or animal) into the person undergoing surgery. Removing an organ or body part from a live human or animal for use in transplant is also a type of surgery. By body part: When surgery is performed on one organ system or structure, it may be classed by the organ, organ system or tissue involved. Examples include cardiac surgery (performed on the heart), gastrointestinal surgery (performed within the digestive tract and its accessory organs), and orthopedic surgery (performed on bones or muscles). By degree of invasiveness of surgical procedures: Minimally-invasive surgery involves smaller outer incisions to insert miniaturized instruments within a body cavity or structure, as in laparoscopic surgery or angioplasty. By contrast, an open surgical procedure such as a laparotomy requires a large incision to access the area of interest. By equipment used: Laser surgery involves use of a laser for cutting tissue instead of a scalpel or similar surgical instruments. Microsurgery involves the use of an operating microscope for the surgeon to see small structures. Robotic surgery makes use of a surgical robot, such as the Da Vinci or the ZEUS robotic surgical systems, to control the instrumentation under the direction of the surgeon. Terminology Excision surgery names often start with a name for the organ to be excised (cut out) and end in -ectomy. Procedures involving cutting into an organ or tissue end in -otomy. A surgical procedure cutting through the abdominal wall to gain access to the abdominal cavity is a laparotomy. Minimally invasive procedures, involving small incisions through which an endoscope is inserted, end in -oscopy. For example, such surgery in the abdominal cavity is called laparoscopy. Procedures for formation of a permanent or semi-permanent opening called a stoma in the body end in -ostomy. Reconstruction, plastic or cosmetic surgery of a body part starts with a name for the body part to be reconstructed and ends in -oplasty. Rhino is used as a prefix for "nose", therefore a rhinoplasty is reconstructive or cosmetic surgery for the nose. Repair of damaged or congenital abnormal structure ends in -rraphy. Reoperation (return to the operating room) refers to a return to the operating theater after an initial surgery is performed to re-address an aspect of patient care best treated surgically. Reasons for reoperation include persistent bleeding after surgery, development of or persistence of infection. Description of surgical procedure Location Inpatient surgery is performed in a hospital, and the person undergoing surgery stays at least one night in the hospital after the surgery. Outpatient surgery occurs in a hospital outpatient department or freestanding ambulatory surgery center, and the person who had surgery is discharged the same working day. Office surgery occurs in a physicians office, and the person is discharged the same working day.At a hospital, modern surgery is often performed in an operating theater using surgical instruments, an operating table, and other equipment. Among United States hospitalizations for non-maternal and non-neonatal conditions in 2012, more than one-fourth of stays and half of hospital costs involved stays that included operating room (OR) procedures. 
The environment and procedures used in surgery are governed by the principles of aseptic technique: the strict separation of "sterile" (free of microorganisms) things from "unsterile" or "contaminated" things. All surgical instruments must be sterilized, and an instrument must be replaced or re-sterilized if it becomes contaminated (i.e. handled in an unsterile manner, or allowed to touch an unsterile surface). Operating room staff must wear sterile attire (scrubs, a scrub cap, a sterile surgical gown, sterile latex or non-latex polymer gloves and a surgical mask), and they must scrub hands and arms with an approved disinfectant agent before each procedure. Preoperative care Prior to surgery, the person is given a medical examination, receives certain pre-operative tests, and their physical status is rated according to the ASA physical status classification system. If these results are satisfactory, the person requiring surgery signs a consent form and is given a surgical clearance. If the procedure is expected to result in significant blood loss, an autologous blood donation may be made some weeks prior to surgery. If the surgery involves the digestive system, the person requiring surgery may be instructed to perform a bowel prep by drinking a solution of polyethylene glycol the night before the procedure. People preparing for surgery are also instructed to abstain from food or drink (an NPO order after midnight on the night before the procedure), to minimize the effect of stomach contents on pre-operative medications and reduce the risk of aspiration if the person vomits during or after the procedure. Some medical systems have a practice of routinely performing chest x-rays before surgery. The premise behind this practice is that the physician might discover some unknown medical condition which would complicate the surgery, and that upon discovering this with the chest x-ray, the physician would adapt the surgical plan accordingly. However, medical specialty professional organizations recommend against routine pre-operative chest x-rays for people who have an unremarkable medical history and whose physical exam does not indicate a chest x-ray. Routine x-ray examination is more likely to result in problems like misdiagnosis, overtreatment, or other negative outcomes than it is to result in a benefit to the person. Likewise, other tests including complete blood count, prothrombin time, partial thromboplastin time, basic metabolic panel, and urinalysis should not be done unless the results of these tests can help evaluate surgical risk. Staging for surgery The pre-operative holding area is an important part of the surgical phase: it is where family members can see who the surgical staff will be, and where the nurses in charge give information to the patient's family members. In the pre-operative holding area, the person preparing for surgery changes out of his or her street clothes and is asked to confirm the details of his or her surgery. A set of vital signs are recorded, a peripheral IV line is placed, and pre-operative medications (antibiotics, sedatives, etc.) are given. When the person enters the operating room, the skin surface to be operated on, called the operating field, is cleaned and prepared by applying an antiseptic (ideally chlorhexidine gluconate in alcohol, as this is twice as effective as povidone-iodine at reducing the risk of infection). 
If hair is present at the surgical site, it is clipped off prior to prep application. The person is assisted by an anesthesiologist or resident to make a specific surgical position, then sterile drapes are used to cover the surgical site or at least a wide area surrounding the operating field; the drapes are clipped to a pair of poles near the head of the bed to form an "ether screen", which separates the anesthetist/anesthesiologists working area (unsterile) from the surgical site (sterile).Anesthesia is administered to prevent pain from an incision, tissue manipulation and suturing. Depending on the kind of operation, anesthesia may be provided locally or as general anesthesia. Spinal anesthesia may be used when the surgical site is too large or deep for a local block, but general anesthesia may not be desirable. With local and spinal anesthesia, the surgical site is anesthetized, but the person can remain conscious or minimally sedated. In contrast, general anesthesia renders the person unconscious and paralyzed during surgery. The person is intubated and is placed on a mechanical ventilator, and anesthesia is produced by a combination of injected and inhaled agents. Choice of surgical method and anesthetic technique aims to reduce the risk of complications, shorten the time needed for recovery and minimise the surgical stress response. Intraoperative phase The intraoperative phase begins when the surgery subject is received in the surgical area (such as the operating theater or surgical department), and lasts until the subject is transferred to a recovery area (such as a post-anesthesia care unit).An incision is made to access the surgical site. Blood vessels may be clamped or cauterized to prevent bleeding, and retractors may be used to expose the site or keep the incision open. The approach to the surgical site may involve several layers of incision and dissection, as in abdominal surgery, where the incision must traverse skin, subcutaneous tissue, three layers of muscle and then the peritoneum. In certain cases, bone may be cut to further access the interior of the body; for example, cutting the skull for brain surgery or cutting the sternum for thoracic (chest) surgery to open up the rib cage. Whilst in surgery aseptic technique is used to prevent infection or further spreading of the disease. The surgeons and assistants hands, wrists and forearms are washed thoroughly for at least 4 minutes to prevent germs getting into the operative field, then sterile gloves are placed onto their hands. An antiseptic solution is applied to the area of the persons body that will be operated on. Sterile drapes are placed around the operative site. Surgical masks are worn by the surgical team to avoid germs on droplets of liquid from their mouths and noses from contaminating the operative site. Work to correct the problem in body then proceeds. This work may involve: excision – cutting out an organ, tumor, or other tissue. resection – partial removal of an organ or other bodily structure. reconnection of organs, tissues, etc., particularly if severed. Resection of organs such as intestines involves reconnection. Internal suturing or stapling may be used. Surgical connection between blood vessels or other tubular or hollow structures such as loops of intestine is called anastomosis. reduction – the movement or realignment of a body part to its normal position. e.g. 
Reduction of a broken nose involves the physical manipulation of the bone or cartilage from their displaced state back to their original position to restore normal airflow and aesthetics. ligation – tying off blood vessels, ducts, or "tubes". grafts – may be severed pieces of tissue cut from the same (or different) body or flaps of tissue still partly connected to the body but resewn for rearranging or restructuring of the area of the body in question. Although grafting is often used in cosmetic surgery, it is also used in other surgery. Grafts may be taken from one area of the persons body and inserted to another area of the body. An example is bypass surgery, where clogged blood vessels are bypassed with a graft from another part of the body. Alternatively, grafts may be from other persons, cadavers, or animals. insertion of prosthetic parts when needed. Pins or screws to set and hold bones may be used. Sections of bone may be replaced with prosthetic rods or other parts. Sometimes a plate is inserted to replace a damaged area of skull. Artificial hip replacement has become more common. Heart pacemakers or valves may be inserted. Many other types of prostheses are used. creation of a stoma, a permanent or semi-permanent opening in the body in transplant surgery, the donor organ (taken out of the donors body) is inserted into the recipients body and reconnected to the recipient in all necessary ways (blood vessels, ducts, etc.). arthrodesis – surgical connection of adjacent bones so the bones can grow together into one. Spinal fusion is an example of adjacent vertebrae connected allowing them to grow together into one piece. modifying the digestive tract in bariatric surgery for weight loss. repair of a fistula, hernia, or prolapse. repair according to the ICD-10-PCS, in the Medical and Surgical Section 0, root operation Q, means restoring, to the extent possible, a body part to its normal anatomic structure and function. This definition, repair, is used only when the method used to accomplish the repair is not one of the other root operations. Examples would be colostomy takedown, herniorrhaphy of a hernia, and the surgical suture of a laceration. other procedures, including:clearing clogged ducts, blood or other vessels removal of calculi (stones) draining of accumulated fluids debridement – removal of dead, damaged, or diseased tissueBlood or blood expanders may be administered to compensate for blood lost during surgery. Once the procedure is complete, sutures or staples are used to close the incision. Once the incision is closed, the anesthetic agents are stopped or reversed, and the person is taken off ventilation and extubated (if general anesthesia was administered). Postoperative care After completion of surgery, the person is transferred to the post anesthesia care unit and closely monitored. When the person is judged to have recovered from the anesthesia, he/she is either transferred to a surgical ward elsewhere in the hospital or discharged home. During the post-operative period, the persons general function is assessed, the outcome of the procedure is assessed, and the surgical site is checked for signs of infection. There are several risk factors associated with postoperative complications, such as immune deficiency and obesity. Obesity has long been considered a risk factor for adverse post-surgical outcomes. 
It has been linked to many disorders such as obesity hypoventilation syndrome, atelectasis and pulmonary embolism, adverse cardiovascular effects, and wound healing complications. If removable skin closures are used, they are removed after 7 to 10 days post-operatively, or after healing of the incision is well under way. It is not uncommon for surgical drains to be required to remove blood or fluid from the surgical wound during recovery. Mostly these drains stay in until the volume tapers off, then they are removed. These drains can become clogged, leading to abscess. Postoperative therapy may include adjuvant treatment such as chemotherapy, radiation therapy, or administration of medication such as anti-rejection medication for transplants. For postoperative nausea and vomiting (PONV), solutions like saline, water, controlled breathing, placebo, and aromatherapy can be used in addition to medication. Other follow-up studies or rehabilitation may be prescribed during and after the recovery period. A recent post-operative care philosophy has been early ambulation. Ambulation is getting the patient moving around; this can be as simple as sitting up or even walking around. The goal is to get the patient moving as early as possible. It has been found to shorten the patient's length of stay. Length of stay is the amount of time a patient spends in the hospital after surgery before they are discharged. In a recent study done with lumbar decompressions, the patients' length of stay was decreased by 1–3 days. The use of topical antibiotics on surgical wounds to reduce infection rates has been questioned. Antibiotic ointments are likely to irritate the skin, slow healing, and could increase the risk of developing contact dermatitis and antibiotic resistance. It has also been suggested that topical antibiotics should only be used when a person shows signs of infection and not as a preventative. A systematic review published by Cochrane (organisation) in 2016, though, concluded that topical antibiotics applied over certain types of surgical wounds reduce the risk of surgical site infections, when compared to no treatment or use of antiseptics. The review also did not find conclusive evidence to suggest that topical antibiotics increased the risk of local skin reactions or antibiotic resistance. Through a retrospective analysis of national administrative data, the association between mortality and day of elective surgical procedure suggests a higher risk in procedures carried out later in the working week and on weekends. Compared with procedures performed earlier in the week, the odds of death were 44% higher for procedures performed on a Friday and 82% higher for weekend procedures. This "weekday effect" has been postulated to stem from several factors, including poorer availability of services on a weekend and decreased staffing numbers and levels of experience over a weekend. Postoperative pain affects an estimated 80% of people who undergo surgery. While pain is expected after surgery, there is growing evidence that pain may be inadequately treated in many people in the acute period immediately after surgery. It has been reported that the incidence of inadequately controlled pain after surgery ranged from 25.1% to 78.4% across all surgical disciplines. 
There is insufficient evidence to determine whether giving opioid pain medication pre-emptively (before surgery) reduces postoperative pain or the amount of medication needed after surgery. Postoperative recovery has been defined as an energy-requiring process to decrease physical symptoms, reach a level of emotional well-being, regain functions, and re-establish activities. Moreover, it has been identified that patients who have undergone surgery are often not fully recovered on discharge. Epidemiology United States In 2011, of the 38.6 million hospital stays in U.S. hospitals, 29% included at least one operating room procedure. These stays accounted for 48% of the total $387 billion in hospital costs. The overall number of procedures remained stable from 2001 to 2011. In 2011, over 15 million operating room procedures were performed in U.S. hospitals. Data from 2003 to 2011 showed that U.S. hospital costs were highest for the surgical service line; the surgical service line costs were $17,600 in 2003 and projected to be $22,500 in 2013. For hospital stays in 2012 in the United States, private insurance had the highest percentage of surgical expenditure. In 2012, mean hospital costs in the United States were highest for surgical stays. Special populations Elderly people Older adults have widely varying physical health. Frail elderly people are at significant risk of post-surgical complications and the need for extended care. Assessment of older people before elective surgery can accurately predict the person's recovery trajectory. One frailty scale uses five items: unintentional weight loss, muscle weakness, exhaustion, low physical activity, and slowed walking speed. A healthy person scores 0; a very frail person scores 5. Compared to non-frail elderly people, people with intermediate frailty scores (2 or 3) are twice as likely to have post-surgical complications, spend 50% more time in the hospital, and are three times as likely to be discharged to a skilled nursing facility instead of to their own homes. People who are frail and elderly (score of 4 or 5) have even worse outcomes, with the risk of being discharged to a nursing home rising to twenty times the rate for non-frail elderly people. Children Surgery on children requires considerations that are not common in adult surgery. Children and adolescents are still developing physically and mentally, making it difficult for them to make informed decisions and give consent for surgical treatments. Bariatric surgery in youth is among the controversial topics related to surgery in children. Vulnerable populations Doctors perform surgery with the consent of the person undergoing surgery. Some people are able to give better informed consent than others. Populations such as incarcerated persons, people living with dementia, the mentally incompetent, persons subject to coercion, and other people who are not able to make decisions with the same authority as others, have special needs when making decisions about their personal healthcare, including surgery. Global surgery Global surgery has been defined as the multidisciplinary enterprise of providing improved and equitable surgical care to the world's population, with its core concerns being the issues of need, access and quality. Halfdan T. 
Mahler, the 3rd Director-General of the World Health Organization (WHO), first brought attention to the disparities in surgery and surgical care in 1980 when he stated in his address to the World Congress of the International College of Surgeons, "the vast majority of the worlds population has no access whatsoever to skilled surgical care and little is being done to find a solution." As such, surgical care globally has been described as the neglected stepchild of global health, a term coined by Dr. Paul Farmer to highlight the urgent need for further work in this area. Furthermore, Jim Young Kim, the former President of the World Bank, proclaimed in 2014 that "surgery is an indivisible, indispensable part of health care and of progress towards universal health coverage."In 2015, the Lancet Commission on Global Surgery (LCoGS) published the landmark report titled "Global Surgery 2030: evidence and solutions for achieving health, welfare, and economic development," describing the large, pre-existing burden of surgical diseases in low- and middle-income countries (LMICs) and future directions for increasing universal access to safe surgery by the year 2030. The Commission highlighted that about 5 billion people lack access to safe and affordable surgical and anesthesia care and 143 million additional procedures were needed every year to prevent further morbidity and mortality from treatable surgical conditions as well as a $12.3 trillion loss in economic productivity by the year 2030. This was especially true in the poorest countries, which account for over one-third of the population but only 3.5% of all surgeries that occur worldwide. It emphasized the need to significantly improve the capacity for Bellwether procedures – laparotomy, caesarean section, open fracture care – which are considered a minimum level of care that first-level hospitals should be able to provide in order to capture the most basic emergency surgical care. In terms of the financial impact on the patients, the lack of adequate surgical and anesthesia care has resulted in 33 million individuals every year facing catastrophic health expenditure – the out-of-pocket healthcare cost exceeding 40% of a given households income.In alignment with the LCoGS call for action, the World Health Assembly adopted the resolution WHA68.15 in 2015 that stated, "Strengthening emergency and essential surgical care and anesthesia as a component of universal health coverage." This not only mandated the WHO to prioritize strengthening the surgical and anesthesia care globally, but also led to governments of the member states recognizing the urgent need for increasing capacity in surgery and anesthesia. Additionally, the third edition of Disease Control Priorities (DCP3), published in 2015 by the World Bank, declared surgery as essential and featured an entire volume dedicated to building surgical capacity.Data from WHO and the World Bank indicate that scaling up infrastructure to enable access to surgical care in regions where it is currently limited or is non-existent is a low-cost measure relative to the significant morbidity and mortality caused by lack of surgical treatment. In fact, a systematic review found that the cost-effectiveness ratio – dollars spent per DALYs averted – for surgical interventions is on par or exceeds those of major public health interventions such as oral rehydration therapy, breastfeeding promotion, and even HIV/AIDS antiretroviral therapy. 
This finding challenged the common misconception that surgical care is financially prohibitive endeavor not worth pursuing in LMICs. A key policy framework that arose from this renewed global commitment towards surgical care worldwide is the National Surgical Obstetric and Anesthesia Plan (NSOAP). NSOAP focuses on policy-to-action capacity building for surgical care with tangible steps as follows: (1) analysis of baseline indicators, (2) partnership with local champions, (3) broad stakeholder engagement, (4) consensus building and synthesis of ideas, (5) language refinement, (6) costing, (7) dissemination, and (8) implementation. This approach has been widely adopted and has served as guiding principles between international collaborators and local institutions and governments. Successful implementations have allowed for sustainability in terms of longterm monitoring, quality improvement, and continued political and financial support. Human rights Access to surgical care is increasingly recognized as an integral aspect of healthcare, and therefore is evolving into a normative derivation of human right to health. The ICESCR Article 12.1 and 12.2 define the human right to health as "the right of everyone to the enjoyment of the highest attainable standard of physical and mental health" In the August 2000, the UN Committee on Economic, Social and Cultural Rights (CESCR) interpreted this to mean "right to the enjoyment of a variety of facilities, goods, services, and conditions necessary for the realization of the highest attainable health". Surgical care can be thereby viewed as a positive right – an entitlement to protective healthcare.Woven through the International Human and Health Rights literature is the right to be free from surgical disease. The 1966 ICESCR Article 12.2a described the need for "provision for the reduction of the stillbirth-rate and of infant mortality and for the healthy development of the child" which was subsequently interpreted to mean "requiring measures to improve… emergency obstetric services". Article 12.2d of the ICESCR stipulates the need for "the creation of conditions which would assure to all medical service and medical attention in the event of sickness", and is interpreted in the 2000 comment to include timely access to "basic preventative, curative services… for appropriate treatment of injury and disability.". Obstetric care shares close ties with reproductive rights, which includes access to reproductive health.Surgeons and public health advocates, such as Kelly McQueen, have described surgery as "Integral to the right to health". This is reflected in the establishment of the WHO Global Initiative for Emergency and Essential Surgical Care in 2005, the 2013 formation of the Lancet Commission for Global Surgery, the 2015 World Bank Publication of Volume
1 of its Disease Control Priorities Project "Essential Surgery", and the 2015 World Health Assembly 68.15 passing of the Resolution for Strengthening Emergency and Essential Surgical Care and Anesthesia as a Component of Universal Health Coverage. The Lancet Commission for Global Surgery outlined the need for access to "available, affordable, timely and safe" surgical and anesthesia care; dimensions paralleled in ICESCR General Comment No. 14, which similarly outlines need for available, accessible, affordable and timely healthcare. History Trepanation Surgical treatments date back to the prehistoric era. The oldest for which there is evidence is trepanation, in which a hole is drilled or scraped into the skull, thus exposing the dura mater in order to treat health problems related to intracranial pressure and other diseases. Ancient Egypt Prehistoric surgical techniques are seen in Ancient Egypt, where a mandible dated to approximately 2650 BC shows two perforations just below the root of the first molar, indicating the draining of an abscessed tooth. Surgical texts from ancient Egypt date back about 3500 years ago. Surgical operations were performed by priests, specialized in medical treatments similar to today, and used sutures to close wounds. Infections were treated with honey. India Remains from the early Harappan periods of the Indus Valley civilization (c. 3300 BC) show evidence of teeth having been drilled dating back 9,000 years. Sushruta Samhita is one of the oldest known surgical texts and its period is usually placed around 1200–600 BC. It describes in detail the examination, diagnosis, treatment, and prognosis of numerous ailments, as well as procedures for various forms of cosmetic surgery, plastic surgery and rhinoplasty. Ancient and Medieval Greece In ancient Greece, temples dedicated to the healer-god Asclepius, known as Asclepieia (Greek: Ασκληπιεία, sing. Asclepieion Ασκληπιείον), functioned as centers of medical advice, prognosis, and healing. In the Asclepieion of Epidaurus, some of the surgical cures listed, such as the opening of an abdominal abscess or the removal of traumatic foreign material, are realistic enough to have taken place. The Greek Galen was one of the greatest surgeons of the ancient world and performed many audacious operations – including brain and eye surgery – that were not tried again for almost two millennia. Researchers from the Adelphi University discovered in the Paliokastro on Thasos ten skeletal remains, four women and six men, who were buried between the fourth and seventh centuries A.D. Their bones illuminated their physical activities, traumas, and even a complex form of brain surgery. According to the researchers: "The very serious trauma cases sustained by both males and females had been treated surgically or orthopedically by a very experienced physician/surgeon with great training in trauma care. We believe it to have been a military physician". The researchers were impressed by the complexity of the brain surgical operation.In 1991 at the Polystylon fort in Greece, researchers discovered the head of a Byzantine warrior of the 14th century. Analysis of the lower jaw revealed that a surgery has been performed, when the warrior was alive, to the jaw which had been badly fractured and it tied back together until it healed. 
Islamic world During the Islamic Golden Age, largely based upon Paul of Aegina's Pragmateia, the writings of Abulcasis (Abu al-Qasim Khalaf ibn al-Abbas Al-Zahrawi), an Andalusian-Arab physician and scientist who practiced in the Zahra suburb of Córdoba, were influential. Al-Zahrawi specialized in curing disease by cauterization. He invented several surgical instruments for purposes such as inspection of the interior of the urethra and for removing foreign bodies from the throat, the ear, and other body organs. He was also the first to illustrate the various cannulae and to treat warts with an iron tube and caustic metal as a boring instrument. He described what is thought to be the first attempt at reduction mammaplasty for the management of gynaecomastia and the first mastectomy to treat breast cancer. He is credited with the performance of the first thyroidectomy. Al-Zahrawi pioneered techniques of neurosurgery and neurological diagnosis, treating head injuries, skull fractures, spinal injuries, hydrocephalus, subdural effusions and headache. The first clinical description of an operative procedure for hydrocephalus was given by Al-Zahrawi, who clearly described the evacuation of superficial intracranial fluid in hydrocephalic children. Early modern Europe In Europe, the demand grew for surgeons to formally study for many years before practicing; universities such as Montpellier, Padua and Bologna were particularly renowned. In the 12th century, Rogerius Salernitanus composed his Chirurgia, laying the foundation for modern Western surgical manuals. Barber-surgeons generally had a bad reputation that was not to improve until the development of academic surgery as a specialty of medicine, rather than an accessory field. Basic surgical principles for asepsis and related practices are known as Halsted's principles. There were some important advances to the art of surgery during this period. The professor of anatomy at the University of Padua, Andreas Vesalius, was a pivotal figure in the Renaissance transition from classical medicine and anatomy based on the works of Galen, to an empirical approach of hands-on dissection. In his anatomical treatise De humani corporis fabrica, he exposed the many anatomical errors in Galen and advocated that all surgeons should train by engaging in practical dissections themselves. The second figure of importance in this era was Ambroise Paré (sometimes spelled "Ambrose"), a French army surgeon from the 1530s until his death in 1590. The practice for cauterizing gunshot wounds on the battlefield had been to use boiling oil, an extremely dangerous and painful procedure. Paré began to employ a less irritating emollient, made of egg yolk, rose oil and turpentine. He also described more efficient techniques for the effective ligation of the blood vessels during an amputation. Modern surgery The discipline of surgery was put on a sound, scientific footing during the Age of Enlightenment in Europe. An important figure in this regard was the Scottish surgical scientist John Hunter, generally regarded as the father of modern scientific surgery. He brought an empirical and experimental approach to the science and was renowned around Europe for the quality of his research and his written works. Hunter reconstructed surgical knowledge from scratch; refusing to rely on the testimonies of others, he conducted his own surgical experiments to determine the truth of the matter.
To aid comparative analysis, he built up a collection of over 13,000 specimens of separate organ systems, from the simplest plants and animals to humans. He greatly advanced knowledge of venereal disease and introduced many new techniques of surgery, including new methods for repairing damage to the Achilles tendon and a more effective method for applying ligature of the arteries in case of an aneurysm. He was also one of the first to understand the importance of pathology, the danger of the spread of infection and how the problem of inflammation of the wound, bone lesions and even tuberculosis often undid any benefit that was gained from the intervention. He consequently adopted the position that all surgical procedures should be used only as a last resort. Other important 18th- and early 19th-century surgeons included Percival Pott (1713–1788), who described tuberculosis of the spine and first demonstrated that a cancer may be caused by an environmental carcinogen (he noticed a connection between chimney sweeps' exposure to soot and their high incidence of scrotal cancer). Astley Paston Cooper (1768–1841) first performed a successful ligation of the abdominal aorta, and James Syme (1799–1870) pioneered Syme's amputation at the ankle joint and successfully carried out the first hip disarticulation. Modern pain control through anesthesia was discovered in the mid-19th century. Before the advent of anesthesia, surgery was a traumatically painful procedure and surgeons were encouraged to be as swift as possible to minimize patient suffering. This also meant that operations were largely restricted to amputations and external growth removals. Beginning in the 1840s, surgery began to change dramatically in character with the discovery of effective and practical anaesthetic chemicals such as ether, first used by the American surgeon Crawford Long, and chloroform, introduced as an anaesthetic by Scottish obstetrician James Young Simpson and later pioneered by John Snow, physician to Queen Victoria. In addition to relieving patient suffering, anaesthesia allowed more intricate operations in the internal regions of the human body. In addition, the discovery of muscle relaxants such as curare allowed for safer applications. Infection and antisepsis Unfortunately, the introduction of anesthetics encouraged more surgery, which inadvertently caused more dangerous patient post-operative infections. The concept of infection was unknown until relatively modern times. The first progress in combating infection was made in 1847 by the Hungarian doctor Ignaz Semmelweis, who noticed that medical students fresh from the dissecting room were causing excess maternal deaths compared to midwives. Semmelweis, despite ridicule and opposition, introduced compulsory handwashing for everyone entering the maternal wards and was rewarded with a plunge in maternal and fetal deaths; however, the Royal Society dismissed his advice. Until the pioneering work of British surgeon Joseph Lister in the 1860s, most medical men believed that chemical damage from exposure to bad air (see "miasma") was responsible for infections in wounds, and facilities for washing hands or a patient's wounds were not available. Lister became aware of the work of French chemist Louis Pasteur, who showed that rotting and fermentation could occur under anaerobic conditions if micro-organisms were present. Pasteur suggested three methods to eliminate the micro-organisms responsible for gangrene: filtration, exposure to heat, or exposure to chemical solutions.
Lister confirmed Pasteurs conclusions with his own experiments and decided to use his findings to develop antiseptic techniques for wounds. As the first two methods suggested by Pasteur were inappropriate for the treatment of human tissue, Lister experimented with the third, spraying carbolic acid on his instruments. He found that this remarkably reduced the incidence of gangrene and he published his results in The Lancet. Later, on 9 August 1867, he read a paper before the British Medical Association in Dublin, on the Antiseptic Principle of the Practice of Surgery, which was reprinted in the British Medical Journal. His work was groundbreaking and laid the foundations for a rapid advance in infection control that saw modern antiseptic operating theatres widely used within 50 years. Lister continued to develop improved methods of antisepsis and asepsis when he realised that infection could be better avoided by preventing bacteria from getting into wounds in the first place. This led to the rise of sterile surgery. Lister introduced the Steam Steriliser to sterilize equipment, instituted rigorous hand washing and later implemented the wearing of rubber gloves. These three crucial advances – the adoption of a scientific methodology toward surgical operations, the use of anaesthetic and the introduction of sterilised equipment – laid the groundwork for the modern invasive surgical techniques of today. The use of X-rays as an important medical diagnostic tool began with their discovery in 1895 by German physicist Wilhelm Röntgen. He noticed that these rays could penetrate the skin, allowing the skeletal structure to be captured on a specially treated photographic plate. Surgical specialties Learned societies See also List of Surgery-related fields Notes References Further reading Bartolo, M., Bargellesi, S., Castioni, C. A., Intiso, D., Fontana, A., Copetti, M., Scarponi, F., Bonaiuti, D., & Intensive Care and Neurorehabilitation Italian Study Group (2017). Mobilization in early rehabilitation in intensive care unit patients with severe acquired brain injury: An observational study. Journal of rehabilitation medicine, 49(9), 715–722. https://doi.org/10.2340/16501977-2269 Ni, C.-yan, Wang, Z.-hong, Huang, Z.-ping, Zhou, H., Fu, L.-juan, Cai, H., Huang, X.-xuan, Yang, Y., Li, H.-fen, & Zhou, W.-ping. (2018). Early enforced mobilization after liver resection: A prospective randomized controlled trial. International Journal of Surgery, 54, 254–258. https://doi.org/10.1016/j.ijsu.2018.04.060 Lei, Y. T., Xie, J. W., Huang, Q., Huang, W., & Pei, F. X. (2021). Benefits of early ambulation within 24 h after total knee arthroplasty: a multicenter retrospective cohort study in China. Military Medical Research, 8(1), 17. https://doi.org/10.1186/s40779-021-00310-x Stethen, T. W., Ghazi, Y. A., Heidel, R. E., Daley, B. J., Barnes, L., Patterson, D., & McLoughlin, J. M. (2018). Walking to recovery: the effects of missed ambulation events on postsurgical recovery after bowel resection. Journal of gastrointestinal oncology, 9(5), 953–961. 10.21037/jgo.2017.11.05 Yakkanti, R. R., Miller, A. J., Smith, L. S., Feher, A. W., Mont, M. A., & Malkani, A. L. (2019). Impact of early mobilization on length of stay after primary total knee arthroplasty. Annals of translational medicine, 7(4), 69. https://doi.org/10.21037/atm.2019.02.02
Edema
Edema, also spelled oedema, and also known as fluid retention, dropsy, hydropsy and swelling, is the build-up of fluid in the body's tissues. Most commonly, the legs or arms are affected. Symptoms may include skin that feels tight, a feeling of heaviness in the affected area, and joint stiffness. Other symptoms depend on the underlying cause. Causes may include venous insufficiency, heart failure, kidney problems, low protein levels, liver problems, deep vein thrombosis, infections, angioedema, certain medications, and lymphedema. It may also occur after prolonged sitting or standing and during menstruation or pregnancy. The condition is more concerning if it starts suddenly, or if pain or shortness of breath is present. Treatment depends on the underlying cause. If the underlying mechanism involves sodium retention, decreased salt intake and a diuretic may be used. Elevating the legs and support stockings may be useful for edema of the legs. Older people are more commonly affected. The word is from the Greek οἴδημα oídēma meaning "swelling". Signs and symptoms Specific area Edema can occur in specific organs as part of inflammation, as in tendinitis or pancreatitis, for instance. Certain organs develop edema through tissue-specific mechanisms. Examples of edema in specific organs: Peripheral edema (dependent edema of the legs) is extracellular fluid accumulation in the legs. This can occur in otherwise healthy people due to hypervolemia or maintaining a standing or seated posture for an extended period of time. It can occur due to diminished venous return of blood to the heart due to congestive heart failure or pulmonary hypertension. It can also occur in patients with increased hydrostatic venous pressure or decreased oncotic venous pressure, due to obstruction of lymphatic or venous vessels draining the lower extremity. Certain drugs (for example, amlodipine) can cause pedal edema. Cerebral edema is extracellular fluid accumulation in the brain. It can occur in toxic or abnormal metabolic states and conditions such as systemic lupus or reduced oxygen at high altitudes. It can cause drowsiness or loss of consciousness and may lead to brain herniation and death. Pulmonary edema occurs when the pressure in blood vessels in the lung is raised because of obstruction to the removal of blood via the pulmonary veins. This is usually due to failure of the left ventricle of the heart. It can also occur in altitude sickness or on inhalation of toxic chemicals. Pulmonary edema produces shortness of breath. Pleural effusions may occur when fluid also accumulates in the pleural cavity. Edema may also be found in the cornea of the eye with glaucoma, severe conjunctivitis or keratitis, or after surgery. Affected people may perceive coloured haloes around bright lights. Edema surrounding the eyes is called periorbital edema (puffy eyes). The periorbital tissues are most noticeably swollen immediately after waking, perhaps as a result of the gravitational redistribution of fluid in the horizontal position. Common appearances of cutaneous edema are observed with mosquito bites, spider bites, bee stings (wheal and flare), and skin contact with certain plants such as poison ivy or western poison oak, reactions which are termed contact dermatitis. Another cutaneous form of edema is myxedema, which is caused by increased deposition of connective tissue. In myxedema (and a variety of other rarer conditions) edema is caused by an increased tendency of the tissue to hold water within its extracellular space.
In myxedema, this is due to an increase in hydrophilic carbohydrate-rich molecules (perhaps mostly hyaluronan) deposited in the tissue matrix. Edema forms more easily in dependent areas in the elderly (for example, when sitting in chairs at home or on aeroplanes), and this is not well understood. Estrogens alter body weight in part through changes in tissue water content. There may be a variety of poorly understood situations in which transfer of water from the tissue matrix to lymphatics is impaired because of changes in the hydrophilicity of the tissue or failure of the wicking function of terminal lymphatic capillaries. In lymphedema, abnormal removal of interstitial fluid is caused by failure of the lymphatic system. This may be due to obstruction from, for example, pressure from a cancer or enlarged lymph nodes, destruction of lymph vessels by radiotherapy, or infiltration of the lymphatics by infection (such as elephantiasis). It is most commonly due to failure of the pumping action of muscles as a result of immobility, most strikingly in conditions such as multiple sclerosis or paraplegia. It has been suggested that the edema that occurs in some people following use of aspirin-like cyclo-oxygenase inhibitors such as ibuprofen or indomethacin may be due to inhibition of lymph heart action. Generalized A rise in hydrostatic pressure occurs in cardiac failure. A fall in osmotic pressure occurs in nephrotic syndrome and liver failure. Causes of edema which are generalized to the whole body can cause edema in multiple organs and peripherally. For example, severe heart failure can cause pulmonary edema, pleural effusions, ascites and peripheral edema. Such severe systemic edema is called anasarca. In rare cases, a parvovirus B19 infection may cause generalized edema. Although a low plasma oncotic pressure is widely cited for the edema of nephrotic syndrome, most physicians note that the edema may occur before there is any significant protein in the urine (proteinuria) or fall in plasma protein level. Most forms of nephrotic syndrome are due to biochemical and structural changes in the basement membrane of capillaries in the kidney glomeruli, and these changes occur, if to a lesser degree, in the vessels of most other tissues of the body. Thus the resulting increase in permeability that leads to protein in the urine can explain the edema if all other vessels are more permeable as well. As well as the previously mentioned conditions, edema often occurs during the late stages of pregnancy in some women. This is more common in those with a history of pulmonary problems or poor circulation, and is intensified if arthritis is already present. Women who already have arthritic problems most often have to seek medical help for pain caused by excessive swelling. Edema that occurs during pregnancy is usually found in the lower part of the leg, usually from the calf down. Hydrops fetalis is a condition in a baby characterized by an accumulation of fluid in at least two body compartments. Cause Heart The pumping force of the heart should help to keep a normal pressure within the blood vessels. But if the heart begins to fail (a condition known as congestive heart failure) the pressure changes can cause very severe water retention. In this condition water retention is mostly visible in the legs, feet and ankles, but water also collects in the lungs, where it causes a chronic cough.
This condition is usually treated with diuretics; otherwise, the water retention may cause breathing problems and additional stress on the heart. Kidneys Another cause of severe water retention is kidney failure, where the kidneys are no longer able to filter fluid out of the blood and turn it into urine. Kidney disease often starts with inflammation, for instance in the case of diseases such as nephrotic syndrome or lupus. This type of water retention is usually visible in the form of swollen legs and ankles. Protein Protein attracts water and plays an important role in water balance. In cases of severe protein deficiency, the blood may not contain enough protein to attract water from the tissue spaces back into the capillaries. This is why an enlarged abdomen is often seen in starvation: the abdomen is swollen with edema, or water retention, caused by the lack of protein in the diet. When the capillary walls are too permeable, protein can leak out of the blood and settle in the tissue spaces. It will then act like a magnet for water, continuously attracting more water from the blood to accumulate in the tissue spaces. Others Swollen legs, feet and ankles are common in late pregnancy. The problem is partly caused by the weight of the uterus on the major veins of the pelvis. It usually clears up after delivery of the baby, and is mostly not a cause for concern, though it should always be reported to a doctor. Lack of exercise is another common cause of water retention in the legs. Exercise helps the leg veins work against gravity to return blood to the heart. If blood travels too slowly and starts to pool in the leg veins, the pressure can force too much fluid out of the leg capillaries into the tissue spaces. The capillaries may break, leaving small blood marks under the skin. The veins themselves can become swollen, painful and distorted – a condition known as varicose veins. Muscle action is needed not only to keep blood flowing through the veins but also to stimulate the lymphatic system to fulfil its "overflow" function. Long-haul flights, lengthy bed-rest, immobility caused by disability and so on, are all potential causes of water retention. Even very small exercises such as rotating ankles and wiggling toes can help to reduce it. Certain medications are prone to causing water retention. These include estrogens, thereby including drugs for hormone replacement therapy or the combined oral contraceptive pill, as well as non-steroidal anti-inflammatory drugs and beta-blockers. Premenstrual water retention, causing bloating and breast tenderness, is common. Mechanism Six factors can contribute to the formation of edema: increased hydrostatic pressure; reduced colloidal or oncotic pressure within blood vessels; increased tissue colloidal or oncotic pressure; increased blood vessel wall permeability (such as inflammation); obstruction of fluid clearance in the lymphatic system; and changes in the water-retaining properties of the tissues themselves. Raised hydrostatic pressure often reflects retention of water and sodium by the kidneys. Generation of interstitial fluid is regulated by the forces of the Starling equation. Hydrostatic pressure within blood vessels tends to cause water to filter out into the tissue. This leads to a difference in protein concentration between blood plasma and tissue. As a result, the colloidal or oncotic pressure of the higher level of protein in the plasma tends to draw water back into the blood vessels from the tissue.
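For reference, the Starling relationship sketched in the preceding paragraph can be written compactly as an equation. The form below is a common textbook convention rather than notation taken from this article, so the symbols are chosen here only for illustration:

J_v = K_f \left[ (P_c - P_i) - \sigma (\pi_c - \pi_i) \right]

where J_v is the net movement of fluid out of the capillary, K_f is the filtration coefficient (how readily the vessel wall passes water), P_c and P_i are the capillary and interstitial hydrostatic pressures, π_c and π_i are the corresponding colloidal (oncotic) pressures, and σ is the reflection coefficient, between 0 and 1, describing how effectively the wall holds back protein. In these terms, the six mechanisms listed above correspond to raising P_c, lowering π_c, raising π_i, increasing K_f or lowering σ, or impairing lymphatic clearance of the filtered fluid.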
Starling's equation states that the rate of leakage of fluid is determined by the difference between the two forces and also by the permeability of the vessel wall to water, which determines the rate of flow for a given force imbalance. Most water leakage occurs in capillaries or post-capillary venules, which have a semi-permeable membrane wall that allows water to pass more freely than protein. (The protein is said to be reflected, and the efficiency of reflection is given by a reflection coefficient of up to 1.) If the gaps between the cells of the vessel wall open up, then permeability to water is increased first, but as the gaps increase in size permeability to protein also increases, with a fall in the reflection coefficient. Changes in the variables in Starling's equation can contribute to the formation of edema either by an increase in hydrostatic pressure within the blood vessel, a decrease in the oncotic pressure within the blood vessel or an increase in vessel wall permeability. The latter has two effects. It allows water to flow more freely and it reduces the colloidal or oncotic pressure difference by allowing protein to leave the vessel more easily. Another set of vessels known as the lymphatic system acts like an "overflow" and can return much excess fluid to the bloodstream. But even the lymphatic system can be overwhelmed, and if there is simply too much fluid, or if the lymphatic system is congested, then the fluid will remain in the tissues, causing swellings in legs, ankles, feet, abdomen or any other part of the body. Diagnosis Cutaneous edema is referred to as "pitting" when, after pressure is applied to a small area, the indentation persists after the release of the pressure. Peripheral pitting edema is the more common type, resulting from water retention. It can be caused by systemic diseases, pregnancy in some women, either directly or as a result of heart failure, or local conditions such as varicose veins, thrombophlebitis, insect bites, and dermatitis. Non-pitting edema is observed when the indentation does not persist. It is associated with such conditions as lymphedema, lipedema, and myxedema. Edema caused by malnutrition defines kwashiorkor, an acute form of childhood protein-energy malnutrition characterized by edema, irritability, anorexia, ulcerating dermatoses, and an enlarged liver with fatty infiltrates. Treatment When possible, treatment involves resolving the underlying cause. Many cases of heart or kidney disease are treated with diuretics. Treatment may also involve positioning the affected body parts to improve drainage. For example, swelling in feet or ankles may be reduced by having the person lie down in bed or sit with the feet propped up on cushions. Intermittent pneumatic compression can be used to pressurize tissue in a limb, forcing fluids—both blood and lymph—to flow out of the compressed area. References External links
Sinus
Sinus may refer to: Anatomy Sinus (anatomy), a sac or cavity in any organ or tissue Paranasal sinuses, air cavities in the cranial bones, especially those near the nose, including: Maxillary sinus, the largest of the paranasal sinuses, under the eyes in the maxillary bones Frontal sinus, superior to the eyes, in the frontal bone, which forms the hard part of the forehead Ethmoid sinus, formed from several discrete air cells within the ethmoid bone between the eyes and under the nose Sphenoidal sinus, in the sphenoid bone at the center of the skull base under the pituitary gland Anal sinuses, the furrows which separate the columns in the rectum Dural venous sinuses, venous channels found between layers of dura mater in the brain Sinus (botany), a space or indentation, usually on a leaf Heart Sinus node, a structure in the superior part of the right atrium Sinus rhythm, normal beating on an ECG Coronary sinus, a vein collecting blood from the heart Sinus venosus, a cavity in the heart of embryos Sinus venarum, a part of the wall of the right atrium in adults, developed from the sinus venosus Other uses Sinus (Chalcidice), a town of ancient Chalcidice, Greece Sinus, gulf or bay in Latin, used in numerous toponyms in ancient writing (e.g., Sinus Magnus, Sinus Flanaticus, etc.) Sine, a trigonometric math function (Latin sinus) See also Sinus Medii, a small lunar mare Sinus Successus, a lunar feature Sines, a municipality in Alentejo, Portugal (Latin Sinus) Sinusitis, a common ailment resulting in the inflammation of the paranasal sinuses
Myokymia
Myokymia is an involuntary, spontaneous, localized quivering of a few muscles, or bundles within a muscle, which is insufficient to move a joint. One type is superior oblique myokymia. Myokymia is commonly used to describe an involuntary eyelid muscle contraction, typically involving the lower eyelid or less often the upper eyelid. It occurs in normal individuals and typically starts and disappears spontaneously. However, it can sometimes last up to three weeks. Since the condition typically resolves itself, medical professionals do not consider it to be serious or a cause for concern. In contrast, facial myokymia is a fine rippling of muscles on one side of the face and may reflect an underlying tumor in the brainstem (typically a brainstem glioma), loss of myelin in the brainstem (associated with multiple sclerosis) or the recovery stage of Miller-Fisher syndrome, a variant of Guillain–Barré syndrome, an inflammatory polyneuropathy that may affect the facial nerve. Myokymia in otherwise unrelated body parts may occur in neuromyotonia. Causes Frequent contributing factors include: too much caffeine, high levels of anxiety, fatigue, dehydration, stress, overwork, and a lack of sleep. Use of certain drugs or alcohol may also be factors, as can magnesium deficiency. It can also be seen in patients with multiple sclerosis. Treatment Doctors commonly recommend a combined treatment of a warm compress applied to the eyes (to relieve muscle tension, relax the muscles, and reduce swelling), a small dosage of antihistamine (to reduce any swelling that may be caused by an allergic reaction), increased bed rest and decreased exposure to computer screens, televisions, and harsh lighting (to allow muscles to rest), and monitoring caffeine intake (as too much caffeine can cause an adverse reaction such as eye twitching, but a controlled dose can serve as an effective treatment by increasing blood flow). Etymology The term comes from the Greek -mŷs – "muscle" – plus kŷm, -kŷmia – "something swollen" – or -kŷmos – "wave". See also Blepharospasm Carnitine palmitoyltransferase II deficiency Fasciculation Myoclonic jerk (myoclonus) References External links
Acute promyelocytic leukemia
Acute promyelocytic leukemia (APML, APL) is a subtype of acute myeloid leukemia (AML), a cancer of the white blood cells. In APL, there is an abnormal accumulation of immature granulocytes called promyelocytes. The disease is characterized by a chromosomal translocation involving the retinoic acid receptor alpha (RARA) gene and is distinguished from other forms of AML by its responsiveness to all-trans retinoic acid (ATRA; also known as tretinoin) therapy. Acute promyelocytic leukemia was first characterized in 1957 by French and Norwegian physicians as a hyperacute fatal illness, with a median survival time of less than a week. Today, prognoses have drastically improved; 10-year survival rates are estimated to be approximately 80-90% according to one study. Signs and symptoms The symptoms tend to be similar to AML in general with the following being possible symptoms: Easy bleeding from low platelets may include: Bruising (ecchymosis) Gingival bleeding Nose bleeds (epistaxis) Increased menstrual bleeding (menorrhagia) Brain bleed (intracerebral hemorrhage) Pathogenesis Acute promyelocytic leukemia is characterized by a chromosomal translocation involving the retinoic acid receptor alpha (RARA) gene on chromosome 17. In 95% of cases of APL, the RARA gene on chromosome 17 is involved in a reciprocal translocation with the promyelocytic leukemia gene (PML) on chromosome 15, a translocation denoted as t(15;17)(q24;q21). The RAR receptor is dependent on retinoic acid for regulation of transcription.Eight other rare gene rearrangements have been described in APL fusing RARA to promyelocytic leukemia zinc finger (PLZF), nucleophosmin, nuclear matrix associated, signal transducer and activator of transcription 5b (STAT5B), protein kinase A regulatory subunit 1α (PRKAR1A), factor interacting with PAPOLA and CPSF1 (FIP1L1), BCL-6 corepressor or oligonucleotide/oligosaccharide-binding fold containing 2A (NABP1) genes. Some of these rearrangements are ATRA-sensitive or have unknown sensitivity to ATRA because they are so rare; STAT5B/RARA and PLZF/RARA are known to be resistant to ATRA.The fusion of PML and RARA results in expression of a hybrid protein with altered functions. This fusion protein binds with enhanced affinity to sites on the cells DNA, blocking transcription and differentiation of granulocytes. It does so by enhancing interaction of nuclear co-repressor (NCOR) molecule and histone deacetylase (HDAC). Although the chromosomal translocation involving RARA is believed to be the initiating event, additional mutations are required for the development of leukemia.RARA/PLZF gene fusion produces a subtype of APL that is unresponsive to tretinoin therapy and less responsive to standard anthracycline chemotherapy hence leading to poorer long-term outcomes in this subset of patients. Diagnosis Acute promyelocytic leukemia can be distinguished from other types of AML based on microscopic examination of the blood film or a bone marrow aspirate or biopsy as well as finding the characteristic rearrangement. The presence of promyelocytes containing multiple Auer rods (termed faggot cells) on the peripheral blood smear is highly suggestive of acute promyelocytic leukemia. Definitive diagnosis requires testing for the PML/RARA fusion gene. This may be done by polymerase chain reaction (PCR), fluorescence in situ hybridization, or conventional cytogenetics of peripheral blood or bone marrow. This mutation involves a translocation of the long arm of chromosomes 15 and 17. 
On rare occasions, a cryptic translocation may occur which cannot be detected by cytogenetic testing; on these occasions PCR testing is essential to confirm the diagnosis. Treatment Initial treatment APL is unique among leukemias due to its sensitivity to all-trans retinoic acid (ATRA; tretinoin), the acid form of vitamin A. Treatment with ATRA dissociates the NCOR-HDAC complex from RAR and allows DNA transcription and differentiation of the immature leukemic promyelocytes into mature granulocytes by targeting the oncogenic transcription factor and its aberrant action. Unlike other chemotherapies, ATRA does not directly kill the malignant cells. ATRA induces the terminal differentiation of the leukemic promyelocytes, after which these differentiated malignant cells undergo spontaneous apoptosis. ATRA alone is capable of inducing remission, but the remission is short-lived in the absence of concurrent "traditional" chemotherapy. As of 2013 the standard concurrent treatment has become arsenic trioxide, which combined with ATRA is referred to as ATRA-ATO; before 2013 the standard of treatment was anthracycline (e.g. daunorubicin, doxorubicin, idarubicin or mitoxantrone)-based chemotherapy. Both regimens result in a clinical remission in approximately 90% of patients, with arsenic trioxide having a more favorable side-effect profile. ATRA therapy is associated with the unique side effect of differentiation syndrome. This is associated with the development of dyspnea, fever, weight gain and peripheral edema, and is treated with dexamethasone. The etiology of retinoic acid syndrome has been attributed to capillary leak syndrome from cytokine release from the differentiating promyelocytes. The monoclonal antibody gemtuzumab ozogamicin has been used successfully as a treatment for APL, although it has been withdrawn from the US market due to concerns regarding potential toxicity of the drug and it is not currently marketed in Australia, Canada or the UK. Given in conjunction with ATRA, it produces a response in around 84% of patients with APL, which is comparable to the rate seen in patients treated with ATRA and anthracycline-based therapy. It produces less cardiotoxicity than anthracycline-based treatments and hence may be preferable in these patients. Maintenance therapy After stable remission is induced, the standard of care was previously to undergo two years of maintenance chemotherapy with methotrexate, mercaptopurine and ATRA. A significant portion of patients relapsed without consolidation therapy. In the 2000 European APL study, the 2-year relapse rate for those that did not receive consolidation chemotherapy (ATRA not included) was 27%, compared to 11% in those that did receive consolidation therapy (p<0.01). Likewise, in the 2000 US APL study, the survival rate in those receiving ATRA maintenance was 61%, compared to just 36% without ATRA maintenance. However, recent research on consolidation therapy following ATRA-ATO, which became the standard treatment in 2013, has found that maintenance therapy in low-risk patients following this therapy may be unnecessary, although this is controversial. Relapsed or refractory disease Arsenic trioxide (As2O3) is currently being evaluated for treatment of relapsed/refractory disease. Remission with arsenic trioxide has been reported. Studies have shown arsenic reorganizes nuclear bodies and degrades the mutant PML-RARA fusion protein. Arsenic also increases caspase activity, which then induces apoptosis.
It does reduce the relapse rate for high risk patients. In Japan a synthetic retinoid, tamibarotene, is licensed for use as a treatment for ATRA-resistant APL. Investigational agents Some evidence supports the potential therapeutic utility of histone deacetylase inhibitors such as valproic acid or vorinostat in treating APL. According to one study, a cinnamon extract has effect on the apoptotic process in acute myeloid leukemia HL-60 cells. Prognosis Prognosis is generally good relative to other leukemias. Because of the acuteness of onset compared to other leukemias, early death is comparatively more common. If untreated, it has median survival of less than a month. It has been transformed from a highly fatal disease to a highly curable one. The cause of early death is most commonly severe bleeding, often intracranial hemorrhage. Early death from hemorrhage occurs in 5–10% of patients in countries with adequate access to healthcare and 20–30% of patients in less developed countries. Risk factors for early death due to hemorrhage include delayed diagnosis, late treatment initiation, and high white blood cell count on admission. Despite advances in treatment, early death rates have remained relatively constant, as described by several groups including Scott McClellan, Bruno Medeiros, and Ash Alizadeh at Stanford University.Relapse rates are extremely low. Most deaths following remission are from other causes, such as second malignancies, which in one study occurred in 8% of patients. In this study, second malignancies accounted for 41% of deaths, and heart disease, 29%. Survival rates were 88% at 6.3 years and 82% at 7.9 years.In another study, 10-year survival rate was estimated to be approximately 77%. Epidemiology Acute promyelocytic leukemia represents 10–12% of AML cases. The median age is approximately 30–40 years, which is considerably younger than the other subtypes of AML (70 years), however in elderly population APL has peculiar characteristics. Incidence is higher among individuals of Latin American or South European origin. It can also occur as a secondary malignancy in those that receive treatment with topoisomerase II inhibitors (such as the anthracyclines and etoposide) due to the carcinogenic effects of these agents, with patients with breast cancer representing the majority of such patients. Around 40% of patients with APL also have a chromosomal abnormality such as trisomy 8 or isochromosome 17 which do not appear to impact on long-term outcomes. References External links Sanz, Miguel A.; Grimwade, David; Tallman, Martin S.; Lowenberg, Bob; Fenaux, Pierre; Estey, Elihu H.; Naoe, Tomoki; Lengfelder, Eva; Büchner, Thomas; Döhner, Hartmut; Burnett, Alan K.; Lo-Coco, Francesco (2009). "Management of acute promyelocytic leukemia: Recommendations from an expert panel on behalf of the European Leukemia Net". Blood. 113 (9): 1875–1891. doi:10.1182/blood-2008-04-150250. hdl:1765/18239. PMID 18812465. PDQ Adult Treatment Editorial Board (2002–2020). "Adult Acute Myeloid Leukemia Treatment (PDQ®): Patient Version". Adult Acute Myeloid Leukemia Treatment (PDQ®). National Cancer Institute (US). PMID 26389377.
Post-void dribbling
Post-void dribbling occurs when urine remaining in the urethra after voiding the bladder slowly leaks out after urination. A common and usually benign complaint, it may be a symptom of urethral diverticulum, prostatitis and other medical problems. Some men who experience dribbling, especially after prostate cancer surgery, will choose to wear incontinence pads to stay dry. Also known as guards for men, these incontinence pads conform to the male body. Some of the most popular male guards are from TENA, Depend, and Prevail. Simple ways to prevent dribbling include: strengthening pelvic muscles with Kegel exercises, changing position while urinating, or pressing on the perineum to evacuate the remaining urine from the urethra. Sitting down while urinating has also been shown to alleviate complaints: a meta-analysis of the effects of voiding position in elderly males with benign prostatic hyperplasia found an improvement in urologic parameters in this position, while in healthy males no such influence was found. References External links eMedicine
Odynorgasmia
Odynorgasmia, or painful ejaculation, is a physical syndrome characterized by pain or a burning sensation in the urethra or perineum during or following ejaculation. Causes include infections associated with urethritis, prostatitis, and epididymitis, as well as the use of antidepressants. References
Cough
A cough is a sudden expulsion of air through the large breathing passages that can help clear them of fluids, irritants, foreign particles and microbes. As a protective reflex, coughing can be repetitive with the cough reflex following three phases: an inhalation, a forced exhalation against a closed glottis, and a violent release of air from the lungs following opening of the glottis, usually accompanied by a distinctive sound.Frequent coughing usually indicates the presence of a disease. Many viruses and bacteria benefit, from an evolutionary perspective, by causing the host to cough, which helps to spread the disease to new hosts. Most of the time, irregular coughing is caused by a respiratory tract infection but can also be triggered by choking, smoking, air pollution, asthma, gastroesophageal reflux disease, post-nasal drip, chronic bronchitis, lung tumors, heart failure and medications such as angiotensin-converting-enzyme inhibitors (ACE inhibitors). Treatment should target the cause; for example, smoking cessation or discontinuing ACE inhibitors. Cough suppressants such as codeine or dextromethorphan are frequently prescribed, but have been demonstrated to have little effect. Other treatment options may target airway inflammation or may promote mucus expectoration. As it is a natural protective reflex, suppressing the cough reflex might have damaging effects, especially if the cough is productive. Presentation Complications The complications of coughing can be classified as either acute or chronic. Acute complications include cough syncope (fainting spells due to decreased blood flow to the brain when coughs are prolonged and forceful), insomnia, cough-induced vomiting, subconjunctival hemorrhage or "red eye", coughing defecation and in women with a prolapsed uterus, cough urination. Chronic complications are common and include abdominal or pelvic hernias, fatigue fractures of lower ribs and costochondritis. Chronic or violent coughing can contribute to damage to the pelvic floor and a possible cystocele. Differential diagnosis A cough in children may be either a normal physiological reflex or due to an underlying cause. In healthy children it may be normal in the absence of any disease to cough ten times a day. The most common cause of an acute or subacute cough is a viral respiratory tract infection. A healthy adult also coughs 18.8 times a day on average, but in the population with respiratory disease the geometric mean frequency is 275 times a day. In adults with a chronic cough, i.e. a cough longer than 8 weeks, more than 90% of cases are due to post-nasal drip, asthma, eosinophilic bronchitis, and gastroesophageal reflux disease. The causes of chronic cough are similar in children with the addition of bacterial bronchitis. Infections A cough can be the result of a respiratory tract infection such as the common cold, COVID-19, acute bronchitis, pneumonia, pertussis, or tuberculosis. In the vast majority of cases, acute coughs, i.e. coughs shorter than 3 weeks, are due to the common cold. In people with a normal chest X-ray, tuberculosis is a rare finding. Pertussis is increasingly being recognised as a cause of troublesome coughing in adults. After a respiratory tract infection has cleared, the person may be left with a postinfectious cough. This typically is a dry, non-productive cough that produces no phlegm. Symptoms may include a tightness in the chest, and a tickle in the throat. This cough may often persist for weeks after an illness. 
The cause of the cough may be inflammation similar to that observed in repetitive stress disorders such as carpal tunnel syndrome. The repetition of coughing produces inflammation which produces discomfort, which in turn produces more coughing. Postinfectious cough typically does not respond to conventional cough treatments. Treatment consists of any anti-inflammatory medicine (such as ipratropium) to treat the inflammation, and a cough suppressant to reduce frequency of the cough until inflammation clears. Inflammation may increase sensitivity to other existing issues such as allergies, and treatment of other causes of coughs (such as use of an air purifier or allergy medicines) may help speed recovery. Reactive airway disease When coughing is the only complaint of a person who meets the criteria for asthma (bronchial hyperresponsiveness and reversibility), this is termed cough-variant asthma. Atopic cough and eosinophilic bronchitis are related conditions. Atopic cough occurs in individuals with a family history of atopy (an allergic condition), abundant eosinophils in the sputum, but with normal airway function and responsiveness. Eosinophilic bronchitis is characterized by eosinophils in sputum and in bronchoalveolar lavage fluid without airway hyperresponsiveness or an atopic background. This condition responds to treatment with corticosteroids. Cough can also worsen in an acute exacerbation of chronic obstructive pulmonary disease. Asthma is a common cause of chronic cough in adults and children. Coughing may be the only symptom the person has from their asthma, or asthma symptoms may also include wheezing, shortness of breath, and a tight feeling in their chest. Depending on how severe the asthma is, it can be treated with bronchodilators (medicine which causes the airways to open up) or inhaled steroids. Treatment of the asthma should make the cough go away. Chronic bronchitis is defined clinically as a persistent cough that produces sputum (phlegm) and mucus, for at least three months in two consecutive years. Chronic bronchitis is often the cause of "smokers cough". The tobacco smoke causes inflammation, secretion of mucus into the airway, and difficulty clearing that mucus out of the airways. Coughing helps clear those secretions out. May be treated by quitting smoking. May also be caused by pneumoconiosis and long-term fume inhalation. Gastroesophageal reflux In people with unexplained cough, gastroesophageal reflux disease should be considered. This occurs when acidic contents of the stomach come back up into the esophagus. Symptoms usually associated with GERD include heartburn, sour taste in the mouth, or a feeling of acid reflux in the chest, although, more than half of the people with cough from GERD dont have any other symptoms. An esophageal pH monitor can confirm the diagnosis of GERD. Sometimes GERD can complicate respiratory ailments related to cough, such as asthma or bronchitis. The treatment involves anti-acid medications and lifestyle changes with surgery indicated in cases not manageable with conservative measures. Air pollution Coughing may be caused by air pollution including tobacco smoke, particulate matter, irritant gases, and dampness in a home. The human health effects of poor air quality are far reaching, but principally affect the bodys respiratory system and the cardiovascular system. Individual reactions to air pollutants depend on the type of pollutant a person is exposed to, the degree of exposure, the individuals health status and genetics. 
People who exercise outdoors on hot, smoggy days, for example, increase their exposure to pollutants in the air. Foreign body A foreign body can sometimes be suspected, for example if the cough started suddenly when the patient was eating. Rarely, sutures left behind inside the airway branches can cause coughing. A cough can be triggered by dryness from mouth breathing or recurrent aspiration of food into the windpipe in people with swallowing difficulties. Angiotensin-converting enzyme inhibitor ACE inhibitors are drugs often used to treat high blood pressure that can sometimes be the cause of a cough as a side effect, and stopping their use will stop the cough. Tic cough A tic cough, previously called a habit cough, is one that responds to behavioral or psychiatric therapy after organic causes have been excluded. Absence of the cough during sleep is common, but not diagnostic. A tic cough is thought to be more common in children than in adults. A similar disorder is the somatic cough syndrome previously called the psychogenic cough. Neurogenic cough Some cases of chronic cough may be attributed to a sensory neuropathic disorder. Treatment for neurogenic cough may include the use of certain neuralgia medications. Coughing may occur in tic disorders such as Tourette syndrome, although it should be distinguished from throat-clearing in this disorder. Other Cough may also be caused by conditions affecting the lung tissue such as bronchiectasis, cystic fibrosis, interstitial lung diseases and sarcoidosis. Coughing can also be triggered by benign or malignant lung tumors or mediastinal masses. Through irritation of the nerve, diseases of the external auditory canal (wax, for example) can also cause cough. Cardiovascular diseases associated with cough are heart failure, pulmonary infarction and aortic aneurysm. Nocturnal cough is associated with heart failure, as the heart does not compensate for the increased volume shift to the pulmonary circulation, in turn causing pulmonary edema and resultant cough. Other causes of nocturnal cough include asthma, post-nasal drip and gastroesophageal reflux disease (GERD). Another cause of cough occurring preferentially in supine position is recurrent aspiration.Given its irritant nature to mammal tissues, capsaicin is widely used to determine the cough threshold and as a tussive stimulant in clinical research of cough suppressants. Capsaicin is what makes chili peppers spicy, and might explain why workers in factories with these fruits can develop a cough. Coughing may also be used for social reasons, and as such is not always involuntary. A voluntary cough, often written as "ahem", can be used to attract attention or express displeasure, as a form of nonverbal, paralingual metacommunication. Airway clearance Coughing, and huffing are important ways of removing mucus as sputum in many conditions such as cystic fibrosis, and chronic bronchitis. Pathophysiology A cough is a protective reflex in healthy individuals which is influenced by psychological factors. The cough reflex is initiated by stimulation of two different classes of afferent nerves, namely the myelinated rapidly adapting receptors, and nonmyelinated C-fibers with endings in the lung. Diagnostic approach The type of cough may help in the diagnosis. For instance, an inspiratory "whooping" sound on coughing almost doubles the likelihood that the illness is pertussis. 
Blood may occur in small amounts with severe cough of many causes, but larger amounts suggests bronchitis, bronchiectasis, tuberculosis, or primary lung cancer.Further workup may include labs, x-rays, and spirometry. Classification A cough can be classified by its duration, character, quality, and timing. The duration can be either acute (of sudden onset) if it is present less than three weeks, subacute if it is present between three or eight weeks, and chronic when lasting longer than eight weeks. A cough can be non-productive (dry) or productive (when phlegm is produced that may be coughed up as sputum). It may occur only at night (then called nocturnal cough), during both night and day, or just during the day.A number of characteristic coughs exist. While these have not been found to be diagnostically useful in adults, they are of use in children. A barky cough is part of the common presentation of croup. A staccato cough has been classically described with neonatal chlamydial pneumonia. Treatment The treatment of a cough in children is based on the underlying cause. In children half of cases go away without treatment in 10 days and 90% in 25 days.According to the American Academy of Pediatrics the use of cough medicine to relieve cough symptoms is supported by little evidence and thus not recommended for treating cough symptoms in children. There is tentative evidence that the use of honey is better than no treatment or diphenhydramine in decreasing coughing. It does not alleviate coughing to the same extent as dextromethorphan but it shortens the cough duration better than placebo and salbutamol. A trial of antibiotics or inhaled corticosteroids may be tried in children with a chronic cough in an attempt to treat protracted bacterial bronchitis or asthma respectively. There is insufficient evidence to recommend treating children who have a cough that is not related to a specific condition with inhaled anti-cholinergics.Because coughing can spread disease through infectious aerosol droplets, it is recommended to cover ones mouth and nose with the forearm, the inside of the elbow, a tissue or a handkerchief while coughing. Epidemiology A cough is the most common reason for visiting a primary care physician in the United States. Other animals Marine mammals such as dolphins and whales cannot cough. Some invertebrates such as insects cannot cough or sneeze. Domestic animals such as dogs and cats can cough, because of diseases, allergies, dust or choking. In particular, cats are known for coughing before spitting up a hairball.In other domestic animals, horses can cough because of infections, or due to poor ventilation and dust in enclosed spaces. Kennel cough in dogs can result from a viral or bacterial infection. Deer can cough similarly to humans as a result of respiratory tract infections, such as parasitic bronchitis caused by a species of Dictyocaulus. References As of this edit, this article uses content from "Acute cough: a diagnostic and therapeutic challenge", which is licensed in a way that permits reuse under the Creative Commons Attribution-ShareAlike 3.0 Unported License, but not under the GFDL. All relevant terms must be followed. Further reading Carroll, Thomas L., ed. (2019). Chronic Cough. Plural Publishing. ISBN 9781635500707. LCCN 2018055141. == External links ==
Palpitations
Palpitations are perceived abnormalities of the heartbeat characterized by awareness of cardiac muscle contractions in the chest; the heart may be felt to be beating hard, fast and/or irregularly. Symptoms include a rapid pulsation or an abnormally rapid or irregular beating of the heart. Palpitations are a sensory symptom and are often described as a skipped beat, rapid fluttering in the chest, a pounding sensation in the chest or neck, or a flip-flopping in the chest. Palpitation can be associated with anxiety and does not necessarily indicate a structural or functional abnormality of the heart, but it can be a symptom arising from an objectively rapid or irregular heartbeat. Palpitation can be intermittent and of variable frequency and duration, or continuous. Associated symptoms include dizziness, shortness of breath, sweating, headaches and chest pain. Palpitation may be associated with coronary heart disease, hyperthyroidism, diseases affecting cardiac muscle such as hypertrophic cardiomyopathy, diseases causing low blood oxygen such as asthma and emphysema; previous chest surgery; kidney disease; blood loss and pain; anemia; drugs such as antidepressants, statins, alcohol, nicotine, caffeine, cocaine and amphetamines; electrolyte imbalances of magnesium, potassium and calcium; and deficiencies of nutrients such as taurine, arginine, iron and vitamin B12. Signs and symptoms Three common descriptions of palpitation are "flip-flopping" (or "stop and start"), often caused by premature contraction of the atrium or ventricle, with the perceived "stop" from the pause following the contraction and the "start" from the subsequent forceful contraction; rapid "fluttering in the chest", with regular "fluttering" suggesting supraventricular or ventricular arrhythmias (including sinus tachycardia) and irregular "fluttering" suggesting atrial fibrillation, atrial flutter, or tachycardia with variable block; and "pounding in the neck" or neck pulsations, often due to cannon A waves in the jugular venous pulse that occur when the right atrium contracts against a closed tricuspid valve. Palpitation associated with chest pain suggests coronary artery disease, or if the chest pain is relieved by leaning forward, pericardial disease is suspected. Palpitation associated with light-headedness, fainting or near fainting suggests low blood pressure and may signify a life-threatening abnormal heart rhythm. Palpitation that occurs regularly with exertion suggests a rate-dependent bypass tract or hypertrophic cardiomyopathy. If a benign cause for these concerning symptoms cannot be found at the initial visit, then ambulatory monitoring or prolonged heart monitoring in the hospital might be warranted. Noncardiac symptoms should also be elicited since the palpitations may be caused by a normal heart responding to a metabolic or inflammatory condition. Weight loss suggests hyperthyroidism. Palpitation can be precipitated by vomiting or diarrhea that leads to electrolyte disorders and hypovolemia. Hyperventilation, hand tingling, and nervousness are common when anxiety or panic disorder is the cause of the palpitations. Causes The neural pathways responsible for the perception of the heartbeat have not been clearly elucidated. It has been hypothesized that these pathways include different structures located both at the intra-cardiac and extra-cardiac level. Palpitations are a widely reported complaint, particularly in subjects affected by structural heart disease.
The list of etiologies of palpitations is long, and in some cases the etiology cannot be determined. In one study reporting the etiology of palpitations, 43% were found to be of cardiac etiology, 31% of psychiatric etiology and approximately 10% were classified as miscellaneous (medication-induced, thyrotoxicosis, caffeine, cocaine, anemia, amphetamine, mastocytosis). The cardiac etiologies of palpitations are the most life-threatening and include ventricular sources (premature ventricular contractions (PVC), ventricular tachycardia and ventricular fibrillation), atrial sources (atrial fibrillation, atrial flutter), high-output states (anemia, AV fistula, Paget's disease of bone or pregnancy), structural abnormalities (congenital heart disease, cardiomegaly, aortic aneurysm, or acute left ventricular failure), and miscellaneous sources (postural orthostatic tachycardia syndrome, abbreviated as POTS, Brugada syndrome, and sinus tachycardia). Palpitation can be attributed to one of four main causes: Extra-cardiac stimulation of the sympathetic nervous system (inappropriate stimulation of the sympathetic and parasympathetic systems, particularly the vagus nerve, which innervates the heart, can be caused by anxiety and stress due to acute or chronic elevations in glucocorticoids and catecholamines; gastrointestinal distress such as bloating or indigestion, along with muscular imbalances and poor posture, can also irritate the vagus nerve, causing palpitations). Sympathetic overdrive (panic disorder, low blood sugar, hypoxia, antihistamines (levocetirizine), low red blood cell count, heart failure, mitral valve prolapse). Hyperdynamic circulation (valvular incompetence, thyrotoxicosis, hypercapnia, high body temperature, low red blood cell count, pregnancy). Abnormal heart rhythms (ectopic beat, premature atrial contraction, junctional escape beat, premature ventricular contraction, atrial fibrillation, supraventricular tachycardia, ventricular tachycardia, ventricular fibrillation, heart block). Palpitations can occur during times of catecholamine excess, such as during exercise or at times of stress. The cause of the palpitations during these conditions is often a sustained supraventricular tachycardia or ventricular tachyarrhythmia. Supraventricular tachycardias can also be induced at the termination of exercise when the withdrawal of catecholamines is coupled with a surge in vagal tone. Palpitations secondary to catecholamine excess may also occur during emotionally startling experiences, especially in patients with long QT syndrome. Psychiatric problems Anxiety and stress elevate the body's levels of cortisol and adrenaline, which in turn can interfere with the normal functioning of the parasympathetic nervous system, resulting in overstimulation of the vagus nerve. Vagus nerve-induced palpitation is felt as a thud, a hollow fluttery sensation, or a skipped beat, depending on at what point during the heart's normal rhythm the vagus nerve fires. In many cases, the anxiety and panic of experiencing palpitations cause a patient to experience further anxiety and increased vagus nerve stimulation. The link between anxiety and palpitation may also explain why many panic attacks involve an impending sense of cardiac arrest. Similarly, physical and mental stress may contribute to the occurrence of palpitation, possibly due to the depletion of certain micronutrients involved in maintaining healthy psychological and physiological function.
Gastrointestinal bloating, indigestion and hiccups have also been associated with overstimulation of the vagus nerve causing palpitations, due to branches of the vagus nerve innervating the GI tract, diaphragm, and lungs. Many psychiatric conditions can result in palpitations, including depression, generalized anxiety disorder, panic attacks, and somatization. However, one study noted that up to 67% of patients diagnosed with a mental health condition had an underlying arrhythmia. There are many metabolic conditions that can result in palpitations, including hyperthyroidism, hypoglycemia, hypocalcemia, hyperkalemia, hypokalemia, hypermagnesemia, hypomagnesemia, and pheochromocytoma. Medication The medications most likely to result in palpitations include sympathomimetic agents, anticholinergic drugs, vasodilators and withdrawal from beta blockers. Common etiologies also include excess caffeine or marijuana. Cocaine, amphetamines, and 3,4-methylenedioxymethamphetamine (Ecstasy or MDMA) can also cause palpitations. Pathophysiology The sensation of palpitations can arise from extra-systoles or tachyarrhythmia. It is very rarely noted due to bradycardia. Palpitations can be described in many ways. The most common descriptions include a flip-flopping in the chest, a rapid fluttering in the chest, or pounding in the neck. The description of the symptoms may provide a clue regarding the etiology of the palpitations, and the pathophysiology of each of these descriptions is thought to be different. In patients who describe the palpitations as a brief flip-flopping in the chest, the palpitations are thought to be caused by extra-systoles such as supraventricular or ventricular premature contractions. The flip-flop sensation is thought to result from the forceful contraction following the pause, and the sensation that the heart has stopped results from the pause. The sensation of rapid fluttering in the chest is thought to result from a sustained ventricular or supraventricular arrhythmia. Furthermore, the sudden cessation of this arrhythmia can suggest paroxysmal supraventricular tachycardia. This is further supported if the patient can stop the palpitations by using Valsalva maneuvers. The rhythm of the palpitations may indicate the etiology of the palpitations (irregular palpitations indicate atrial fibrillation as a source of the palpitations). An irregular pounding sensation in the neck can be caused by dissociation of atrial and ventricular contraction, with the atria contracting against closed tricuspid and mitral valves, thereby producing cannon A waves. Palpitations induced by exercise could be suggestive of cardiomyopathy, ischemia or channelopathies. Diagnosis The most important initial clue to the diagnosis is one's description of the palpitation. The approximate age of the person when the palpitations were first noticed and the circumstances under which they occur are important, as is information about caffeine intake (tea or coffee drinking), and whether continual palpitations can be stopped by deep breathing or changing body positions. It is also very helpful to know how they start and stop (abruptly or not), whether or not they are regular, and approximately how fast the pulse rate is during an attack. If the person has discovered a way of stopping the palpitations, that is also helpful information. A complete and detailed history and physical examination are two essential elements of the evaluation of a patient with palpitations.
The key components of a detailed history include age of onset, description of the symptoms including rhythm, situations that commonly result in the symptoms, mode of onset (rapid or gradual), duration of symptoms, factors that relieve symptoms (rest, Valsalva), positions, and other associated symptoms such as chest pain, lightheadedness or syncope. A patient can tap out the rhythm to help demonstrate what they felt if they are not currently experiencing the symptoms. The patient should be questioned regarding all medications, including over-the-counter medications. Social history, including exercise habits, caffeine consumption, alcohol and illicit drug use, should also be determined. Also, past medical history and family history may provide indications to the etiology of the palpitations. Palpitations that have occurred since childhood are most likely caused by a supraventricular tachycardia, whereas palpitations that first occur later in life are more likely to be secondary to structural heart disease. A rapid regular rhythm is more likely to be secondary to paroxysmal supraventricular tachycardia or ventricular tachycardia, and a rapid and irregular rhythm is more likely to be an indication of atrial fibrillation, atrial flutter, or tachycardia with variable block. Supraventricular and ventricular tachycardia are thought to result in palpitations with abrupt onset and abrupt termination. The ability to terminate the palpitations with a Valsalva maneuver is thought to suggest a supraventricular tachycardia. Palpitations associated with chest pain may suggest myocardial ischemia. Lastly, when lightheadedness or syncope accompanies the palpitations, ventricular tachycardia, supraventricular tachycardia, or other arrhythmias should be considered. The diagnosis is usually not made by a routine medical examination and scheduled electrical tracing of the heart's activity (ECG), because most people cannot arrange to have their symptoms present while visiting the hospital. Nevertheless, findings such as a heart murmur or an abnormality of the ECG might be indicative of the probable diagnosis. In particular, ECG changes that are associated with specific disturbances of the heart rhythm may be noticed; thus physical examination and ECG remain important in the assessment of palpitation. Moreover, a complete physical exam should be performed, including vital signs (with orthostatic vital signs), cardiac auscultation, lung auscultation, and examination of the extremities. A patient can tap out the rhythm to help demonstrate what they felt previously, if they are not currently experiencing the symptoms. Positive orthostatic vital signs may indicate dehydration or an electrolyte abnormality. A mid-systolic click and heart murmur may indicate mitral valve prolapse. A harsh holo-systolic murmur best heard at the left sternal border, which increases with Valsalva, may indicate hypertrophic obstructive cardiomyopathy. An irregular rhythm indicates atrial fibrillation or atrial flutter.
Evidence of cardiomegaly and peripheral edema may indicate heart failure and ischemia or a valvular abnormality. Blood tests, particularly tests of thyroid gland function, are also important baseline investigations (an overactive thyroid gland is a potential cause for palpitations; the treatment, in that case, is to treat the thyroid gland over-activity). The next level of diagnostic testing is usually 24-hour (or longer) ECG monitoring, using a recorder called a Holter monitor, which can record the ECG continuously during a 24-hour or 48-hour period. If symptoms occur during monitoring, it is a simple matter to examine the ECG recording and see what the cardiac rhythm was at the time. For this type of monitoring to be helpful, the symptoms must be occurring at least once a day. If they are less frequent, the chances of detecting anything with continuous 24- or even 48-hour monitoring are substantially lowered. More recent technology such as the Zio Patch allows continuous recording for up to 14 days; the patient indicates when symptoms occur by pushing a button on the device and keeps a log of the events. Other forms of monitoring are available, and these can be useful when symptoms are infrequent. A continuous-loop event recorder monitors the ECG continuously, but only saves the data when the wearer activates it. Once activated, it will save the ECG data for a period of time before the activation and for a period of time afterwards – the cardiologist who is investigating the palpitations can program the length of these periods. An implantable loop recorder may be helpful in people with very infrequent but disabling symptoms. This recorder is implanted under the skin on the front of the chest, like a pacemaker. It can be programmed, and the data can be examined, using an external device that communicates with it by means of a radio signal. Investigation of heart structure can also be important. The heart in most people with palpitation is completely normal in its physical structure, but occasionally abnormalities such as valve problems may be present. Usually, but not always, the cardiologist will be able to detect a murmur in such cases, and an ultrasound scan of the heart (echocardiogram) will often be performed to document the heart's structure. This is a painless test performed using sound waves and is virtually identical to the scanning done in pregnancy to look at the fetus. Evaluation A 12-lead electrocardiogram must be performed on every patient complaining of palpitations. The presence of a short PR interval and a delta wave (Wolff-Parkinson-White syndrome) is an indication of the existence of ventricular pre-excitation. Significant left ventricular hypertrophy with deep septal Q waves in leads I, aVL, and V4 through V6 may indicate hypertrophic obstructive cardiomyopathy. The presence of Q waves may indicate a prior myocardial infarction as the etiology of the palpitations, and a prolonged QT interval may indicate the presence of the long QT syndrome. Laboratory studies should be limited initially. A complete blood count can assess for anemia and infection. Serum urea, creatinine and electrolytes can assess for electrolyte imbalances and renal dysfunction. Thyroid function tests may demonstrate a hyperthyroid state. Most patients have benign conditions as the etiology for their palpitations. The goal of further evaluation is to identify those patients who are at high risk for an arrhythmia. Recommended laboratory studies include an investigation for anemia, hyperthyroidism and electrolyte abnormalities.
Echocardiograms are indicated for patients in whom structural heart disease is a concern. Further diagnostic testing is recommended for those in whom the initial diagnostic evaluation (history, physical examination, and EKG) suggests an arrhythmia, those who are at high risk for an arrhythmia, and those who remain anxious to have a specific explanation of their symptoms. People considered to be at high risk for an arrhythmia include those with organic heart disease or any myocardial abnormality that may lead to serious arrhythmias. These conditions include a scar from myocardial infarction, idiopathic dilated cardiomyopathy, clinically significant valvular regurgitant or stenotic lesions, and hypertrophic cardiomyopathies. An aggressive diagnostic approach is recommended for those at high risk and can include ambulatory monitoring or electrophysiologic studies. There are three types of ambulatory EKG monitoring devices: the Holter monitor, the continuous-loop event recorder, and the implantable loop recorder. People who are going to have these devices checked should be made aware of the properties of the devices and the accompanying course of the examination for each device. The Holter monitor is a 24-hour monitoring system that is worn by the patient and records and continuously saves data. Holter monitors are typically worn for a few days. The continuous-loop event recorder is also worn by the patient and continuously records data, but the data is saved only when someone manually activates the monitor. Continuous-loop recorders can be worn for longer periods of time than Holter monitors and therefore have been proven to be more cost-effective and efficacious than Holter monitors. Also, because the person triggers the device when they feel the symptoms, the recording is more likely to capture data during palpitations. An implantable loop recorder is a device that is placed subcutaneously and continuously monitors for cardiac arrhythmias. These are most often used in those with unexplained syncope and can be used for longer periods of time than the continuous-loop event recorders. Electrophysiology testing enables a detailed analysis of the underlying mechanism of the cardiac arrhythmia as well as the site of origin. EPS studies are usually indicated in those with a high pretest likelihood of a serious arrhythmia. The level of evidence for evaluation techniques is based upon consensus expert opinion. Treatment Treating palpitation will depend on the severity and cause of the condition. Radiofrequency ablation can cure most types of supraventricular and many types of ventricular tachycardias. While catheter ablation is currently a common treatment approach, there have been advances in stereotactic radioablation for certain arrhythmias. This technique is commonly used for solid tumors and has been applied with success in the management of difficult-to-treat ventricular tachycardia and atrial fibrillation. The most challenging cases involve palpitations that are secondary to supraventricular or ventricular ectopy or associated with normal sinus rhythm. These conditions are thought to be benign, and the management involves reassurance of the patient that these arrhythmias are not life-threatening.
In these situations, when the symptoms are unbearable or incapacitating, treatment with beta-blocking medications could be considered, and may provide a protective effect for otherwise healthy individuals. People who present to the emergency department who are asymptomatic, with unremarkable physical exams, non-diagnostic EKGs and normal laboratory studies, can safely be sent home and instructed to follow up with their primary care provider or cardiologist. Patients whose palpitations are associated with syncope, uncontrolled arrhythmias, hemodynamic compromise, or angina should be admitted for further evaluation. Palpitation that is caused by heart muscle defects will require specialist examination and assessment. Palpitation that is caused by vagus nerve stimulation rarely involves physical defects of the heart. Such palpitations are extra-cardiac in nature, that is, palpitation originating from outside the heart itself. Accordingly, vagus nerve induced palpitation is not evidence of an unhealthy heart muscle. Treatment of vagus nerve induced palpitation will need to address the cause of irritation to the vagus nerve or the parasympathetic nervous system generally. Notably, anxiety and stress are strongly associated with increased frequency and severity of vagus nerve induced palpitation. Anxiety and stress reduction techniques such as meditation and massage may prove extremely beneficial to reduce or eliminate symptoms temporarily. Changing body position (e.g. sitting upright rather than lying down) may also help reduce symptoms, due to the vagus nerve's innervation of several structures within the body such as the GI tract, diaphragm and lungs. Prognosis Direct-to-consumer options for monitoring heart rate and heart rate variability have become increasingly prevalent using smartphones and smartwatches. These monitoring systems have become increasingly validated and may help provide early identification for those at risk for a serious arrhythmia such as atrial fibrillation. Palpitations can be a very concerning symptom for people. The etiology of the palpitations in most patients is benign; therefore, comprehensive workups are not indicated. However, appropriate follow-up with the primary care provider can provide the ability to monitor symptoms over time and determine if consultation with a cardiologist is required. People who are determined to be at high risk for palpitations of serious or life-threatening etiologies require a more extensive workup and comprehensive management. Once a cause is determined, the recommendations for treatment are quite strong, with moderate- to high-quality evidence for the therapies studied. Partnership with the people who have the chief complaint of palpitation, using a shared decision-making model and involving an interprofessional team including a nurse, nurse practitioner, physician assistant, and physician, can help best direct therapy and provide good follow-up. Prevalence Palpitations are a common complaint in the general population, particularly in those affected by structural heart disease. Clinical presentation is divided into four groups: extra-systolic, tachycardic, anxiety-related, and intense. Anxiety-related is the most common. See also Cardiac dysrhythmia References External links MedlinePlus Medical Encyclopedia, NIH
ABO blood group system
The ABO blood group system is used to denote the presence of one, both, or neither of the A and B antigens on erythrocytes. For human blood transfusions, it is the most important of the 43 different blood type (or group) classification systems currently recognized by the International Society of Blood Transfusion (ISBT) as of June 2021. A mismatch (very rare in modern medicine) in this, or any other serotype, can cause a potentially fatal adverse reaction after a transfusion, or an unwanted immune response to an organ transplant. The associated anti-A and anti-B antibodies are usually IgM antibodies, produced in the first years of life by sensitization to environmental substances such as food, bacteria, and viruses. The ABO blood types were discovered by Karl Landsteiner in 1901; he received the Nobel Prize in Physiology or Medicine in 1930 for this discovery. ABO blood types are also present in other primates such as apes and Old World monkeys. History Discovery The ABO blood types were first discovered by an Austrian physician, Karl Landsteiner, working at the Pathological-Anatomical Institute of the University of Vienna (now Medical University of Vienna). In 1900, he found that red blood cells would clump together (agglutinate) when mixed in test tubes with sera from different persons, and that some human blood also agglutinated with animal blood. He wrote a two-sentence footnote: The serum of healthy human beings not only agglutinates animal red cells, but also often those of human origin, from other individuals. It remains to be seen whether this appearance is related to inborn differences between individuals or it is the result of some damage of bacterial kind. This was the first evidence that blood variations exist in humans – until then it had been believed that all humans have similar blood. The next year, in 1901, he made a definitive observation that the blood serum of an individual would agglutinate with only those of certain individuals. Based on this he classified human blood into three groups, namely group A, group B, and group C. He found that group A blood agglutinates with group B, but never with its own type. Similarly, group B blood agglutinates with group A. Group C blood is different in that it agglutinates with both A and B. This was the discovery of blood groups, for which Landsteiner was awarded the Nobel Prize in Physiology or Medicine in 1930. In his paper, he referred to the specific blood group interactions as isoagglutination, and also introduced the concept of agglutinins (antibodies), which is the actual basis of the antigen-antibody reaction in the ABO system. He asserted: [It] may be said that there exist at least two different types of agglutinins, one in A, another one in B, and both together in C. The red blood cells are inert to the agglutinins which are present in the same serum. Thus, he discovered two antigens (agglutinogens A and B) and two antibodies (agglutinins – anti-A and anti-B). His third group (C) indicated absence of both A and B antigens, but contains anti-A and anti-B. The following year, his students Adriano Sturli and Alfred von Decastello discovered the fourth type (without naming it, simply referring to it as "no particular type"). In 1910, Ludwik Hirszfeld and Emil Freiherr von Dungern introduced the term O (null) for the group Landsteiner designated as C, and AB for the type discovered by Sturli and von Decastello. They were also the first to explain the genetic inheritance of the blood groups.
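The agglutination pattern Landsteiner described follows a simple rule: a person's serum contains antibodies against whichever A or B antigens their own red cells lack, so donor red cells bearing those antigens are attacked. The following minimal sketch (in Python, with illustrative names chosen for this article rather than taken from any library) expresses that antigen-antibody logic as code; it models the ABO system only and deliberately ignores Rh and every other blood group system, so it is an illustration of the rule, not a transfusion-safety tool.

```python
# Minimal sketch of ABO red-cell compatibility based on the antigen/antibody
# logic described above: a recipient's serum carries antibodies against
# whichever A/B antigens their own cells lack, and donor red cells bearing
# those antigens will agglutinate. Rh and all other blood group systems are
# deliberately ignored here.

ANTIGENS = {"O": set(), "A": {"A"}, "B": {"B"}, "AB": {"A", "B"}}

def recipient_antibodies(blood_type: str) -> set:
    """Anti-A/anti-B antibodies expected in a recipient of this ABO type."""
    return {"A", "B"} - ANTIGENS[blood_type]

def red_cells_compatible(donor: str, recipient: str) -> bool:
    """True if donor red cells carry no antigen targeted by the recipient's antibodies."""
    return ANTIGENS[donor].isdisjoint(recipient_antibodies(recipient))

if __name__ == "__main__":
    for recipient in ("O", "A", "B", "AB"):
        ok = [d for d in ("O", "A", "B", "AB") if red_cells_compatible(d, recipient)]
        print(f"Recipient {recipient:<2} can receive red cells from: {', '.join(ok)}")
    # Expected: O only from O; A from O or A; B from O or B; AB from all four.
```

Running the sketch reproduces the familiar pattern: group O red cells are acceptable to any ABO recipient, while group AB recipients, having no anti-A or anti-B antibodies, can receive red cells of any ABO group.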
Classification systems Czech serologist Jan Janský independently introduced blood type classification in 1907 in a local journal. He used the Roman numerals I, II, III, and IV (corresponding to modern O, A, B, and AB). Unknown to Janský, an American physician, William L. Moss, devised a slightly different classification using the same numerals; his I, II, III, and IV corresponded to modern AB, A, B, and O. These two systems created confusion and potential danger in medical practice. Moss's system was adopted in Britain, France, and the US, while Janský's was preferred in most European countries and some parts of the US. To resolve the chaos, the American Association of Immunologists, the Society of American Bacteriologists, and the Association of Pathologists and Bacteriologists made a joint recommendation in 1921 that the Janský classification be adopted based on priority. But it was not followed, particularly where Moss's system had been used. By 1927, Landsteiner had moved to the Rockefeller Institute for Medical Research in New York. As a member of a committee of the National Research Council concerned with blood grouping, he suggested substituting Janský's and Moss's systems with the letters O, A, B, and AB. (There was another confusion over the use of the figure 0 for German null, as introduced by Hirszfeld and von Dungern, because others used the letter O for ohne, meaning "without" or zero; Landsteiner chose the latter.) This classification was adopted by the National Research Council and became variously known as the National Research Council classification, the International classification, and, most popularly, the "new" Landsteiner classification. The new system was gradually accepted, and by the early 1950s it was universally followed. Other developments The first practical use of blood typing in transfusion was by an American physician, Reuben Ottenberg, in 1907. Large-scale application began during the First World War (1914–1915), when sodium citrate began to be used to prevent blood clotting. Felix Bernstein demonstrated the correct blood group inheritance pattern of multiple alleles at one locus in 1924. Watkins and Morgan, in England, discovered that the ABO epitopes were conferred by sugars: to be specific, N-acetylgalactosamine for the A type and galactose for the B type. After much published literature claiming that the ABH substances were all attached to glycosphingolipids, Finne et al. (1978) found that the human erythrocyte glycoproteins contain polylactosamine chains that carry ABH substances and represent the majority of the antigens. The main glycoproteins carrying the ABH antigens were identified to be the Band 3 and Band 4.5 proteins and glycophorin. Later, Yamamoto's group showed the precise glycosyl transferase set that confers the A, B and O epitopes. Genetics Blood groups are inherited from both parents. The ABO blood type is controlled by a single gene (the ABO gene) with three types of alleles inferred from classical genetics: i, IA, and IB. The I designation stands for isoagglutinogen, another term for antigen. The gene encodes a glycosyltransferase—that is, an enzyme that modifies the carbohydrate content of the red blood cell antigens. The gene is located on the long arm of the ninth chromosome (9q34). The IA allele gives type A, IB gives type B, and i gives type O. As both IA and IB are dominant over i, only ii people have type O blood. Individuals with IAIA or IAi have type A blood, and individuals with IBIB or IBi have type B.
IAIB people have both phenotypes, because A and B express a special dominance relationship: codominance, which means that type A and B parents can have an AB child. A couple with type A and type B can also have a type O child if they are both heterozygous (IAi and IBi). The cis-AB phenotype has a single enzyme that creates both A and B antigens. The resulting red blood cells do not usually express A or B antigen at the same level that would be expected on common group A1 or B red blood cells, which can help solve the problem of an apparently genetically impossible blood group. A child inherits one allele from each parent, giving four possible combinations, each with a 25% chance, although some combinations produce the same phenotype: AO and AA both test as type A, and BO and BB both test as type B. Historically, ABO blood tests were used in paternity testing, but in 1957 they could exonerate only about 50% of falsely accused American men. Occasionally, the blood types of children are not consistent with expectations—for example, a type O child can be born to an AB parent—due to rare situations, such as the Bombay phenotype and cis AB. Subgroups The A blood type contains about 20 subgroups, of which A1 and A2 are the most common (over 99%). A1 makes up about 80% of all A-type blood, with A2 making up almost all of the rest. These two subgroups are not always interchangeable as far as transfusion is concerned, as some A2 individuals produce antibodies against the A1 antigen. Complications can sometimes arise in rare cases when typing the blood. With the development of DNA sequencing, it has been possible to identify a much larger number of alleles at the ABO locus, each of which can be categorized as A, B, or O in terms of the reaction to transfusion, but which can be distinguished by variations in the DNA sequence. One study identified six common alleles of the ABO gene in white individuals that produce one's blood type; the same study also identified 18 rare alleles, which generally have a weaker glycosylation activity. People with weak alleles of A can sometimes express anti-A antibodies, though these are usually not clinically significant as they do not stably interact with the antigen at body temperature. Cis AB is another rare variant, in which A and B genes are transmitted together from a single parent. Distribution and evolutionary history The distribution of the blood groups A, B, O and AB varies across the world according to the population. There are also variations in blood type distribution within human subpopulations. In the UK, the distribution of blood type frequencies through the population still shows some correlation to the distribution of placenames and to the successive invasions and migrations including Celts, Norsemen, Danes, Anglo-Saxons, and Normans, who contributed the morphemes to the placenames and the genes to the population. The native Celts tended to have more type O blood, while the other populations tended to have more type A. The two common O alleles, O01 and O02, share their first 261 nucleotides with the group A allele A01. However, unlike the group A allele, a guanosine base is subsequently deleted. A premature stop codon results from this frame-shift mutation. This variant is found worldwide, and likely predates human migration from Africa.
The O01 allele is considered to predate the O02 allele. Some evolutionary biologists theorize that there are four main lineages of the ABO gene and that mutations creating type O have occurred at least three times in humans. From oldest to youngest, these lineages comprise the following alleles: A101/A201/O09, B101, O02 and O01. The continued presence of the O alleles is hypothesized to be the result of balancing selection. Both theories contradict the previously held theory that type O blood evolved first. Origin theories It is possible that food and environmental antigens (bacterial, viral, or plant antigens) have epitopes similar enough to the A and B glycoprotein antigens. The antibodies created against these environmental antigens in the first years of life can cross-react with ABO-incompatible red blood cells encountered during blood transfusion later in life. Anti-A antibodies are hypothesized to originate from an immune response towards the influenza virus, whose epitopes are similar enough to the N-acetylgalactosamine on the A glycoprotein to be able to elicit a cross-reaction. Anti-B antibodies are hypothesized to originate from antibodies produced against Gram-negative bacteria, such as E. coli, cross-reacting with the α-D-galactose on the B glycoprotein. However, it is more likely that the force driving evolution of allele diversity is simply negative frequency-dependent selection; cells with rare variants of membrane antigens are more easily distinguished by the immune system from pathogens carrying antigens from other hosts. Thus, individuals possessing rare types are better equipped to detect pathogens. The high within-population diversity observed in human populations would, then, be a consequence of natural selection on individuals. Clinical relevance The carbohydrate molecules on the surfaces of red blood cells have roles in cell membrane integrity, cell adhesion, membrane transportation of molecules, and acting as receptors for extracellular ligands and enzymes. ABO antigens have similar roles on epithelial cells as well as red blood cells. Bleeding and thrombosis (von Willebrand factor) The ABO antigen is also expressed on the von Willebrand factor (vWF) glycoprotein, which participates in hemostasis (control of bleeding). In fact, having type O blood predisposes to bleeding, as 30% of the total genetic variation observed in plasma vWF is explained by the effect of the ABO blood group, and individuals with group O blood normally have significantly lower plasma levels of vWF (and Factor VIII) than do non-O individuals. In addition, vWF is degraded more rapidly due to the higher prevalence in blood group O of the Cys1584 variant of vWF (an amino acid polymorphism in VWF): the gene for ADAMTS13 (vWF-cleaving protease) maps to human chromosome 9 band q34.2, the same locus as the ABO blood type. Higher levels of vWF are more common amongst people who have had ischemic stroke (from blood clotting) for the first time. The results of this study found that the occurrence was not affected by ADAMTS13 polymorphism, and the only significant genetic factor was the person's blood group. ABO hemolytic disease of the newborn ABO blood group incompatibilities between the mother and child do not usually cause hemolytic disease of the newborn (HDN) because antibodies to the ABO blood groups are usually of the IgM type, which do not cross the placenta.
However, in an O-type mother, IgG ABO antibodies are produced and the baby can potentially develop ABO hemolytic disease of the newborn. Clinical applications In human cells, the ABO alleles and their encoded glycosyltransferases have been described in several oncologic conditions. Using anti-GTA/GTB monoclonal antibodies, it was demonstrated that a loss of these enzymes was correlated with malignant bladder and oral epithelia. Furthermore, the expression of ABO blood group antigens in normal human tissues is dependent on the type of differentiation of the epithelium. In most human carcinomas, including oral carcinoma, a significant event as part of the underlying mechanism is decreased expression of the A and B antigens. Several studies have observed that a relative down-regulation of GTA and GTB occurs in oral carcinomas in association with tumor development. More recently, a genome-wide association study (GWAS) has identified variants in the ABO locus associated with susceptibility to pancreatic cancer. In addition, another large GWAS study has associated ABO histo-blood groups as well as FUT2 secretor status with the presence in the intestinal microbiome of specific bacterial species. In this case the association was with Bacteroides and Faecalibacterium spp. Bacteroides of the same OTU (operational taxonomic unit) have been shown to be associated with inflammatory bowel disease; thus the study suggests an important role for the ABO histo-blood group antigens as candidates for direct modulation of the human microbiome in health and disease. Clinical marker A multi-locus genetic risk score study based on a combination of 27 loci, including the ABO gene, identified individuals at increased risk for both incident and recurrent coronary artery disease events, as well as an enhanced clinical benefit from statin therapy. The study was based on a community cohort study (the Malmö Diet and Cancer study) and four additional randomized controlled trials of primary prevention cohorts (JUPITER and ASCOT) and secondary prevention cohorts (CARE and PROVE IT-TIMI 22). Alteration of ABO antigens for transfusion In April 2007, an international team of researchers announced in the journal Nature Biotechnology an inexpensive and efficient way to convert types A, B, and AB blood into type O. This is done by using glycosidase enzymes from specific bacteria to strip the blood group antigens from red blood cells. The removal of A and B antigens still does not address the problem of the Rh blood group antigen on the blood cells of Rh positive individuals, and so blood from Rh negative donors must be used. This type of blood is named "enzyme converted to O" (ECO) blood. Patient trials will be conducted before the method can be relied on in live situations. One such Phase II trial was done on B-to-O blood in 2002. Another approach to the blood antigen problem is the manufacture of artificial blood, which could act as a substitute in emergencies. Pseudoscience During the 1930s, connecting blood groups to personality types became popular in Japan and other areas of the world. Studies of this association have yet to confirm its existence definitively. Other popular but unsupported ideas include the use of a blood type diet, claims that group A causes severe hangovers, group O is associated with perfect teeth, and those with blood group A2 have the highest IQs. Scientific evidence in support of these concepts is limited at best.
See also Secretor status – secretion of ABO antigens in body fluids References Further reading Dean L (2005). "Chapter 5: The ABO blood group". Blood Groups and Red Cell Antigens. Retrieved 24 March 2007. Farr A (1 April 1979). "Blood group serology – the first four decades (1900–1939)". Med Hist. 23 (2): 215–26. doi:10.1017/s0025727300051383. PMC 1082436. PMID 381816. External links ABO at BGMUT Blood Group Antigen Gene Mutation Database at NCBI, NIH Encyclopædia Britannica, ABO blood group system National Blood Transfusion Service
Herpesviridae
Herpesviridae is a large family of DNA viruses that cause infections and certain diseases in animals, including humans. The members of this family are also known as herpesviruses. The family name is derived from the Greek word ἕρπειν (herpein, "to creep"), referring to spreading cutaneous lesions, usually involving blisters, seen in flares of herpes simplex 1, herpes simplex 2 and herpes zoster (shingles). In 1971, the International Committee on the Taxonomy of Viruses (ICTV) established Herpesvirus as a genus with 23 viruses among four groups. As of 2020, 115 species are recognized, all but one of which are in one of the three subfamilies. Herpesviruses can cause both latent and lytic infections. Nine herpesvirus types are known to primarily infect humans, at least five of which – herpes simplex viruses 1 and 2 (HSV-1 and HSV-2, also known as HHV-1 and HHV-2; both of which can cause orolabial herpes and genital herpes), varicella zoster virus (or HHV-3; the cause of chickenpox and shingles), Epstein–Barr virus (EBV or HHV-4; implicated in several diseases, including mononucleosis and some cancers), and human cytomegalovirus (HCMV or HHV-5) – are extremely common among humans. More than 90% of adults have been infected with at least one of these, and a latent form of the virus remains in almost all humans who have been infected. Other human herpesviruses are human herpesvirus 6A and 6B (HHV-6A and HHV-6B), human herpesvirus 7 (HHV-7), and Kaposi's sarcoma-associated herpesvirus (KSHV, also known as HHV-8). In total, more than 130 herpesviruses are known, some of them from mammals, birds, fish, reptiles, amphibians, and molluscs. Among the animal herpesviruses are pseudorabies virus, the causative agent of Aujeszky's disease in pigs, and bovine herpesvirus 1, the causative agent of bovine infectious rhinotracheitis and pustular vulvovaginitis. Taxonomy Subfamily Alphaherpesvirinae Iltovirus Mardivirus Scutavirus Simplexvirus Varicellovirus Subfamily Betaherpesvirinae Cytomegalovirus Muromegalovirus Proboscivirus Quwivirus Roseolovirus Subfamily Gammaherpesvirinae Bossavirus Lymphocryptovirus Macavirus Manticavirus Patagivirus Percavirus Rhadinovirus Additionally, the species Iguanid herpesvirus 2 is currently unassigned to a genus and subfamily. See Herpesvirales#Taxonomy for information on taxonomic history, phylogenetic research, and the nomenclatural system. Structure All members of the Herpesviridae share a common structure: a relatively large, monopartite, double-stranded, linear DNA genome encoding 100–200 genes encased within an icosahedral protein cage (with T=16 symmetry) called the capsid, which is itself wrapped in a protein layer called the tegument, containing both viral proteins and viral mRNAs, and a lipid bilayer membrane called the envelope. This whole particle is known as a virion. The structural components of a typical HSV virion are the lipid bilayer envelope, tegument, DNA, glycoprotein spikes and nucleocapsid. The four-component herpes simplex virion encloses the double-stranded DNA genome within an icosahedral nucleocapsid, which is surrounded by the tegument. The tegument contains filaments, each 7 nm wide. It is an amorphous layer with some structured regions. Finally, the particle is covered with a lipoprotein envelope. There are spikes made of glycoprotein protruding from each virion. These can expand the diameter of the virus to 225 nm. The diameters of virions without spikes are around 186 nm. There are at least two unglycosylated membrane proteins in the outer envelope of the virion.
There are also 11 glycoproteins. These are gB, gC, gD, gE, gG, gH, gI, gJ, gK, gL and gM. The tegument contains 26 proteins, which have duties such as capsid transport to the nucleus and other organelles, activation of early gene transcription, and mRNA degradation. The icosahedral nucleocapsid is similar to that of tailed bacteriophages in the order Caudovirales. This capsid has 161 capsomers consisting of 150 hexons and 11 pentons, as well as a portal complex that allows entry and exit of DNA into the capsid. Life cycle All herpesviruses are nuclear-replicating—the viral DNA is transcribed to mRNA within the infected cell's nucleus. Infection is initiated when a viral particle contacts a cell with specific types of receptor molecules on the cell surface. Following binding of viral envelope glycoproteins to cell membrane receptors, the virion is internalized and dismantled, allowing viral DNA to migrate to the cell nucleus. Within the nucleus, replication of viral DNA and transcription of viral genes occurs. During symptomatic infection, infected cells transcribe lytic viral genes. In some host cells, a small number of viral genes termed latency-associated transcripts (LAT) accumulate instead. In this fashion, the virus can persist in the cell (and thus the host) indefinitely. While primary infection is often accompanied by a self-limited period of clinical illness, long-term latency is symptom-free. Chromatin dynamics regulate the transcription competency of entire herpesvirus genomes. When the virus enters a cell, the cellular immune response is to protect the cell. The cell does so by wrapping the viral DNA around histones and condensing it into chromatin, causing the virus to become dormant, or latent. If cells are unsuccessful and the chromatin is loosely bundled, the viral DNA is still accessible. The viral particles can turn on their genes and replicate using cellular machinery to reactivate, starting a lytic infection. Reactivation of latent viruses has been implicated in a number of diseases (e.g. shingles, pityriasis rosea). Following activation, transcription of viral genes transitions from LAT to multiple lytic genes; these lead to enhanced replication and virus production. Often, lytic activation leads to cell death. Clinically, lytic activation is often accompanied by emergence of nonspecific symptoms, such as low-grade fever, headache, sore throat, malaise, and rash, as well as clinical signs such as swollen or tender lymph nodes and immunological findings such as reduced levels of natural killer cells. In animal models, local trauma and systemic stress have been found to induce reactivation of latent herpesvirus infection. Cellular stressors like transient interruption of protein synthesis and hypoxia are also sufficient to induce viral reactivation. Evolution The three mammalian subfamilies – Alpha-, Beta- and Gammaherpesvirinae – arose approximately 180 to 220 mya. The major sublineages within these subfamilies were probably generated before the mammalian radiation of 80 to 60 mya. Speciations within sublineages took place in the last 80 million years, probably with a major component of cospeciation with host lineages. All the herpesviruses currently known from birds and reptiles are alphaherpesviruses.
Although the branching order of the herpesviruses has not yet been resolved, because herpesviruses and their hosts tend to coevolve, this is suggestive that the alphaherpesviruses may have been the earliest branch. The time of origin of the genus Iltovirus has been estimated to be 200 mya, while those of the Mardivirus and Simplexvirus genera have been estimated to be between 150 and 100 mya. Immune system evasions Herpesviruses are known for their ability to establish lifelong infections. One way this is possible is through immune evasion. Herpesviruses have many different ways of evading the immune system. One such way is by encoding a protein mimicking human interleukin 10 (hIL-10), and another is by downregulation of the major histocompatibility complex II (MHC II) in infected cells. cmvIL-10 Research conducted on cytomegalovirus (CMV) indicates that the viral human IL-10 homolog, cmvIL-10, is important in inhibiting pro-inflammatory cytokine synthesis. The cmvIL-10 protein has 27% identity with hIL-10 and only one conserved residue out of the nine amino acids that make up the functional site for cytokine synthesis inhibition on hIL-10. There is, however, much similarity in the functions of hIL-10 and cmvIL-10. Both have been shown to downregulate IFN-γ, IL-1α, GM-CSF, IL-6 and TNF-α, which are all pro-inflammatory cytokines. They have also been shown to play a role in downregulating MHC I and MHC II and upregulating HLA-G (non-classical MHC I). These two events allow for immune evasion by suppressing the cell-mediated immune response and natural killer cell response, respectively. The similarities between hIL-10 and cmvIL-10 may be explained by the fact that hIL-10 and cmvIL-10 both use the same cell surface receptor, the hIL-10 receptor. One difference in the function of hIL-10 and cmvIL-10 is that hIL-10 causes human peripheral blood mononuclear cells (PBMC) to both increase and decrease in proliferation, whereas cmvIL-10 only causes a decrease in proliferation of PBMCs. This indicates that cmvIL-10 may lack the stimulatory effects that hIL-10 has on these cells. It was found that cmvIL-10 functions through phosphorylation of the Stat3 protein. It was originally thought that this phosphorylation was a result of the JAK-STAT pathway. However, despite evidence that JAK does indeed phosphorylate Stat3, its inhibition has no significant influence on cytokine synthesis inhibition. Another protein, PI3K, was also found to phosphorylate Stat3. PI3K inhibition, unlike JAK inhibition, did have a significant impact on cytokine synthesis. The difference between PI3K and JAK in Stat3 phosphorylation is that PI3K phosphorylates Stat3 on the S727 residue whereas JAK phosphorylates Stat3 on the Y705 residue. This difference in phosphorylation positions seems to be the key factor in Stat3 activation leading to inhibition of pro-inflammatory cytokine synthesis. In fact, when a PI3K inhibitor is added to cells, the cytokine synthesis levels are significantly restored. The fact that cytokine levels are not completely restored indicates there is another pathway activated by cmvIL-10 that is inhibiting cytokine synthesis. The proposed mechanism is that cmvIL-10 activates PI3K, which in turn activates PKB (Akt). PKB may then activate mTOR, which may target Stat3 for phosphorylation on the S727 residue. MHC downregulation Another one of the many ways in which herpesviruses evade the immune system is by downregulation of MHC I and MHC II. This is observed in almost every human herpesvirus.
Downregulation of MHC I and MHC II can come about by many different mechanisms, most causing the MHC to be absent from the cell surface. As discussed above, one way is by a viral chemokine homolog such as IL-10. Another mechanism to downregulate MHCs is to encode viral proteins that detain the newly formed MHC in the endoplasmic reticulum (ER). The MHC cannot reach the cell surface and therefore cannot activate the T cell response. The MHCs can also be targeted for destruction in the proteasome or lysosome. The ER protein TAP also plays a role in MHC downregulation. Viral proteins inhibit TAP, preventing the MHC from picking up a viral antigen peptide. This prevents proper folding of the MHC, and therefore the MHC does not reach the cell surface. Human herpesvirus types Nine distinct viruses in this family are known to cause disease in humans. Zoonotic herpesviruses In addition to the herpesviruses considered endemic in humans, some viruses associated primarily with animals may infect humans; these are zoonotic infections. Animal herpesviruses In animal virology, the best known herpesviruses belong to the subfamily Alphaherpesvirinae. Research on pseudorabies virus (PrV), the causative agent of Aujeszky's disease in pigs, has pioneered animal disease control with genetically modified vaccines. PrV is now extensively studied as a model for basic processes during lytic herpesvirus infection, and for unraveling molecular mechanisms of herpesvirus neurotropism, whereas bovine herpesvirus 1, the causative agent of bovine infectious rhinotracheitis and pustular vulvovaginitis, is analyzed to elucidate molecular mechanisms of latency. The avian infectious laryngotracheitis virus is phylogenetically distant from these two viruses and serves to underline similarity and diversity within the Alphaherpesvirinae. Research Research is currently ongoing into a variety of side effects or co-conditions related to the herpesviruses. See also Accipitrid herpesvirus 1 Agua Preta virus, a potential herpesvirus References External links ICTV International Committee on Taxonomy of Viruses (official site) Viralzone: Herpesviridae Animal viruses Article on Cercopithecine herpesvirus National B Virus Resource Center Pityriasis Rosea overview Herpes simplex: Host viral protein interactions. A database of Host/HSV-1 interactions Virus Pathogen Database and Analysis Resource (ViPR): Herpesviridae
Plague
Plague or The Plague may refer to: Agriculture, fauna, and medicine Plague (disease), a disease caused by Yersinia pestis An epidemic of infectious disease (medical or agricultural) A pandemic caused by such a disease A swarm of pest insects such as locusts A massive attack of other pests afflicting agriculture Overpopulation in wild animals afflicting the environment and/or agriculture Plague, collective noun for common grackles Historical plagues List of epidemics Antonine Plague, an ancient pandemic in 165–189 CE brought to the Roman Empire by troops returning from campaigns in the Near East Black Death, the Eurasian pandemic beginning in the 14th century Great Northern War plague outbreak, a European outbreak in the early 18th century Great Plague of London, a massive outbreak in England that killed an estimated 20% of London's population in 1665–1666 Plague of Athens, a devastating epidemic which hit Athens in ancient Greece in 430 BCE Plague of Justinian, a pandemic in 541–542 CE in the Byzantine Empire Plague Riot, a riot in Moscow in 1771 caused by an outbreak of bubonic plague Modern plagues Third plague pandemic or Third Plague, a major plague pandemic that began in China in 1855 and lasted until 1960 Manchurian plague (1910–11): part of the third plague pandemic HIV/AIDS, originally referred to as the "gay plague" when it was discovered in the 1980s (see History of HIV/AIDS) Art, media, and entertainment Art Plague (painting), by Arnold Böcklin Fictional entities Plague, Lisbeth Salander's hacker friend and colleague in the Hacker Republic, e.g., see The Girl Who Kicked the Hornets' Nest#Trial The Plague (G.I. Joe), a Cobra special forces team in the comic book G.I. Joe: America's Elite The Plague, a duo of demonic assassins in the movie Hobo with a Shotgun Films and television Plague (1979 film), a science-fiction film The Plague (1992 film), a drama film The Plague (2006 film), a horror film Plague (2014 film), an Australian horror film La peste (TV series), Spanish historical drama series broadcast in the UK as The Plague Games Corrupted Blood incident, a virtual plague that occurred in the video game World of Warcraft Plague Inc., a strategy game for smartphones and tablets by Ndemic Creations Plague!, a card game about the Black Plague in England Plague of Shadows (Plague Knight), a character and DLC game mode for Shovel Knight The Plague, a playable killer character of Babylonian origin from the asymmetric survival horror game Dead by Daylight Literature Plague, a 2000 young adult novel by Malcolm Rose Plague, a 1977 thriller novel by Graham Masterton Plague 99, a novel by Jean Ure The Plague (novel), a novel by Albert Camus "The Plague" (Dragon Prince), an epidemic which affects both humans and dragons in Melanie Rawn's novel The Plague (magazine), New York University's comedy magazine Music Artists The Plague (American band), a hardcore punk band from Cleveland The Plague (English band), a punk rock band The Plague (New Zealand band), a theatrical punk/art rock band Albums Plague (Klinik album) Plagues (album), by The Devil Wears Prada The Plague, by Demon The Plague (Brotha Lynch Hung album) EPs The Plague (Nuclear Assault EP) The Plague (I Hate Sally EP) Songs "Plague", by Crystal Castles from the 2012 album III "The Plague", by Demon from the 1983 album by the same name "The Plague," an unreleased song by The Mountain Goats that appears on the 2020 live album The Jordan Lake Sessions Musicals Plague!
The Musical, by David Massingham and Matthew Townend Television "Plague" (2003 Dead Zone episode) "Plague" (2004 Deadwood episode) "The Plague" (1994 Diagnosis: Murder episode) "The Plague", second episode of the 1966 Doctor Who serial The Ark "The Plague" (1996 Father Ted episode) The Plague, English title of 2018 Spanish TV series La peste Religion Plagues of Egypt, the 10 calamities that God inflicted on Egypt in the book of Exodus The seven plagues poured out from seven bowls in Revelation 15:5-16:21 Technology Capacitor plague, a condition afflicting computer motherboards in which capacitors fail See also Plaque (disambiguation)
Coital incontinence
Coital incontinence (CI) is urinary leakage that occurs during either penetration or orgasm and can occur with a sexual partner or with masturbation. It has been reported to occur in 10% to 27% of sexually active women with urinary continence problems. There is evidence to suggest links between urinary leakage at penetration and urodynamic stress incontinence, and between urinary leakage at orgasm and detrusor overactivity.Coital incontinence is physiologically distinct from female ejaculation, with which it is sometimes confused. == References ==
Trichostrongylus
Trichostrongylus species are nematodes (round worms) that are ubiquitous among herbivores worldwide, including cattle, sheep, donkeys, goats, deer, and rabbits. At least 10 Trichostrongylus species have been associated with human infections. Infections occur via ingestion of infective larvae from contaminated vegetables or water. Epidemiological studies indicate a worldwide distribution of Trichostrongylus infections in humans, with the highest prevalence rates observed in individuals from regions with poor sanitary conditions, in rural areas, or who are farmers or herders. Human infections are most prevalent in the Middle East and Asia, with a worldwide estimated prevalence of 5.5 million people. Life cycle Eggs are passed in the feces of an infected definitive host, usually a mammalian herbivore such as rabbits, sheep, cattle, and rodents. Under certain environmental conditions, which include optimal temperature and humidity, larvae hatch from eggs after several days. Hatched rhabditiform larvae grow on vegetation or within soil. After 5 to 10 days, two molts (L1 and L2) have occurred and the parasite becomes a filariform (L3) larva, which is infectious. Infection in mammals occurs upon ingestion of infective filariform (L3) larvae. The larvae reach the small intestine, where they reside and mature into adult worms within their definitive hosts. Infections in humans may occur as incidental infections. Trichostrongylus comprises multiple species, each associated with particular hosts for parasitic survival and infection. For example, Trichostrongylus affinis primarily infects cottontail rabbits, Trichostrongylus sigmodontis affects the hispid cotton rat and the marsh rice rat, and Trichostrongylus retortaeformis primarily affects European rabbits (Oryctolagus cuniculus). Clinical presentation The majority of human infections are asymptomatic or associated with mild symptoms. Symptomatic individuals may experience abdominal pain, nausea, diarrhea, flatulence, dizziness, generalized fatigue, and malaise. Eosinophilia is frequently observed. Infections with a heavy worm burden can lead to anemia, cholecystitis, and emaciation. In Trichostrongylus retortaeformis, worm burden is influenced in parallel by the host immune response and by climate change. Although immunity prevents any significant long-term net accumulation of T. retortaeformis in the rabbit population, its seasonal effect is to increase heterogeneity in infection and transmission between individuals by worsening the parasite burden in juveniles exposed to climate warming. Diagnosis The adult worms live in the small intestine. The diagnosis is based on the observation of eggs in the stool. The eggs are 85–115 μm, oval, elongated, and pointed at one or both ends. Trichostrongylus eggs must be differentiated from hookworm eggs, which are smaller and do not have pointed ends. Prevention and treatment Since the use of herbivore manure as fertilizer is a common practice preceding infection, thorough cleaning and cooking of vegetables is required for prevention of infection. Treatment with pyrantel pamoate is recommended as the first-line drug. Alternative agents include mebendazole and albendazole. Successful treatment with ivermectin has also been reported. Another way of avoiding the free-living infective larvae is to wear protective footwear when walking in areas where the parasite is prevalent, and to maintain general sanitary practices throughout the day.
Species Species: Trichostrongylus affinis Trichostrongylus andreevi Grigorian, 1952 Trichostrongylus askivali Dunn, 1964 Trichostrongylus axei (Cobbold, 1879) Trichostrongylus brevis Otsuru, 1962 Trichostrongylus calcaratus Ransom, 1911 Trichostrongylus capricola Ransom, 1907 Trichostrongylus colubriformis (Giles, 1892) Trichostrongylus duretteae Rossin, Timi & Malizia, 2006 Trichostrongylus lerouxi Biocca, Chabaud & Ghadirian, 1974 Trichostrongylus longispicularis Gordon, 1933 Trichostrongylus medius Oliger, 1950 Trichostrongylus ostertagiaeformis Kadenazii, 1957 Trichostrongylus pietersei Le Roux, 1932 Trichostrongylus probolurus (Railliet, 1896) Trichostrongylus retortaeformis (Zeder, 1800) Trichostrongylus sigmodontis Trichostrongylus skrjabini Kalantarian, 1928 Trichostrongylus suis Iwanitzki, 1930 Trichostrongylus tenuis (Mehlis, 1846) Trichostrongylus ventricosus (Rudolphi, 1809) Trichostrongylus vitrinus Looss, 1905 == References ==
Vaginal septum
A vaginal septum is a vaginal anomaly consisting of a partition within the vagina; such a septum can be either longitudinal or transverse. In some affected women, the septum is partial or does not extend the full length or width of the vagina. Pain during intercourse can be a symptom. A longitudinal vaginal septum develops during embryogenesis when there is an incomplete fusion of the lower parts of the two Müllerian ducts. As a result, there may appear to be two openings to the vagina. There may be associated duplications of the more cranial parts of the Müllerian derivatives, a double cervix, and either a uterine septum or uterus didelphys (double uterus). A transverse septum forms during embryogenesis when the Müllerian ducts do not fuse to the urogenital sinus. A complete transverse septum can occur across the vagina at different levels. Menstrual flow can be blocked, which is a cause of primary amenorrhea. The accumulation of menstrual debris behind the septum is termed cryptomenorrhea. Some transverse septa are incomplete and may lead to dyspareunia or obstruction in labour. See also Diphallia References External links Media related to Vaginal septum at Wikimedia Commons Vagina, Anatomical Atlases, an Anatomical Digital Library (2018)
Military vehicle
A military vehicle is any vehicle for land-based military transport and activity, including combat vehicles, whether specifically designed for, or significantly used by, military and armed forces. Most military vehicles require off-road capability and/or vehicle armour (plate), making them heavy; therefore some have tracks instead of wheels, and half-tracks have both. Furthermore, some military vehicles are amphibious, constructed for use on land and water, and sometimes also on intermediate surfaces. Military vehicles are almost always camouflaged, or at least painted in inconspicuous colour(s). In contrast, under the Geneva Conventions, all non-combatant military vehicles, such as field ambulances and mobile first aid stations, must be properly and clearly marked as such. Under the conventions, when respected, such vehicles are legally immune from deliberate attack by all combatants. Historically, militaries have explored the use of commercial off-the-shelf (COTS) trucks and vehicles, both to gain experience with commercially available products and technology, and to try to save time in development and money in procurement. A subtype that has become increasingly prominent since the late 20th century is the improvised fighting vehicle, often seen in irregular warfare. Military trucks A military truck is a vehicle designed to transport troops, fuel and military supplies to the battlefield, over paved roads and unpaved dirt roads. Several countries have manufactured their own models of military trucks, each of which has its own technical characteristics. These vehicles are adapted to the needs of the different armies on the ground. In general, these trucks consist of a chassis, an engine, a transmission, a cab, an area for carrying the load and equipment, drive axles, suspension, steering and tires, together with electrical, pneumatic, hydraulic, engine-cooling and braking systems. They may be powered by either a gasoline or a diesel engine, and come in four-wheel-drive (4x4), six-wheeled (6x6), eight-wheeled (8x8), ten-wheeled (10x10) and even twelve-wheeled (12x12) configurations. Types of military vehicles Land combat and military transport vehicles come in many types. See also Military transport (equipment) not primarily used on land: Military aircraft Military spacecraft Naval ships Warships Camouflage List of military vehicles References
Megavitamin-B6 syndrome
Megavitamin-B6 syndrome is a collection of symptoms that can result from chronic supplementation, or acute overdose, of vitamin B6. While it is also known as hypervitaminosis B6, vitamin B6 toxicity and vitamin B6 excess, megavitamin-B6 syndrome is the name used in the ICD-10. Signs and symptoms The predominant symptom is peripheral sensory neuropathy that is experienced as numbness, pins-and-needles and burning sensations (paresthesia) in a patient's limbs on both sides of their body. Patients may experience unsteadiness of gait, incoordination (ataxia), involuntary muscle movements (choreoathetosis), the sensation of an electric zap in their bodies (Lhermitte's sign), a heightened sensitivity to sense stimuli including photosensitivity (hyperesthesia), impaired skin sensation (hypoesthesia), numbness around the mouth, and gastrointestinal symptoms such as nausea and heartburn. The ability to sense vibrations and to sense one's position are diminished to a greater degree than pain or temperature. Skin lesions have also been reported. Megavitamin-B6 syndrome may also contribute to burning mouth syndrome. Potential psychiatric symptoms range from anxiety, depression, agitation, and cognitive deficits to psychosis. Symptom severity appears to be dose-dependent (higher doses cause more severe symptoms) and the duration of supplementation with vitamin B6 before onset of symptoms appears to be inversely proportional to the amount taken daily (the smaller the daily dosage, the longer it will take for symptoms to develop). It is also possible that some individuals are more susceptible to the toxic effects of vitamin B6 than others. Megavitamin-B6 syndrome has been reported at doses as low as 24 mg/day. Symptoms may also be dependent on the form of vitamin B6 taken in supplements. It has been proposed that vitamin B6 in supplements should be in pyridoxal or pyridoxal phosphate form rather than pyridoxine, as these are thought to reduce the likelihood of toxicity. A tissue culture study, however, showed that all B6 vitamers that could be converted into active coenzymes (pyridoxal, pyridoxine and pyridoxamine) were neurotoxic at similar concentrations. It has been shown, in vivo, that supplementing with pyridoxal or pyridoxal phosphate increases pyridoxine concentrations in humans, meaning there are metabolic pathways from each vitamer of B6 to all the other forms. Consuming high amounts of vitamin B6 from food has not been reported to cause adverse effects. Early diagnosis and cessation of vitamin B6 supplementation can reduce the morbidity of the syndrome. Cause While vitamin B6 is water-soluble, it has a half-life of 25–33 days and accumulates in the body, where it is stored in muscle, plasma, the liver and red blood cells, and bound to proteins in tissues. Potential mechanisms The common supplemental form of vitamin B6, pyridoxine, is similar to pyridine, which can be neurotoxic. Pyridoxine has limited transport across the blood–brain barrier, explaining why the central nervous system is spared. Cell bodies of motor fibers are located within the spinal cord, which is also protected by the blood–brain barrier, explaining why motor impairment is rare. The dorsal root ganglia, however, are located outside of the blood–brain barrier, making them more susceptible. Pyridoxine is converted to pyridoxal phosphate via two enzymes, pyridoxal kinase and pyridoxine 5′-phosphate oxidase. High levels of pyridoxine can inhibit these enzymes.
As pyridoxal phosphate is the active form of vitamin B6, this saturation by pyridoxine could mimic a deficiency of vitamin B6. Tolerable upper limits Several government agencies have reviewed the data on vitamin B6 supplementation and produced consumption upper limits with the desired goal of preventing sensory neuropathy from excessive amounts. Each agency developed its own criteria for usable studies in relation to tolerable upper limits, and as such the recommendations vary by agency. Between agencies, current tolerable upper limit guidelines vary from 10 mg per day to 100 mg per day. Reviews of vitamin B6 related neuropathy cautioned that supplementation at doses greater than 50 mg per day for extended periods of time may be harmful and should be discouraged. In 2008, the Australian Complementary Medicines Evaluation Committee recommended warning statements appear on products containing daily doses of 50 mg or more of vitamin B6 to avoid toxicity. The relationship between the amount of vitamin B6 consumed and the serum levels of those who consume it varies between individuals. Some people may have high serum concentrations without symptoms of neuropathy. It is not known if inhalation of vitamin B6 while, for example, working with animal feed containing vitamin B6 is safe. Exceptions High parenteral doses of vitamin B6 are used to treat isoniazid overdose with no adverse effects found, although a preservative in parenteral vitamin B6 may cause transient worsening of metabolic acidosis. High doses of vitamin B6 are used to treat gyromitra mushroom (false morel) poisoning, hydrazine exposure and homocystinuria. Doses of 50 mg to 100 mg per day may also be used to treat pyridoxine-deficient seizures and when patients are taking other medications that reduce vitamin B6. Daily doses of 10 mg to 50 mg are recommended for patients undergoing hemodialysis. Outside of rare medical conditions, placebo-controlled studies have generally failed to show benefits of high doses of vitamin B6. Reviews of supplementing with vitamin B6 have not found it to be effective at reducing swelling, reducing stress, producing energy, preventing neurotoxicity, or treating asthma. Diagnosis The clinical hallmark of megavitamin-B6 syndrome is ataxia due to sensory polyneuropathy. Blood tests are performed to rule out other causes and to confirm an elevated level of vitamin B6 with an absence of hypophosphatasia. Examination does not typically show signs of a motor deficit, dysfunction of the autonomic nervous system or impairment of the central nervous system, although in severe cases motor and autonomic impairment can occur. When examined, patients typically have diminished reflexes (hyporeflexia), such as a diminished response when performing an ankle jerk reflex test. Nerve conduction studies typically show normal motor conduction but a decrease in large sensory wave amplitudes in the arms and legs. Needle electromyography studies generally reveal no signs of denervation. Classification Megavitamin-B6 syndrome is characterized mainly by degeneration of dorsal root ganglion axons and cell bodies, although it also affects the trigeminal ganglia. It is classified as a sensory ganglionopathy due to involvement of these ganglia. In electrodiagnostic testing, it shows characteristic non-length-dependent abnormalities of sensory action potentials that occur globally, rather than a distal decrease of sensory nerve action potential amplitudes.
Megavitamin-B6 syndrome is predominantly a large-fiber neuropathy characterized by loss of joint-position and vibration sense and by ataxia, although in severe cases, where there is impairment of pain, temperature, and autonomic functions, it also has characteristics of small-fiber neuropathy. Treatment The primary treatment for megavitamin-B6 syndrome is to stop taking supplemental vitamin B6. Physical therapy, including vestibular rehabilitation, has been used in attempts to improve recovery following cessation of vitamin B6 supplementation. Medications such as amitriptyline have been used to help with neuropathic pain. In experimental tests using animal subjects, neurotrophic factors, specifically neurotrophin-3, were shown to potentially reverse the neuropathy caused by vitamin B6 toxicity. In rats and mice, improvement has also been seen with 4-methylcatechol, a specific chicory extract, coffee and trigonelline. Prognosis Other than with extremely high doses of vitamin B6, neurologic dysfunction improves following cessation of vitamin B6 supplementation and usually, but not always, resolves within six months. In cases of acute high doses, for example in people receiving daily doses of 2 grams of vitamin B6 per kilogram of body weight, symptoms may be irreversible and may additionally include pseudoathetosis. In the immediate 2–6 weeks following discontinuation of vitamin B6, patients may experience a symptom progression before gradual improvement begins. This is known as coasting and is encountered in other toxic neuropathies. A vitamin B6 dependency may develop at daily dosages of 200 mg or more, making a withdrawal effect possible when supplementation is discontinued. See also Notes References Further reading A chapter with a story about a woman experiencing a severe case of megavitamin-B6 syndrome, titled "The Disembodied Lady", appears in Chapter 3 of The Man Who Mistook His Wife for a Hat: Oliver Sacks (1998). "Chapter 3: The Disembodied Lady". The Man Who Mistook His Wife For A Hat: And Other Clinical Tales. Simon and Schuster. pp. 43–52. ISBN 978-0-684-85394-9. An ethnographic study of an online support group for megavitamin B6 syndrome appears in: Laura D. Russell (16 December 2019). "Chapter 9: Making Collective Sense of Uncertainty: How Online Social Support Communities Negotiate Meaning for Contested Illnesses". In Nichole Egbert; Kevin B Wright (eds.). Social Support and Health in the Digital Age. Rowman & Littlefield. pp. 171–191. ISBN 978-1-4985-9535-3. External links StatPearls - Vitamin B6 Toxicity
Terrorism
Terrorism, in its broadest sense, is the use of criminal violence to provoke a state of terror or fear, mostly with the intention to achieve political or religious aims. The term is used in this regard primarily to refer to intentional violence during peacetime or in the context of war against non-combatants (mostly civilians and neutral military personnel). The terms "terrorist" and "terrorism" originated during the French Revolution of the late 18th century but became widely used internationally and gained worldwide attention in the 1970s during the Troubles in Northern Ireland, the Basque conflict, and the Israeli–Palestinian conflict. The increased use of suicide attacks from the 1980s onwards was typified by the 2001 September 11 attacks in the United States. There are various definitions of terrorism, with no universal agreement among them. Terrorism is a charged term. It is often used with the connotation of something that is "morally wrong". Governments and non-state groups use the term to abuse or denounce opposing groups. Varied political organizations have been accused of using terrorism to achieve their objectives. These include left-wing and right-wing political organizations, nationalist groups, religious groups, revolutionaries and ruling governments. Legislation declaring terrorism a crime has been adopted in many states. When terrorism is perpetrated by nation states, it is not considered terrorism by the state conducting it, making legality a largely grey-area issue. There is no consensus as to whether terrorism should be regarded as a war crime. The Global Terrorism Database, maintained by the University of Maryland, College Park, has recorded more than 61,000 incidents of non-state terrorism, resulting in at least 140,000 deaths, between 2000 and 2014. Etymology Etymologically, the word terror is derived from the Latin verb Tersere, which later becomes Terrere. The latter form appears in European languages as early as the 12th century; its first known use in French is the word terrible in 1160. By 1356 the word terreur is in use. Terreur is the origin of the Middle English term terrour, which later becomes the modern word "terror". Historical background The term terroriste, meaning "terrorist", is first used in 1794 by the French philosopher François-Noël Babeuf, who denounces Maximilien Robespierre's Jacobin regime as a dictatorship. In the years leading up to what became known as the Reign of Terror, the Brunswick Manifesto threatened Paris with an "exemplary, never to be forgotten vengeance: the city would be subjected to military punishment and total destruction" if the royal family was harmed, but this only increased the Revolution's will to abolish the monarchy. Some writers' attitudes about the French Revolution grew less favorable after the French monarchy was abolished in 1792. During the Reign of Terror, which began in July 1793 and lasted thirteen months, Paris was governed by the Committee of Public Safety, which oversaw a regime of mass executions and public purges. Prior to the French Revolution, ancient philosophers wrote about tyrannicide, as tyranny was seen as the greatest political threat to Greco-Roman civilization. Medieval philosophers were similarly occupied with the concept of tyranny, though the analysis of some theologians like Thomas Aquinas drew a distinction between usurpers, who could be killed by anyone, and legitimate rulers who abused their power—the latter, in Aquinas's view, could only be punished by a public authority.
John of Salisbury was the first medieval Christian scholar to defend tyrannicide. Most scholars today trace the origins of the modern tactic of terrorism to the Jewish Sicarii Zealots who attacked Romans and Jews in 1st-century Palestine. They follow its development from the Persian Order of Assassins through to 19th-century anarchists. The "Reign of Terror" is usually regarded as an issue of etymology. The term terrorism has generally been used to describe violence by non-state actors rather than government violence since the 19th-century Anarchist Movement. In December 1795, Edmund Burke used the word "Terrorists" in a description of the new French government called the Directory: At length, after a terrible struggle, the [Directory] Troops prevailed over the Citizens... To secure them further, they have a strong corps of irregulars, ready armed. Thousands of those Hell-hounds called Terrorists, whom they had shut up in Prison on their last Revolution, as the Satellites of Tyranny, are let loose on the people. (emphasis added) The terms "terrorism" and "terrorist" gained renewed currency in the 1970s as a result of the Israeli–Palestinian conflict, the Northern Ireland conflict, the Basque conflict, and the operations of groups such as the Red Army Faction. Leila Khaled was described as a terrorist in a 1970 issue of Life magazine. A number of books on terrorism were published in the 1970s. The topic came further to the fore after the 1983 Beirut barracks bombings and again after the 2001 September 11 attacks and the 2002 Bali bombings. Modern definitions In 2006 it was estimated that there were over 109 different definitions of terrorism. American political philosopher Michael Walzer in 2002 wrote: "Terrorism is the deliberate killing of innocent people, at random, to spread fear through a whole population and force the hand of its political leaders". Bruce Hoffman, an American scholar, has noted that it is not only individual agencies within the same governmental apparatus that cannot agree on a single definition of terrorism; experts and other long-established scholars in the field are equally incapable of reaching a consensus. C. A. J. Coady has written that the question of how to define terrorism is "irresolvable" because "its natural home is in polemical, ideological and propagandist contexts". Experts disagree about "whether terrorism is wrong by definition or just wrong as a matter of fact; they disagree about whether terrorism should be defined in terms of its aims, or its methods, or both, or neither; they disagree about whether states can perpetrate terrorism; they even disagree about the importance or otherwise of terror for a definition of terrorism." State terrorism State terrorism refers to acts of terrorism conducted by a state against its own citizens or against another state. United Nations In November 2004, a report by the Secretary-General of the United Nations described terrorism as any act "intended to cause death or serious bodily harm to civilians or non-combatants with the purpose of intimidating a population or compelling a government or an international organization to do or abstain from doing any act". The international community has been slow to formulate a universally agreed, legally binding definition of this crime. These difficulties arise from the fact that the term "terrorism" is politically and emotionally charged.
In this regard, Angus Martyn, briefing the Australian parliament, stated, The international community has never succeeded in developing an accepted comprehensive definition of terrorism. During the 1970s and 1980s, the United Nations' attempts to define the term floundered mainly due to differences of opinion between various members about the use of violence in the context of conflicts over national liberation and self-determination. These divergences have made it impossible for the United Nations to conclude a Comprehensive Convention on International Terrorism that incorporates a single, all-encompassing, legally binding, criminal law definition of terrorism. The international community has adopted a series of sectoral conventions that define and criminalize various types of terrorist activities. Since 1994, the United Nations General Assembly has repeatedly condemned terrorist acts using the following political description of terrorism: Criminal acts intended or calculated to provoke a state of terror in the public, a group of persons or particular persons for political purposes are in any circumstance unjustifiable, whatever the considerations of a political, philosophical, ideological, racial, ethnic, religious or any other nature that may be invoked to justify them. U.S. law Various legal systems and government agencies use different definitions of terrorism in their national legislation. U.S. Code Title 22 Chapter 38, Section 2656f(d) defines terrorism as: "Premeditated, politically motivated violence perpetrated against noncombatant targets by subnational groups or clandestine agents". 18 U.S.C. § 2331 defines "international terrorism" and "domestic terrorism" for purposes of Chapter 113B of the Code, entitled "Terrorism": "International terrorism" means activities with the following three characteristics: Involve violent acts or acts dangerous to human life that violate federal or state law; Appear to be intended (i) to intimidate or coerce a civilian population; (ii) to influence the policy of a government by intimidation or coercion; or (iii) to affect the conduct of a government by mass destruction, assassination, or kidnapping; and occur primarily outside the territorial jurisdiction of the U.S., or transcend national boundaries in terms of the means by which they are accomplished, the persons they appear intended to intimidate or coerce, or the locale in which their perpetrators operate or seek asylum. Media spectacle A definition proposed by Carsten Bockstette at the George C. Marshall European Center for Security Studies underlines the psychological and tactical aspects of terrorism: Terrorism is defined as political violence in an asymmetrical conflict that is designed to induce terror and psychic fear (sometimes indiscriminate) through the violent victimization and destruction of noncombatant targets (sometimes iconic symbols). Such acts are meant to send a message from an illicit clandestine organization. The purpose of terrorism is to exploit the media in order to achieve maximum attainable publicity as an amplifying force multiplier in order to influence the targeted audience(s) in order to reach short- and midterm political goals and/or desired long-term end states. Terrorists attack national symbols, which may negatively affect a government, while increasing the prestige of the given terrorist group or its ideology. Political violence Terrorist acts frequently have a political purpose.
Some official, governmental definitions of terrorism use the criterion of the illegitimacy or unlawfulness of the act to distinguish between actions authorized by a government (and thus "lawful") and those of other actors, including individuals and small groups. For example, carrying out a strategic bombing on an enemy city, which is designed to affect civilian support for a cause, would not be considered terrorism if it were authorized by a government. This criterion is inherently problematic and is not universally accepted, because it denies the existence of state terrorism. An associated term is violent non-state actor. According to Ali Khan, the distinction lies ultimately in a political judgment. Pejorative use Having the connotation of "something morally wrong", the term "terrorism" is often used to abuse or denounce opposing parties, either governments or non-state groups. An example of this is the terruqueo political attack used by right-wing groups in Peru to target leftist groups or those opposed to the neoliberal status quo, likening opponents to guerrillas from the internal conflict in Peru. Those labeled "terrorists" by their opponents rarely identify themselves as such, and typically use other terms or terms specific to their situation, such as separatist, freedom fighter, liberator, revolutionary, vigilante, militant, paramilitary, guerrilla, rebel, patriot, or any similar-meaning word in other languages and cultures. Jihadi, mujahideen, and fedayeen are similar Arabic words that have entered the English lexicon. It is common for both parties in a conflict to describe each other as terrorists. On whether particular terrorist acts, such as killing non-combatants, can be justified as the lesser evil in a particular circumstance, philosophers have expressed different views: while, according to David Rodin, utilitarian philosophers can (in theory) conceive of cases in which the evil of terrorism is outweighed by the good that could not be achieved in a less morally costly way, in practice the "harmful effects of undermining the convention of non-combatant immunity is thought to outweigh the goods that may be achieved by particular acts of terrorism". Among the non-utilitarian philosophers, Michael Walzer argued that terrorism can be morally justified in only one specific case: when "a nation or community faces the extreme threat of complete destruction and the only way it can preserve itself is by intentionally targeting non-combatants, then it is morally entitled to do so". In his book Inside Terrorism, Bruce Hoffman offered an explanation of why the term terrorism becomes distorted: On one point, at least, everyone agrees: terrorism is a pejorative term. It is a word with intrinsically negative connotations that is generally applied to one's enemies and opponents, or to those with whom one disagrees and would otherwise prefer to ignore. What is called terrorism, Brian Jenkins has written, thus seems to depend on one's point of view. Use of the term implies a moral judgment; and if one party can successfully attach the label terrorist to its opponent, then it has indirectly persuaded others to adopt its moral viewpoint. Hence the decision to call someone or label some organization terrorist becomes almost unavoidably subjective, depending largely on whether one sympathizes with or opposes the person/group/cause concerned. If one identifies with the victim of the violence, for example, then the act is terrorism.
If, however, one identifies with the perpetrator, the violent act is regarded in a more sympathetic, if not positive (or, at the worst, an ambivalent) light; and it is not terrorism. The pejorative connotations of the word can be summed up in the aphorism, "One man's terrorist is another man's freedom fighter". This is exemplified when a group using irregular military methods is an ally of a state against a mutual enemy, but later falls out with the state and starts to use those methods against its former ally. During the Second World War, the Malayan People's Anti-Japanese Army was allied with the British, but during the Malayan Emergency, members of its successor organisation (the Malayan National Liberation Army) started campaigns against them, and were branded "terrorists" as a result. More recently, Ronald Reagan and others in the American administration frequently called the mujaheddin "freedom fighters" during the Soviet–Afghan War; however, twenty years later, when a new generation of Afghan men (militant groups like the Taliban and allies) were fighting against what they perceived to be a regime installed by foreign powers, their attacks were labelled terrorism by George W. Bush. Groups accused of terrorism understandably prefer terms reflecting legitimate military or ideological action. Leading terrorism researcher Professor Martin Rudner, director of the Canadian Centre of Intelligence and Security Studies at Ottawa's Carleton University, defines "terrorist acts" as unlawful attacks for political or other ideological goals, and said: There is the famous statement: One man's terrorist is another man's freedom fighter. But that is grossly misleading. It assesses the validity of the cause when terrorism is an act. One can have a perfectly beautiful cause and yet if one commits terrorist acts, it is terrorism regardless. Some groups, when involved in a "liberation" struggle, have been called "terrorists" by the Western governments or media. Later, these same persons, as leaders of the liberated nations, are called "statesmen" by similar organizations. Two examples of this phenomenon are the Nobel Peace Prize laureates Menachem Begin and Nelson Mandela. WikiLeaks editor Julian Assange has been called a "terrorist" by Sarah Palin and Joe Biden. Sometimes, states that are close allies, for reasons of history, culture and politics, can disagree over whether members of a certain organization are terrorists. For instance, some branches of the United States government refused to label members of the Provisional Irish Republican Army (IRA) as terrorists while the IRA was using methods against one of the United States' closest allies (the United Kingdom) that the UK branded as terrorism. This was highlighted by the Quinn v. Robinson case. Media outlets that wish to convey impartiality may limit their usage of "terrorist" and "terrorism" because they are loosely defined, potentially controversial in nature, and subjective terms. The 2020 Nashville bombing revived a debate over the use of the word "terrorism", with critics saying it is quickly applied to attacks by Muslims but reluctantly, if at all, to attacks by white Christian men, such as the Nashville bomber. History Depending on how broadly the term is defined, the roots and practice of terrorism can be traced at least to the 1st century AD.
According to the contemporary Jewish-Roman historian Josephus, after the Zealotry rebellion against Roman rule in Judea, when some prominent Jewish collaborators with Roman rule were killed, Judas of Galilee formed a small and more extreme offshoot of the Zealots, the Sicarii Zealots, in 6 AD. This smaller and more radical offshoot of the Zealots was active in Judaea Province at the beginning of the 1st century AD, and its members can be considered early terrorists, although this is disputed. Their terror was directed against Jewish "collaborators", including temple priests, Sadducees, Herodians, and other wealthy elites. The term "terrorism" itself was originally used to describe the actions of the Jacobin Club during the "Reign of Terror" in the French Revolution. "Terror is nothing other than justice, prompt, severe, inflexible", said Jacobin leader Maximilien Robespierre. In 1795, Edmund Burke denounced the Jacobins for letting "thousands of those hell-hounds called Terrorists... loose on the people" of France. In January 1858, Italian patriot Felice Orsini threw three bombs in an attempt to assassinate French Emperor Napoleon III. Eight bystanders were killed and 142 injured. The incident played a crucial role as an inspiration for the development of the early terrorist groups. Arguably the first organization to use modern terrorist techniques was the Irish Republican Brotherhood, founded in 1858 as a revolutionary Irish nationalist group that carried out attacks in England. The group initiated the Fenian dynamite campaign in 1881, one of the first modern terror campaigns. Instead of earlier forms of terrorism based on political assassination, this campaign used timed explosives with the express aim of sowing fear in the very heart of metropolitan Britain, in order to achieve political gains. Another early terrorist-type group was Narodnaya Volya, founded in Russia in 1878 as a revolutionary anarchist group inspired by Sergei Nechayev and "propaganda by the deed" theorist Carlo Pisacane. The group developed ideas—such as targeted killing of the leaders of oppression—that were to become the hallmark of subsequent violence by small non-state groups, and they were convinced that the developing technologies of the age—such as the invention of dynamite, which they were the first anarchist group to make widespread use of—enabled them to strike directly and with discrimination. David Rapoport refers to four major waves of global terrorism: "the Anarchist, the Anti-Colonial, the New Left, and the Religious. The first three have been completed and lasted around 40 years; the fourth is now in its third decade." Types Depending on the country, the political system, and the time in history, the types of terrorism vary. In early 1975, the Law Enforcement Assistance Administration in the United States formed the National Advisory Committee on Criminal Justice Standards and Goals. One of the five volumes that the committee wrote was titled Disorders and Terrorism, produced by the Task Force on Disorders and Terrorism under the direction of H. H. A. Cooper, Director of the Task Force staff. The Task Force defines terrorism as "a tactic or technique by means of which a violent act or the threat thereof is used for the prime purpose of creating overwhelming fear for coercive purposes". It classified disorders and terrorism into six categories: Civil disorder – A form of collective violence interfering with the peace, security, and normal functioning of the community.
Political terrorism – Violent criminal behaviour designed primarily to generate fear in the community, or a substantial segment of it, for political purposes. Non-political terrorism – Terrorism that is not aimed at political purposes but which exhibits "conscious design to create and maintain a high degree of fear for coercive purposes, but the end is individual or collective gain rather than the achievement of a political objective". Anonymous terrorism – In the two decades prior to 2016–19, "fewer than half" of all terrorist attacks were either "claimed by their perpetrators or convincingly attributed by governments to specific terrorist groups". A number of theories have been advanced as to why this has happened. Quasi-terrorism – The activities incidental to the commission of crimes of violence that are similar in form and method to genuine terrorism but which nevertheless lack its essential ingredient. It is not the main purpose of the quasi-terrorists to induce terror in the immediate victim as in the case of genuine terrorism, but the quasi-terrorist uses the modalities and techniques of the genuine terrorist and produces similar consequences and reaction. For example, the fleeing felon who takes hostages is a quasi-terrorist, whose methods are similar to those of the genuine terrorist but whose purposes are quite different. Limited political terrorism – Genuine political terrorism is characterized by a revolutionary approach; limited political terrorism refers to "acts of terrorism which are committed for ideological or political motives but which are not part of a concerted campaign to capture control of the state". Official or state terrorism – "referring to nations whose rule is based upon fear and oppression that reach similar to terrorism or such proportions". It may be referred to as structural terrorism, defined broadly as terrorist acts carried out by governments in pursuit of political objectives, often as part of their foreign policy. Other sources have defined the typology of terrorism in different ways, for example, broadly classifying it into domestic terrorism and international terrorism, or using categories such as vigilante terrorism or insurgent terrorism. One way the typology of terrorism may be defined: Political terrorism Sub-state terrorism Social revolutionary terrorism Nationalist-separatist terrorism Religious extremist terrorism Religious fundamentalist terrorism New religions terrorism Right-wing terrorism Left-wing terrorism Communist terrorism State-sponsored terrorism Regime or state terrorism Criminal terrorism Pathological terrorism Causes and motivations Choice of terrorism as a tactic Individuals and groups choose terrorism as a tactic because it can: Act as a form of asymmetric warfare in order to directly force a government to agree to demands Intimidate a group of people into capitulating to the demands in order to avoid future injury Get attention and thus political support for a cause Directly inspire more people to the cause (such as revolutionary acts) – propaganda of the deed Indirectly inspire more people to the cause by provoking a hostile response or over-reaction from enemies to the cause Attacks on "collaborators" are used to intimidate people from cooperating with the state in order to undermine state control.
This strategy was used in Ireland, in Kenya, in Algeria and in Cyprus during their independence struggles. Stated motives for the September 11 attacks included inspiring more fighters to join the cause of repelling the United States from Muslim countries with a successful high-profile attack. The attacks prompted some criticism from domestic and international observers regarding perceived injustices in U.S. foreign policy that provoked the attacks, but the larger practical effect was that the United States government declared a War on Terror that resulted in substantial military engagements in several Muslim-majority countries. Various commentators have inferred that al-Qaeda expected a military response, and welcomed it as a provocation that would result in more Muslims fighting the United States. Some commentators believe that the resulting anger and suspicion directed toward innocent Muslims living in Western countries, and the indignities inflicted upon them by security forces and the general public, also contribute to the radicalization of new recruits. Despite criticism that the Iraqi government had no involvement with the September 11 attacks, Bush declared the 2003 invasion of Iraq to be part of the War on Terror. The resulting backlash and instability enabled the rise of the Islamic State of Iraq and the Levant and the temporary creation of an Islamic caliphate holding territory in Iraq and Syria, until ISIL lost its territory through military defeats. Attacks used to draw international attention to struggles that are otherwise unreported have included the Palestinian airplane hijackings in 1970 and the 1975 Dutch train hostage crisis. Causes motivating terrorism Specific political or social causes have included: Independence or separatist movements Irredentist movements Adoption of a particular political philosophy, such as socialism (left-wing terrorism), anarchism, or fascism (possibly through a coup or as an ideology of an independence or separatist movement) Environmental protection (eco-terrorism) Supremacism of a particular group Preventing a rival group from sharing or occupying a particular territory (such as by discouraging immigration or encouraging flight) Subjugation of a particular population (such as lynching of African Americans) Spread or dominance of a particular religion – religious terrorism Ending perceived government oppression Responding to a violent act (for example, tit-for-tat attacks in the Israeli–Palestinian conflict, in The Troubles in Northern Ireland, or Timothy McVeigh's revenge for the Waco siege and Ruby Ridge incident) Causes for right-wing terrorism have included white nationalism, ethnonationalism, fascism, anti-socialism, the anti-abortion movement, and tax resistance. Sometimes terrorists on the same side fight for different reasons. For example, in the Chechen–Russian conflict, secular Chechens using terrorist tactics to fight for national independence are allied with radical Islamist terrorists who have arrived from other countries.
Personal and social factors Various personal and social factors may influence the personal choice of whether to join a terrorist group or attempt an act of terror, including: Identity, including affiliation with a particular culture, ethnicity, or religion Previous exposure to violence Financial reward (for example, the Palestinian Authority Martyrs Fund) Mental health disorder Social isolation Perception that the cause responds to a profound injustice or indignity A report conducted by Paul Gill, John Horgan and Paige Deckert found that for "lone wolf" terrorists: 43% were motivated by religious beliefs 32% had pre-existing mental health disorders, while many more are found to have mental health problems upon arrest At least 37% lived alone at the time of their event planning and/or execution, a further 26% lived with others, and no data were available for the remaining cases 40% were unemployed at the time of their arrest or terrorist event 19% subjectively experienced being disrespected by others 14% experienced being the victim of verbal or physical assault Ariel Merari, a psychologist who has studied the psychological profiles of suicide terrorists since 1983 through media reports that contained biographical details, interviews with the suicides' families, and interviews with jailed would-be suicide attackers, concluded that they were unlikely to be psychologically abnormal. In comparison to economic theories of criminal behaviour, Scott Atran found that suicide terrorists exhibit none of the socially dysfunctional attributes—such as fatherless, friendless, jobless situations—or suicidal symptoms. By this he means that they do not kill themselves simply out of hopelessness or a sense of having nothing to lose. Abrahms suggests that terrorist organizations do not select terrorism for its political effectiveness. Individual terrorists tend to be motivated more by a desire for social solidarity with other members of their organization than by political platforms or strategic objectives, which are often murky and undefined. Michael Mousseau shows possible relationships between the type of economy within a country and ideology associated with terrorism. Many terrorists have a history of domestic violence. Democracy and domestic terrorism Terrorism is most common in nations with intermediate political freedom, and it is least common in the most democratic nations. Some examples of "terrorism" in non-democratic nations include ETA in Spain under Francisco Franco (although the group's terrorist activities increased sharply after Franco's death), the Organization of Ukrainian Nationalists in pre-war Poland, the Shining Path in Peru under Alberto Fujimori, the Kurdistan Workers' Party when Turkey was ruled by military leaders, and the ANC in South Africa. Democracies, such as Japan, the United Kingdom, the United States, Israel, Indonesia, India, Spain, Germany, Italy and the Philippines, have experienced domestic terrorism. While a democratic nation espousing civil liberties may claim a sense of higher moral ground than other regimes, an act of terrorism within such a state may cause a dilemma: whether to maintain its civil liberties and thus risk being perceived as ineffective in dealing with the problem; or alternatively to restrict its civil liberties and thus risk delegitimizing its claim of supporting civil liberties. For this reason, homegrown terrorism has started to be seen as a greater threat, as stated by former CIA Director Michael Hayden.
This dilemma, some social theorists would conclude, may very well play into the initial plans of the acting terrorist(s); namely, to delegitimize the state and cause a systematic shift towards anarchy via the accumulation of negative sentiments towards the state system. Religious terrorism According to the Global Terrorism Index by the University of Maryland, College Park, religious extremism has overtaken national separatism and become the main driver of terrorist attacks around the world. Since 9/11 there has been a five-fold increase in deaths from terrorist attacks. The majority of incidents over the past several years can be tied to groups with
a religious agenda. Before 2000, it was nationalist separatist terrorist organizations such as the IRA and Chechen rebels who were behind the most attacks. The number of incidents from nationalist separatist groups has remained relatively stable in the years since, while religious extremism has grown. The prevalence of Islamist groups in Iraq, Afghanistan, Pakistan, Nigeria and Syria is the main driver behind these trends. Five of the terrorist groups that have been most active since 2001 are Hamas, Boko Haram, al-Qaeda, the Taliban and ISIL. These groups have been most active in Iraq, Afghanistan, Pakistan, Nigeria and Syria. Eighty percent of all deaths from terrorism occurred in one of these five countries. In 2015, four Islamic extremist groups were responsible for 74% of all deaths from Islamic terrorism: ISIS, Boko Haram, the Taliban, and al-Qaeda, according to the Global Terrorism Index 2016. Since approximately 2000, these incidents have occurred on a global scale, affecting not only Muslim-majority states in Africa and Asia, but also states with a non-Muslim majority such as the United States, the United Kingdom, France, Germany, Spain, Belgium, Sweden, Russia, Australia, Canada, Sri Lanka, Israel, China, India and the Philippines. Such attacks have targeted both Muslims and non-Muslims; however, the majority affect Muslims themselves. Terrorism in Pakistan has become a major problem. From the summer of 2007 until late 2009, more than 1,500 people were killed in suicide and other attacks on civilians for reasons attributed to a number of causes—sectarian violence between Sunni and Shia Muslims; easy availability of guns and explosives; the existence of a "Kalashnikov culture"; an influx of ideologically driven Muslims based in or near Pakistan, who originated from various nations around the world, and the subsequent war against the pro-Soviet Afghans in the 1980s which blew back into Pakistan; and the presence of Islamist insurgent groups and forces such as the Taliban and Lashkar-e-Taiba. On July 2, 2013, in Lahore, 50 Muslim scholars of the Sunni Ittehad Council (SIC) issued a collective fatwa against suicide bombings, the killing of innocent people, bomb attacks, and targeted killings, declaring them haraam, or forbidden. In 2015, the Southern Poverty Law Center released a report on terrorism in the United States. The report (titled The Age of the Wolf) analyzed 62 incidents and found that, between 2009 and 2015, "more people have been killed in America by non-Islamic domestic terrorists than jihadists." The "virulent racist and anti-semitic" ideology of the ultra-right-wing Christian Identity movement is usually accompanied by anti-government sentiments. Adherents of Christian Identity are not connected with specific Christian denominations; they believe that whites of European descent can be traced back to the "Lost Tribes of Israel", and many consider Jews to be the Satanic offspring of Eve and the Serpent. This group has committed hate crimes, bombings and other acts of terrorism. Its influence ranges from the Ku Klux Klan and neo-Nazi groups to the anti-government militia and sovereign citizen movements. Christian Identity's origins can be traced back to Anglo-Israelism, which held the view that the British people were descendants of ancient Israelites. However, in the United States, the ideology started to become rife with anti-Semitism, and eventually Christian Identity theology diverged from the philo-semitic Anglo-Israelism, and developed what is known as the "two seed" theory.
According to the two-seed theory, the Jewish people are descended from Cain and the serpent (not from Shem). The white European seedline is descended from the "lost tribes" of Israel. They hold themselves to "God's laws", not to "man's laws", and they do not feel bound to a government that they consider run by Jews and the New World Order. The Ku Klux Klan is widely denounced by Christian denominations. Israel has had problems with Jewish religious terrorism even before independence in 1948. During the British mandate over Palestine, the Irgun were among the Zionist groups labelled as terrorist organisations by the British authorities and the United Nations, for violent terror attacks against Britons and Arabs. Another extremist group, the Lehi, openly declared its members as "terrorists". Historian William Cleveland stated many Jews justified any action, even terrorism, taken in the cause of the creation of a Jewish state. In 1995, Yigal Amir assassinated Israeli Prime Minister Yitzhak Rabin. For Amir, killing Rabin was an exemplary act that symbolized the fight against an illegitimate government that was prepared to cede Jewish Holy Land to the Palestinians. Perpetrators The perpetrators of acts of terrorism can be individuals, groups, or states. According to some definitions, clandestine or semi-clandestine state actors may carry out terrorist acts outside the framework of a state of war. The most common image of terrorism is that it is carried out by small and secretive cells, highly motivated to serve a particular cause, and many of the most deadly operations in recent times, such as the September 11 attacks, the London Underground bombings, the 2008 Mumbai attacks and the 2002 Bali bombings, were planned and carried out by a close clique, composed of close friends, family members and other strong social networks. These groups benefited from the free flow of information and efficient telecommunications to succeed where others had failed. Over the years, much research has been conducted to distill a terrorist profile to explain these individuals' actions through their psychology and socio-economic circumstances. Others, like Roderick Hindery, have sought to discern profiles in the propaganda tactics used by terrorists. Some security organizations designate these groups as violent non-state actors. A 2007 study by economist Alan B. Krueger found that terrorists were less likely to come from an impoverished background (28 percent versus 33 percent) and more likely to have at least a high-school education (47 percent versus 38 percent). Another analysis found only 16 percent of terrorists came from impoverished families, versus 30 percent of male Palestinians, and over 60 percent had gone beyond high school, versus 15 percent of the populace. Studies of whether terrorists are more likely to come from poverty-stricken conditions have found that people who grow up in such situations tend to show aggression and frustration towards others; this theory is debated, however, because frustration alone does not make someone a potential terrorist. To avoid detection, a terrorist will look, dress, and behave normally until executing the assigned mission. Some claim that attempts to profile terrorists based on personality, physical, or sociological traits are not useful. The physical and behavioral description of the terrorist could describe almost any normal person. However, the majority of terrorist attacks are carried out by military-age men, aged 16 to 40.
Non-state groups Groups not part of the state apparatus and in opposition to the state are those most commonly referred to as "terrorist" in the media. According to the Global Terrorism Database, the most active terrorist group in the period 1970 to 2010 was Shining Path (with 4,517 attacks), followed by the Farabundo Marti National Liberation Front (FMLN), the Irish Republican Army (IRA), Basque Fatherland and Freedom (ETA), the Revolutionary Armed Forces of Colombia (FARC), the Taliban, the Liberation Tigers of Tamil Eelam, the New People's Army, the National Liberation Army of Colombia (ELN), and the Kurdistan Workers' Party (PKK). State sponsors A state can sponsor terrorism by funding or harboring a terrorist group. Opinions as to which acts of violence by states constitute state-sponsored terrorism vary widely. When states provide funding for groups considered by some to be terrorist, they rarely acknowledge them as such. State terrorism Civilization is based on a clearly defined and widely accepted yet often unarticulated hierarchy. Violence done by those higher on the hierarchy to those lower is nearly always invisible, that is, unnoticed. When it is noticed, it is fully rationalized. Violence done by those lower on the hierarchy to those higher is unthinkable, and when it does occur it is regarded with shock, horror, and the fetishization of the victims. As with "terrorism", the concept of "state terrorism" is controversial. The Chairman of the United Nations Counter-Terrorism Committee has stated that the committee was conscious of 12 international conventions on the subject, and none of them referred to state terrorism, which was not an international legal concept. If states abused their power, they should be judged against international conventions dealing with war crimes, international human rights law, and international humanitarian law. Former United Nations Secretary-General Kofi Annan has said that it is "time to set aside debates on so-called state terrorism. The use of force by states is already thoroughly regulated under international law". He made clear that, "regardless of the differences between governments on the question of the definition of terrorism, what is clear and what we can all agree on is that any deliberate attack on innocent civilians [or non-combatants], regardless of one's cause, is unacceptable and fits into the definition of terrorism." State terrorism has been used to refer to terrorist acts committed by governmental agents or forces. This involves the use of state resources employed by a state's foreign policies, such as using its military to directly perform acts of terrorism. Professor of Political Science Michael Stohl cites examples that include the German bombing of London, the Allied firebombing of Dresden, and the U.S. atomic bombings of Hiroshima and Nagasaki during World War II. He argues that "the use of terror tactics is common in international relations and the state has been and remains a more likely employer of terrorism within the international system than insurgents." He cites the first-strike option as an example of the "terror of coercive diplomacy", which holds the world hostage with the implied threat of using nuclear weapons in "crisis management", and he argues that the institutionalized form of terrorism has occurred as a result of changes that took place following World War II.
In this analysis, state terrorism exhibited as a form of foreign policy was shaped by the presence and use of weapons of mass destruction, and the legitimizing of such violent behavior led to an increasingly accepted form of this behavior by the state. Charles Stewart Parnell described William Ewart Gladstone's Irish Coercion Act as terrorism in his "No Rent Manifesto" in 1881, during the Irish Land War. The concept is used to describe political repressions by governments against their own civilian populations with the purpose of inciting fear. For example, taking and executing civilian hostages or extrajudicial elimination campaigns are commonly considered "terror" or terrorism, as during the Red Terror or the Great Terror. Such actions are often described as democide or genocide, which have been argued to be equivalent to state terrorism. Empirical studies on this have found that democracies have little democide. Western democracies, including the United States, have supported state terrorism and mass killings, with some examples being the Indonesian mass killings of 1965–66 and Operation Condor. Connection with tourism The connection between terrorism and tourism has been widely studied since the Luxor massacre in Egypt. In the 1970s, the targets of terrorists were politicians and chiefs of police; now, international tourists and visitors are selected as the main targets of attacks. The attacks on the World Trade Center and the Pentagon on September 11, 2001, struck symbolic centers and marked a new epoch in the use of civil transport against the main power of the planet. From this event onwards, the spaces of leisure that characterized the pride of the West were conceived as dangerous and frightful. Funding State sponsors have constituted a major form of funding; for example, the Palestine Liberation Organization, the Democratic Front for the Liberation of Palestine and other groups sometimes considered to be terrorist organizations were funded by the Soviet Union. The Stern Gang received funding from Italian Fascist officers in Beirut to undermine the British authorities in Palestine. "Revolutionary tax" is another major form of funding, and essentially a euphemism for "protection money". Revolutionary taxes "play a secondary role as one other means of intimidating the target population". Other major sources of funding include kidnapping for ransom, smuggling (including wildlife smuggling), fraud, and robbery. The Islamic State in Iraq and the Levant has reportedly received funding "via private donations from the Gulf states". The Financial Action Task Force is an inter-governmental body whose mandate, since October 2001, has included combating terrorist financing. Tactics Terrorist attacks are often targeted to maximize fear and publicity, most frequently using explosives. Terrorist groups usually methodically plan attacks in advance, and may train participants, plant undercover agents, and raise money from supporters or through organized crime. Communications occur through modern telecommunications, or through old-fashioned methods such as couriers. There is concern about terrorist attacks employing weapons of mass destruction.
Some academics have argued that while it is often assumed terrorism is intended to spread fear, this is not necessarily true, with fear instead being a by-product of the terrorists' actions, while their intentions may be to avenge fallen comrades or destroy their perceived enemies. Terrorism is a form of asymmetric warfare, and is more common when direct conventional warfare will not be effective because opposing forces vary greatly in power. Yuval Harari argues that the peacefulness of modern states makes them paradoxically more vulnerable to terrorism than pre-modern states. Harari argues that because modern states have committed themselves to reducing political violence to almost zero, terrorists can, by creating political violence, threaten the very foundations of the legitimacy of the modern state. This is in contrast to pre-modern states, where violence was a routine and recognised aspect of politics at all levels, making political violence unremarkable. Terrorism thus shocks the population of a modern state far more than a pre-modern one, and consequently the state is forced to overreact in an excessive, costly and spectacular manner, which is often what the terrorists desire. The type of people terrorists will target is dependent upon the ideology of the terrorists. A terrorist's ideology will create a class of "legitimate targets" who are deemed its enemies and who are permitted to be targeted. This ideology will also allow the terrorists to place the blame on the victim, who is viewed as being responsible for the violence in the first place. The context in which terrorist tactics are used is often a large-scale, unresolved political conflict. The type of conflict varies widely; historical examples include:
Secession of a territory to form a new sovereign state or become part of a different state
Dominance of territory or resources by various ethnic groups
Imposition of a particular form of government
Economic deprivation of a population
Opposition to a domestic government or occupying army
Religious fanaticism
Responses Responses to terrorism are broad in scope. They can include re-alignments of the political spectrum and reassessments of fundamental values. Specific types of responses include:
Targeted laws, criminal procedures, deportations, and enhanced police powers
Target hardening, such as locking doors or adding traffic barriers
Preemptive or reactive military action
Increased intelligence and surveillance activities
Preemptive humanitarian activities
More permissive interrogation and detention policies
The term "counter-terrorism" has a narrower connotation, implying that it is directed at terrorist actors. Terrorism research Terrorism research, also called terrorism studies, or terrorism and counter-terrorism research, is an interdisciplinary academic field which seeks to understand the causes of terrorism, how to prevent it, as well as its impact in the broadest sense. Terrorism research can be carried out in both military and civilian contexts, for example by research centres such as the British Centre for the Study of Terrorism and Political Violence, the Norwegian Centre for Violence and Traumatic Stress Studies, and the International Centre for Counter-Terrorism (ICCT). There are several academic journals devoted to the field, including Perspectives on Terrorism.
International agreements One of the agreements that promote the international legal anti-terror framework is the Code of Conduct Towards Achieving a World Free of Terrorism, which was adopted at the 73rd session of the United Nations General Assembly in 2018. The Code of Conduct was initiated by Kazakhstan President Nursultan Nazarbayev. Its main goal is to implement a wide range of international commitments to counter terrorism and establish a broad global coalition towards achieving a world free of terrorism by 2045. The Code was signed by more than 70 countries. Response in the United States According to a report by Dana Priest and William M. Arkin in The Washington Post, "Some 1,271 government organizations and 1,931 private companies work on programs related to counterterrorism, homeland security and intelligence in about 10,000 locations across the United States." America's thinking on how to defeat radical Islamists is split along two very different schools of thought. Republicans typically follow what is known as the Bush Doctrine, advocating the military model of taking the fight to the enemy and seeking to democratize the Middle East. Democrats, by contrast, generally propose the law enforcement model of better cooperation with nations and more security at home. In the introduction of the U.S. Army / Marine Corps Counterinsurgency Field Manual, Sarah Sewall states the need for "U.S. forces to make securing the civilian, rather than destroying the enemy, their top priority. The civilian population is the center of gravity—the deciding factor in the struggle.... Civilian deaths create an extended family of enemies—new insurgent recruits or informants—and erode support of the host nation." Sewall sums up the book's key points on how to win this battle: "Sometimes, the more you protect your force, the less secure you may be.... Sometimes, the more force is used, the less effective it is.... The more successful the counterinsurgency is, the less force can be used and the more risk must be accepted.... Sometimes, doing nothing is the best reaction." This strategy, often termed "courageous restraint", has certainly led to some success on the Middle East battlefield. However, it does not address the fact that terrorists are mostly homegrown. Mass media Mass media exposure may be a primary goal of those carrying out terrorism, to expose issues that would otherwise be ignored by the media. Some consider this to be manipulation and exploitation of the media. The Internet has created a new way for groups to spread their messages. This has created a cycle of measures and counter-measures by groups in support of and in opposition to terrorist movements. The United Nations has created its own online counter-terrorism resource. The mass media will, on occasion, censor organizations involved in terrorism (through self-restraint or regulation) to discourage further terrorism. This may encourage organizations to perform more extreme acts of terrorism to be shown in the mass media. Conversely, James F. Pastor explains the significant relationship between terrorism and the media, and the underlying benefit each receives from the other. There is always a point at which the terrorist ceases to manipulate the media gestalt. A point at which the violence may well escalate, but beyond which the terrorist has become symptomatic of the media gestalt itself.
Terrorism as we ordinarily understand it is innately media-related. Former British Prime Minister Margaret Thatcher famously spoke of the close connection between terrorism and the media, calling publicity the oxygen of terrorism. Outcome of terrorist groups Jones and Libicki (2008) created a list of all the terrorist groups they could find that were active between 1968 and 2006. They found 648. Of those, 136 splintered and 244 were still active in 2006. Of the ones that ended, 43 percent converted to nonviolent political actions, like the Irish Republican Army in Northern Ireland. Law enforcement ended 40 percent. Ten percent won. Only 20 groups, 7 percent, were destroyed by military force. Forty-two groups became large enough to be labeled an insurgency; 38 of those had ended by 2006. Of those, 47 percent converted to nonviolent political actors. Only 5 percent were ended by law enforcement. Twenty-six percent won. Twenty-one percent succumbed to military force. Jones and Libicki concluded that military force may be necessary to deal with large insurgencies but is only occasionally decisive, because the military is too often seen as a bigger threat to civilians than the terrorists. To avoid that, the rules of engagement must be conscious of collateral damage and work to minimize it. Another researcher, Audrey Cronin, lists six primary ways that terrorist groups end:
Capture or killing of a group's leader (decapitation)
Entry of the group into a legitimate political process (negotiation)
Achievement of group aims (success)
Group implosion or loss of public support (failure)
Defeat and elimination through brute force (repression)
Transition from terrorism into other forms of violence (reorientation)
Databases The following terrorism databases are or were made publicly available for research purposes, and track specific acts of terrorism:
Global Terrorism Database, an open-source database by the University of Maryland, College Park on terrorist events around the world from 1970 through 2017 with more than 150,000 cases
MIPT Terrorism Knowledge Base
Worldwide Incidents Tracking System
Tocsearch (dynamic database)
The following public report and index provides a summary of key global trends and patterns in terrorism around the world:
Global Terrorism Index, produced annually by the Institute for Economics and Peace
The following publicly available resources index electronic and bibliographic resources on the subject of terrorism:
Human Security Gateway
The following terrorism databases are maintained in secrecy by the United States Government for intelligence and counter-terrorism purposes:
Terrorist Identities Datamart Environment
Terrorist Screening Database
Jones and Libicki (2008) includes a table of 268 terrorist groups active between 1968 and 2006 with their status as of 2006: still active, splintered, converted to nonviolence, removed by law enforcement or military, or won. (These data are not in a convenient machine-readable format but are available.) See also Notes References Hoffman, Bruce (1988). Inside Terrorism. New York: Columbia University Press. Hoffman, Bruce (1998). "Inside Terrorism". Columbia University Press. p. 32. ISBN 0-231-11468-0. Retrieved January 11, 2010. Hoffman, Bruce (1998a). "Chapter One". Inside Terrorism. Retrieved January 11, 2010 – via The New York Times. Hoffman, Bruce (2006). Inside Terrorism (2nd ed.). Columbia University Press. Spaaij, Ramon (2012). Understanding Lone Wolf Terrorism: Global Patterns, Motivations and Prevention.
Perspectives on Terrorism's Bibliography: Root Causes of Terrorism. 2017. Archived October 22, 2017, at the Wayback Machine Further reading Bakker, Edwin. Forecasting the Unpredictable: A Review of Forecasts on Terrorism 2000–2012 (International Centre for Counter-Terrorism – The Hague, 2014) Bowie, Neil G. (April 2021). "40 Terrorism Databases and Data Sets: A New Inventory" (PDF). Perspectives on Terrorism. Leiden University. XV (2). ISSN 2334-3745. Burleigh, Michael. Blood and Rage: A Cultural History of Terrorism. Harper, 2009. Chaliand, Gérard and Arnaud Blin, eds. The History of Terrorism: From Antiquity to al Qaeda. University of California Press, 2007. Coates, Susan W., Rosenthal, Jane, and Schechter, Daniel S. September 11: Trauma and Human Bonds (New York: Taylor and Francis, Inc., 2003). Crenshaw, Martha, ed. Terrorism in Context. Pennsylvania State University Press, 1995. Jones, Seth G.; Libicki, Martin C. (2008), How Terrorist Groups End: Lessons for Countering al Qaida (PDF), RAND Corporation, ISBN 978-0-8330-4465-5 Hennigfeld, Ursula / Packard, Stephan, eds. Abschied von 9/11? Distanznahme zur Katastrophe. Berlin: Frank & Timme, 2013. Hennigfeld, Ursula, ed. Poetiken des Terrors. Narrative des 11. September 2001 im interkulturellen Vergleich. Heidelberg: Winter, 2014. Hewitt, Christopher. Understanding Terrorism in America (Routledge, 2003). Hewitt, Christopher. "Terrorism and public opinion: A five country comparison." Terrorism and Political Violence 2.2 (1990): 145–170. Jones, Sidney. Terrorism: Myths and Facts. Jakarta: International Crisis Group, 2013. Land, Isaac, ed. Enemies of Humanity: The Nineteenth-Century War on Terrorism. Palgrave Macmillan, 2008. Lee, Newton. Counterterrorism and Cybersecurity: Total Information Awareness (2nd ed.). New York: Springer, 2015. ISBN 978-3-319-17243-9 Lutz, James and Brenda Lutz. Terrorism: Origins and Evolution (Palgrave Macmillan, 2005) Miller, Martin A. The Foundations of Modern Terrorism: State, Society and the Dynamics of Political Violence. Cambridge University Press, 2013. Nairn, Tom; James, Paul (2005). Global Matrix: Nationalism, Globalism and State-Terrorism. London and New York: Pluto Press. Neria, Yuval, Gross, Raz, Marshall, Randall D., and Susser, Ezra. September 11, 2001: Treatment, Research and Public Mental Health in the Wake of a Terrorist Attack (New York: Cambridge University Press, 2006). Schmid, Alex P. (November 2020). Handbook of Terrorism Prevention and Preparedness. International Centre for Counter-Terrorism. doi:10.19165/2020.6.01 (inactive July 31, 2022). ISBN 9789090339771. ISSN 2468-0486. An open-access publication, issued since November 2020 on the International Centre for Counter-Terrorism (ICCT) website, with a chapter published each week. Stern, Jessica. The Ultimate Terrorists (Harvard University Press, 2000 reprint; 1995). 214 p. ISBN 0-674-00394-2 Tausch, Arno. Estimates on the Global Threat of Islamic State Terrorism in the Face of the 2015 Paris and Copenhagen Attacks (December 11, 2015). Middle East Review of International Affairs, Rubin Center, Research in International Affairs, IDC Herzliya, Israel, Vol. 19, No. 1 (Spring 2015). Terrorism, Law & Democracy: 10 Years after 9/11, Canadian Institute for the Administration of Justice. ISBN 978-2-9809728-7-4. United Kingdom Blackbourn, Jessie. "Counter-Terrorism and Civil Liberties: The United Kingdom Experience, 1968–2008."
Journal of the Institute of Justice and International Studies 8 (2008): 63+ Bonner, David. "United Kingdom: the United Kingdom response to terrorism." Terrorism and Political Violence 4.4 (1992): 171–205. online Chin, Warren. Britain and the War on Terror: Policy, Strategy and Operations (Routledge, 2016). Clutterbuck, Lindsay. "Countering Irish Republican terrorism in Britain: Its origin as a police function." Terrorism and Political Violence 18.1 (2006): 95–118. Greer, Steven. "Terrorism and Counter-Terrorism in the UK: From Northern Irish Troubles to Global Islamist Jihad." in Counter-Terrorism, Constitutionalism and Miscarriages of Justice (Hart Publishing, 2018) pp. 45–62. Hamilton, Claire. "Counter-Terrorism in the UK." in Contagion, Counter-Terrorism and Criminology (Palgrave Pivot, Cham, 2019) pp. 15–47. Hewitt, Steve. "Great Britain: Terrorism and counter-terrorism since 1968." in Routledge Handbook of Terrorism and Counterterrorism (Routledge, 2018) pp. 540–551. Martínez-Peñas, Leandro, and Manuela Fernández-Rodríguez. "Evolution of British Law on Terrorism: From Ulster to Global Terrorism (1970–2010)." in Post 9/11 and the State of Permanent Legal Emergency (Springer, 2012) pp. 201–222. O'Day, Alan. "Northern Ireland, Terrorism, and the British State." in Terrorism: Theory
and Practice (Routledge, 2019) pp. 121-135. Sacopulos, Peter J. "Terrorism in Britain: Threat, reality, response." Studies in Conflict & Terrorism 12.3 (1989): 153-165. Staniforth, Andrew, and Fraser Sampson, eds. The Routledge companion to UK counter-terrorism (Routledge, 2012). Sinclair, Georgina. "Confronting terrorism: British Experiences past and present." Crime, Histoire & Sociétés/Crime, History & Societies 18.2 (2014): 117-122. online Tinnes, Judith, ed. "Bibliography: Northern Ireland conflict (the troubles)." Perspectives on Terrorism 10.1 (2016): 83-110. online Wilkinson, Paul, ed. Terrorism: British Perspectives (Dartmouth, 1993). External links United Nations: Conventions on Terrorism United Nations Office on Drugs and Crime: "Conventions against terrorism". Archived from the original on August 5, 2007. UNODC – United Nations Office on Drugs and Crime – Terrorism Prevention Terrorism and international humanitarian law, International Committee of the Red Cross UK Counter Terrorism Policing
Valgus deformity
A valgus deformity is a condition in which the bone segment distal to a joint is angled outward, that is, angled laterally, away from the body's midline. The opposite deformation, where the twist or angulation is directed medially, toward the center of the body, is called varus. Common causes of valgus knee (genu valgum or "knock-knee") in adults include arthritis of the knee and traumatic injuries. Knee arthritis with valgus knee Rheumatoid knee commonly presents as valgus knee. Osteoarthritis of the knee may also sometimes present with valgus deformity, though varus deformity is more common. Total knee arthroplasty (TKA) to correct valgus deformity is surgically difficult and requires specialized implants called constrained condylar knees. Examples
Ankle: talipes valgus (from Latin talus = ankle and pes = foot) – outward turning of the heel, resulting in a flat foot presentation.
Elbows: cubitus valgus (from Latin cubitus = elbow) – forearm is angled away from the body.
Foot: pes valgus (from Latin pes = foot) – a medial deviation of the foot at the subtalar joint.
Hand: manus valgus (from Latin manus = hand)
Hip: coxa valga (from Latin coxa = hip) – the shaft of the femur is bent outward with respect to the neck of the femur. Coxa valga >125 degrees; coxa vara <125 degrees.
Knee: genu valgum (from Latin genu = knee) – the tibia is turned outward in relation to the femur, resulting in a knock-kneed appearance.
Toe: hallux valgus (from Latin hallux = big toe) – outward deviation of the big toe toward the second toe, resulting in a bunion.
Wrist: Madelung's deformity – deformity wherein the wrist bones are not formed properly due to a genetic disorder.
Terminology Valgus is a term for outward angulation of the distal segment of a bone or joint. The opposite condition is called varus, which is a medial deviation of the distal bone. The terms varus and valgus always refer to the direction that the distal segment of the joint points. The original Latin definitions for varus and valgus were the opposite of their current usage. For a discussion of the etymology of these words, see the entry under varus. A mnemonic to remember the two deformities is that valgus contains an "L", for Lateral deviation. See also Varus deformity References Canale & Beaty: Campbell's Operative Orthopaedics, 11th ed. – 2007 – Mosby, An Imprint of Elsevier Bowed Leg (Varus) and Knock-Knee (Valgus) Malalignment: Everything You Need to Know to Make the Right Treatment Decision – Understanding lower limb malalignment – Tibial osteotomy for bowed legs, Noyes, Frank R. and Barber-Westin, Sue, Amazon Digital Version, Publish Green (October 6, 2013) External links
Helminthiasis
Helminthiasis, also known as worm infection, is any macroparasitic disease of humans and other animals in which a part of the body is infected with parasitic worms, known as helminths. There are numerous species of these parasites, which are broadly classified into tapeworms, flukes, and roundworms. They often live in the gastrointestinal tract of their hosts, but they may also burrow into other organs, where they induce physiological damage. Soil-transmitted helminthiasis and schistosomiasis are the most important helminthiases, and are among the neglected tropical diseases. This group of helminthiases has been targeted under the joint action of the world's leading pharmaceutical companies and non-governmental organizations through a project launched in 2012 called the London Declaration on Neglected Tropical Diseases, which aims to control or eradicate certain neglected tropical diseases by 2020. Helminthiasis has been found to result in poor birth outcome, poor cognitive development, poor school and work performance, poor socioeconomic development, and poverty. Chronic illness, malnutrition, and anemia are further examples of secondary effects. Soil-transmitted helminthiases are responsible for parasitic infections in as much as a quarter of the human population worldwide. One well-known example of soil-transmitted helminthiasis is ascariasis. Signs and symptoms The signs and symptoms of helminthiasis depend on a number of factors including: the site of the infestation within the body; the type of worm involved; the number of worms and their volume; the type of damage the infesting worms cause; and the immunological response of the body. Where the burden of parasites in the body is light, there may be no symptoms. Certain worms may cause particular constellations of symptoms. For instance, taeniasis can lead to seizures due to neurocysticercosis. Mass and volume In extreme cases of intestinal infestation, the mass and volume of the worms may cause the outer layers of the intestinal wall, such as the muscular layer, to tear. This may lead to peritonitis, volvulus, and gangrene of the intestine. Immunological response As pathogens in the body, helminths induce an immune response. Immune-mediated inflammatory changes occur in the skin, lung, liver, intestine, central nervous system, and eyes. Signs of the body's immune response may include eosinophilia, edema, and arthritis. An example of the immune response is the hypersensitivity reaction that may lead to anaphylaxis. Another example is the migration of Ascaris larvae through the bronchi of the lungs, causing asthma. Secondary effects Immune changes In humans, T helper cells and eosinophils respond to helminth infestation. It is well established that T helper 2 cells are the central players of protective immunity to helminths, while the roles for B cells and antibodies are context-dependent. Inflammation leads to encapsulation of egg deposits throughout the body. Helminths excrete toxic substances into the intestine after they feed. These substances then enter the circulatory and lymphatic systems of the host body. Chronic immune responses to helminthiasis may lead to increased susceptibility to other infections such as tuberculosis, HIV, and malaria. There is conflicting information about whether deworming reduces HIV progression and viral load and increases CD4 counts in antiretroviral-naive and -experienced individuals, although the most recent Cochrane review found some evidence that this approach might have favorable effects.
Chronic illness Chronic helminthiasis may cause severe morbidity. Helminthiasis has been found to result in poor birth outcome, poor cognitive development, poor school and work performance, decreased productivity, poor socioeconomic development, and poverty. Malnutrition Helminthiasis may cause chronic illness through malnutrition, including vitamin deficiencies, stunted growth, anemia, and protein-energy malnutrition. Worms compete directly with their hosts for nutrients, but the magnitude of this effect is likely minimal as the nutritional requirements of worms are relatively small. In pigs and humans, Ascaris has been linked to lactose intolerance and vitamin A, amino acid, and fat malabsorption. Impaired nutrient uptake may result from direct damage to the intestinal mucosal wall or from more subtle changes such as chemical imbalances and changes in gut flora. Alternatively, the worms' release of protease inhibitors to defend against the body's digestive processes may impair the breakdown of other nutrients. In addition, worm-induced diarrhoea may shorten gut transit time, thus reducing absorption of nutrients. Malnutrition due to worms can give rise to anorexia. A study of 459 children in Zanzibar revealed spontaneous increases in appetite after deworming. Anorexia might be a result of the body's immune response and the stress of combating infection. Specifically, some of the cytokines released in the immune response to worm infestation have been linked to anorexia in animals. Anemia Helminths may cause iron-deficiency anemia. This is most severe in heavy hookworm infections, as Necator americanus and Ancylostoma duodenale feed directly on the blood of their hosts. Although the daily consumption of an individual worm (0.02–0.07 ml and 0.14–0.26 ml respectively) is small, the collective consumption under heavy infection can be clinically significant. Intestinal whipworm may also cause anemia. Anemia has also been associated with reduced stamina for physical labor, a decline in the ability to learn new information, and apathy, irritability, and fatigue. A study of the effect of deworming and iron supplementation in 47 students from the Democratic Republic of the Congo found that the intervention improved cognitive function. Another study found that in 159 Jamaican schoolchildren, deworming led to better auditory short-term memory and scanning and retrieval of long-term memory over a period of nine weeks. Cognitive changes Malnutrition due to helminths may affect cognitive function, leading to low educational performance, decreased concentration and difficulty with abstract cognitive tasks. Iron deficiency in infants and preschoolers is associated with "lower scores ... on tests of mental and motor development ... [as well as] increased fearfulness, inattentiveness, and decreased social responsiveness". Studies in the Philippines and Indonesia found a significant correlation between helminthiasis and decreased memory and fluency. Large parasite burdens, particularly severe hookworm infections, are also associated with absenteeism, under-enrollment, and attrition in school children.
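To give a sense of scale for the hookworm feeding figures quoted above, the short Python sketch below multiplies the per-worm daily blood consumption ranges by an assumed worm burden. The burden of 100 worms per species is purely an illustrative assumption, not a figure from this article.

# Rough illustration of how the per-worm blood consumption figures quoted above
# (0.02–0.07 ml/day for Necator americanus, 0.14–0.26 ml/day for Ancylostoma duodenale)
# scale with worm burden. The worm counts are arbitrary illustrative values.

PER_WORM_ML_PER_DAY = {
    "Necator americanus": (0.02, 0.07),
    "Ancylostoma duodenale": (0.14, 0.26),
}

def daily_blood_loss_ml(species: str, worm_count: int) -> tuple[float, float]:
    """Return the (low, high) estimate of blood loss in ml per day for a given burden."""
    low, high = PER_WORM_ML_PER_DAY[species]
    return (low * worm_count, high * worm_count)

if __name__ == "__main__":
    for species, count in [("Necator americanus", 100), ("Ancylostoma duodenale", 100)]:
        low, high = daily_blood_loss_ml(species, count)
        print(f"{count} {species}: {low:.1f}-{high:.1f} ml of blood per day")

Under these assumptions, a hundred A. duodenale worms would account for roughly 14-26 ml of blood per day, which is consistent with the statement that heavy infections can be clinically significant even though each worm consumes very little.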
Types of parasitic helminths Of all the known helminth species, the most important helminths with respect to understanding their transmission pathways, their control, inactivation and enumeration in samples of human excreta from dried feces, faecal sludge, wastewater, and sewage sludge are:
soil-transmitted helminths, including Ascaris lumbricoides (the most common worldwide), Trichuris trichiura, Necator americanus, Strongyloides stercoralis and Ancylostoma duodenale
Hymenolepis nana
Taenia saginata
Enterobius
Fasciola hepatica
Schistosoma mansoni
Toxocara canis
Toxocara cati
Helminthiases are classified as follows (the disease names end with "-sis" and the causative worms are in brackets):
Roundworm infection (nematodiasis)
Filariasis (Wuchereria bancrofti, Brugia malayi infection)
Onchocerciasis (Onchocerca volvulus infection)
Soil-transmitted helminthiasis – this includes ascariasis (Ascaris lumbricoides infection), trichuriasis (Trichuris infection), and hookworm infection (includes necatoriasis and Ancylostoma duodenale infection)
Trichostrongyliasis (Trichostrongylus spp. infection)
Dracunculiasis (guinea worm infection)
Baylisascaris (raccoon roundworm, may be transmitted to pets, livestock, and humans)
Tapeworm infection (cestodiasis)
Echinococcosis (Echinococcus infection)
Hymenolepiasis (Hymenolepis infection)
Taeniasis/cysticercosis (Taenia infection)
Coenurosis (T. multiceps, T. serialis, T. glomerata, and T. brauni infection)
Trematode infection (trematodiasis)
Amphistomiasis (amphistomes infection)
Clonorchiasis (Clonorchis sinensis infection)
Fascioliasis (Fasciola infection)
Fasciolopsiasis (Fasciolopsis buski infection)
Opisthorchiasis (Opisthorchis infection)
Paragonimiasis (Paragonimus infection)
Schistosomiasis/bilharziasis (Schistosoma infection)
Acanthocephala infection
Moniliformis infection
Transmission Helminths are transmitted to the final host in several ways. The most common infection is through ingestion of contaminated vegetables, drinking water, and raw or undercooked meat. Contaminated food may contain eggs of nematodes such as Ascaris, Enterobius, and Trichuris; cestodes such as Taenia, Hymenolepis, and Echinococcus; and trematodes such as Fasciola. Raw or undercooked meats are the major sources of Taenia (pork, beef and venison), Trichinella (pork and bear), Diphyllobothrium (fish), Clonorchis (fish), and Paragonimus (crustaceans). Schistosomes and nematodes such as hookworms (Ancylostoma and Necator) and Strongyloides can penetrate the skin directly. The roundworm Dracunculus has a complex mode of transmission: it is acquired from drinking infested water or eating frogs and fish that contain (had eaten) infected crustaceans (copepods), and can also be transmitted from infected pets (cats and dogs). Roundworms such as Brugia and Wuchereria are transmitted directly by mosquitoes, while Onchocerca is transmitted by blackflies. In the developing world, the use of contaminated water is a major risk factor for infection. Infection can also take place through the practice of geophagy, which is not uncommon in parts of sub-Saharan Africa. Soil is eaten, for example, by children or pregnant women to counteract a real or perceived deficiency of minerals in their diet. Diagnosis Specific helminths can be identified through microscopic examination of their eggs (ova) found in faecal samples. The number of eggs is measured in units of eggs per gram. However, this method does not quantify mixed infections, and in practice, is inaccurate for quantifying the eggs of schistosomes and soil-transmitted helminths.
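In practice, the eggs-per-gram figure is obtained by counting eggs in a small, measured aliquot of stool and scaling up by a multiplication factor; for the Kato-Katz method (the Kato technique listed under See also), a template holding roughly 41.7 mg of faeces is commonly cited, giving a factor of about 24. The short Python sketch below illustrates that arithmetic; the template mass and the example slide counts are assumptions for illustration, not values taken from this article.

# Illustrative eggs-per-gram (EPG) calculation for Kato-Katz-style faecal egg counting.
# The 41.7 mg template mass (multiplication factor of about 24) is a commonly cited
# value and is an assumption here, not a figure from this article.

def eggs_per_gram(egg_count: int, slide_mass_mg: float = 41.7) -> float:
    """Scale an egg count from one slide aliquot up to eggs per gram of stool."""
    multiplication_factor = 1000.0 / slide_mass_mg  # roughly 24 for a 41.7 mg template
    return egg_count * multiplication_factor

if __name__ == "__main__":
    # Hypothetical counts from two duplicate slides; averaging them is one common practice.
    counts = [18, 22]
    average_epg = sum(eggs_per_gram(c) for c in counts) / len(counts)
    print(f"Estimated infection intensity: {average_epg:.0f} eggs per gram")

As the text notes, this kind of count-and-scale estimate handles mixed infections poorly and is imprecise for schistosome and soil-transmitted helminth eggs, so it is an intensity estimate rather than an exact measurement.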
Sophisticated tests such as serological assays, antigen tests, and molecular diagnosis are also available; however, they are time-consuming, expensive and not always reliable. Prevention Disrupting the worm's life cycle will prevent infestation and re-infestation. Prevention of infection can largely be achieved by addressing the issues of WASH—water, sanitation and hygiene. The reduction of open defecation is particularly called for, as is stopping the use of human waste as fertilizer. Further preventive measures include adherence to appropriate food hygiene, wearing of shoes, regular deworming of pets, and the proper disposal of their feces. Scientists are also searching for a vaccine against helminths, such as a hookworm vaccine. Treatment Medications Broad-spectrum benzimidazoles (such as albendazole and mebendazole) are the first-line treatment of intestinal roundworm and tapeworm infections. Macrocyclic lactones (such as ivermectin) are effective against adult and migrating larval stages of nematodes. Praziquantel is the drug of choice for schistosomiasis, taeniasis, and most types of food-borne trematodiases. Oxamniquine is also widely used in mass deworming programmes. Pyrantel is commonly used for veterinary nematodiasis. Artemisinins and derivatives are proving to be candidates as drugs of choice for trematodiasis. Mass deworming In regions where helminthiasis is common, mass deworming treatments may be performed, particularly among school-age children, who are a high-risk group. Most of these initiatives are undertaken by the World Health Organization (WHO) with positive outcomes in many regions. Deworming programs can improve school attendance by 25 percent. Although deworming improves the health of an individual, outcomes from mass deworming campaigns, such as reduced deaths or increases in cognitive ability, nutritional benefits, physical growth, and performance, are uncertain or not apparent. Surgery If complications of helminthiasis, such as intestinal obstruction, occur, emergency surgery may be required. Patients who require non-emergency surgery, for instance for removal of worms from the biliary tree, can be pre-treated with the anthelmintic drug albendazole. Epidemiology Areas with the highest prevalence of helminthiasis are tropical and subtropical areas including sub-Saharan Africa, central and east Asia, and the Americas. Neglected tropical diseases Some types of helminthiases are classified as neglected tropical diseases. They include:
Soil-transmitted helminthiases
Roundworm infections such as lymphatic filariasis, dracunculiasis, and onchocerciasis
Trematode infections, such as schistosomiasis, and food-borne trematodiases, including fascioliasis, clonorchiasis, opisthorchiasis, and paragonimiasis
Tapeworm infections such as cysticercosis, taeniasis, and echinococcosis
Prevalence The soil-transmitted helminths (A. lumbricoides, T. trichiura, N. americanus, A. duodenale), schistosomes, and filarial worms collectively infect more than a quarter of the human population worldwide at any one time, far surpassing HIV and malaria together. Schistosomiasis is the second most prevalent parasitic disease of humans after malaria. In 2014–15, the WHO estimated that approximately 2 billion people were infected with soil-transmitted helminthiases, 249 million with schistosomiasis, 56 million people with food-borne trematodiasis, 120 million with lymphatic filariasis, 37 million people with onchocerciasis, and 1 million people with echinococcosis.
Another source estimated a much higher figure of 3.5 billion infected with one or more soil-transmitted helminths. In 2014, only 148 people were reported to have dracunculiasis, because of a successful eradication campaign for that particular helminth, which is easier to eradicate than other helminths as it is transmitted only by drinking contaminated water. Because of their high mobility and lower standards of hygiene, school-age children are particularly vulnerable to helminthiasis. Most children from developing nations will have at least one infestation. Multi-species infections are very common. The most common intestinal parasites in the United States are Enterobius vermicularis, Giardia lamblia, Ancylostoma duodenale, Necator americanus, and Entamoeba histolytica. Variations within communities Even in areas of high prevalence, the frequency and severity of infection is not uniform within communities or families. A small proportion of community members harbour the majority of worms, and this depends on age. The maximum worm burden is at five to ten years of age, declining rapidly thereafter. Individual predisposition to helminthiasis for people with the same sanitation infrastructure and hygiene behavior is thought to result from differing immunocompetence, nutritional status, and genetic factors. Because individuals are predisposed to a high or a low worm burden, the burden reacquired after successful treatment is proportional to that before treatment. Disability-adjusted life years It is estimated that intestinal nematode infections cause 5 million disability-adjusted life years (DALYs) to be lost, of which hookworm infections account for more than 3 million DALYs and ascaris infections more than 1 million. There are also signs of progress: the Global Burden of Disease Study published in 2015 estimates a 46 percent (59 percent when age-standardised) reduction in years lived with disability (YLD) over the period from 1990 to 2013 for all intestinal/nematode infections, and even a 74 percent (80 percent when age-standardised) reduction in YLD from ascariasis. Deaths As many as 135,000 die annually from soil-transmitted helminthiasis. The 1990–2013 Global Burden of Disease Study estimated 5,500 direct deaths from schistosomiasis, while more than 200,000 people were estimated in 2013 to die annually from causes related to schistosomiasis. Another 20 million have severe consequences from the disease. It is the most deadly of the neglected tropical diseases. See also Kato technique References External links Information at WHO European Commission Center for Disease Control and Prevention Global Atlas of Helminth Infections
Villonodular synovitis
Villonodular synovitis is a type of synovial swelling. Types include:
Pigmented villonodular synovitis
Giant cell tumor of the tendon sheath
Though they have very different names, they have the same histology, and stain positive for CD68, HAM56, and vimentin. They are sometimes discussed together. References
Organ-limited amyloidosis
Organ-limited amyloidosis is a category of amyloidosis where the distribution can be associated primarily with a single organ. It is contrasted with systemic amyloidosis, and it can be caused by several different types of amyloid. In almost all of the organ-specific pathologies, there is debate as to whether the amyloid plaques are the causal agent of the disease or instead a downstream consequence of a common idiopathic agent. The associated proteins are indicated in parentheses. Neurological amyloid
Alzheimer's disease (Aβ 39-43)
Parkinson's disease (alpha-synuclein)
Huntington's disease (huntingtin protein)
Transmissible spongiform encephalopathies caused by prion protein (PrP) were sometimes classed as amyloidoses, as one of the four pathological features in diseased tissue is the presence of amyloid plaques. These diseases include:
Creutzfeldt–Jakob disease (PrP in cerebrum)
Kuru (diffuse PrP deposits in brain)
Fatal familial insomnia (PrP in thalamus)
Bovine spongiform encephalopathy (PrP in cerebrum of cows)
Cardiovascular amyloid
Cardiac amyloidosis
Senile cardiac amyloidosis – may cause heart failure
Other
Amylin deposition can occur in the pancreas in some cases of type 2 diabetes mellitus
Cerebral amyloid angiopathy
References External links
Supraventricular tachycardia
Supraventricular tachycardia (SVT) is an umbrella term for fast heart rhythms arising from the upper part of the heart. This is in contrast to the other group of fast heart rhythms – ventricular tachycardia, which start within the lower chambers of the heart. There are four main types of SVT: atrial fibrillation, atrial flutter, paroxysmal supraventricular tachycardia (PSVT), and Wolff–Parkinson–White syndrome. The symptoms of SVT include palpitations, feeling of faintness, sweating, shortness of breath, and/or chest pain. These abnormal rhythms start from either the atria or atrioventricular node. They are generally due to one of two mechanisms: re-entry or increased automaticity. Diagnosis is typically by electrocardiogram (ECG), Holter monitor, or event monitor. Blood tests may be done to rule out specific underlying causes such as hyperthyroidism or electrolyte abnormalities. A normal resting heart rate is 60 to 100 beats per minute. A resting heart rate of more than 100 beats per minute is defined as a tachycardia. During an episode of SVT, the heart beats about 150 to 220 times per minute. Specific treatment depends on the type of SVT and can include medications, medical procedures, or surgery. Vagal maneuvers, or a procedure known as catheter ablation, may be effective in certain types. For atrial fibrillation, calcium channel blockers or beta blockers may be used for rate control, and selected patients benefit from blood thinners (anticoagulants) such as warfarin or novel anticoagulants. Atrial fibrillation affects about 25 per 1000 people, paroxysmal supraventricular tachycardia 2.3 per 1000, Wolff-Parkinson-White syndrome 2 per 1000, and atrial flutter 0.8 per 1000. Signs and symptoms Signs and symptoms can arise suddenly and may resolve without treatment. Stress, exercise, and emotion can all result in a normal or physiological increase in heart rate, but they can precipitate SVT in rare cases. Episodes can last from a few minutes to one or two days. They sometimes persist until treated. The rapid heart rate, if fast enough, reduces the opportunity for the "pump" to fill between beats, decreasing cardiac output and consequently blood pressure. The following symptoms are typical with a rate of 150–270 or more beats per minute:
Pounding heart
Rapid heart beat
Shortness of breath
Chest pain
Rapid breathing
Dizziness
Sweating
Loss of consciousness
Symptoms of heart arrhythmias, such as SVT, are more difficult to assess in infants and toddlers because of their limited ability to communicate. Caregivers should watch for lack of interest in feeding, shallow breathing, and lethargy. These symptoms may be subtle and may be accompanied by vomiting and/or a decrease in responsiveness. Pathophysiology The main pumping chamber, the ventricle, is protected (to a certain extent) against excessively high rates arising from the supraventricular areas by a "gating mechanism" at the atrioventricular node, which allows only a proportion of the fast impulses to pass through to the ventricles. An accessory "bypass tract" can avoid the AV node and its protection so that the fast rate may be directly transmitted to the ventricles. This situation has characteristic findings on ECG. A congenital heart lesion, Ebstein's anomaly, is most commonly associated with supraventricular tachycardia. Diagnosis Subtypes of SVT can often be distinguished by their electrocardiogram (ECG) characteristics.
Most have a narrow QRS complex, although, occasionally, electrical conduction abnormalities may produce a wide QRS complex that may mimic ventricular tachycardia (VT). In the clinical setting, the distinction between narrow and wide complex tachycardia (supraventricular vs. ventricular) is fundamental since they are treated differently. In addition, ventricular tachycardia can quickly degenerate into ventricular fibrillation and death and merits different consideration. In the less common situation in which a wide-complex tachycardia may be supraventricular, a number of algorithms have been devised to assist in distinguishing between them. In general, a history of structural heart disease markedly increases the likelihood that the tachycardia is ventricular in origin. Sinus tachycardia is physiologic when a reasonable stimulus, such as the catecholamine surge associated with fright, stress, or physical activity, provokes the tachycardia. It is identical to a normal sinus rhythm, except for its faster rate (>100 beats per minute in adults). However, most sources consider sinus tachycardia to be among the diagnoses included in SVT. Sinoatrial node reentrant tachycardia (SANRT) is caused by a reentry circuit localised to the SA node, resulting in a P-wave of normal shape and size (morphology) that falls before a regular, narrow QRS complex. It cannot be distinguished electrocardiographically from sinus tachycardia unless the sudden onset is observed (or recorded on a continuous monitoring device). It may sometimes be distinguished by its prompt response to vagal maneuvers. Ectopic (unifocal) atrial tachycardia arises from an independent focus within the atria, distinguished by a consistent P-wave of abnormal shape and/or size that falls before a narrow, regular QRS complex. It can be caused by automaticity, which means that some cardiac muscle cells, which have the primordial (primitive, inborn, inherent) ability to generate electrical impulses that is common to all cardiac muscle cells, have established themselves as a rhythm center with a natural rate of electrical discharge that is faster than that of the normal SA node. Some atrial tachycardias, rather than being a result of increased automaticity, may be a result of a micro-reentrant circuit (defined by some as less than 2 cm in longest diameter to distinguish it from macro-reentrant atrial flutter). Still other atrial tachycardias may be due to triggered activity caused by after-depolarizations. Multifocal atrial tachycardia (MAT) is tachycardia arising from at least three ectopic foci within the atria, distinguished by P-waves of at least three different morphologies that all fall before irregular, narrow QRS complexes. This rhythm is most commonly seen in elderly people with COPD. Atrial fibrillation meets the definition of SVT when associated with a ventricular response greater than 100 beats per minute. It is characterized as an "irregularly irregular rhythm" both in its atrial and ventricular depolarizations and is distinguished by its fibrillatory atrial waves that, at some point in their chaos, stimulate a response from the ventricles in the form of irregular, narrow QRS complexes. Atrial flutter is caused by a re-entry rhythm in the atria, with a regular atrial rate often of about 300 beats per minute. On the ECG this appears as a line of "sawtooth" waves preceding the QRS complex.
The AV node will not usually conduct 300 beats per minute, so the P:QRS ratio is usually a 2:1 or 4:1 pattern (though rarely 3:1, and sometimes 1:1 where class IC antiarrhythmic drugs are in use). Because the ratio of P to QRS is usually consistent, A-flutter is often regular in comparison to its irregular counterpart, atrial fibrillation. Atrial flutter is also not necessarily a tachycardia by definition unless the AV node permits a ventricular response greater than 100 beats per minute. AV nodal reentrant tachycardia (AVNRT) involves a reentry circuit forming next to, or within, the AV node. The circuit most often involves two tiny pathways, one faster than the other. Because the node is immediately between the atria and ventricle, the re-entry circuit often stimulates both, appearing as a backward (retrograde) conducted P-wave buried within or occurring just after the regular, narrow QRS complexes. Atrioventricular reciprocating tachycardia (AVRT) also results from a reentry circuit, although one physically much larger than AVNRT. One portion of the circuit is usually the AV node, and the other, an abnormal accessory pathway (muscular connection) from the atria to the ventricle. Wolff-Parkinson-White syndrome (WPW) is a relatively common abnormality with an accessory pathway, the bundle of Kent, crossing the AV valvular ring. In orthodromic AVRT, atrial impulses are conducted down through the AV node and retrogradely re-enter the atrium via the accessory pathway. A distinguishing characteristic of orthodromic AVRT can therefore be an inverted P-wave (relative to a sinus P wave) that follows each of its regular, narrow QRS complexes, due to retrograde conduction. In antidromic AVRT, atrial impulses are conducted down through the accessory pathway and re-enter the atrium retrogradely via the AV node. Because the accessory pathway initiates conduction in the ventricles outside of the bundle of His, the QRS complex in antidromic AVRT is wider than usual. A delta wave is an initial slurred deflection seen in the initial part of an otherwise narrow QRS of a patient at risk for WPW and is an indicator of the presence of an accessory pathway. These beats are a fusion between the conduction down the accessory pathway and the slightly delayed but then-dominant conduction via the AV node. Once an antidromic AVRT tachycardia is initiated, delta waves are no longer seen; rather, a wide-complex (>120 ms) tachycardia is seen. Junctional ectopic tachycardia (JET) is a rare tachycardia caused by increased automaticity of the AV node itself initiating frequent heartbeats. On the ECG, junctional tachycardia often presents with P-waves of abnormal morphology that may fall anywhere in relation to a regular, narrow QRS complex. It is often due to drug toxicity. Classification The following types of supraventricular tachycardias are more precisely classified by their specific site of origin.
While each belongs to the broad classification of SVT, the specific term/diagnosis is preferred when possible:
Sinoatrial origin:
Sinoatrial nodal reentrant tachycardia (SNRT)
Atrial origin:
Ectopic (unifocal) atrial tachycardia (EAT)
Multifocal atrial tachycardia (MAT)
Atrial fibrillation with rapid ventricular response
Atrial flutter with rapid ventricular response
(Without rapid ventricular response, fibrillation and flutter are usually not classified as SVT)
Atrioventricular origin:
AV nodal reentrant tachycardia (AVNRT) or junctional reciprocating tachycardia (JRT)
AV reciprocating tachycardia (AVRT) – visible or concealed (including Wolff-Parkinson-White syndrome)
Permanent (or persistent) junctional reciprocating tachycardia (PJRT), a form of SVT that involves slow retrograde conduction over an accessory pathway – occurs predominantly in infants and children but can occasionally occur in adults
Junctional ectopic tachycardia (JET)
Prevention Once an acute arrhythmia has been terminated, ongoing treatment may be indicated to prevent recurrence. However, those that have an isolated episode, or infrequent and minimally symptomatic episodes, usually do not warrant treatment other than observation and explanation. In general, patients with more frequent or disabling symptoms warrant some form of prevention. A variety of drugs, including simple AV nodal blocking agents such as beta-blockers and verapamil, as well as antiarrhythmic drugs, may be used, usually with good effect, although the adverse effects of these therapies need to be weighed against potential benefits. Radiofrequency ablation has revolutionized the treatment of tachycardia caused by a re-entrant pathway. This is a low-risk procedure that uses a catheter inside the heart to deliver radiofrequency energy to locate and destroy the abnormal electrical pathways. Ablation has been shown to be highly effective: around 90% in the case of AVNRT. Similar high rates of success are achieved with AVRT and typical atrial flutter. Cryoablation is a newer treatment involving the AV node directly. SVT involving the AV node is often a contraindication to using radiofrequency ablation due to the small (1%) incidence of injuring the AV node, then requiring a permanent pacemaker. Cryoablation uses a catheter supercooled by nitrous oxide gas, freezing the tissue to −10 °C (+14.0 °F). This provides the same result as radiofrequency ablation but does not carry the same risk. If it is found that the wrong tissue is being frozen, the freezing process can be quickly stopped, with the tissue returning to normal temperature and function in a short time. If after freezing the tissue to −10 °C the desired result is obtained, the tissue can be further cooled to a temperature of −73 °C (−99.4 °F) and it will be permanently ablated. This therapy has further improved the treatment options for AVNRT (and other SVTs with pathways close to the AV node), widening the application of curative ablation to young patients with relatively mild but still troublesome symptoms who might not have accepted the risk of requiring a pacemaker. Treatment Most SVTs are unpleasant rather than life-threatening, although very fast heart rates can be problematic for those with underlying ischemic heart disease, or the elderly. Episodes can be treated when they occur by Valsalva maneuver, adenosine injection or taking an AV node blocking agent as a pill-in-pocket, but regular medication may also be used to prevent or reduce recurrence.
While some treatment modalities can be applied to all SVTs, there are specific therapies available to treat some sub-types. Effective treatment consequently requires knowledge of how and where the arrhythmia is initiated and its mode of spread. Lifestyle changes, medication and heart procedures may be needed to control or eliminate the rapid heartbeats and related symptoms. SVTs can be categorised by whether the AV node is involved in maintaining the rhythm. If it is, manoeuvres slowing conduction through the AV node will terminate it. If it is not, AV nodal blocking maneuvers will not terminate it, but the resulting temporary suppression of the AV node is still useful to unmask the underlying abnormal rhythm. Acute attacks of supraventricular tachycardia are treated with intravenous esmolol. Society and culture Notable cases of SVT:
Bobby Julich, American professional road cyclist, third-place finisher in the 1998 Tour de France, bronze medalist in the 2004 Summer Olympics
Tayyiba Haneef-Park, American volleyball competitor in the 2008 Summer Olympics
Tony Blair, former Prime Minister of the United Kingdom
Anastacia Lyn Newkirk, American singer-songwriter
Rebecca Soni, American Gold Medal Olympic swimmer
Dana Vollmer, American Gold Medal Olympic swimmer
Neville Fields, Australian football player
Paul Bearer, wrestling manager
Nathan Cohen, New Zealand's two-time world champion and Olympic champion rower, was diagnosed with SVT in 2013 when he was 27 years old
Miley Cyrus, American singer and actress
George Plimpton, notable author, sportswriter, and literary personality
Mark Cuban, American billionaire entrepreneur and philanthropist
References External links Cardiac Disorders – Open Directory Project Archived 2017-03-16 at the Wayback Machine
Arachnophobia
Arachnophobia is a specific phobia brought about by an irrational fear of spiders and other arachnids such as scorpions. Signs and symptoms People with arachnophobia tend to feel uneasy in any area they believe could harbour spiders or that has visible signs of their presence, such as webs. If arachnophobes see a spider, they may not enter the general vicinity until they have overcome the panic attack that is often associated with their phobia. Some people scream, cry, have emotional outbursts, experience trouble breathing, sweat and experience increased heart rates when they come in contact with an area near spiders or their webs. In some extreme cases, even a picture, a toy, or a realistic drawing of a spider can trigger intense fear. Reasons Arachnophobia may be an exaggerated form of an instinctive response that helped early humans to survive, or a cultural phenomenon that is most common in predominantly European societies. Evolutionary An evolutionary reason for the phobia remains unresolved. One view, especially held in evolutionary psychology, is that the presence of venomous spiders led to the evolution of a fear of spiders, or made acquisition of a fear of spiders especially easy. Like all traits, there is variability in the intensity of fear of spiders, and those with more intense fears are classified as phobic. Being relatively small, spiders do not fit the usual criterion for a threat in the animal kingdom where size is a factor, but they can have medically significant venom and/or cause skin irritation with their setae. However, a phobia is an irrational fear as opposed to a rational fear. By ensuring that their surroundings were free from spiders, arachnophobes would have had a reduced risk of being bitten in ancestral environments, giving them a slight advantage over non-arachnophobes in terms of survival. However, having a disproportionate fear of spiders in comparison to other, potentially dangerous creatures present during Homo sapiens' environment of evolutionary adaptedness may have had drawbacks. In The Handbook of the Emotions (1993), psychologist Arne Öhman studied the pairing of an unconditioned stimulus with evolutionarily relevant fear-response neutral stimuli (snakes and spiders) versus evolutionarily irrelevant fear-response neutral stimuli (mushrooms, flowers, physical representations of polyhedra, firearms, and electrical outlets) in human subjects and found that ophidiophobia (fear of snakes) and arachnophobia required only one pairing to develop a conditioned response, while mycophobia, anthophobia, and phobias of physical representations of polyhedra, firearms, and electrical outlets required multiple pairings and went extinct without continued conditioning, whereas the conditioned ophidiophobia and arachnophobia were permanent. Psychiatrist Randolph M. Nesse notes that while conditioned fear responses to evolutionarily novel dangerous objects such as electrical outlets are possible, the conditioning is slower because such cues have no prewired connection to fear, noting further that despite the emphasis on the risks of speeding and drunk driving in drivers' education, it alone does not provide reliable protection against traffic collisions, and that nearly one-quarter of all deaths in 2014 of people aged 15 to 24 in the United States were in traffic collisions. Nesse, psychiatrist Isaac Marks, and evolutionary biologist George C. Williams have noted that people with systematically deficient responses to various adaptive phobias (e.g.
arachnophobia, ophidiophobia, basophobia) are more temperamentally careless and more likely to receive unintentional injuries that are potentially fatal, and have proposed that such deficient phobias should be classified as "hypophobia" due to their selfish genetic consequences. A 2001 study found that people could detect images of spiders among images of flowers and mushrooms more quickly than they could detect images of flowers or mushrooms among images of spiders. The researchers suggested that this was because fast response to spiders was more relevant to human evolution. Cultural An alternative view is that the dangers, such as from spiders, are overrated and not sufficient to influence evolution. Instead, inheriting phobias would have restrictive and debilitating effects upon survival, rather than being an aid. In some communities, such as in Papua New Guinea and Cambodia, spiders are included in traditional foods. This suggests arachnophobia may, at least in part, be a cultural, rather than genetic, trait. Stories about spiders in the media often contain errors and use sensationalistic vocabulary, which could contribute to the fear of spiders. Treatments The fear of spiders can be treated by any of the general techniques suggested for specific phobias. The first line of treatment is systematic desensitization – also known as exposure therapy. Before engaging in systematic desensitization, it is common to train the individual with arachnophobia in relaxation techniques, which will help keep the patient calm. Systematic desensitization can be done in vivo (with live spiders) or by getting the individual to imagine situations involving spiders, then modelling interaction with spiders for the person affected and eventually interacting with real spiders. This technique can be effective in just one session, although it generally takes more time. Recent advances in technology have enabled the use of virtual or augmented reality spiders for use in therapy. These techniques have proven to be effective. It has been suggested that exposure to short clips from the Spider-Man movies may help to reduce an individual's arachnophobia. Epidemiology Arachnophobia affects 3.5 to 6.1 percent of the global population. See also
Apiphobia, fear of bees
Entomophobia, fear of insects
Myrmecophobia, fear of ants
References External links Stiemerling, D. (1973). "Analysis of a spider and monster phobia". Z Psychosom Med Psychoanal (in German). 19 (4): 327–45. PMID 4129447. National Geographic: "Fear of Snakes, Spiders Rooted in Evolution, Study Finds"
Naegleriasis
Naegleriasis (also known as primary amoebic meningoencephalitis; PAM) is an almost invariably fatal infection of the brain by the free-living unicellular eukaryote Naegleria fowleri. Symptoms are meningitis-like and include headache, fever, nausea, vomiting, a stiff neck, confusion, hallucinations and seizures. Symptoms progress rapidly over around five days, and death usually results within one to two weeks of symptoms.N. fowleri is typically found in warm bodies of fresh water, such as ponds, lakes, rivers and hot springs. It is also found in an amoeboid or temporary flagellate stage in soil, poorly maintained municipal water supplies, water heaters, near warm-water discharges of industrial plants and in poorly chlorinated or unchlorinated swimming pools. There is no evidence of it living in salt water. As the disease is rare, it is often not considered during diagnosis.Although infection occurs very rarely, it almost inevitably results in death. Of the 450 or so naegleriasis cases in the past 60 years, only seven have survived, for a case fatality rate of 98.5%. Signs and symptoms Onset of symptoms begins one to nine days following exposure (with an average of five). Initial symptoms include changes in taste and smell, headache, fever, nausea, vomiting, back pain, and a stiff neck. Secondary symptoms are also meningitis-like including confusion, hallucinations, lack of attention, ataxia, cramp and seizures. After the start of symptoms, the disease progresses rapidly over three to seven days, with death usually occurring anywhere from seven to fourteen days later, although it can take longer. In 2013, a man in Taiwan died 25 days after being infected by Naegleria fowleri.It affects healthy children or young adults who have recently been exposed to bodies of fresh water. Some people have presented with a clinical triad of edematous brain lesions, immune suppression and fever. Scientists speculate that lower age groups are at a higher risk of contracting the disease because adolescents have a more underdeveloped and porous cribriform plate, through which the amoeba travels to reach the brain. Cause N. fowleri invades the central nervous system via the nose, specifically through the olfactory mucosa of the nasal tissues. This usually occurs as the result of the introduction of water that has been contaminated with N. fowleri into the nose during activities such as swimming, bathing or nasal irrigation.The amoeba follows the olfactory nerve fibers through the cribriform plate of the ethmoid bone into the skull. There, it migrates to the olfactory bulbs and subsequently other regions of the brain, where it feeds on the nerve tissue. The organism then begins to consume cells of the brain, piecemeal through trogocytosis, by means of an amoebostome, a unique actin-rich sucking apparatus extended from its cell surface. It then becomes pathogenic, causing primary amoebic meningoencephalitis (PAM or PAME).Primary amoebic meningoencephalitis presents symptoms similar to those of bacterial and viral meningitis. Upon abrupt disease onset, a plethora of problems arise. Endogenous cytokines, which release in response to pathogens, affect the hypothalamus thermoregulatory neurons and cause a rise in body temperature. Additionally, cytokines may act on the vascular organ of the lamina terminalis, leading to the synthesis of prostaglandin (PG) E2 which acts on the hypothalamus, resulting in an increase in body temperature. 
Also, the release of cytokines and exogenous exotoxins, coupled with an increase in intracranial pressure, stimulates nociceptors in the meninges, creating pain sensations. The release of cytotoxic molecules in the central nervous system results in extensive tissue damage and necrosis, such as damage to the olfactory nerve through lysis of nerve cells and demyelination. Specifically, the olfactory nerve and bulbs become necrotic and hemorrhagic. Spinal flexion leads to nuchal rigidity, or stiff neck, due to the stretching of the inflamed meninges. The increase in intracranial pressure stimulates the area postrema, creating sensations of nausea, and may lead to brain herniation and damage to the reticular formation. Ultimately, the increase in cerebrospinal fluid from inflammation of the meninges raises intracranial pressure and leads to the destruction of the central nervous system. Although the exact pathophysiology behind the seizures caused by PAM is unknown, scientists speculate that the seizures arise from altered meningeal permeability caused by increased intracranial pressure. Pathogenesis Naegleria fowleri propagates in warm, stagnant bodies of fresh water (typically during the summer months), and enters the central nervous system after insufflation of infected water by attaching itself to the olfactory nerve. It then migrates through the cribriform plate and into the olfactory bulbs of the forebrain, where it multiplies greatly by feeding on nerve tissue. Diagnosis N. fowleri can be grown in several kinds of liquid axenic media or on non-nutrient agar plates coated with bacteria. Escherichia coli can be used to overlay the non-nutrient agar plate, and a drop of cerebrospinal fluid sediment is added to it. Plates are then incubated at 37 °C and checked daily for clearing of the agar in thin tracks, which indicates that the trophozoites have fed on the bacteria. Detection in water is performed by centrifuging a water sample with E. coli added, then applying the pellet to a non-nutrient agar plate. After several days, the plate is microscopically inspected and Naegleria cysts are identified by their morphology. Final confirmation of the species identity can be performed by various molecular or biochemical methods. Confirmation of Naegleria presence can be done by a so-called flagellation test, in which the organism is exposed to a hypotonic environment (distilled water). Naegleria, in contrast to other amoebae, differentiates within two hours into the flagellate state. Pathogenicity can be further confirmed by exposure to high temperature (42 °C): Naegleria fowleri is able to grow at this temperature, but the nonpathogenic Naegleria gruberi is not. Prevention Michael Beach, a recreational waterborne illness specialist for the Centers for Disease Control and Prevention, stated in remarks to the Associated Press that wearing nose clips to prevent insufflation of contaminated water would be effective protection against contracting PAM, noting that "You'd have to have water going way up in your nose to begin with". Advice in a press release from Taiwan's Centers for Disease Control recommended that people prevent fresh water from entering the nostrils and avoid putting their heads down into fresh water or stirring up mud in the water with their feet.
People who develop fever, headache, nausea, or vomiting after any kind of exposure to fresh water, even if they believe no fresh water has entered the nostrils, should be taken to hospital quickly, and doctors should be informed about the history of fresh-water exposure. Treatment On the basis of the laboratory evidence and case reports, heroic doses of amphotericin B have been the traditional mainstay of PAM treatment since the first reported survivor in the United States in 1982. Treatment has often also used combination therapy with multiple other antimicrobials in addition to amphotericin, such as fluconazole, miconazole, rifampicin and azithromycin. These have shown limited success, and only when administered early in the course of an infection. Fluconazole is commonly used, as it has been shown to have synergistic effects against Naegleria when used with amphotericin in vitro. While the use of rifampicin has been common, including in all four North American cases of survival, its continued use has been questioned. It has only variable activity in vitro, and it strongly affects the therapeutic levels of other antimicrobials used by inducing cytochrome P450 pathways. In 2013, two successfully treated cases in the United States used the medication miltefosine. As of 2015, there were no data on how well miltefosine is able to reach the central nervous system. As of 2015 the U.S. CDC offered miltefosine to doctors for the treatment of free-living amoebas, including Naegleria. In one of the cases, a 12-year-old girl was given miltefosine and targeted temperature management to control cerebral edema secondary to the infection. She survived with no neurological damage. Her survival has been attributed to the combination of early diagnosis, the miltefosine medication, and targeted temperature management. The other survivor, an 8-year-old boy, was diagnosed several days after symptoms appeared and was not treated with targeted temperature management, although he did receive miltefosine. He suffered what is likely permanent neurological damage. In 2016, a 16-year-old boy also survived PAM. He was treated with the same protocol as the 12-year-old girl in 2013. He made a near-complete neurological recovery; however, he has stated that learning has been more difficult for him since contracting the disease. In 2018, a 10-year-old girl in the Spanish city of Toledo became the first person to have PAM in Spain, and was successfully treated using intravenous and intrathecal amphotericin B. Prognosis Since its first description in the 1960s, only seven people worldwide have been reported to have survived PAM out of 450 cases diagnosed, implying a fatality rate of about 98.5%. The survivors include four in the United States, one in Mexico and one in Spain. One of the US survivors had brain damage that is likely permanent, but there are two documented surviving cases in the United States who made a full recovery with no neurological damage; both were treated with the same protocols. Epidemiology The disease is rare and highly lethal: there had only been 300 cases as of 2008.
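The fatality rate quoted in the prognosis above follows directly from the reported counts (seven survivors out of roughly 450 diagnosed cases). The snippet below is only a sketch of that arithmetic, using the approximate figures given in the text rather than exact surveillance data.

```python
# Approximate counts quoted in the text above (not exact surveillance data).
cases_diagnosed = 450
survivors = 7

deaths = cases_diagnosed - survivors
case_fatality_rate = deaths / cases_diagnosed

# Prints about 98.4%, consistent with the "about 98.5%" figure cited above.
print(f"Case fatality rate: {case_fatality_rate:.1%}")
```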
Drug treatment research at Aga Khan University in Pakistan has shown that in-vitro drug susceptibility tests with some FDA approved drugs used for non-infectious diseases (digoxin and procyclidine were shown to be most effective of the drugs studied) have proved to kill Naegleria fowleri with an amoebicidal rate greater than 95%. The same source has also proposed a device for drug delivery via the transcranial route to the brain.In the US, the most common states with cases reported of PAM from N. fowleri are the southern states, with Texas and Florida having the highest prevalence. The most commonly affected age group is 5–14-year olds (those who play in water). The number of cases of infection could increase due to climate change, which was posited as the reason for three cases in Minnesota in 2010, 2012, and 2015.As of 2013, the numbers of reported cases were expected to increase simply because of better-informed diagnoses being made both in ongoing cases and in autopsy findings. History In 1899, Franz Schardinger first discovered and documented an amoeba he called Amoeba gruberi that could transform into a flagellate. The genus Naegleria was established by Alexis Alexeieff in 1912, who grouped the flagellate amoeba. He coined the term Naegleria after Kurt Nägler, who researched amoebae. It was not until 1965 that doctors Malcolm Fowler and Rodney F. Carter in Adelaide, Australia, reported the first four human cases of amoebic meningoencephalitis. These cases involved four Australian children, one in 1961 and the rest in 1965, all of whom had succumbed to the illness. Their work on amebo-flagellates has provided an example of how a protozoan can effectively live both freely in the environment, and in a human host.In 1966, Fowler termed the infection resulting from N. fowleri primary amoebic meningoencephalitis (PAM) to distinguish this central nervous system (CNS) invasion from other secondary invasions made by other amoebae such as Entamoeba histolytica. A retrospective study determined the first documented case of PAM possibly occurred in Britain in 1909. In 1966, four cases were reported in the US. By 1968 the causative organism, previously thought to be a species of Acanthamoeba or Hartmannella, was identified as Naegleria. This same year, occurrence of sixteen cases over a period of three years (1962–1965) was reported in Ústí nad Labem, Czechoslovakia. In 1970, Carter named the species of amoeba N. fowleri, after Malcolm Fowler. Society and culture Naegleria fowleri is also known as the "brain-eating amoeba". The term has also been applied to Balamuthia mandrillaris, causing some confusion between the two; Balamuthia mandrillaris is unrelated to Naegleria fowleri, and causes a different disease called granulomatous amoebic encephalitis. Unlike naegleriasis, which is usually seen in people with normal immune function, granulomatous amoebic encephalitis is usually seen in people with poor immune function, such as those with HIV/AIDS or leukemia.Naegleriasis was the topic of episodes 20 and 21 in Season 2 of the medical mystery drama House, M.D. Research The U.S. National Institutes of Health budgeted $800,000 for research on the disease in 2016. Phenothiazines have been tested in vitro and in animal models of PAM. Improving case detection through increased awareness, reporting, and information about cases might enable earlier detection of infections, provide insight into the human or environmental determinants of infection, and allow improved assessment of treatment effectiveness. 
See also Balamuthia mandrillaris – unrelated pathogenic organism that shares the same common name as N. fowleri References External links Naegleria Infection Information Page from the Centers for Disease Control and Prevention Naegleria General Information from the website of the Centers for Disease Control and Prevention
Ptosis (breasts)
Ptosis or sagging of the female breast is a natural consequence of aging. The rate at which a woman's breasts drop and the degree of ptosis depend on many factors. The key factors influencing breast ptosis over a woman's lifetime are cigarette smoking, her number of pregnancies, higher body mass index, larger bra cup size, and significant weight change. Post-menopausal women or people with collagen deficiencies (such as Ehlers–Danlos syndrome) may experience increased ptosis due to a loss of skin elasticity. Many women and medical professionals mistakenly believe that breastfeeding increases sagging. It is also commonly believed that the breast itself offers insufficient support and that wearing a bra prevents sagging, which has not been found to be true. Plastic surgeons categorize the degree of ptosis by evaluating the position of the nipple relative to the infra-mammary fold, the point at which the underside of the breasts attaches to the chest wall. In the most advanced stage, the nipples are below the fold and point toward the ground. Signs and symptoms A woman's breasts change in size, volume, and position on her chest throughout her life. In young women with large breasts, sagging may occur early in life due to the effects of gravity. It may be primarily caused by the volume and weight of the breasts being disproportionate to her body size. Impact of pregnancy During pregnancy, the ovaries and the placenta produce estrogen and progesterone. These hormones stimulate the 15 to 20 lobes of the milk-secreting glands in the breasts to develop. Women who experience multiple pregnancies repeatedly stretch the skin envelope during engorgement while lactating. A woman's breasts change in size during repeated pregnancies, as her mammary glands are engorged with milk and as she gains and loses weight with each pregnancy. In addition, when milk production stops (usually as a child is weaned), the voluminous mammary glands diminish in volume, but they still add bulk and firmness to the breast. A 2010 review found that weight gain during pregnancy and breastfeeding were not significant risk factors for ptosis. Middle-aged women In middle-aged women, breast ptosis is caused by a combination of factors. If a woman has been pregnant, postpartum hormonal changes will cause her depleted milk glands to atrophy. Breast tissue and suspensory ligaments may also be stretched if the woman is overweight or loses and gains weight. When these factors are at play, the breast prolapses, or falls forward. When a woman with sagging breasts stands, the underside or inferior skin of the breast folds over the infra-mammary fold and lies against the chest wall. The nipple-areola complex tends to move lower on the breast relative to the inframammary crease. The nipple of the breast may also tend to point downward. Post-menopausal women In post-menopausal women, breast atrophy is aggravated by the inelasticity of over-stretched, aged skin. This is due in part to the reduction in estrogen, which affects all body tissues, including breast tissue. The loss of estrogen reduces breast size and fullness. Estrogen is also essential to maintaining a fibrous protein called collagen, which makes up much of the breast's connective tissue. Ptosis scale Plastic surgeons describe the degree of breast sagging using a ptosis scale like the modified Regnault ptosis scale below: Grade I: Mild ptosis—The nipple is at the level of the infra-mammary fold and above most of the lower breast tissue.
Grade II: Moderate ptosis—The nipple is located below the infra-mammary fold but higher than most of the breast tissue hangs. Grade III: Advanced ptosis—The nipple is below the inframammary fold and at the level of maximum breast projection. Pseudoptosis—The nipple is located either at or above the infra-mammary fold, while the lower half of the breast sags below the fold. This is most often seen when a woman stops nursing, as her milk glands atrophy, causing her breast tissue to sag. Parenchymal Maldistribution—The lower breast tissue is lacking fullness, the inframammary fold is very high, and the nipple and areola are relatively close to the fold. This is usually a developmental deformity. Causes University of Kentucky plastic surgeon Brian Rinker encountered many women in his practice who attributed their sagging breasts to breastfeeding, which was also the usual belief among medical practitioners. He decided to find out if this was true, and between 1998 and 2006 he and other researchers interviewed 132 women who were seeking breast augmentation or breast lifts. They studied the womens medical history, body mass index (BMI), their number of pregnancies, their breast cup size before pregnancy, and smoking status. The study results were presented at a conference of the American Society of Plastic Surgeons.According to Rinkers research, there are several key factors. A history of cigarette smoking "breaks down a protein in the skin called elastin, which gives youthful skin its elastic appearance and supports the breast." The number of pregnancies was strongly correlated with ptosis, with the effects increasing with each pregnancy. As most women age, breasts naturally yield to gravity and tend to sag and fold over the inframammary crease, the lower attachment point to the chest wall. This is more true for larger-breasted women. The fourth reason was significant weight gain or loss (greater than 50 pounds (23 kg)). Other significant factors were higher body mass index and larger bra cup size.In Rinkers study, 55% of respondents reported an adverse change in breast shape after pregnancy. Many women mistakenly attribute the changes and their sagging breasts to breastfeeding, and as a result some are reluctant to nurse their infants. Research shows that breastfeeding is not the factor that many thought it was. Rinker concluded that "Expectant mothers should be reassured that breastfeeding does not appear to have an adverse effect upon breast appearance." Also discounted as causes affecting ptosis are weight gain during pregnancy and lack of participation in regular upper body exercise. Effect of vigorous exercise When running, breasts may move three-dimensionally: vertically, horizontally and laterally, in an overall figure-8 motion. Unrestrained movement of large breasts may contribute to sagging over time. Motion studies have revealed that when a woman runs, more than 50% of the breasts total movement is vertical, 22% is side-to-side, and 27% is in-and-out. A 2007 study found that encapsulation-type sports bras, in which each cup is separately molded, are more effective than compression-type bras, which press the breasts close to the body, at reducing total breast motion during exercise. Encapsulation bras reduce motion in two of the three planes, while compression bras reduce motion in only one plane. 
Previously, it was commonly believed that a woman with small to medium-size breasts benefited most from a compression-type sports bra, and that women with larger breasts needed an encapsulation-type sports bra. Mechanism Anatomically, a female's breasts do not contain any muscle but are composed of soft, glandular tissue. Breasts are composed of mammary glands, milk ducts, adipose tissue (fat tissue) and Cooper's ligaments. Mammary glands remain relatively constant throughout life. Fat tissue surrounds the mammary glands, and its volume will normally vary throughout life. Although the exact mechanisms that determine breast shape and size are largely unknown, the amount and distribution of fat tissue and, to a lesser extent, mammary tissue cause variations in breast size, shape and volume. Some experts believe Cooper's ligaments, which are connective tissue within the breast, provide some support within breasts, but there is no agreement on whether they provide support or simply divide breast tissue into compartments. Treatment Bras Since breasts are external organs and do not contain muscle, exercise cannot improve their shape. They are not protected from external forces and are subject to gravity. Many women mistakenly believe that breasts cannot anatomically support themselves and that wearing a brassiere will prevent their breasts from sagging later in life. Researchers, bra manufacturers, and health professionals cannot find any evidence to support the idea that wearing a bra for any amount of time slows breast ptosis. Bra manufacturers are careful to claim that bras only affect the shape of breasts while they are being worn. In fact, there is some evidence that bra use reduces the development of Cooper's ligaments, the connective tissue that supports breast shape. The atrophy from bra wearing may therefore lead to more breast sag in the long run, much as the connective tissue in a limb weakens while it is in a cast and must be re-strengthened afterward. Studies have documented that, after an initial period of adjustment, women experienced a significant increase in comfort and breast firmness from going without bras. Surgery Some women with ptosis choose to undergo plastic surgery to make their breasts less ptotic. Plastic surgeons offer several procedures for lifting sagging breasts. Surgery to correct the size, contour, and elevation of sagging breasts is called mastopexy. Women can also choose breast implants, or may undergo both procedures. The breast-lift procedure surgically elevates the parenchymal tissue (breast mass), cuts and re-sizes the skin envelope, and transposes the nipple-areola complex higher upon the breast hemisphere. If sagging is present and the woman opts not to undergo mastopexy, implants are typically placed above the muscle, to fill out the breast skin and tissue. Submuscular placement can result in deformity. In these cases, the implant appears to be high on the chest, while the natural breast tissue hangs down over the implant. See also Pencil test (breasts) References Further reading "Soutien-gorge de sport", in Thierry Adam, Gynécologie du sport (in French). Springer 2012, pp. 305–309. "Facteurs de l'évolution morphologique du sein après arrêt du port du soutien-gorge : étude ouverte préliminaire longitudinale chez 50 volontaires. Olivier Roussel; Jean-Denis Rouillon; Université de Franche-Comté. Faculté de médecine et de pharmacie" (in French). Thèse d'exercice : Médecine : Besançon : 2009.
Micrognathism
Micrognathism is a condition in which the jaw is undersized. It is also sometimes called mandibular hypoplasia. It is common in infants, but usually self-corrects during growth as the jaws increase in size. It may be a cause of abnormal tooth alignment and, in severe cases, can hamper feeding. It can also, both in adults and children, make intubation difficult, either during anesthesia or in emergency situations. Causes While not always pathological, it can present as a birth defect in multiple syndromes including: Catel–Manzke syndrome Bloom syndrome Coffin–Lowry syndrome Congenital rubella syndrome Cri du chat syndrome DiGeorge syndrome Ehlers–Danlos syndrome Fetal alcohol syndrome Hallermann–Streiff syndrome Hemifacial microsomia (as part of Goldenhar syndrome) Incontinentia pigmenti Juvenile idiopathic arthritis Marfan syndrome Möbius syndrome Noonan syndrome Pierre Robin syndrome Prader–Willi syndrome Progeria Silver–Russell syndrome Seckel syndrome Smith–Lemli–Opitz syndrome Stickler syndrome Treacher Collins syndrome Trisomy 13 (Patau syndrome) Trisomy 18 (Edwards syndrome) Trisomy 21 (Down syndrome) Wolf–Hirschhorn syndrome X0 syndrome (Turner syndrome) Diagnosis It can be detected by the naked eye as well as by dental or skull X-ray testing. Treatments Micrognathia can be treated by surgery, orthodontic braces, and modified eating methods. Early detection of the problem, and monitoring as it develops, help in understanding it and in finding the most effective treatment. See also Human mandible Macrognathism Retrognathism References External links "Micrognathia". Medline Plus. 12 May 2009. Retrieved 21 May 2011.
Common variable immunodeficiency
Common variable immunodeficiency (CVID) is an immune disorder characterized by recurrent infections and low antibody levels, specifically in immunoglobulin (Ig) types IgG, IgM and IgA. Symptoms generally include high susceptibility to infection, chronic lung disease, and inflammation and infection of the gastrointestinal tract. CVID affects males and females equally. The condition can be found in children or teens but is generally not diagnosed or recognized until adulthood. The average age of diagnosis is between 20 and 50. However, symptoms vary greatly between people. "Variable" refers to the heterogeneous clinical manifestations of this disorder, which include recurrent bacterial infections, increased risk for autoimmune disease and lymphoma, as well as gastrointestinal disease. CVID is a lifelong disease. Signs and symptoms The symptoms of CVID vary between those affected. Its main features are hypogammaglobulinemia and recurrent infections. Hypogammaglobulinemia manifests as a significant decrease in the levels of IgG antibodies, usually alongside IgA antibodies; IgM antibody levels are also decreased in about half of those affected. Infections are a direct result of the low antibody levels in the circulation, which do not adequately protect against pathogens. The microorganisms that most frequently cause infections in CVID are the bacteria Haemophilus influenzae, Streptococcus pneumoniae, and Staphylococcus aureus. Pathogens less often isolated from those affected include Neisseria meningitidis, Pseudomonas aeruginosa, and Giardia lamblia. Infections mostly affect the respiratory tract (nose, sinuses, bronchi, lungs) and the ears; they can also occur at other sites, such as the eyes, skin, and gastrointestinal tract. These infections respond to antibiotics but can recur when antibiotics are discontinued. Bronchiectasis can develop when severe recurrent pulmonary infections are left untreated. In addition to infections, people with CVID can develop complications. These include: Autoimmune manifestations, e.g. pernicious anemia, autoimmune haemolytic anemia (AHA), idiopathic thrombocytopenic purpura (ITP), psoriasis, vitiligo, rheumatoid arthritis, primary hypothyroidism, atrophic gastritis. Autoimmunity is the main complication in people with CVID, appearing in some form in up to 50% of individuals; Malignancies, particularly non-Hodgkin's lymphoma and gastric carcinoma; Enteropathy, which manifests with a blunting of intestinal villi and inflammation, and is usually accompanied by symptoms such as abdominal cramps, diarrhea, constipation, and in some cases malabsorption and weight loss. Symptoms of CVID enteropathy are similar to those of celiac disease, but do not respond to a gluten-free diet. Infectious causes must be excluded before a diagnosis of enteropathy can be made, as people with CVID are more susceptible to intestinal infections, e.g. by Giardia lamblia; Lymphocytic infiltration of tissues, which can cause enlargement of lymph nodes (lymphadenopathy), of the spleen (splenomegaly) and of the liver (hepatomegaly), as well as the formation of granulomas. In the lung this is known as granulomatous–lymphocytic interstitial lung disease. Anxiety and depression can occur as a result of dealing with the other symptoms. CVID patients generally complain of severe fatigue. As with any antibody deficiency, the most common types of infections and illnesses involve the ears, sinuses, nose, and lungs.
Common infections include: Pneumonia Ear infections Sinusitis Chronic coughing (lasting from a few weeks to many months) Gastrointestinal infections Gastrointestinal infections or inflammation are very common in those with CVID. Signs of a gastrointestinal infection include abdominal pain, nausea, bloating, vomiting, diarrhea, and weight loss. Many individuals with CVID have an impaired ability to absorb nutrients, including vitamins, proteins, minerals, fats, and sugar, within the digestive tract. Due to changes in the development of B cells, some individuals with CVID have accumulations of lymphocytes in lymphoid tissues. This can cause mildly to severely swollen lymph nodes or inflammation of the spleen. In addition, individuals with CVID are more susceptible to developing certain forms of cancer than those without the condition. The two most common cancers in common variable immunodeficiency patients are lymphoma and certain stomach cancers. The risk for these cancers is almost fifty times greater in common variable immunodeficiency patients than in those without. People with common variable immunodeficiency have trouble fighting off infections due to the lack of the antibodies that normally resist invading microbes. Due to impaired antibody development, vaccines are not effective. Recurring bacterial infections are generally found in the upper and lower respiratory tract. Many who have recurring lung infections report developing chronic lung disease and potentially life-threatening complications later in life. Causes The cause of CVID is poorly understood. Likely causes include deletions in genes that encode cell surface proteins and cytokine receptors, such as CD19, CD20, CD21, and CD80. Additionally, the disease is defined by T cell defects, namely reduced proliferative capacity. The condition is hard to diagnose, with diagnosis taking on average 6–7 years after onset. CVID is a primary immunodeficiency. The underlying causes of CVID are largely obscure. Genetic mutations can be identified as the cause of disease in about 10% of people, while familial inheritance accounts for 10–25% of cases. Rather than arising from a single genetic mutation, CVID seems to result from a variety of mutations that all contribute to a failure in antibody production. Mutations in the genes encoding ICOS, TACI, CD19, CD20, CD21, CD80 and BAFFR have been identified as causative of CVID. Susceptibility to CVID may also be linked to the major histocompatibility complex (MHC) of the genome, particularly to DR-DQ haplotypes. A mutation in the NFKB2 gene has recently been shown to cause CVID-like symptoms in a murine model. The frequency of this NFKB2 mutation in the CVID population is, however, yet to be established. Diagnosis According to a European registry study, the mean age at onset of symptoms was 26.3 years. As per the criteria laid out by ESID (European Society for Immunodeficiencies) and PAGID (Pan-American Group for Immunodeficiency), CVID is diagnosed if: the person presents with a marked decrease of serum IgG levels (<4.5 g/L) and a marked decrease below the lower limit of normal for age in at least one of the isotypes IgM or IgA; the person is four years of age or older; and the person lacks an antibody immune response to protein antigens or immunization. Diagnosis is chiefly by exclusion, i.e. alternative causes of hypogammaglobulinemia, such as X-linked agammaglobulinemia, must be excluded before a diagnosis of CVID can be made.
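The ESID/PAGID criteria quoted above amount to a simple conjunction of conditions, and the sketch below encodes them as a boolean check for illustration only. The function name, parameters, and age-specific reference flags are hypothetical placeholders, and the sketch deliberately omits the mandatory exclusion step described in the text, so it is not a clinical decision tool.

```python
def meets_esid_pagid_criteria(
    age_years: float,
    serum_igg_g_per_l: float,
    iga_below_age_norm: bool,
    igm_below_age_norm: bool,
    lacks_vaccine_or_antigen_response: bool,
) -> bool:
    """Rough sketch of the ESID/PAGID CVID criteria quoted above.

    Hypothetical helper for illustration only; it ignores the required
    exclusion of other causes of hypogammaglobulinemia (e.g. X-linked
    agammaglobulinemia), which is part of the actual diagnostic process.
    """
    marked_igg_decrease = serum_igg_g_per_l < 4.5  # g/L threshold quoted in the text
    other_isotype_low = iga_below_age_norm or igm_below_age_norm
    return (
        age_years >= 4
        and marked_igg_decrease
        and other_isotype_low
        and lacks_vaccine_or_antigen_response
    )

# Example: a 30-year-old with IgG of 2.1 g/L, low IgA, and no response to immunization.
print(meets_esid_pagid_criteria(30, 2.1, True, False, True))  # True
```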
Diagnosis is difficult because of the diversity of phenotypes seen in people with CVID. For example, serum immunoglobulin levels in people with CVID vary greatly. Generally, people can be grouped as follows: no immunoglobulin production, immunoglobulin (Ig) M production only, or both normal IgM and IgG production. Additionally, B cell numbers are also highly variable. 12% of people have no detectable B cells, 12% have reduced B cells, and 54% are within the normal range. In general, people with CVID display higher frequencies of naive B cells and lower frequencies of class-switched memory B cells. Frequencies of other B cell populations, such as IgD memory B cells, transitional B cells, and CD21 B cells, are also affected, and are associated with specific disease features. Although CVID is often thought of as a serum immunoglobulin and B cell-mediated disease, T cells can display abnormal behavior. Affected individuals typically present with low frequencies of CD4+, a T-cell marker, and decreased circulation of regulatory T cells and iNKT cell. Notably, approximately 10% of people display CD4+ T cell counts lower than 200 cells/mm3; this particular phenotype of CVID has been named LOCID (Late Onset Combined Immunodeficiency), and has a poorer prognosis than classical CVID. Types The following types of CVID have been identified, and correspond to mutations in different gene segments. Treatment Treatment options are limited, and usually include lifelong immunoglobulin replacement therapy. This therapy is thought to help reduce bacterial infections. This treatment alone is not wholly effective, and many people still experience other symptoms such as lung disease and noninfectious inflammatory symptoms. This treatment replenishes Ig subtypes that the person lacks and is given at frequent intervals for life, and is thought to help reduce bacterial infections and boost immune function. Before therapy begins, plasma donations are tested for known blood-borne pathogens, then pooled and processed to obtain concentrated IgG samples. Infusions can be administered in three different forms: intravenously (IVIg), subcutaneously (SCIg), and intramuscularly (IMIg). The administration of intravenous immunoglobulins requires the insertion of a cannula or needle in a vein, usually in the arms or hands. Because highly concentrated product is used, IVIg infusions take place every 3 to 4 weeks. Subcutaneous infusions slowly release the Ig serum underneath the skin, again through a needle, and takes place every week. Intramuscular infusions are no longer widely used, as they can be painful and are more likely to cause reactions. People often experience adverse side effects to immunoglobulin infusions, including: swelling at the insertion site (common in SCIG) chills headache nausea (common in IVIG) fatigue (common in IVIG) muscle aches and pain, or joint pain fever (common in IVIG and rare in SCIG) hives (rare) thrombotic events (rare) aseptic meningitis (rare, more common in people with SLE) anaphylactic shock (very rare)In addition to Ig replacement therapy, treatment may also involve immune suppressants, to control autoimmune symptoms of the disease, and high dose steroids like corticosteroids. In some cases, antibiotics are used to fight chronic lung disease resulting from CVID. The outlook for people varies greatly depending on their level of lung and other organ damage prior to diagnosis and treatment. Epidemiology CVID has an estimated prevalence of about 1:50,000 in Caucasians. 
The disease seems to be less prevalent amongst Asians and African-Americans. Males and females are equally affected; however, among children, boys predominate. A recent study of people in Europe with primary immunodeficiencies found that 30% had CVID, as opposed to a different immunodeficiency. 10–25% of people inherited the disease, typically through autosomal-dominant inheritance. Given the rarity of the disease, it is not yet possible to generalize on disease prevalence among ethnic and racial groups. CVID shortens the life-span; but no study currently has a median age recorded. One study suggests the median age of death for men and women is 42 and 44 years old, respectively, but most patients involved in the study are still alive. Those people with accompanying disorders had the worst prognosis (50% survival 33 years after diagnosis) and those people with only CVID-caused frequent infections had the longest survival rates, with another study stating a life expectancy almost equalling that of the general UK population. Additionally, people with CVID with one or more noninfectious complications have an 11 times higher risk of death as compared to people with only infections. History Immunodeficiencies comprise many diseases and are genetic defects affecting the immune system. There are roughly 150 immunodeficiencies spanning over 120 genetic defects.Charles Janeway Sr. is generally credited with the first description of a case of CVID in 1953. The case involved a 39-year-old who had recurrent infections, bronchiectasis, and meningitis. CVID has since emerged as the predominant class of primary antibody deficiencies. It is thought to affect between 1 in 25,000 to 1 in 50,000 people worldwide. Though described in 1953, there was no standard definition for CVID until the 1990s, which caused widespread confusion during diagnosis. During the 1990s, the European Society for Immunodeficiency (ESID) and Pan-American Group for Immunodeficiency (PAGID) developed diagnostic criteria, including minimum age of diagnosis and the need to exclude other conditions, to describe the disease. These criteria were published in 1999 and since that time, some aspects, like increasing the minimum age, have been changed. Research Current research is aimed at studying large cohorts of people with CVID in an attempt to better understand age of onset, as well as mechanism, genetic factors, and progression of the disease.Funding for research in the US is provided by the National Institutes of Health. Key research in the UK was previously funded by the Primary Immunodeficiency Association (PiA) until its closure in January 2012, and funding is raised through the annual Jeans for Genes campaign. Current efforts are aimed at studying the following: Causes of complications. Little is known about why such diverse complications arise during treatment Underlying genetic factors. Though many polymorphisms and mutations have been identified, their respective roles in CVID development are poorly understood, and not represented in all people with CVID. Finding new ways to study CVID. Given that CVID arises from more than one gene, gene knock-out methods are unlikely to be helpful. It is necessary to seek out disease related polymorphisms by screening large populations of people with CVID, but this is challenging given the rarity of the disease. References Moris G.; Garcia-Monco JC (1999). "The Challenge of Drug-Induced Aseptic Meningitis". Archives of Internal Medicine. 159 (11): 1185–1194. doi:10.1001/archinte.159.11.1185. 
PMID 10371226. (IVIG and Aseptic Meningitis, association with SLE) External links GeneReviews/NCBI/NIH/UW entry on Common Variable Immune Deficiency Overview Archived 2010-06-10 at the Wayback Machine
Ocular hypertension
Ocular hypertension is the presence of elevated fluid pressure inside the eye (intraocular pressure), usually with no optic nerve damage or visual field loss. For most individuals, the normal range of intraocular pressure is between 10 mmHg and 21 mmHg. Elevated intraocular pressure is an important risk factor for glaucoma. One study found that topical ocular hypotensive medication delays or prevents the onset of primary open-angle glaucoma. Accordingly, most individuals with consistently elevated intraocular pressures of greater than 21 mmHg, particularly if they have other risk factors, are treated in an effort to prevent vision loss from glaucoma. Pathophysiology The pressure within the eye is maintained by the balance between the fluid that enters the eye through the ciliary body and the fluid that exits the eye through the trabecular meshwork. Diagnosis The condition is diagnosed using ocular tonometry and glaucoma evaluation. Increased intraocular pressure (IOP) without glaucomatous changes in the optic disc or visual field is considered ocular hypertension. Treatment Ocular hypertension is treated with either medications or laser therapy. Medications that lower intraocular pressure work by decreasing aqueous humor production and/or increasing aqueous humor outflow. Laser trabeculoplasty works by increasing outflow. The cannabinoids found in Cannabis sativa and Cannabis indica (marijuana) have been shown to reduce intraocular pressure by up to 50% for approximately four to five hours. However, due to the short duration of effect, significant side-effect profile, and lack of research proving efficacy, the American Glaucoma Society issued a position statement in 2009 regarding the use of marijuana as a treatment for glaucoma. Research The LiGHT trial compared the effectiveness of eye drops and selective laser trabeculoplasty for ocular hypertension and open-angle glaucoma. Both treatments contributed to a similar quality of life, but most people undergoing laser treatment were able to stop using eye drops. Laser trabeculoplasty was also shown to be more cost-effective. References External links eMedicine - Ocular Hypertension
Dracunculiasis
Dracunculiasis, also called Guinea-worm disease, is a parasitic infection by the Guinea worm, Dracunculus medinensis. A person becomes infected by drinking water containing water fleas infected with guinea worm larvae. The worms penetrate the digestive tract and escape into the body. Around a year later, the adult worm migrates to an exit site – usually a lower limb – and induces an intensely painful blister on the skin. The blister eventually bursts to form an intensely painful open wound, from which the worm slowly crawls over several weeks. The wound remains painful throughout the worms emergence, disabling the infected person for the three to ten weeks it takes the worm to emerge. During this time, the open wound can become infected with bacteria, leading to death in around 1% of cases.There is no medication to treat dracunculiasis. Instead, the mainstay of treatment is the careful wrapping of the emerging worm around a small stick to encourage its exit. Each day, a few more centimeters of the worm emerge, and the stick is turned to maintain gentle tension. With too much tension, the worm can break and die in the wound, causing severe pain and swelling at the ulcer site. Dracunculiasis is a disease of extreme poverty, occurring in places with poor access to clean drinking water. Prevention efforts center on filtering drinking water to remove water fleas, as well as public education campaigns to discourage people from soaking their emerging worms in sources of drinking water. Humans have had dracunculiasis since at least 1,000 BCE, and accounts consistent with dracunculiasis appear in surviving documents from physicians of antiquity. In the 19th and early 20th centuries, dracunculiasis was widespread across much of Africa and South Asia, affecting as many as 48 million people per year. The effort to eradicate dracunculiasis began in the 1980s following the successful eradication of smallpox. By 1995, every country with endemic dracunculiasis had established a national eradication program. In the ensuing years, dracunculiasis cases have dropped precipitously, and 15 previously endemic countries have been certified to have eradicated dracunculiasis, leaving the disease endemic in just four countries: Chad, Ethiopia, Mali, and South Sudan. A record low 15 cases of dracunculiasis were reported worldwide in 2021. If the eradication program succeeds, dracunculiasis will become the second human disease ever eradicated. Signs and symptoms The first signs of dracunculiasis occur around a year after infection, as the full-grown female worm prepares to leave the infected persons body. As the worm migrates to its exit site – typically the lower leg, though they can emerge anywhere on the body – some people have allergic reactions, including hives, fever, dizziness, nausea, vomiting, and diarrhea. Upon reaching its destination, the worm forms a fluid-filled blister under the skin. Over 1–3 days, the blister grows larger, begins to cause severe burning pain, and eventually bursts leaving a small open wound. The wound remains intensely painful as the worm slowly emerges over several weeks to months.In an attempt to alleviate the excruciating burning pain, the host will nearly always attempt to submerge the affected body part in water. When this occurs and the worm is exposed to water, it spews a white substance into the water containing thousands of larvae. 
As the worm emerges, the open blister often becomes infected with bacteria, resulting in redness and swelling, the formation of abscesses, or in severe cases gangrene, sepsis or lockjaw. When the secondary infection is near a joint (typically the ankle), the damage to the joint can result in stiffness, arthritis, or contractures.Infected people commonly harbor multiple worms – with an average 1.8 worms per person; up to 40 at a time – which will emerge from separate blisters at the same time. 90% of worms emerge from the legs or feet. However, worms can emerge from anywhere on the body. Cause Dracunculiasis is caused by infection with the roundworm Dracunculus medinensis. D. medinensis larvae reside within small aquatic crustaceans called copepods or water fleas. When humans drink the water, they can unintentionally ingest infected copepods. During digestion the copepods die, releasing the D. medinensis larvae. The larvae exit the digestive tract by penetrating the stomach and intestine, taking refuge in the abdomen or retroperitoneal space. Over the next two to three months the larvae develop into adult male and female worms. The male remains small at 4 cm (1.6 in) long and 0.4 mm (0.016 in) wide; the female is comparatively large, often over 100 cm (39 in) long and 1.5 mm (0.059 in) wide. Once the worms reach their adult size they mate, and the male dies. Over the ensuing months, the female migrates to connective tissue or along bones, and continues to develop.About a year after the initial infection, the female migrates to the skin, forms an ulcer, and emerges. When the wound touches freshwater, the female spews a milky-white substance containing hundreds of thousands of larvae into the water. Over the next several days as the female emerges from the wound, she can continue to discharge larvae into surrounding water. The larvae are eaten by copepods, and after two to three weeks of development, they are infectious to humans again. Diagnosis Dracunculiasis is diagnosed by visual examination – the thin white worm emerging from the blister is unique to this disease. Dead worms sometimes calcify and can be seen in the subcutaneous tissue by X-ray. Treatment There is no medicine to kill D. medinensis or prevent it from causing disease once within the body. Instead, treatment focuses on slowly and carefully removing the worm from the wound over days to weeks. Once the blister bursts and the worm begins to emerge, the wound is soaked in a bucket of water, allowing the worm to empty itself of larvae away from a source of drinking water. As the first part of the worm emerges, it is typically wrapped around a piece of gauze or a stick to maintain steady tension on the worm, encouraging its exit. Each day, several centimeters of the worm emerge from the blister, and the stick is wound to maintain tension. This is repeated daily until the full worm emerges, typically within a month. If too much pressure is applied at any point, the worm can break and die, leading to severe swelling and pain at the site of the ulcer.Treatment for dracunculiasis also tends to include regular wound care to avoid infection of the open ulcer while the worm is leaving. The U.S. Centers for Disease Control and Prevention (CDC) recommends cleaning the wound before the worm emerges. Once the worm begins to exit the body, the CDC recommends daily wound care: cleaning the wound, applying antibiotic ointment, and replacing the bandage with fresh gauze. 
Painkillers like aspirin or ibuprofen can help ease the pain of the worms exit. Outcomes Dracunculiasis is a debilitating disease, causing substantial disability in around half of those infected. People with worms emerging can be disabled for the three to ten weeks it takes the worms to fully emerge. When worms emerge near joints, the inflammation around a dead worm, or infection of the open wound can result in permanent stiffness, pain, or destruction of the joint. Some people with dracunculiasis have continuing pain for 12 to 18 months after the worm has emerged. Around 1% of dracunculiasis cases result in death from secondary infections of the wound.When dracunculiasis was widespread, it would often affect entire villages at once. Outbreaks occurring during planting and harvesting seasons severely impair a communitys agricultural operations – earning dracunculiasis the moniker "empty granary disease" in some places. Communities affected by dracunculiasis also see reduced school attendance as children of affected parents must take over farm or household duties, and affected children may be physically prevented from walking to school for weeks.Infection does not create immunity, so people can repeatedly experience dracunculiasis throughout their lives. Prevention There is no vaccine for dracunculiasis, and once infected with D. medinensis there is no way to prevent the disease from running its full course. Consequently, nearly all effort to reduce the burden of dracunculiasis focuses on preventing the transmission of D. medinensis from person to person. This is primarily accomplished by filtering drinking water to physically remove copepods. Nylon filters, finely woven cloth, or specialized filter straws are all effective means of copepod removal. Sources of drinking water can be treated with the larvicide temephos, which kills copepods. Where possible, open sources of drinking water are replaced by deep wells that can serve as new sources of clean water. Public education campaigns inform people in affected areas how dracunculiasis spreads and encourage those with the disease to avoid soaking their wounds in bodies of water that are used for drinking. Epidemiology Dracunculiasis is nearly eradicated, with just 15 cases reported worldwide in 2021. This is down from 27 cases in 2020, and dramatically less than the estimated 3.5 million annual cases in 20 countries in 1986 – the year the World Health Assembly called for dracunculiasis eradication. Dracunculiasis remains endemic in just four countries: Chad, Ethiopia, Mali, and South Sudan.Dracunculiasis is a disease of extreme poverty, occurring in places where there is poor access to clean drinking water. Cases tend to be split roughly equally between males and females, and can occur in all age groups. Within a given place, dracunculiasis risk is linked to occupation; people who farm or fetch drinking water are most likely to be infected.Cases of dracunculiasis have a seasonal cycle, though the timing varies by location. Along the Sahara deserts southern edge, cases peak during the mid-year rainy season (May–October) when stagnant water sources are more abundant. Along the Gulf of Guinea, cases are more common during the dry season (October–March) when flowing water sources dry up. History Dracunculiasis has been with humans for at least 3,000 years, as the remnants of a guinea worm infection have been found in the mummy of a girl entombed in Egypt around 1,000 BCE. 
Diseases consistent with the effects of dracunculiasis are referenced by writers throughout antiquity. The disease of "fiery serpents" that plagues the Hebrews in the Old Testament (around 1250 BCE) is often attributed to dracunculiasis. Plutarchs Symposiacon refers to a (now lost) description of a similar disease by the 2nd century BCE writer Agatharchides concerning a "hitherto unheard of disease" in which "small worms issue from [peoples] arms and legs... insinuating themselves between the muscles [to] give rise to horrible sufferings". Many of antiquitys famous physicians also write of diseases consistent with dracunculiasis, including Galen, Rhazes, and Avicenna; though there was some disagreement as to the nature of the disease, with some attributing it to a worm, while others considered it to be a corrupted part of the body emerging. In his 1674 treatise on dracunculiasis, Georg Hieronymous Velschius first proposed that the Rod of Asclepius, a common symbol of the medical profession, depicts a recently extracted guinea worm.Carl Linnaeus included the guinea worm in his 1758 edition of Systema Naturae, naming it Gordius medinensis. In Johann Friedrich Gmelins 13th edition of Systema Naturae (1788), he renamed the worm Filaria medinensis, leaving Gordius for free-living worms. Henry Bastian authored the first detailed description of the worm itself, published in 1863. The following year, in his book Entozoa, Thomas Spencer Cobbold used the name Dracunculus medinensis, which was enshrined as the official name by the International Commission on Zoological Nomenclature in 1915. Despite longstanding knowledge that the worm was associated with water, the lifecycle of D. medinensis was the topic of protracted debate. Alexei Pavlovich Fedchenko filled a major gap with his 1870 publication describing that D. medinensis larvae can infect and develop inside Cyclops crustaceans. The next step was shown by Robert Thomson Leiper, who described in a 1907 paper that monkeys fed D. medinensis-infected Cyclops developed mature guinea worms, while monkeys directly fed D. medinensis larvae did not.In the 19th and 20th centuries, dracunculiasis was widespread across nearly all of Africa and South Asia, though we lack exact case counts from the pre-eradication era. In a 1947 article in the Journal of Parasitology, Norman R. Stoll used rough estimates of populations in endemic areas to suggest that there could be as many as 48 million cases of dracunculiasis per year. In 1976, the WHO estimated the global burden at 10 million cases per year. Ten years later, as the eradication effort was beginning, the WHO estimated 3.5 million cases per year worldwide. Etymology Dracunculiasis Latin name, Dracunculus medinensis ("little dragon from Medina"), derives from its one-time high incidence in the city of Medina (in modern Saudi Arabia), and its common name, Guinea worm, is due to a similar past high incidence along the Guinea coast of West Africa. It is no longer endemic in either location. Eradication The campaign to eradicate dracunculiasis began at the urging of the CDC in 1980. Following smallpox eradication (last case in 1977; eradication certified in 1981), dracunculiasis was considered an achievable eradication target since it was relatively uncommon and preventable with only behavioral changes. 
In 1981, the steering committee for the United Nations International Drinking Water Supply and Sanitation Decade (a program to improve global drinking water during the decade from 1981 to 1990) adopted the goal of eradicating dracunculiasis as part of their efforts. The following June, an international meeting termed "Workshop on Opportunities for Control of Dracunculiasis" concluded that dracunculiasis could be eradicated through public education, drinking water improvement, and larvicide treatments. In response, India began its national eradication program in 1983. In 1986, the 39th World Health Assembly issued a statement endorsing dracunculiasis eradication and calling on member states to craft eradication plans. The same year, The Carter Center began collaborating with the government of Pakistan to initiate its national program, which then launched in 1988. By 1996, national eradication programs had been launched in every country with endemic dracunculiasis: Ghana and Nigeria in 1989; Cameroon in 1991; Togo, Burkina Faso, Senegal, and Uganda in 1992; Benin, Mauritania, Niger, Mali, and Côte dIvoire in 1993; Sudan, Kenya, Chad, and Ethiopia in 1994; Yemen and the Central African Republic in 1995.Each national eradication program had three phases. The first phase consisted of a nationwide search to identify the extent of dracunculiasis transmission and develop national and regional plans of action. The second phase involved the training and distribution of staff and volunteers to provide public education village-by-village, surveil for cases, and deliver water filters. This continued and evolved as needed until the national burden of disease was very low. Then in a third phase, programs intensified surveillance efforts with the goal of identifying each case within 24 hours of the worm emerging and preventing the person from contaminating drinking water supplies. Most national programs offered voluntary in-patient centers, where those affected could stay and receive food and care until their worms were removed.In May 1991, the 44th World Health Assembly called for an international certification system to verify dracunculiasis eradication country-by-country. To this end, in 1995 the WHO established the International Commission for the Certification of Dracunculiasis Eradication (ICCDE). Once a country reports zero cases of dracunculiasis for a calendar year, the ICCDE considers that country to have interrupted guinea worm transmission, and is then in the "precertification phase". If the country repeats this feat with zero cases in each of the next three calendar years, the ICCDE sends a team to the country to assess the countrys disease surveillance systems and to verify the countrys reports. The ICCDE can then formally recommend the WHO Director-General certify a country as free of dracunculiasis.Since the initiation of the global eradication program, the ICCDE has certified 15 of the original endemic countries as having eradicated dracunculiasis: Pakistan in 1997; India in 2000; Senegal and Yemen in 2004; the Central African Republic and Cameroon in 2007; Benin, Mauritania, and Uganda in 2009; Burkina Faso and Togo in 2011; Côte dIvoire, Niger, and Nigeria in 2013; and Ghana in 2015. Other animals In addition to humans, D. medinensis can infect dogs. Infections of domestic dogs have been particularly common in Chad, where they helped reignite dracunculiasis transmission in 2010. Dogs are thought to be infected by eating a paratenic host, likely a fish or amphibian. 
As with humans, prevention efforts in dogs have focused on encouraging people in affected areas to bury fish entrails and to identify and tie up dogs with emerging worms so that the dogs cannot reach drinking water sources until after the worms have emerged. Domestic ferrets can be infected with D. medinensis in laboratory settings, and have been used as an animal disease model for human dracunculiasis. Different Dracunculus species can infect snakes, turtles, and other mammals. Animal infections are most widespread in snakes, with nine different species of Dracunculus described in snakes in the United States, Brazil, India, Vietnam, Australia, Papua New Guinea, Benin, Madagascar, and Italy. The only other reptiles affected are snapping turtles, with infected common snapping turtles described in several U.S. states and a single infected South American snapping turtle described in Costa Rica. Infections of other mammals are limited to the Americas. Raccoons in the U.S. and Canada are most widely affected, particularly by D. insignis; however, Dracunculus worms have also been reported in American skunks, coyotes, foxes, opossums, domestic dogs, domestic cats, and (rarely) muskrats and beavers. External links "Guinea Worm Disease Eradication Program", Carter Center. Nicholas D. Kristof of the New York Times follows a young Sudanese boy with a Guinea worm infection who is quarantined for treatment as part of the Carter program. Tropical Medicine Central Resource: "Guinea Worm Infection (Dracunculiasis)". World Health Organization on dracunculiasis.
Macroglossia
Macroglossia is the medical term for an unusually large tongue. Severe enlargement of the tongue can cause cosmetic and functional difficulties in speaking, eating, swallowing and sleeping. Macroglossia is uncommon, and usually occurs in children. There are many causes. Treatment depends upon the exact cause. Signs and symptoms Although it may be asymptomatic, symptoms usually are more likely to be present and more severe with larger tongue enlargements. Signs and symptoms include: Dyspnea - difficult, noisy breathing, obstructive sleep apnea or airway obstruction Dysphagia - difficulty swallowing and eating Dysphonia - disrupted speech, possibly manifest as lisping Sialorrhea - drooling Angular cheilitis - sores at the corners of the mouth Crenated tongue - indentations on the lateral borders of the tongue caused by pressure from teeth ("pie crust tongue") Open bite malocclusion - a type of malocclusion of the teeth Mandibular prognathism - enlarged mandible Mouth breathing Orthodontic abnormalities - including diastema and tooth spacingA tongue that constantly protrudes from the mouth is vulnerable to drying out, ulceration, infection or even necrosis. Causes Macroglossia may be caused by a wide variety of congenital and acquired conditions. Isolated macroglossia has no determinable cause. The most common causes of tongue enlargement are vascular malformations (e.g. lymphangioma or hemangioma) and muscular hypertrophy (e.g. Beckwith–Wiedemann syndrome or hemihyperplasia). Enlargement due to lymphangioma gives the tongue a pebbly appearance with multiple superficial dilated lymphatic channels. Enlargement due to hemihyperplasia is unilateral. In edentulous persons, a lack of teeth leaves more room for the tongue to expand into laterally, which can create problems with wearing dentures and may cause pseudomacroglossia.Amyloidosis is an accumulation of insoluble proteins in tissues that impedes normal function. This can be a cause of macroglossia if amyloid is deposited in the tissues of the tongue, which gives it a nodular appearance. Beckwith–Wiedemann syndrome is a rare hereditary condition, which may include other defects such as omphalocele, visceromegaly, gigantism or neonatal hypoglycemia. The tongue may show a diffuse, smooth generalized enlargement. The face may show maxillary hypoplasia causing relative mandibular prognathism. Apparent macroglossia can also occur in Down syndrome. The tongue has a papillary, fissured surface. Macroglossia may be a sign of hypothyroid disorders. Other causes include mucopolysaccharidosis, neurofibromatosis, multiple endocrine neoplasia type 2B, myxedema, acromegaly, angioedema, tumors (e.g. carcinoma), Glycogen storage disease type 2, Simpson–Golabi–Behmel syndrome, Triploid syndrome, trisomy 4p, fucosidosis, alpha-mannosidosis, Klippel–Trénaunay syndrome, cardiofaciocutaneous syndrome, Ras pathway disorders, transient neonatal diabetes, and lingual thyroid. Diagnosis Macroglossia is usually diagnosed clinically. Sleep endoscopy and imaging may be used for assessment of obstructive sleep apnea. The initial evaluation of all patients with macroglossia may involve abdominal ultrasound and molecular studies for Beckwith–Wiedemann syndrome. Classification The ICD-10 lists macroglossia under "other congenital malformations of the digestive system". 
Definitions of macroglossia have been proposed, including "a tongue that protrudes beyond the teeth during [the] resting posture" and "if there is an impression of a tooth on the lingual border when the patients slightly open their mouths". Others have suggested there is no objective definition of what constitutes macroglossia. Some propose a distinction between true macroglossia, when histologic abnormalities correlate with the clinical findings of tongue enlargement, and relative macroglossia, where histology does not provide a pathologic explanation for the enlargement. Common examples of true macroglossia are vascular malformations, muscular enlargement and tumors, whilst Down syndrome is an example of relative macroglossia. Pseudomacroglossia refers to a tongue that is of normal size but gives a false impression of being too large in relation to adjacent anatomical structures. The Myer classification subdivides macroglossia into generalized or localized. Treatment Treatment and prognosis of macroglossia depend upon its cause, and also upon the severity of the enlargement and the symptoms it is causing. No treatment may be required for mild cases or cases with minimal symptoms. Speech therapy may be beneficial, or surgery to reduce the size of the tongue (reduction glossectomy). Treatment may also involve correction of orthodontic abnormalities that may have been caused by the enlarged tongue. Treatment of any underlying systemic disease may be required, e.g. radiotherapy. Epidemiology Macroglossia is uncommon, and usually occurs in children. Macroglossia has been reported to have a positive family history in 6% of cases. The National Organization for Rare Disorders lists macroglossia as a rare disease (affecting fewer than 200,000 individuals in the US).
Methemoglobinemia
Methemoglobinemia, or methaemoglobinaemia, is a condition of elevated methemoglobin in the blood. Symptoms may include headache, dizziness, shortness of breath, nausea, poor muscle coordination, and blue-colored skin (cyanosis). Complications may include seizures and heart arrhythmias.Methemoglobinemia can be due to certain medications, chemicals, or food or it can be inherited from a persons parents. Substances involved may include benzocaine, nitrates, or dapsone. The underlying mechanism involves some of the iron in hemoglobin being converted from the ferrous [Fe2+] to the ferric [Fe3+] form. The diagnosis is often suspected based on symptoms and a low blood oxygen that does not improve with oxygen therapy. Diagnosis is confirmed by a blood gas.Treatment is generally with oxygen therapy and methylene blue. Other treatments may include vitamin C, exchange transfusion, and hyperbaric oxygen therapy. Outcomes are generally good with treatment. Methemoglobinemia is relatively uncommon, with most cases being acquired rather than genetic. Signs and symptoms Signs and symptoms of methemoglobinemia (methemoglobin level above 10%) include shortness of breath, cyanosis, mental status changes (~50%), headache, fatigue, exercise intolerance, dizziness, and loss of consciousness.People with severe methemoglobinemia (methemoglobin level above 50%) may exhibit seizures, coma, and death (level above 70%). Healthy people may not have many symptoms with methemoglobin levels below 15%. However, people with co-morbidities such as anemia, cardiovascular disease, lung disease, sepsis, or who have abnormal hemoglobin species (e.g. carboxyhemoglobin, sulfhemoglobinemia or sickle hemoglobin) may experience moderate to severe symptoms at much lower levels (as low as 5–8%). Cause Acquired Methemoglobinemia may be acquired. Classical drug causes of methemoglobinaemia include various antibiotics (trimethoprim, sulfonamides, and dapsone), local anesthetics (especially articaine, benzocaine, prilocaine, and lidocaine), and aniline dyes, metoclopramide, rasburicase, umbellulone, chlorates, bromates, and nitrites. Nitrates are suspected to cause methemoglobinemia.In otherwise healthy individuals, the protective enzyme systems normally present in red blood cells rapidly reduce the methemoglobin back to hemoglobin and hence maintain methemoglobin levels at less than one percent of the total hemoglobin concentration. Exposure to exogenous oxidizing drugs and their metabolites (such as benzocaine, dapsone, and nitrates) may lead to an increase of up to a thousandfold of the methemoglobin formation rate, overwhelming the protective enzyme systems and acutely increasing methemoglobin levels.Infants under 6 months of age have lower levels of a key methemoglobin reduction enzyme (NADH-cytochrome b5 reductase) in their red blood cells. This results in a major risk of methemoglobinemia caused by nitrates ingested in drinking water, dehydration (usually caused by gastroenteritis with diarrhea), sepsis, or topical anesthetics containing benzocaine or prilocaine resulting in blue baby syndrome. Nitrates used in agricultural fertilizers may leak into the ground and may contaminate well water. The current EPA standard of 10 ppm nitrate-nitrogen for drinking water is specifically set to protect infants. Benzocaine applied to the gums or throat (as commonly used in baby teething gels, or sore throat lozenges) can cause methemoglobinemia. 
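The symptom thresholds quoted under Signs and symptoms above (symptoms typical above 10%, severe effects above 50%, potentially fatal above 70%, few symptoms below about 15% in otherwise healthy people, and symptoms as low as 5-8% with co-morbidities) are essentially a set of bands. The following minimal Python sketch only restates those numbers; the function name and banding are this sketch's own and it is not a clinical tool:

    def methemoglobin_severity_band(met_hb_percent: float, comorbid: bool = False) -> str:
        """Rough restatement of the methemoglobin-level bands quoted above.

        'comorbid' stands in for anemia, cardiovascular or lung disease, sepsis,
        or abnormal hemoglobin species, which the text says can make levels as
        low as 5-8% symptomatic. Illustrative only.
        """
        if met_hb_percent > 70:
            return "described as potentially fatal"
        if met_hb_percent > 50:
            return "severe: seizures and coma are possible"
        if met_hb_percent > 10:
            return "typically symptomatic: cyanosis, headache, dyspnea, fatigue, dizziness"
        if comorbid and met_hb_percent >= 5:
            return "may be moderately to severely symptomatic despite the low level"
        return "often few or no symptoms in otherwise healthy people"

    print(methemoglobin_severity_band(12))                 # typically symptomatic
    print(methemoglobin_severity_band(6, comorbid=True))   # symptomatic due to co-morbidity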
Genetic Due to a deficiency of the enzyme diaphorase I (cytochrome b5 reductase), methemoglobin levels rise and the blood of met-Hb patients has reduced oxygen-carrying capacity. Instead of being red in color, the arterial blood of met-Hb patients is brown. This results in the skin of white patients gaining a bluish hue. Hereditary met-Hb is caused by a recessive gene. If only one parent has this gene, offspring will have normal-hued skin, but if both parents carry the gene, there is a chance the offspring will have blue-hued skin.Another cause of congenital methemoglobinemia is seen in patients with abnormal hemoglobin variants such as hemoglobin M (HbM), or hemoglobin H (HbH), which are not amenable to reduction despite intact enzyme systems.Methemoglobinemia can also arise in patients with pyruvate kinase deficiency due to impaired production of NADH – the essential cofactor for diaphorase I. Similarly, patients with glucose-6-phosphate dehydrogenase deficiency may have impaired production of another co-factor, NADPH. Pathophysiology The affinity for oxygen of ferric iron is impaired. The binding of oxygen to methemoglobin results in an increased affinity for oxygen in the remaining heme sites that are in ferrous state within the same tetrameric hemoglobin unit. This leads to an overall reduced ability of the red blood cell to release oxygen to tissues, with the associated oxygen–hemoglobin dissociation curve therefore shifted to the left. When methemoglobin concentration is elevated in red blood cells, tissue hypoxia may occur.Normally, methemoglobin levels are <1%, as measured by the CO-oximetry test. Elevated levels of methemoglobin in the blood are caused when the mechanisms that defend against oxidative stress within the red blood cell are overwhelmed and the oxygen carrying ferrous ion (Fe2+) of the heme group of the hemoglobin molecule is oxidized to the ferric state (Fe3+). This converts hemoglobin to methemoglobin, resulting in a reduced ability to release oxygen to tissues and thereby hypoxia. This can give the blood a bluish or chocolate-brown color. Spontaneously formed methemoglobin is normally reduced (regenerating normal hemoglobin) by protective enzyme systems, e.g., NADH methemoglobin reductase (cytochrome-b5 reductase) (major pathway), NADPH methemoglobin reductase (minor pathway) and to a lesser extent the ascorbic acid and glutathione enzyme systems. Disruptions with these enzyme systems lead to methemoglobinemia. Hypoxia occurs due to the decreased oxygen-binding capacity of methemoglobin, as well as the increased oxygen-binding affinity of other subunits in the same hemoglobin molecule, which prevents them from releasing oxygen at normal tissue oxygen levels. Diagnosis The diagnosis of methemoglobinemia is made with the typical symptoms, a suggestive history, low oxygen saturation on pulse oximetry measurements (SpO2) and these symptoms (cyanosis and hypoxia) failing to improve on oxygen treatment. The definitive test would be obtaining either CO-oximeter or a methemoglobin level on an arterial blood gas test. Arterial blood with an elevated methemoglobin level has a characteristic chocolate-brown color as compared to normal bright red oxygen-containing arterial blood; the color can be compared with reference charts.The SaO2 calculation in the arterial blood gas analysis is falsely normal, as it is calculated under the premise of hemoglobin either being oxyhemoglobin or deoxyhemoglobin. 
However, co-oximetry can distinguish methemoglobin and report both its concentration and its percentage of total hemoglobin. At the same time, the SpO2 reading from pulse oximetry is falsely high, because methemoglobin absorbs light at both of the wavelengths the pulse oximeter uses to calculate the ratio of oxyhemoglobin to deoxyhemoglobin. For example, with a methemoglobin level of 30-35%, this ratio of light absorbance is 1.0, which translates into a falsely high SpO2 of 85%. Differential diagnosis Other conditions that can cause bluish skin include argyria, sulfhemoglobinemia, heart failure, amiodarone-induced bluish skin pigmentation and acrodermatitis enteropathica. Treatment Methemoglobinemia can be treated with supplemental oxygen and methylene blue. Methylene blue is given as a 1% solution (10 mg/mL) at a dose of 1 to 2 mg/kg, administered intravenously slowly over five minutes. Although the response is usually rapid, the dose may be repeated if the level of methemoglobin is still high one hour after the initial infusion. Methylene blue inhibits monoamine oxidase, and serotonin toxicity can occur if it is taken with an SSRI (selective serotonin reuptake inhibitor) medicine. Methylene blue restores the iron in hemoglobin to its normal (reduced) oxygen-carrying state. This is achieved by providing an artificial electron acceptor (such as methylene blue or flavin) for NADPH methemoglobin reductase (red blood cells usually lack one; the presence of methylene blue allows the enzyme to function at five times its normal level). The NADPH is generated via the hexose monophosphate shunt. Genetically induced chronic low-level methemoglobinemia may be treated with daily oral methylene blue. Also, vitamin C can occasionally reduce cyanosis associated with chronic methemoglobinemia, and may be helpful in settings in which methylene blue is unavailable or contraindicated (e.g., in an individual with G6PD deficiency). Diaphorase (cytochrome b5 reductase) normally contributes only a small percentage of the red blood cell's reducing capacity, but can be pharmacologically activated by exogenous cofactors (such as methylene blue) to five times its normal level of activity. 
Deeny, who would later become the Chief Medical Officer of the Republic of Ireland, prescribed a course of ascorbic acid and sodium bicarbonate. In the first case, there was a marked change in appearance by the eighth day of treatment, and by the twelfth day of treatment the patient's complexion was normal. In the second case, the patient's complexion returned to normal over a month of treatment. See also Carbon monoxide poisoning Hemoglobinemia
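Returning to the methylene blue dosing quoted in the Treatment section above (1 to 2 mg/kg of a 1% solution, i.e. 10 mg per mL), the conversion from a weight-based dose to an injected volume is simple arithmetic. The Python sketch below only illustrates that arithmetic; the function name is this sketch's own and it is not clinical guidance:

    def methylene_blue_volume_ml(weight_kg: float, dose_mg_per_kg: float = 1.0) -> float:
        """Convert the 1-2 mg/kg dose quoted above into millilitres of 1% solution.

        A 1% methylene blue solution contains 10 mg per mL. Illustrative only.
        """
        if not 1.0 <= dose_mg_per_kg <= 2.0:
            raise ValueError("the text quotes a dose range of 1 to 2 mg/kg")
        total_dose_mg = weight_kg * dose_mg_per_kg
        return total_dose_mg / 10.0  # 10 mg per mL in a 1% solution

    # A 70 kg adult at 1 mg/kg needs 70 mg, i.e. 7 mL of 1% solution,
    # given slowly intravenously over five minutes per the text above.
    print(methylene_blue_volume_ml(70, 1.0))  # 7.0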
Familial hypercholesterolemia
Familial hypercholesterolemia (FH) is a genetic disorder characterized by high cholesterol levels, specifically very high levels of low-density lipoprotein (LDL cholesterol), in the blood and early cardiovascular disease. The most common mutations diminish the number of functional LDL receptors in the liver. Since the underlying body biochemistry is slightly different in individuals with FH, their high cholesterol levels are less responsive to the kinds of cholesterol control methods which are usually more effective in people without FH (such as dietary modification and statin tablets). Nevertheless, treatment (including higher statin doses) is usually effective. FH is classified as a type 2 familial dyslipidemia. There are five types of familial dyslipidemia (not including subtypes), and each are classified from both the altered lipid profile and by the genetic abnormality. For example, high LDL (often due to LDL receptor defect) is type 2. Others include defects in chylomicron metabolism, triglyceride metabolism, and metabolism of other cholesterol-containing particles, such as VLDL and IDL. About 1 in 100 to 200 people have mutations in the LDLR gene that encodes the LDL receptor protein, which normally removes LDL from the circulation, or apolipoprotein B (ApoB), which is the part of LDL that binds with the receptor; mutations in other genes are rare. People who have one abnormal copy (are heterozygous) of the LDLR gene may develop cardiovascular disease prematurely at the age of 30 to 40. Having two abnormal copies (being homozygous) may cause severe cardiovascular disease in childhood. Heterozygous FH is a common genetic disorder, inherited in an autosomal dominant pattern, occurring in 1:250 people in most countries; homozygous FH is much rarer, occurring in 1 in 300,000 people.Heterozygous FH is normally treated with statins, bile acid sequestrants, or other lipid-lowering agents that lower cholesterol levels. New cases are generally offered genetic counseling. Homozygous FH often does not respond to medical therapy and may require other treatments, including LDL apheresis (removal of LDL in a method similar to dialysis) and occasionally liver transplantation. Signs and symptoms Physical signs High cholesterol levels normally do not cause any symptoms. Yellow deposits of cholesterol-rich fat may be seen in various places on the body such as around the eyelids (known as xanthelasma palpebrarum), the outer margin of the iris (known as arcus senilis corneae), and in the tendons of the hands, elbows, knees and feet, particularly the Achilles tendon (known as a tendon xanthoma). Cardiovascular disease Accelerated deposition of cholesterol in the walls of arteries leads to atherosclerosis, the underlying cause of cardiovascular disease. The most common problem in FH is the development of coronary artery disease (atherosclerosis of the coronary arteries that supply the heart) at a much younger age than would be expected in the general population. This may lead to angina pectoris (chest pain or tightness on exertion) or heart attacks. Less commonly, arteries of the brain are affected; this may lead to transient ischemic attacks (brief episodes of weakness on one side of the body or inability to talk) or occasionally stroke. 
Peripheral artery occlusive disease (obstruction of the arteries of the legs) occurs mainly in people with FH who smoke; this can cause pain in the calf muscles during walking that resolves with rest (intermittent claudication) and problems due to a decreased blood supply to the feet (such as gangrene). Atherosclerosis risk is increased further with age and in those who smoke, have diabetes, high blood pressure and a family history of cardiovascular disease. Diagnosis Approximately 85% of individuals with this disorder have not been diagnosed and consequently are not receiving lipid-lowering treatments. Physical examination findings can help a physician make the diagnosis of FH. Tendon xanthomas are seen in 20-40% of individuals with FH and are pathognomonic for the condition. A xanthelasma or corneal arcus may also be seen. These common signs are supportive of the diagnosis, but are non-specific findings. Lipid measurements Cholesterol levels may be determined as part of health screening for health insurance or occupational health, when the external physical signs such as xanthelasma, xanthoma, arcus are noticed, symptoms of cardiovascular disease develop, or a family member has been found to have FH. A pattern compatible with hyperlipoproteinemia type IIa on the Fredrickson classification is typically found: raised level of total cholesterol, markedly raised level of low-density lipoprotein (LDL), normal level of high-density lipoprotein (HDL), and normal level of triglycerides. Total cholesterol levels of 350–550 mg/dL are typical of heterozygous FH while total cholesterol levels of 650–1000 mg/dL are typical of homozygous FH. The LDL is typically above the 75th percentile, that is, 75% of the healthy population would have a lower LDL level. Cholesterol levels can be drastically higher in people with FH who are also obese. Mutation analysis On the basis of the isolated high LDL and clinical criteria (which differ by country), genetic testing for LDL receptor mutations and ApoB mutations can be performed. Mutations are detected in between 50 and 80% of cases; those without a mutation often have higher triglyceride levels and may in fact have other causes for their high cholesterol, such as combined hyperlipidemia due to metabolic syndrome. Differential diagnosis FH needs to be distinguished from familial combined hyperlipidemia and polygenic hypercholesterolemia. Lipid levels and the presence of xanthomata can confirm the diagnosis. Sitosterolemia and cerebrotendineous xanthomatosis are two rare conditions that can also present with premature atherosclerosis and xanthomas. The latter condition can also involve neurological or psychiatric manifestations, cataracts, diarrhea and skeletal abnormalities. Genetics The most common genetic defects in FH are LDLR mutations (prevalence 1 in 250, depending on the population), ApoB mutations (prevalence 1 in 1000), PCSK9 mutations (less than 1 in 2500) and LDLRAP1. The related disease sitosterolemia, which has many similarities with FH and also features cholesterol accumulation in tissues, is due to ABCG5 and ABCG8 mutations. LDL receptor The LDL receptor gene is located on the short arm of chromosome 19 (19p13.1-13.3). It comprises 18 exons and spans 45 kb, and the protein gene product contains 839 amino acids in mature form. A single abnormal copy (heterozygote) of FH causes cardiovascular disease by the age of 50 in about 40% of cases. 
Having two abnormal copies (homozygote) causes accelerated atherosclerosis in childhood, including its complications. The plasma LDL levels are inversely related to the activity of LDL receptor (LDLR). Homozygotes have LDLR activity of less than 2%, while heterozygotes have defective LDL processing with receptor activity being 2–25%, depending on the nature of the mutation. Over 1000 different mutations are known.There are five major classes of FH due to LDLR mutations: Class I: LDLR is not synthesized at all. Class II: LDLR is not properly transported from the endoplasmic reticulum to the Golgi apparatus for expression on the cell surface. Class III: LDLR does not properly bind LDL on the cell surface because of a defect in either apolipoprotein B100 (R3500Q) or in LDL-R. Class IV: LDLR bound to LDL does not properly cluster in clathrin-coated pits for receptor-mediated endocytosis (pathway step 2). Class V: LDLR is not recycled back to the cell surface (pathway step 5). Apolipoprotein B Apolipoprotein B, in its ApoB100 form, is the main apolipoprotein, or protein part of the lipoprotein particle. Its gene is located on the second chromosome (2p24-p23) and is 46.2 kb long. FH is often associated with the mutation of R3500Q, which causes replacement of arginine by glutamine at position 3500. The mutation is located on a part of the protein that normally binds with the LDL receptor, and binding is reduced as a result of the mutation. Like LDLR, the number of abnormal copies determines the severity of the hypercholesterolemia. PCSK9 Mutations in the proprotein convertase subtilisin/kexin type 9 (PCSK9) gene were linked to autosomal dominant (i.e. requiring only one abnormal copy) FH in a 2003 report. The gene is located on the first chromosome (1p34.1-p32) and encodes a 666 amino acid protein that is expressed in the liver. It has been suggested that PCSK9 causes FH mainly by reducing the number of LDL receptors on liver cells. LDLRAP1 Abnormalities in the ARH gene, also known as LDLRAP1, were first reported in a family in 1973. In contrast to the other causes, two abnormal copies of the gene are required for FH to develop (autosomal recessive). The mutations in the protein tend to cause the production of a shortened protein. Its real function is unclear, but it seems to play a role in the relation between the LDL receptor and clathrin-coated pits. People with autosomal recessive hypercholesterolemia tend to have more severe disease than LDLR-heterozygotes but less severe than LDLR-homozygotes. Pathophysiology LDL cholesterol normally circulates in the body for 2.5 days, and subsequently the apolipoprotein B portion of LDL cholesterol binds to the LDL receptor on the liver cells, triggering its uptake and digestion. This process results in the removal of LDL from the circulatory system. Synthesis of cholesterol by the liver is suppressed in the HMG-CoA reductase pathway. In FH, LDL receptor function is reduced or absent, and LDL circulates for an average duration of 4.5 days, resulting in significantly increased level of LDL cholesterol in the blood with normal levels of other lipoproteins. In mutations of ApoB, reduced binding of LDL particles to the receptor causes the increased level of LDL cholesterol. It is not known how the mutation causes LDL receptor dysfunction in mutations of PCSK9 and ARH.Although atherosclerosis occurs to a certain degree in all people, people with FH may develop accelerated atherosclerosis due to the excess level of LDL. 
The degree of atherosclerosis approximately depends on the number of LDL receptors still expressed and the functionality of these receptors. In many heterozygous forms of FH, the receptor function is only mildly impaired, and LDL levels will remain relatively low. In the more serious homozygous forms, the receptor is not expressed at all.Some studies of FH cohorts suggest that additional risk factors are generally at play when a person develops atherosclerosis. In addition to the classic risk factors such as smoking, high blood pressure, and diabetes, genetic studies have shown that a common abnormality in the prothrombin gene (G20210A) increases the risk of cardiovascular events in people with FH. Several studies found that a high level of lipoprotein(a) was an additional risk factor for ischemic heart disease. The risk was also found to be higher in people with a specific genotype of the angiotensin-converting enzyme (ACE). Screening Cholesterol screening and genetic testing among family members of people with known FH is cost-effective. Other strategies such as universal screening at the age of 16 were suggested in 2001. The latter approach may however be less cost-effective in the short term. Screening at an age lower than 16 was thought likely to lead to an unacceptably high rate of false positives.A 2007 meta-analysis found that "the proposed strategy of screening children and parents for familial hypercholesterolaemia could have considerable impact in preventing the medical consequences of this disorder in two generations simultaneously." "The use of total cholesterol alone may best discriminate between people with and without FH between the ages of 1 to 9 years."Screening of toddlers has been suggested, and results of a trial on 10,000 one-year-olds were published in 2016. Work was needed to find whether screening was cost-effective, and acceptable to families. Genetic counseling can help assist in genetic testing following a positive cholesterol screen for FH. Treatment Heterozygous FH FH is usually treated with statins. Statins act by inhibiting the enzyme hydroxymethylglutaryl CoA reductase (HMG-CoA-reductase) in the liver. In response, the liver produces more LDL receptors, which remove circulating LDL from the blood. Statins effectively lower cholesterol and LDL levels, although sometimes add-on therapy with other drugs is required, such as bile acid sequestrants (cholestyramine or colestipol), nicotinic acid preparations or fibrates. Control of other risk factors for cardiovascular disease is required, as risk remains somewhat elevated even when cholesterol levels are controlled. Professional guidelines recommend that the decision to treat a person with FH with statins should not be based on the usual risk prediction tools (such as those derived from the Framingham Heart Study), as they are likely to underestimate the risk of cardiovascular disease; unlike the rest of the population, FH have had high levels of cholesterol since birth, probably increasing their relative risk. Prior to the introduction of the statins, clofibrate (an older fibrate that often caused gallstones), probucol (especially in large xanthomas) and thyroxine were used to reduce LDL cholesterol levels. More controversial is the addition of ezetimibe, which inhibits cholesterol absorption in the gut. While it reduces LDL cholesterol, it does not appear to improve a marker of atherosclerosis called the intima-media thickness. 
Whether this means that ezetimibe is of no overall benefit in FH is unknown.There are no interventional studies that directly show mortality benefit of cholesterol lowering in FH. Rather, evidence of benefit is derived from a number of trials conducted in people who have polygenic hypercholesterolemia (in which heredity plays a smaller role). Still, a 1999 observational study of a large British registry showed that mortality in people with FH had started to improve in the early 1990s when statins were introduced.A cohort study suggested that treatment of FH with statins leads to a 48% reduction in death from coronary heart disease to a point where people are no more likely to die of coronary heart disease than the general population. However, if the person already had coronary heart disease the reduction was 25%. The results emphasize the importance of early identification of FH and treatment with statins.Alirocumab and evolocumab, both monoclonal antibodies against PCSK9, are specifically indicated as adjunct to diet and maximally tolerated statin therapy for the treatment of adults with heterozygous familial hypercholesterolemia, who require additional lowering of LDL cholesterol.More recently Inclisiran has been approved for the treatment of HeFH Homozygous FH Homozygous FH is harder to treat. The LDL (Low Density Lipoprotein) receptors are minimally functional, if at all. Only high doses of statins, often in combination with other medications, are modestly effective in improving lipid levels. If medical therapy is not successful at reducing cholesterol levels, LDL apheresis may be used; this filters LDL from the bloodstream in a process reminiscent of dialysis. Very severe cases may be considered for a liver transplant; this provides a liver with normally functional LDL receptors, and leads to rapid improvement of the cholesterol levels, but at the risk of complications from any solid organ transplant (such as rejection, infections, or side-effects of the medication required to suppress rejection). Other surgical techniques include partial ileal bypass surgery, in which part of the small bowel is bypassed to decrease the absorption of nutrients and hence cholesterol, and portacaval shunt surgery, in which the portal vein is connected to the vena cava to allow blood with nutrients from the intestine to bypass the liver.Lomitapide, an inhibitor of the microsomal triglyceride transfer protein, was approved by the US FDA in December 2012 as an orphan drug for the treatment of homozygous familial hypercholesterolemia. In January 2013, The US FDA also approved mipomersen, which inhibits the action of the gene apolipoprotein B, for the treatment of homozygous familial hypercholesterolemia. Gene therapy is a possible future alternative.Evinacumab, a monoclonal antibody inhibiting angiopoietin-like protein 3, was approved in 2021 for adjunct therapy. Children Given that FH is present from birth and atherosclerotic changes may begin early in life, it is sometimes necessary to treat adolescents or even teenagers with agents that were originally developed for adults. Due to safety concerns, many physicians prefer to use bile acid sequestrants and fenofibrate as these are licensed in children. Nevertheless, statins seem safe and effective, and in older children may be used as in adults.An expert panel in 2006 advised on early combination therapy with LDL apheresis, statins, and cholesterol absorption inhibitors in children with homozygous FH at the highest risk. 
Epidemiology The global prevalence of FH is approximately 10 million people. In most populations studied, heterozygous FH occurs in about 1:250 people, but not all develop symptoms. Homozygous FH occurs in about 1:1,000,000. LDLR mutations are more common in certain populations, presumably because of a genetic phenomenon known as the founder effect: these populations descend from a small group of individuals, one or several of whom carried the mutation. The Afrikaner, French Canadians, Lebanese Christians, and Finns have high rates of specific mutations that make FH particularly common in these groups. APOB mutations are more common in Central Europe. History The Norwegian physician Dr Carl Müller first associated the physical signs, high cholesterol levels and autosomal dominant inheritance in 1938. In the 1970s and 1980s, the genetic cause of FH was described by Dr Joseph L. Goldstein and Dr Michael S. Brown of Dallas, Texas. Initially, they found increased activity of HMG-CoA reductase, but studies showed that this did not explain the very abnormal cholesterol levels in people with FH. The focus shifted to the binding of LDL to its receptor, and the effects of impaired binding on metabolism; this proved to be the underlying mechanism for FH. Subsequently, numerous mutations in the protein were directly identified by sequencing. They later won the 1985 Nobel Prize in Physiology or Medicine for their discovery of the LDL receptor and its impact on lipoprotein metabolism. See also Primary hyperlipoproteinemia Familial hypertriglyceridemia Lipoprotein lipase deficiency Familial apoprotein CII deficiency Akira Endo, discoverer of the first statin References External links MedlinePlus: Familial Hypercholesterolemia
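The gap between the heterozygous (about 1 in 250) and homozygous (roughly 1 in 300,000 to 1 in 1,000,000) prevalence figures quoted above follows the pattern expected for a single pathogenic allele. The back-of-envelope Hardy-Weinberg calculation below is an idealization that the article itself does not invoke, and real populations (founder effects, consanguinity) deviate from it; it is included only to show the arithmetic:

    # If about 1 in 250 people are heterozygous carriers, then under an idealized
    # Hardy-Weinberg model 2*p*q ~= 1/250 with p ~= 1, so the pathogenic allele
    # frequency q is roughly 1/500 and homozygotes occur at a rate of about q**2.
    heterozygote_prevalence = 1 / 250
    q = heterozygote_prevalence / 2          # allele frequency (2*p*q ~= 2*q for small q)
    homozygote_prevalence = q ** 2

    print(f"allele frequency      ~ 1 in {round(1 / q):,}")                      # ~1 in 500
    print(f"homozygote prevalence ~ 1 in {round(1 / homozygote_prevalence):,}")  # ~1 in 250,000
    # This is the same order of magnitude as the 1-in-300,000 figure quoted earlier
    # in the article, though lower than the 1-in-1,000,000 figure given in this section.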
Chignon
Chignon can mean: Chignon (hairstyle), a hairstyle with the hair in a "bun" Chignon (medical term), a temporary swelling left on an infant's head after delivery by a ventouse suction cap See also Chingon (disambiguation)
Stepmother
A stepmother, stepmum or stepmom is a non-biological female parent married to ones preexisting parent. A stepmother-in-law is a stepmother of ones spouse. Children from her spouses previous unions are known as her stepchildren. Culture Stepparents (mainly stepmothers) may also face some societal challenges due to the stigma surrounding the "evil stepmother" character. Morello notes that the introduction of the "evil stepmother" character in the past is problematic to stepparents today, as it has created a stigma towards stepmothers. The presence of this stigma can have a negative impact on stepmothers self-esteem. Fiction In fiction, stepmothers are often portrayed as being wicked and evil. The character of the wicked stepmother features heavily in fairy tales; the most famous examples are Cinderella, Snow White and Hansel and Gretel. Stepdaughters are her most common victim, and then stepdaughter/stepson pairs, but stepsons also are victims as in The Juniper Tree—sometimes, as in East of the Sun and West of the Moon, because he refused to marry his stepsister as she wished, or, indeed, they may make their stepdaughters-in-law their victims, as in The Boys with the Golden Stars. In some fairy tales, such as Giambattista Basiles La Gatta Cennerentola or the Danish Green Knight, the stepmother wins the marriage by ingratiating herself with the stepdaughter, and once she obtains it, becomes cruel.In some fairy tales, the stepdaughters escape by marrying does not free her from her stepmother. After the birth of the stepdaughters first child, the stepmother may attempt to murder the new mother and replace her with her own daughter—thus making her the stepmother to the next generation. Such a replacement occurs in The Wonderful Birch, Brother and Sister, and The Three Little Men in the Wood; only by foiling the stepmothers plot (and usually executing her), is the story brought to a happy ending. In the Korean Folktale Janghwa Hongryeon jeon, the stepmother kills her own stepdaughters. In many stories with evil stepmothers, the hostility between the stepmother and the stepchild is underscored by having the child succeed through aid from the dead mother. This motif occurs from Norse mythology, where Svipdagr rouses his mother Gróa from the grave so as to learn from her how to accomplish a task his stepmother set, to fairy tales such as the Brothers Grimm version of Cinderella, where Aschenputtel receives her clothing from a tree growing on her mothers grave, the Russian Vasilissa the Beautiful, where Vasilissa is aided by a doll her mother gave, and her mothers blessing, and the Malay Bawang Putih Bawang Merah, where the heroines mother comes back as fish to protect her. The notion of the word stepmother being descriptive of an intrinsically unkind parent is suggested by peculiar wording in John Gambles "An Irish Wake" (1826). He writes of a woman soon to die, who instructs her successor to "be kind to my children." Gamble writes that the injunction was forgotten and that she "proved a very step-mother." Fairy tales can have variants where one tale has an evil mother and the other an evil stepmother: in The Six Swans by the Brothers Grimm and also in The Wild Swans by Hans Christian Andersen, the heroine is persecuted by her husbands mother and in another one by her stepmother, and in The Twelve Wild Ducks, by his stepmother. 
Sometimes this appears to be a deliberate switch: The Brothers Grimm, having put in their first editions versions of Snow White and Hansel and Gretel where the villain was the biological mother, altered it to a stepmother in later editions, perhaps to mitigate the storys violence. Another reason for the change from a villainous mother to a villainous stepmother may have been the belief that mothers were sacred, as well as the belief that people would not believe that a mother could harbor such ill-will and animosity toward their child. The Icelandic fairy tale The Horse Gullfaxi and the Sword Gunnfoder features a good stepmother, who indeed aids the prince like a fairy godmother, but this figure is very rare in fairy tales. The stepmother may be identified with other evils the characters meet. For instance, both the stepmother and the witch in Hansel and Gretel are deeply concerned with food, the stepmother to avoid hunger, the witch with her house built of food and her desire to eat the children, and when the children kill the witch and return home, their stepmother has mysteriously died.This hostility from the stepmother and tenderness from the true mother has been interpreted in varying ways. A psychological interpretation, by Bruno Bettelheim, describes it as "splitting" the actual mother in an ideal mother and a false mother that contains what the child dislikes in the actual mother. However, historically, many women died in childbirth, their husbands remarried, and the new stepmothers competed with the children of the first marriage for resources; the tales can be interpreted as factual conflicts from history. In some fairy tales, such as The Juniper Tree, the stepmothers hostility is overtly the desire to secure the inheritance of her children.Stepmothers also make many appearances in Chinese tales of family. Wicked stepmothers are common. In Classic of Filial Piety, Guo Jujing told the story of Min Ziqian, who had lost his mother at a young age. His stepmother had two more sons and saw to it that they were warmly dressed in winter but neglected her stepson. When her husband discovered this, he decided to divorce her. His son interceded, on the ground that she neglected only him, but when they had no mother, all three sons would be neglected. His father relented, and the stepmother henceforth took care of all three children. For this, he was held up as a model of filial piety. Conversely, the exemplary stepmother prefers the stepson to her own child, in recognition that his seniority makes him superior. The "righteous stepmother of Qi", faced with her son and stepson having been found by a murdered man, and both having confessed to shield the other, argues for her sons execution because her husband had ordered her to look after her stepson, and her son is the junior brother; the king pardoned them both for her devotion to duty.The ubiquity of the wicked stepmother has made it a frequent theme of revisionist fairy tale fantasy. This can range from Tanith Lees Red as Blood, where the stepmother queen is desperately trying to protect the land from her evil stepdaughters magic, to Diana Wynne Joness Howls Moving Castle, where, although it is known that stepmothers are evil, the actual stepmother is guilty of nothing more than some carelessness, to Erma Bombecks retelling where Cinderella is lazy and a liar. More subtly, Piers Anthony depicted the Princess Threnody as being cursed by her stepmother in Crewel Lye: A Caustic Yarn: if she ever entered Castle Roogna, it would fall down. 
But Threnody explains that her presence at the castle caused her father to dote on her and neglect his duties to the destruction of the kingdom; her stepmother had merely made her destructive potential literal, and forced her to confront what she was doing.The character of the evil stepmother can also be found in the genre of young adult fiction or young adult social problem novels. In Lisa Heathfields Paper Butterflies. the protagonist June suffers horrific abuse at the hands of her stepmother, a fact that she conceals from her father. Despite many examples of evil or cruel stepmothers, loving stepmothers also exist in fiction. In Kevin and Kell, Kell is portrayed as loving her stepdaughter Lindesfarne, whom her husband Kevin had adopted during his previous marriage. Likewise, Lindesfarne considers Kell her mother, and has a considerably more favorable view of her than Angelique, Kevins ex-wife and her adoptive mother, due to feeling neglected by Angelique during her childhood. The Disney film Enchanted also makes references to the "evil stepmother" belief, as the villainess is a stepmother, but her wickedness comes from her selfishness and power hungriness rather than the simple fact she is a stepmother. When a little girl tells the heroine Giselle that all stepmothers are evil, Giselle reminds her that she personally knows some wonderful women who were good stepmothers, and the fact a woman is a stepmother does not suddenly change her personality. This is shown later on when Giselle marries that girls father, who had her from a previous marriage, thus becoming a stepmother herself. As Giselle is a sweet and caring woman, she makes a good wife and stepmother. However, it is notable that during much of that film, Giselle was more of an older sister figure than a maternal figure to that little girl. In the movie Nanny McPhee a group of children worry that their father will remarry, believing from their fairy tales that all stepmothers are an "evil breed." Although they help their father marry again to help keep the family together, their soon-to-be stepmother is very cruel, as they suspected. When the wedding to her is called off, the father decides to marry the much kinder scullery maid, causing one child to comment that the evil stepmother personification does not apply to her. Stepmother relationships are often examined in soap operas. An example of this is the long-running rivalry between Victoria Lord Banks and stepmother Dorian Lord on the American soap opera One Life to Live. In contrast to many other Disney-related media, the animated series Phineas and Ferb features a stepfamily in which both parents get along well with their three children (avoiding the normal tropes of evil stepparents). In television, Drake & Josh features a stepfamily in which both parents usually get along well with their three children. In the series The Adventures of Shirley Holmes, one episode featured a princess who was the heir to the throne of her country and feared that her stepmother wanted to have her assassinated as her own son was next in line after her stepdaughter. The episode concludes the revelation that her stepmother actually wanted her stepdaughter to inherit the throne and had attempted to thwart actual assassins who did not want a woman to rule their country. In Sofia the First, Sofias mother Miranda became stepmother to Prince James and Princess Amber, she acknowledged there werent many tales featuring loving and kind stepmothers. This is another example of a well-blended family. 
Classical literature Greek Alcestis (play), 438 BCE: The dying biological mother requests that her husband not remarry, for fear of her children being mistreated by a stepmother. Hippolytus, 428 BCE: The stepmother commits suicide to prevent herself from acting on her lust for her stepson and leaves a note falsely claiming that the stepson had raped her.
Mania
Mania, also known as manic syndrome, is a mental and behavioral disorder defined as a state of abnormally elevated arousal, affect, and energy level, or "a state of heightened overall activation with enhanced affective expression together with lability of affect." During a manic episode, an individual will experience rapidly changing emotions and moods, highly influenced by surrounding stimuli. Although mania is often conceived as a "mirror image" to depression, the heightened mood can be either euphoric or dysphoric. As the mania intensifies, irritability can be more pronounced and result in anxiety or anger. The symptoms of mania include elevated mood (either euphoric or irritable), flight of ideas and pressure of speech, increased energy, decreased need and desire for sleep, and hyperactivity. They are most plainly evident in fully developed hypomanic states. However, in full-blown mania, they undergo progressively severe exacerbations and become more and more obscured by other signs and symptoms, such as delusions and fragmentation of behavior. Causes and diagnosis Mania is a syndrome with multiple causes. Although the vast majority of cases occur in the context of bipolar disorder, it is a key component of other psychiatric disorders (such as schizoaffective disorder, bipolar type) and may also occur secondary to various general medical conditions, such as multiple sclerosis; certain medications may perpetuate a manic state, for example prednisone; or substances prone to abuse, especially stimulants, such as caffeine and cocaine. In the current DSM-5, hypomanic episodes are separated from the more severe full manic episodes, which, in turn, are characterized as either mild, moderate, or severe, with certain diagnostic criteria (e.g. catatonia, psychosis). Mania is divided into three stages: hypomania, or stage I; acute mania, or stage II; and delirious mania (delirium), or stage III. This "staging" of a manic episode is useful from a descriptive and differential diagnostic point of view Mania varies in intensity, from mild mania (hypomania) to delirious mania, marked by such symptoms as disorientation, florid psychosis, incoherence, and catatonia. Standardized tools such as Altman Self-Rating Mania Scale and Young Mania Rating Scale can be used to measure severity of manic episodes. Because mania and hypomania have also long been associated with creativity and artistic talent, it is not always the case that the clearly manic/hypomanic bipolar patient needs or wants medical help; such persons often either retain sufficient self-control to function normally or are unaware that they have "gone manic" severely enough to be committed or to commit themselves. Manic persons often can be mistaken for being under the influence of drugs. Classification Mixed states In a mixed affective state, the individual, though meeting the general criteria for a hypomanic (discussed below) or manic episode, experiences three or more concurrent depressive symptoms. This has caused some speculation, among clinicians, that mania and depression, rather than constituting "true" polar opposites, are, rather, two independent axes in a unipolar—bipolar spectrum. A mixed affective state, especially with prominent manic symptoms, places the patient at a greater risk for suicide. Depression on its own is a risk factor but, when coupled with an increase in energy and goal-directed activity, the patient is far more likely to act with violence on suicidal impulses. 
Hypomania Hypomania, which means "less than mania", is a lowered state of mania that does little to impair function or decrease quality of life. Although creativity and hypomania have been historically linked, a review and meta-analysis exploring this relationship found that this assumption may be too general and empirical research evidence is lacking. In hypomania, there is less need for sleep and both goal-motivated behaviour and metabolism increase. Some studies exploring brain metabolism in subjects with hypomania, however, did not find any conclusive link; while there are studies that reported abnormalities, some failed to detect differences. Though the elevated mood and energy level typical of hypomania could be seen as a benefit, true mania itself generally has many undesirable consequences including suicidal tendencies, and hypomania can, if the prominent mood is irritable as opposed to euphoric, be a rather unpleasant experience. In addition, the exaggerated case of hypomania can lead to problems. For instance, trait-based positivity for a person could make them more engaging and outgoing, and cause them to have a positive outlook in life. When exaggerated in hypomania, however, such a person can display excessive optimism, grandiosity, and poor decision making, often with little regard to the consequences. Associated disorders A single manic episode, in the absence of secondary causes, (i.e., substance use disorders, pharmacologics, or general medical conditions) is often sufficient to diagnose bipolar I disorder. Hypomania may be indicative of bipolar II disorder. Manic episodes are often complicated by delusions and/or hallucinations; and if the psychotic features persist for a duration significantly longer than the episode of typical mania (two weeks or more), a diagnosis of schizoaffective disorder is more appropriate. Certain obsessive-compulsive spectrum disorders as well as impulse control disorders share the suffix "-mania," namely, kleptomania, pyromania, and trichotillomania. Despite the unfortunate association implied by the name, however, no connection exists between mania or bipolar disorder and these disorders. Furthermore, evidence indicates a B12 deficiency can also cause symptoms characteristic of mania and psychosis.Hyperthyroidism can produce similar symptoms to those of mania, such as agitation, elevated mood, increased energy, hyperactivity, sleep disturbances and sometimes, especially in severe cases, psychosis. Signs and symptoms A manic episode is defined in the American Psychiatric Associations diagnostic manual as a "distinct period of abnormally and persistently elevated, expansive, or irritable mood and abnormally and persistently increased activity or energy, lasting at least 1 week and present most of the day, nearly every day (or any duration, if hospitalization is necessary)," where the mood is not caused by drugs/medication or a non-mental medical illness (e.g., hyperthyroidism), and: (a) is causing obvious difficulties at work or in social relationships and activities, or (b) requires admission to hospital to protect the person or others, or (c) the person has psychosis.To be classified as a manic episode, while the disturbed mood and an increase in goal-directed activity or energy is present, at least three (or four, if only irritability is present) of the following must have been consistently present: Inflated self-esteem or grandiosity. Decreased need for sleep (e.g., feels rested after 3 hours of sleep). 
More talkative than usual, or acts pressured to keep talking. Flights of ideas or subjective experience that thoughts are racing. Increase in goal-directed activity, or psychomotor acceleration. Distractibility (too easily drawn to unimportant or irrelevant external stimuli). Excessive involvement in activities with a high likelihood of painful consequences.(e.g., extravagant shopping, improbable commercial schemes, hypersexuality).Though the activities one participates in while in a manic state are not always negative, those with the potential to have negative outcomes are far more likely. If the person is concurrently depressed, they are said to be having a mixed episode.The World Health Organizations classification system defines a manic episode as one where mood is higher than the persons situation warrants and may vary from relaxed high spirits to barely controllable exuberance, is accompanied by hyperactivity, a compulsion to speak, a reduced sleep requirement, difficulty sustaining attention, and/or often increased distractibility. Frequently, confidence and self-esteem are excessively enlarged, and grand, extravagant ideas are expressed. Behavior that is out-of-character and risky, foolish or inappropriate may result from a loss of normal social restraint.Some people also have physical symptoms, such as sweating, pacing, and weight loss. In full-blown mania, often the manic person will feel as though their goal(s) are of paramount importance, that there are no consequences, or that negative consequences would be minimal, and that they need not exercise restraint in the pursuit of what they are after. Hypomania is different, as it may cause little or no impairment in function. The hypomanic persons connection with the external world, and its standards of interaction, remain intact, although intensity of moods is heightened. But those with prolonged unresolved hypomania do run the risk of developing full mania, and may cross that "line" without even realizing they have done so.One of the signature symptoms of mania (and to a lesser extent, hypomania) is what many have described as racing thoughts. These are usually instances in which the manic person is excessively distracted by objectively unimportant stimuli. This experience creates an absent-mindedness where the manic individuals thoughts totally preoccupy them, making them unable to keep track of time, or be aware of anything besides the flow of thoughts. Racing thoughts also interfere with the ability to fall asleep. Manic states are always relative to the normal state of intensity of the affected individual; thus, already irritable patients may find themselves losing their tempers even more quickly, and an academically gifted person may, during the hypomanic stage, adopt seemingly "genius" characteristics and an ability to perform and articulate at a level far beyond that which they would be capable of during euthymia. A very simple indicator of a manic state would be if a heretofore clinically depressed patient suddenly becomes inordinately energetic, enthusiastic, cheerful, aggressive, or "over-happy". 
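The diagnostic rule quoted above, at least three of the listed symptoms (or four if the mood is only irritable) alongside the persistent mood and activity change, is at its core a counting criterion. The Python sketch below models only that counting step; the names are this sketch's own, and the duration, impairment, hospitalization, and exclusion criteria described above are not modeled, so it is not a diagnostic tool:

    MANIC_SYMPTOMS = (
        "inflated self-esteem or grandiosity",
        "decreased need for sleep",
        "more talkative than usual or pressured speech",
        "flight of ideas or racing thoughts",
        "increase in goal-directed activity or psychomotor acceleration",
        "distractibility",
        "excessive involvement in activities with high potential for painful consequences",
    )

    def meets_symptom_count(present_symptoms, irritable_mood_only):
        """Counting step only: three or more listed symptoms are required,
        or four or more if the mood is irritable rather than euphoric/expansive."""
        required = 4 if irritable_mood_only else 3
        return len(set(present_symptoms) & set(MANIC_SYMPTOMS)) >= required

    example = {
        "decreased need for sleep",
        "flight of ideas or racing thoughts",
        "distractibility",
        "more talkative than usual or pressured speech",
    }
    print(meets_symptom_count(example, irritable_mood_only=True))   # True: 4 of the 7 listed
    print(meets_symptom_count(example, irritable_mood_only=False))  # True: only 3 required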
Other, often less obvious, elements of mania include delusions (generally of either grandeur or persecution, according to whether the predominant mood is euphoric or irritable), hypersensitivity, hypervigilance, hypersexuality, hyper-religiosity, hyperactivity and impulsivity, a compulsion to over-explain (typically accompanied by pressure of speech), grandiose schemes and ideas, and a decreased need for sleep (for example, feeling rested after only 3 or 4 hours of sleep). In the case of the latter, the eyes of such patients may both look and seem abnormally "wide open", rarely blinking, and may contribute to some clinicians' erroneous belief that these patients are under the influence of a stimulant drug, when the patient, in fact, is either not on any mind-altering substances or is actually on a depressant drug. Individuals may also engage in out-of-character behavior during the episode, such as questionable business transactions, wasteful expenditures of money (e.g., spending sprees), risky sexual activity, abuse of recreational substances, excessive gambling, reckless behavior (such as extreme speeding or other daredevil activity), abnormal social interaction (e.g. over-familiarity and conversing with strangers), or highly vocal arguments. These behaviours may increase stress in personal relationships, lead to problems at work, and increase the risk of altercations with law enforcement. There is a high risk of impulsively taking part in activities potentially harmful to the self and others. Although "severely elevated mood" sounds somewhat desirable and enjoyable, the experience of mania is ultimately often quite unpleasant and sometimes disturbing, if not frightening, for the person involved and for those close to them, and it may lead to impulsive behaviour that may later be regretted. It can also often be complicated by the individual's lack of judgment and insight regarding periods of exacerbation of characteristic states. Manic patients are frequently grandiose, obsessive, impulsive, irritable, belligerent, and frequently deny anything is wrong with them. Because mania frequently encourages high energy and decreased perception of need or ability to sleep, within a few days of a manic cycle, sleep-deprived psychosis may appear, further complicating the ability to think clearly. Racing thoughts and misperceptions lead to frustration and decreased ability to communicate with others. Mania may also, as earlier mentioned, be divided into three "stages". Stage I corresponds with hypomania and may feature typical hypomanic characteristics, such as gregariousness and euphoria. In stages II and III mania, however, the patient may be extraordinarily irritable, psychotic or even delirious. These latter two stages are referred to as acute and delirious (or Bell's), respectively. Causes Various triggers have been associated with switching from euthymic or depressed states into mania. One common trigger of mania is antidepressant therapy. Studies show that the risk of switching while on an antidepressant is between 6 and 69 percent. Dopaminergic drugs such as reuptake inhibitors and dopamine agonists may also increase the risk of a switch. Other medications possibly implicated include glutamatergic agents and drugs that alter the HPA axis.
Lifestyle triggers include irregular sleep-wake schedules and sleep deprivation, as well as extremely emotional or stressful stimuli. Various genes that have been implicated in genetic studies of bipolar disorder have been manipulated in preclinical animal models to produce syndromes reflecting different aspects of mania. CLOCK and DBP polymorphisms have been linked to bipolar disorder in population studies, and behavioral changes induced by knockout are reversed by lithium treatment. Metabotropic glutamate receptor 6 has been genetically linked to bipolar disorder, and found to be under-expressed in the cortex. Pituitary adenylate cyclase-activating peptide has been associated with bipolar disorder in gene linkage studies, and knockout in mice produces mania-like behavior. Manipulation of the targets of various treatments, such as GSK-3 and ERK1, has also produced mania-like behavior in preclinical models. Mania may be associated with strokes, especially cerebral lesions in the right hemisphere. Deep brain stimulation of the subthalamic nucleus in Parkinson's disease has been associated with mania, especially with electrodes placed in the ventromedial STN. A proposed mechanism involves increased excitatory input from the STN to dopaminergic nuclei. There are certain psychoactive drugs that can induce a state of manic psychosis, including: amphetamine, cathinone, cocaine, MDMA, methamphetamine, methylphenidate, oxycodone, phencyclidine, designer drugs, etc. Mania can also be caused by physical trauma or illness. When the causes are physical, it is called secondary mania. Mechanism The mechanism underlying mania is unknown, but the neurocognitive profile of mania is highly consistent with dysfunction in the right prefrontal cortex, a common finding in neuroimaging studies. Various lines of evidence from post-mortem studies and the putative mechanisms of anti-manic agents point to abnormalities in GSK-3, dopamine, protein kinase C and inositol monophosphatase. Meta-analyses of neuroimaging studies demonstrate increased thalamic activity and bilaterally reduced inferior frontal gyrus activation. Activity in the amygdala and other subcortical structures such as the ventral striatum tends to be increased, although results are inconsistent and likely dependent upon task characteristics such as valence. Reduced functional connectivity between the ventral prefrontal cortex and amygdala, along with variable findings, supports a hypothesis of general dysregulation of subcortical structures by the prefrontal cortex. A bias towards positively valenced stimuli and increased responsiveness in reward circuitry may predispose towards mania. Mania tends to be associated with right hemisphere lesions, while depression tends to be associated with left hemisphere lesions. Post-mortem examinations in bipolar disorder demonstrate increased expression of protein kinase C (PKC). While limited, some studies demonstrate that manipulation of PKC in animals produces behavioral changes mirroring mania, and treatment with the PKC inhibitor tamoxifen (also an anti-estrogen drug) demonstrates antimanic effects. Traditional antimanic drugs also demonstrate PKC-inhibiting properties, among other effects such as GSK-3 inhibition. Manic episodes may be triggered by dopamine receptor agonists, and this, combined with tentative reports of increased VMAT2 activity measured via PET scans of radioligand binding, suggests a role of dopamine in mania.
Decreased cerebrospinal fluid levels of the serotonin metabolite 5-HIAA have also been found in manic patients, which may be explained by a failure of serotonergic regulation and dopaminergic hyperactivity. Limited evidence suggests that mania is associated with behavioral reward hypersensitivity, as well as with neural reward hypersensitivity. Electrophysiological evidence supporting this comes from studies associating left frontal EEG activity with mania. As left frontal EEG activity is generally thought to be a reflection of behavioral activation system activity, this is thought to support a role for reward hypersensitivity in mania. Tentative evidence also comes from one study that reported an association between manic traits and feedback negativity during receipt of monetary reward or loss. Neuroimaging evidence during acute mania is sparse, but one study reported elevated orbitofrontal cortex activity to monetary reward, and another study reported elevated striatal activity to reward omission. The latter finding was interpreted in the context of either elevated baseline activity (resulting in a null finding of reward hypersensitivity) or a reduced ability to discriminate between reward and punishment, still supporting reward hyperactivity in mania. Punishment hyposensitivity, as reflected in a number of neuroimaging studies as reduced lateral orbitofrontal response to punishment, has been proposed as a mechanism of reward hypersensitivity in mania. Diagnosis In the ICD-10 there are several disorders with the manic syndrome: organic manic disorder (F06.30), mania without psychotic symptoms (F30.1), mania with psychotic symptoms (F30.2), other manic episodes (F30.8), unspecified manic episode (F30.9), manic type of schizoaffective disorder (F25.0), bipolar disorder, current episode manic without psychotic symptoms (F31.1), and bipolar affective disorder, current episode manic with psychotic symptoms (F31.2). Treatment Before beginning treatment for mania, careful differential diagnosis must be performed to rule out secondary causes. The acute treatment of a manic episode of bipolar disorder involves the utilization of either a mood stabilizer (carbamazepine, valproate, lithium, or lamotrigine) or an atypical antipsychotic (olanzapine, quetiapine, risperidone, aripiprazole, or cariprazine). The use of antipsychotic agents in the treatment of acute mania was reviewed by Tohen and Vieta in 2009. When the manic behaviours have gone, long-term treatment then focuses on prophylactic treatment to try to stabilize the patient's mood, typically through a combination of pharmacotherapy and psychotherapy. The likelihood of having a relapse is very high for those who have experienced two or more episodes of mania or depression. While medication for bipolar disorder is important to manage symptoms of mania and depression, studies show that relying on medications alone is not the most effective method of treatment. Medication is most effective when used in combination with other bipolar disorder treatments, including psychotherapy, self-help coping strategies, and healthy lifestyle choices. Lithium is the classic mood stabilizer used to prevent further manic and depressive episodes. A systematic review found that long-term lithium treatment substantially reduces the risk of bipolar manic relapse, by 42%. Anticonvulsants such as valproate, oxcarbazepine and carbamazepine are also used for prophylaxis. More recent options include lamotrigine and topiramate, both of which are also anticonvulsants.
In some cases, long-acting benzodiazepines, particularly clonazepam, are used after other options are exhausted. In more urgent circumstances, such as in emergency rooms, lorazepam, combined with haloperidol, is used to promptly alleviate symptoms of agitation, aggression, and psychosis. Antidepressant monotherapy is not recommended for the treatment of depression in patients with bipolar disorder I or II, and no benefit has been demonstrated by combining antidepressants with mood stabilizers in these patients. Some atypical antidepressants, however, such as mirtazapine and trazodone, have occasionally been used after other options have failed. Society and culture In Electroboy: A Memoir of Mania, Andy Behrman describes his experience of mania as "the most perfect prescription glasses with which to see the world... life appears in front of you like an oversized movie screen". Behrman indicates early in his memoir that he sees himself not as a person with an uncontrollable disabling illness, but as a director of the movie that is his vivid and emotionally alive life. There is some evidence that people in the creative industries have bipolar disorder more often than those in other occupations. Winston Churchill had periods of manic symptoms that may have been both an asset and a liability. English actor Stephen Fry, who has bipolar disorder, recounts manic behaviour during his adolescence: "When I was about 17 ... going around London on two stolen credit cards, it was a sort of fantastic reinvention of myself, an attempt to. I bought ridiculous suits with stiff collars and silk ties from the 1920s, and would go to the Savoy and Ritz and drink cocktails." While he has experienced suicidal thoughts, he says the manic side of his condition has made positive contributions to his life. Etymology The nosology of the various stages of a manic episode has changed over the decades. The word derives from the Ancient Greek μανία (manía), "madness, frenzy" and the verb μαίνομαι (maínomai), "to be mad, to rage, to be furious". See also References Further reading Expert Opin Pharmacother. 2001 December;2(12):1963–73. Schizoaffective Disorder. 2007 September. Mayo Clinic. Retrieved October 1, 2007. Schizoaffective Disorder Archived 2011-08-18 at the Wayback Machine. 2004 May. All Psych Online: Virtual Psychology Classroom. Retrieved October 2, 2007. Psychotic Disorders. 2004 May. All Psych Online: Virtual Psychology Classroom. Retrieved October 2, 2007. Sajatovic, Martha; DiBiovanni, Sue Kim; Bastani, Bijan; Hattab, Helen; Ramirez, Luis F. (1996). "Risperidone therapy in treatment refractory acute bipolar and schizoaffective mania". Psychopharmacology Bulletin. 32 (1): 55–61. PMID 8927675. External links Bipolar Mania Symptoms Depression and Bipolar Support Alliance
Sebaceous cyst
A sebaceous cyst is a term commonly used to refer to either: Epidermoid cysts (also termed epidermal cysts or infundibular cysts) Pilar cysts (also termed trichilemmal cysts or isthmus-catagen cysts). Both of the above types of cysts contain keratin, not sebum, and neither originates from sebaceous glands. Epidermoid cysts originate in the epidermis and pilar cysts originate from hair follicles. Technically speaking, then, they are not sebaceous cysts. "True" sebaceous cysts, which originate from sebaceous glands and which contain sebum, are relatively rare and are known as steatocystoma simplex or, if multiple, as steatocystoma multiplex. Medical professionals have suggested that the term "sebaceous cyst" be avoided since it can be misleading.: 31  In practice, however, the term is still often used for epidermoid and pilar cysts. Signs and symptoms The scalp, ears, back, face, and upper arm are common sites of sebaceous cysts, though they may occur anywhere on the body except the palms of the hands and soles of the feet. They are more common in hairier areas, where in cases of long duration they could result in hair loss on the skin surface immediately above the cyst. They are smooth to the touch, vary in size, and are generally round in shape. They are generally mobile masses that can consist of: Fibrous tissues and fluids A fatty (keratinous) substance that resembles cottage cheese, in which case the cyst may be called a "keratin cyst"; this material has a characteristic "cheesy" or foot odor smell A somewhat viscous, serosanguineous fluid (containing purulent and bloody material). The nature of the contents of a sebaceous cyst, and of its surrounding capsule, differs depending on whether the cyst has ever been infected. With surgery, a cyst can usually be excised in its entirety. Poor surgical technique, or previous infection leading to scarring and tethering of the cyst to the surrounding tissue, may lead to rupture during excision and removal. A completely removed cyst will not recur, though if the patient has a predisposition to cyst formation, further cysts may develop in the same general area. Causes Cysts may be related to high levels of testosterone and hence may be more frequent in users of anabolic steroids. A case has been reported of a sebaceous cyst being caused by the human botfly. Hereditary causes of sebaceous cysts include Gardner's syndrome and basal cell nevus syndrome. Types Epidermoid cyst Pilar cyst About 90% of pilar cysts occur on the scalp, with the remainder sometimes occurring on the face, trunk, and extremities.: 1477  Pilar cysts are significantly more common in females, and a tendency to develop these cysts is often inherited in an autosomal dominant pattern.: 1477  In most cases, multiple pilar cysts appear at once.: 1477 Treatment Sebaceous cysts generally do not require medical treatment. However, if they continue to grow, they may become unsightly, painful, and/or infected. Surgical Surgical excision of a sebaceous cyst is a simple procedure to completely remove the sac and its contents, although it should be performed when inflammation is minimal. Three general approaches are used: traditional wide excision, minimal excision, and punch biopsy excision. The typical outpatient surgical procedure for cyst removal is to numb the area around the cyst with a local anaesthetic, then to use a scalpel to open the lesion with either a single cut down the center of the swelling or an oval cut on both sides of the center point. If the cyst is small, it may be lanced instead.
The person performing the surgery will squeeze out the contents of the cyst, then use blunt-headed scissors or another instrument to hold the incision wide open while using fingers or forceps to try to remove the cyst wall intact. If the cyst wall can be removed in one piece, the "cure rate" is 100%. If, however, it is fragmented and cannot be entirely recovered, the operator may use curettage (scraping) to remove the remaining exposed fragments, then burn them with an electrocauterization tool, in an effort to destroy them in place. In such cases, the cyst may recur. In either case, the incision is then disinfected, and if necessary, the skin is stitched back together over it. A scar will most likely result. An infected cyst may require oral antibiotics or other treatment before or after excision. If pus has already formed, then incision and drainage should be done along with avulsion of the cyst wall with proper antibiotics coverage. An approach involving incision, rather than excision, has also been proposed. References External links Overview at University of Maryland Medical Center Epidermal Inclusion Cyst at eMedicine
Bulimia nervosa
Bulimia nervosa, also known simply as bulimia, is an eating disorder characterized by binge eating followed by purging or fasting, and excessive concern with body shape and weight. The aim of this activity is to rid the body of the calories eaten during the binging phase of the process. Binge eating refers to eating a large amount of food in a short amount of time. Purging refers to the attempts to get rid of the food consumed. This may be done by vomiting or taking laxatives. Other efforts to lose weight may include the use of diuretics, stimulants, water fasting, or excessive exercise. Most people with bulimia are at a normal weight. The forcing of vomiting may result in thickened skin on the knuckles, breakdown of the teeth, and effects on metabolic rate and caloric intake which cause thyroid dysfunction. Bulimia is frequently associated with other mental disorders such as depression, anxiety, borderline personality disorder, bipolar disorder and problems with drugs or alcohol. There is also a higher risk of suicide and self-harm. Bulimia is more common among those who have a close relative with the condition. The percentage of risk that is estimated to be due to genetics is between 30% and 80%. Other risk factors for the disease include psychological stress, cultural pressure to attain a certain body type, poor self-esteem, and obesity. Living in a culture that commercializes or glamorizes dieting and having parental figures who fixate on weight are also risks. Diagnosis is based on a person's medical history; however, this is difficult, as people are usually secretive about their binge eating and purging habits. Further, the diagnosis of anorexia nervosa takes precedence over that of bulimia. Other similar disorders include binge eating disorder, Kleine–Levin syndrome, and borderline personality disorder. Signs and symptoms Bulimia typically involves rapid and out-of-control eating, which may stop when the person is interrupted by another person or the stomach hurts from over-extension, followed by self-induced vomiting or other forms of purging.
This cycle may be repeated several times a week or, in more serious cases, several times a day and may directly cause: Chronic gastric reflux after eating, secondary to vomiting Dehydration and hypokalemia due to renal potassium loss in the presence of alkalosis and frequent vomiting Electrolyte imbalance, which can lead to abnormal heart rhythms, cardiac arrest, and even death Esophagitis, or inflammation of the esophagus Mallory-Weiss tears Boerhaave syndrome, a rupture in the esophageal wall due to vomiting Oral trauma, in which repetitive insertion of fingers or other objects causes lacerations to the lining of the mouth or throat Russell's sign: calluses on knuckles and back of hands due to repeated trauma from incisors Perimolysis, or severe dental erosion of tooth enamel Swollen salivary glands (for example, in the neck, under the jaw line) Gastroparesis, or delayed gastric emptying Constipation or diarrhea Tachycardia or palpitations Hypotension Peptic ulcers Infertility Constant weight fluctuations are common Elevated blood sugar, cholesterol, and amylase levels may occur Hypoglycemia may occur after vomiting. These are some of the many signs that may indicate whether someone has bulimia nervosa: A fixation on the number of calories consumed A fixation on and extreme consciousness of one's weight Low self-esteem and/or self-harming Suicidal tendencies An irregular menstrual cycle in women Regular trips to the bathroom, especially soon after eating Depression, anxiety disorders and sleep disorders Frequent occurrences involving consumption of abnormally large portions of food The use of laxatives, diuretics, and diet pills Compulsive or excessive exercise Unhealthy/dry skin, hair, nails, and lips Fatigue or exhaustion. As with many psychiatric illnesses, delusions can occur, in conjunction with other signs and symptoms, leaving the person with a false belief that is not ordinarily accepted by others. People with bulimia nervosa may also exercise to a point that excludes other activities. Interoceptive People with bulimia exhibit several interoceptive deficits, in which one experiences impairment in recognizing and discriminating between internal sensations, feelings, and emotions. People with bulimia may also react negatively to somatic and affective states. In relation to interoceptive sensitivity, hyposensitive individuals may not detect feelings of fullness in a normal and timely fashion, and therefore are prone to eating more calories. Examining these deficits from a neural basis also connects elements of interoception and emotion; notable overlaps occur in the medial prefrontal cortex, anterior and posterior cingulate, and anterior insula cortices, which are linked to both interoception and emotional eating. Related disorders People with bulimia are at a higher risk of having an affective disorder, such as depression or generalized anxiety disorder. One study found 70% had depression at some time in their lives (as opposed to 26% for adult females in the general population), rising to 88% for all affective disorders combined. Another study in the Journal of Affective Disorders found that, of the patients diagnosed with an eating disorder according to the DSM-5 guidelines, about 27% also suffered from bipolar disorder.
Within this article, the majority of the patients were diagnosed with bulimia nervosa; the second most common condition reported was binge-eating disorder. Some individuals with anorexia nervosa exhibit episodes of bulimic tendencies through purging (either through self-induced vomiting or laxatives) as a way to quickly remove food from their system. There may be an increased risk for diabetes mellitus type 2. Bulimia also has negative effects on a person's teeth due to the acid passed through the mouth from frequent vomiting causing acid erosion, mainly on the posterior dental surface. Research has shown that there is a relationship between bulimia and narcissism. According to a study by the Australian National University, vulnerable narcissists are more susceptible to eating disorders. This can be caused by a childhood in which inner feelings and thoughts were minimized by parents, leading to "a high focus on receiving validation from others to maintain a positive sense of self". The medical journal Borderline Personality Disorder and Emotion Dysregulation notes that a "substantial rate of patients with bulimia nervosa" also have borderline personality disorder. A study by the Psychopharmacology Research Program of the University of Cincinnati College of Medicine "leaves little doubt that bipolar and eating disorders—particularly bulimia nervosa and bipolar II disorder—are related." The research shows that most clinical studies indicate that patients with bipolar disorder have higher rates of eating disorders, and vice versa. There is overlap in phenomenology, course, comorbidity, family history, and pharmacologic treatment response of these disorders. This is especially true of "eating dysregulation, mood dysregulation, impulsivity and compulsivity, craving for activity and/or exercise." Studies have shown a relationship between bulimia's effects on metabolic rate and caloric intake and thyroid dysfunction. Causes Biological As with anorexia nervosa, there is evidence of genetic predispositions contributing to the onset of this eating disorder. Abnormal levels of many hormones, notably serotonin, have been shown to be responsible for some disordered eating behaviors. Brain-derived neurotrophic factor (BDNF) is under investigation as a possible mechanism. There is evidence that sex hormones may influence appetite and eating in women and the onset of bulimia nervosa. Studies have shown that women with hyperandrogenism and polycystic ovary syndrome have a dysregulation of appetite, along with dysregulation of carbohydrates and fats. This dysregulation of appetite is also seen in women with bulimia nervosa. In addition, gene knockout studies in mice have shown that mice with the gene encoding estrogen receptors knocked out have decreased fertility due to ovarian dysfunction and dysregulation of androgen receptors. In humans, there is evidence of an association between polymorphisms in ERβ (estrogen receptor β) and bulimia, suggesting a correlation between sex hormones and bulimia nervosa. Bulimia has been compared to drug addiction, though the empirical support for this characterization is limited. However, people with bulimia nervosa may share dopamine D2 receptor-related vulnerabilities with those with substance use disorders. Dieting, a common behaviour in bulimics, is associated with lower plasma tryptophan levels.
Decreased tryptophan levels in the brain, and thus decreased synthesis of serotonin, such as via acute tryptophan depletion, increase bulimic urges in currently and formerly bulimic individuals within hours. Abnormal blood levels of peptides important for the regulation of appetite and energy balance are observed in individuals with bulimia nervosa, but it remains unknown if this is a state or a trait. In recent years, evolutionary psychiatry as an emerging scientific discipline has been studying mental disorders from an evolutionary perspective. Whether eating disorders, bulimia nervosa in particular, have evolutionary functions or are new, modern "lifestyle" problems is still debated. Social Media portrayals of an ideal body shape are widely considered to be a contributing factor to bulimia. In a 1991 study by Weltzin, Hsu, Pollicle, and Kaye, it was stated that 19% of bulimics undereat, 37% of bulimics eat an average or normal amount of food, and 44% of bulimics overeat. A survey of 15- to 18-year-old high school girls in Nadroga, Fiji, found the self-reported incidence of purging rose from 0% in 1995 (a few weeks after the introduction of television in the province) to 11.3% in 1998. In addition, the suicide rate among people with bulimia nervosa is 7.5 times higher than in the general population. When attempting to decipher the origin of bulimia nervosa in a cognitive context, Christopher Fairburn et al.'s cognitive-behavioral model is often considered the gold standard. Fairburn et al.'s model discusses the process by which an individual falls into the binge-purge cycle and thus develops bulimia. Fairburn et al. argue that extreme concern with weight and shape coupled with low self-esteem will result in strict, rigid, and inflexible dietary rules. Accordingly, this would lead to unrealistically restricted eating, which may consequently induce an eventual "slip" where the individual commits a minor infraction of the strict and inflexible dietary rules. Moreover, the cognitive distortion due to dichotomous thinking leads the individual to binge. The binge subsequently should trigger a perceived loss of control, prompting the individual to purge in the hope of counteracting the binge. However, Fairburn et al. assert that the cycle repeats itself, and thus consider the binge-purge cycle to be self-perpetuating. In contrast, Byrne and McLean's findings differed slightly from Fairburn et al.'s cognitive-behavioral model of bulimia nervosa in that the drive for thinness was the major cause of purging as a way of controlling weight. In turn, Byrne and McLean argued that this makes the individual vulnerable to binging, indicating that it is not a binge-purge cycle but rather a purge-binge cycle in that purging comes before bingeing. Similarly, Fairburn et al.'s cognitive-behavioral model of bulimia nervosa is not necessarily applicable to every individual and is certainly reductionist. Everyone differs from one another, and taking a behavior as complex as bulimia and applying the same single theory to everyone would certainly be invalid. In addition, the cognitive-behavioral model of bulimia nervosa is very culturally bound in that it may not necessarily be applicable to cultures outside of Western society. On evaluation, Fairburn et al.'s model, and more generally the cognitive explanation of bulimia nervosa, is more descriptive than explanatory, as it does not necessarily explain how bulimia arises.
Furthermore, it is difficult to ascertain cause and effect, because it may be that distorted eating leads to distorted cognition rather than vice versa. A considerable amount of literature has identified a correlation between sexual abuse and the development of bulimia nervosa. The reported incidence rate of unwanted sexual contact is higher among those with bulimia nervosa than among those with anorexia nervosa. When exploring the etiology of bulimia through a socio-cultural perspective, the "thin ideal internalization" is considered significantly responsible. Thin-ideal internalization is the extent to which individuals adapt to the societal ideals of attractiveness. Studies have shown that young women who read fashion magazines tend to have more bulimic symptoms than those women who do not. This further demonstrates the impact of media on the likelihood of developing the disorder. Individuals first accept and "buy into" the ideals, and then attempt to transform themselves in order to reflect the societal ideals of attractiveness. J. Kevin Thompson and Eric Stice claim that family, peers, and most evidently media reinforce the thin ideal, which may lead to an individual accepting and "buying into" the thin ideal. In turn, Thompson and Stice assert that if the thin ideal is accepted, one could begin to feel uncomfortable with their body shape or size since it may not necessarily reflect the thin ideal set out by society. Thus, feeling uncomfortable with one's body may result in body dissatisfaction and may foster a certain drive for thinness. Consequently, body dissatisfaction coupled with a drive for thinness is thought to promote dieting and negative affect, which could eventually lead to bulimic symptoms such as purging or bingeing. Binges lead to self-disgust, which causes purging to prevent weight gain. A study dedicated to investigating thin-ideal internalization as a factor in bulimia nervosa is Thompson and Stice's research. Their study aimed to investigate how and to what degree media affects thin-ideal internalization. Thompson and Stice used randomized experiments (more specifically, programs) dedicated to teaching young women how to be more critical when it comes to media, in order to reduce thin-ideal internalization. The results showed that by creating more awareness of the media's control of the societal ideal of attractiveness, thin-ideal internalization significantly dropped. In other words, fewer thin-ideal images portrayed by the media resulted in less thin-ideal internalization. Therefore, Thompson and Stice concluded that media greatly affected thin-ideal internalization. Papies showed that it is not the thin ideal itself, but rather the self-association with other persons of a certain weight, that decides how someone with bulimia nervosa feels. People who associate themselves with thin models adopt a positive attitude when they see thin models, and people who associate themselves with being overweight adopt a negative attitude when they see thin models. Moreover, people can be taught to associate themselves with thinner people. Diagnosis The onset of bulimia nervosa is often during adolescence, between 13 and 20 years of age, and many cases have previously experienced obesity, with many relapsing in adulthood into episodic bingeing and purging even after initially successful treatment and remission. A lifetime prevalence of 0.5 percent and 0.9 percent for adults and adolescents, respectively, is estimated among the United States population.
Bulimia nervosa may affect up to 1% of young women and, after 10 years of diagnosis, half will recover fully, a third will recover partially, and 10–20% will still have symptoms. Adolescents with bulimia nervosa are more likely to have self-imposed perfectionism and compulsivity issues in eating compared to their peers. This means that the high expectations and unrealistic goals that these individuals set for themselves are internally motivated rather than driven by social views or expectations. Criteria Bulimia nervosa can be difficult to detect, compared to anorexia nervosa, because bulimics tend to be of average or slightly above average weight. Many bulimics may also engage in significantly disordered eating and exercise patterns without meeting the full diagnostic criteria for bulimia nervosa. Recently, the Diagnostic and Statistical Manual of Mental Disorders was revised, which resulted in the loosening of criteria regarding the diagnoses of bulimia nervosa and anorexia nervosa. The diagnostic criteria utilized by the DSM-5 include repetitive episodes of binge eating (a discrete episode of overeating during which the individual feels out of control of consumption) compensated for by excessive or inappropriate measures taken to avoid gaining weight. The diagnosis also requires the episodes of compensatory behaviors and binge eating to happen a minimum of once a week for a consistent time period of 3 months. The diagnosis is made only when the behavior is not a part of the symptom complex of anorexia nervosa and when the behavior reflects an overemphasis on physical mass or appearance. Purging is often a common characteristic of a more severe case of bulimia nervosa. Treatment There are two main types of treatment given to those with bulimia nervosa: psychopharmacological and psychosocial treatments. Psychotherapy Cognitive behavioral therapy is the primary treatment for bulimia. Antidepressants of the selective serotonin reuptake inhibitor (SSRI) or tricyclic antidepressant classes may have a modest benefit. While outcomes with bulimia are typically better than in those with anorexia, the risk of death among those affected is higher than that of the general population. At 10 years after receiving treatment, about 50% of people are fully recovered. Cognitive behavioral therapy (CBT), which involves teaching a person to challenge automatic thoughts and engage in behavioral experiments (for example, in-session eating of "forbidden foods"), has a small amount of evidence supporting its use. Using CBT, people record how much food they eat and periods of vomiting with the purpose of identifying and avoiding emotional fluctuations that bring on episodes of bulimia on a regular basis. Barker (2003) states that research has found that 40–60% of people using cognitive behaviour therapy become symptom-free. He states that in order for the therapy to work, all parties must work together to discuss, record and develop coping strategies. Barker (2003) claims that by making people aware of their actions, they will think of alternatives. People undergoing CBT who exhibit early behavioral changes are most likely to achieve the best treatment outcomes in the long run.
Researchers have also reported some positive outcomes for interpersonal psychotherapy and dialectical behavior therapy. Maudsley family therapy, developed at the Maudsley Hospital in London for the treatment of anorexia, has shown promising results in bulimia. The use of CBT has been shown to be quite effective for treating bulimia nervosa (BN) in adults, but little research has been done on effective treatments of BN for adolescents. Although CBT is seen as more cost-efficient and helps individuals with BN in self-guided care, Family Based Treatment (FBT) might be more helpful to younger adolescents who need more support and guidance from their families. Adolescents are at the stage where their brains are still quite malleable and developing gradually. Therefore, young adolescents with BN are less likely to realize the detrimental consequences of becoming bulimic and have less motivation to change, which is why FBT can be useful in having families intervene and support the teens. Working with BN patients and their families in FBT can empower the families by having them involved in their adolescent's food choices and behaviors, taking more control of the situation in the beginning and gradually letting the adolescent become more autonomous when they have learned healthier eating habits. Medication Participating in some form of therapy can be the best treatment for bulimia. Antidepressants of the selective serotonin reuptake inhibitor (SSRI) class may have a modest benefit. This includes fluoxetine, also known as Prozac, which is FDA-approved for the treatment of bulimia; other antidepressants, such as sertraline, may also be effective against bulimia. Topiramate may also be useful but has greater side effects. Compared to placebo, the use of a single antidepressant has been shown to be effective. Combining medication with counseling can improve outcomes in some circumstances. Some positive outcomes of treatments can include: abstinence from binge eating, a decrease in obsessive behaviors to lose weight and in shape preoccupation, less severe psychiatric symptoms, a desire to counter the effects of binge eating, as well as an improvement in social functioning and reduced relapse rates. Alternative medicine Some researchers have also claimed positive outcomes in hypnotherapy. The first use of hypnotherapy in bulimic patients was in 1981. When it comes to hypnotherapy, bulimic patients are easier to hypnotize than patients with anorexia nervosa. In bulimic patients, hypnotherapy focuses on learning self-control when it comes to binging and vomiting, strengthening stimulus control techniques, enhancing one's ego, improving weight control, and helping overweight patients see their body differently (have a different image). Risk Factors Being female and having bulimia nervosa takes a toll on mental health. Women frequently reported an onset of anxiety at the same time as the onset of bulimia nervosa. Another concern with eating disorders is developing a coexisting substance use disorder. Epidemiology There is little data on the percentage of people with bulimia in general populations. Most studies conducted thus far have been on convenience samples from hospital patients, high school or university students. These have yielded a wide range of results: between 0.1% and 1.4% of males, and between 0.3% and 9.4% of females. Studies on time trends in the prevalence of bulimia nervosa have also yielded inconsistent results.
According to Gelder, Mayou and Geddes (2005), bulimia nervosa is prevalent in 1 to 2 percent of women aged 15–40 years. Bulimia nervosa occurs more frequently in developed countries and in cities, with one study finding that bulimia is five times more prevalent in cities than in rural areas. There is a perception that bulimia is most prevalent amongst girls from middle-class families; however, in a 2009 study, girls from families in the lowest income bracket studied were 153 percent more likely to be bulimic than girls from the highest income bracket. There are higher rates of eating disorders in groups involved in activities which idealize a slim physique, such as dance, gymnastics, modeling, cheerleading, running, acting, swimming, diving, rowing and figure skating. Bulimia is thought to be more prevalent among Caucasians; however, a more recent study showed that African-American teenage girls were 50 percent more likely than Caucasian girls to exhibit bulimic behavior, including both binging and purging. History Etymology The term bulimia comes from Greek βουλιμία boulīmia, "ravenous hunger", a compound of βοῦς bous, "ox" and λιμός, līmos, "hunger". Literally, the scientific name of the disorder, bulimia nervosa, translates to "nervous ravenous hunger". Before the 20th century Although diagnostic criteria for bulimia nervosa did not appear until 1979, evidence suggests that binging and purging were popular in certain ancient cultures. The first documented account of behavior resembling bulimia nervosa was recorded in Xenophon's Anabasis around 370 B.C., in which Greek soldiers purged themselves in the mountains of Asia Minor. It is unclear whether this purging was preceded by binging. In ancient Egypt, physicians recommended purging once a month for three days to preserve health. This practice stemmed from the belief that human diseases were caused by the food itself. In ancient Rome, elite society members would vomit to "make room" in their stomachs for more food at all-day banquets. Emperors Claudius and Vitellius both were gluttonous and obese, and they often resorted to habitual purging. Historical records also suggest that some saints who developed anorexia (as a result of a life of asceticism) may also have displayed bulimic behaviors. Saint Mary Magdalen de Pazzi (1566–1607) and Saint Veronica Giuliani (1660–1727) were both observed binge eating—giving in, as they believed, to the temptations of the devil. Saint Catherine of Siena (1347–1380) is known to have supplemented her strict abstinence from food by purging as reparation for her sins. Catherine died from starvation at age thirty-three. While the psychological disorder "bulimia nervosa" is relatively new, the word "bulimia", signifying overeating, has been present for centuries. The Babylonian Talmud referenced practices of "bulimia", yet scholars believe that this simply referred to overeating without the purging or the psychological implications of bulimia nervosa. In fact, a search for evidence of bulimia nervosa from the 17th to late 19th century revealed that only a quarter of the overeating cases examined actually involved vomiting after the binges. There was no evidence of deliberate vomiting or an attempt to control weight. 20th century Globally, bulimia was estimated to affect 3.6 million people in 2015. About 1% of young women have bulimia at a given point in time and about 2% to 3% of women have the condition at some point in their lives. The condition is less common in the developing world.
Bulimia is about nine times more likely to occur in women than men. Among women, rates are highest in young adults. Bulimia was named and first described by the British psychiatrist Gerald Russell in 1979. At the turn of the century, bulimia (overeating) was described as a clinical symptom, but rarely in the context of weight control. Purging, however, was seen in anorexic patients and attributed to gastric pain rather than another method of weight control. In 1930, admissions of anorexia nervosa patients to the Mayo Clinic from 1917 to 1929 were compiled. Fifty-five to sixty-five percent of these patients were reported to be voluntarily vomiting to relieve weight anxiety. Records show that purging for weight control continued throughout the mid-1900s. Several case studies from this era reveal patients with the modern description of bulimia nervosa. In 1939, Rahman and Richardson reported that out of their six anorexic patients, one had periods of overeating, and another practiced self-induced vomiting. Wulff, in 1932, treated "Patient D", who would have periods of intense cravings for food and overeat for weeks, which often resulted in frequent vomiting. Patient D, who grew up with a tyrannical father, was repulsed by her weight and would fast for a few days, rapidly losing weight. Ellen West, a patient described by Ludwig Binswanger in 1958, was teased by friends for being fat and excessively took thyroid pills to lose weight, later using laxatives and vomiting. She reportedly consumed dozens of oranges and several pounds of tomatoes each day, yet would skip meals. After being admitted to a psychiatric facility for depression, Ellen ate ravenously yet lost weight, presumably due to self-induced vomiting. However, while these patients may have met modern criteria for bulimia nervosa, they cannot technically be diagnosed with the disorder, as it had not yet appeared in the Diagnostic and Statistical Manual of Mental Disorders at the time of their treatment. An explanation for the increased instances of bulimic symptoms may be the 20th century's new ideals of thinness. The shame of being fat emerged in the 1940s when teasing remarks about weight became more common. The 1950s, however, truly introduced the trend of aspiration for thinness. In 1979, Gerald Russell first published a description of bulimia nervosa, in which he studied patients with a "morbid fear of becoming fat" who overate and purged afterward. He specified treatment options and indicated the seriousness of the disease, which can be accompanied by depression and suicide. In 1980, bulimia nervosa first appeared in the DSM-III. After its appearance in the DSM-III, there was a sudden rise in the documented incidence of bulimia nervosa. In the early 1980s, the incidence of the disorder rose to about 40 in every 100,000 people. This decreased to about 27 in every 100,000 people at the end of the 1980s/early 1990s. However, bulimia nervosa's prevalence was still much higher than anorexia nervosa's, which at the time occurred in about 14 people per 100,000. In 1991, Kendler et al. documented the cumulative risk for bulimia nervosa
for those born before 1950, from 1950 to 1959, and after 1959. The risk for those born after 1959 is much higher than for those in either of the other cohorts. See also Anorectic Behavior Observation Scale Eating recovery Evolutionary psychiatry Binge eating disorder List of people with bulimia nervosa References External links
Pruritus vulvae
Pruritus vulvae is itchiness of the vulva, which is the counterpart of pruritus scroti, and may have many different causes.: 56  Patch testing may be used to diagnose the cause. Causes This condition is a symptom of an underlying condition more often than it is a primary condition. Vulvar irritation can be caused by any moisture left on the skin. This moisture may be perspiration, urine, vaginal discharge or small amounts of stool. It may be caused by vaginal infections, vulvitis, HPV (human papilloma virus) infection, anal incontinence, Bowen's disease, or dietary irritants (caffeine, potatoes, chilli, capsicum, tomatoes, and peanuts). Treatment with antibiotics can lead to a yeast infection and irritation of the vulva. Some diseases increase the possibility of yeast infections, such as diabetes mellitus. Chronic inflammation of the vulva predisposes to the development of premalignant or malignant changes. References
Peristalsis
Peristalsis (PERR-ih-STAL-siss, US also -⁠STAWL-) is a radially symmetrical contraction and relaxation of muscles that propagates in a wave down a tube, in an anterograde direction. Peristalsis is the progression of coordinated contractions of involuntary circular muscles, which is preceded by a simultaneous contraction of the longitudinal muscle and relaxation of the circular muscle in the lining of the gut. In much of a digestive tract, such as the human gastrointestinal tract, smooth muscle tissue contracts in sequence to produce a peristaltic wave, which propels a ball of food (called a bolus before being transformed into chyme in the stomach) along the tract. The peristaltic movement comprises relaxation of circular smooth muscles, then their contraction behind the chewed material to keep it from moving backward, then longitudinal contraction to push it forward. Earthworms use a similar mechanism to drive their locomotion, and some modern machinery imitates this design. The word comes from New Latin and is derived from the Greek peristellein, "to wrap around," from peri-, "around" + stellein, "draw in, bring together; set in order". Human physiology Peristalsis is generally directed caudally, that is, towards the anus. This sense of direction might be attributable to the polarisation of the myenteric plexus. Because of the reliance of the peristaltic reflex on the myenteric plexus, it is also referred to as the myenteric reflex. Mechanism of the peristaltic reflex The food bolus causes a stretch of the gut smooth muscle that causes serotonin to be secreted to sensory neurons, which then become activated. These sensory neurons, in turn, activate neurons of the myenteric plexus, which then proceed to split into two cholinergic pathways: a retrograde and an anterograde one. Activated neurons of the retrograde pathway release substance P and acetylcholine to contract the smooth muscle behind the bolus. The activated neurons of the anterograde pathway instead release nitric oxide and vasoactive intestinal polypeptide to relax the smooth muscle caudal to the bolus. This allows the food bolus to be pushed forward effectively along the digestive tract. Esophagus After food is chewed into a bolus, it is swallowed and moved through the esophagus. Smooth muscles contract behind the bolus to prevent it from being squeezed back into the mouth. Then rhythmic, unidirectional waves of contractions work to rapidly force the food into the stomach. The migrating motor complex (MMC) helps trigger peristaltic waves. This process works in one direction only, and its sole esophageal function is to move food from the mouth into the stomach (the MMC also functions to clear out remaining food in the stomach to the small bowel and remaining particles in the small bowel into the colon). In the esophagus, two types of peristalsis occur: First, there is a primary peristaltic wave, which occurs when the bolus enters the esophagus during swallowing. The primary peristaltic wave forces the bolus down the esophagus and into the stomach in a wave lasting about 8–9 seconds. The wave travels down to the stomach even if the bolus of food descends at a greater rate than the wave itself, and continues even if for some reason the bolus gets stuck further up the esophagus.
If the bolus gets stuck or moves slower than the primary peristaltic wave (as can happen when it is poorly lubricated), then stretch receptors in the esophageal lining are stimulated and a local reflex response causes a secondary peristaltic wave around the bolus, forcing it further down the esophagus; these secondary waves continue indefinitely until the bolus enters the stomach. The process of peristalsis is controlled by the medulla oblongata. Esophageal peristalsis is typically assessed by performing an esophageal motility study. A third type of peristalsis, tertiary peristalsis, is dysfunctional and involves irregular, diffuse, simultaneous contractions. These contractions are suspect in esophageal dysmotility and present on a barium swallow as a "corkscrew esophagus". During vomiting, the propulsion of food up the esophagus and out the mouth comes from the contraction of the abdominal muscles; peristalsis does not reverse in the esophagus. Stomach When a peristaltic wave reaches the end of the esophagus, the cardiac sphincter (gastroesophageal sphincter) opens, allowing the passage of the bolus into the stomach. The gastroesophageal sphincter normally remains closed and does not allow the stomach's food contents to move back. The churning movements of the stomach's thick muscular wall blend the food thoroughly with the acidic gastric juice, producing a mixture called chyme. The muscularis layer is thickest in the stomach, and maximum peristalsis occurs here. After short intervals, the pyloric sphincter keeps opening and closing so that the chyme is fed into the intestine in installments. Small intestine Once processed and digested by the stomach, the semifluid chyme is passed through the pyloric sphincter into the small intestine. Once past the stomach, a typical peristaltic wave lasts only a few seconds, traveling at only a few centimeters per second. Its primary purpose is to mix the chyme in the intestine rather than to move it forward in the intestine. Through this process of mixing and continued digestion and absorption of nutrients, the chyme gradually works its way through the small intestine to the large intestine. In contrast to peristalsis, segmentation contractions result in that churning and mixing without pushing materials further down the digestive tract. Large intestine Although the large intestine has peristalsis of the type that the small intestine uses, it is not the primary means of propulsion. Instead, general contractions called mass action contractions occur one to three times per day in the large intestine, propelling the chyme (now feces) toward the rectum. Mass movements often tend to be triggered by meals, as the presence of chyme in the stomach and duodenum prompts them (gastrocolic reflex). Minimal peristalsis is found in the rectal part of the large intestine, as a result of its muscularis layer being the thinnest. Lymph The human lymphatic system has no central pump. Instead, lymph circulates through peristalsis in the lymph capillaries, as well as valves in the capillaries, compression during contraction of adjacent skeletal muscle, and arterial pulsation. Sperm During ejaculation, the smooth muscle in the walls of the vas deferens contracts reflexively in peristalsis, propelling sperm from the testicles to the urethra. Earthworms The earthworm is a limbless annelid worm with a hydrostatic skeleton that moves by peristalsis. Its hydrostatic skeleton consists of a fluid-filled body cavity surrounded by an extensible body wall.
The worm moves by radially constricting the anterior portion of its body, increasing length via hydrostatic pressure. This constricted region propagates posteriorly along the worms body. As a result, each segment is extended forward, then relaxes and re-contacts the substrate, with hair-like setae preventing backward slipping. Various other invertebrates, such as caterpillars and millipedes, also move by peristalsis. Machinery A peristaltic pump is a positive-displacement pump in which a motor pinches advancing portions of a flexible tube to propel a fluid within the tube. The pump isolates the fluid from the machinery, which is important if the fluid is abrasive or must remain sterile. Robots have been designed that use peristalsis to achieve locomotion, as the earthworm uses it. Related terms Aperistalsis refers to a lack of propulsion. It can result from achalasia of the smooth muscle involved. Basal electrical rhythm is a slow wave of electrical activity that can initiate a contraction. Catastalsis is a related intestinal muscle process. Ileus is a disruption of the normal propulsive ability of the gastrointestinal tract caused by the failure of peristalsis. Retroperistalsis, the reverse of peristalsis References External links Interactive 3D display of swallow waves at menne-biomed.de Peristalsis at the US National Library of Medicine Medical Subject Headings (MeSH) Nosek, Thomas M. "Section 6/6ch3/s6ch3_9". Essentials of Human Physiology. Archived from the original on 2016-03-24. Overview at colostate.edu
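The propulsion principle running through this article, and through the pumps and worm-like robots mentioned under Machinery, is the same in each case: contract the segment behind the contents and relax the segment ahead, repeated as a travelling wave. The following is only a deliberately simplified, one-dimensional toy sketch of that idea, assuming a hypothetical discrete chain of segments swept by a single contraction wave; it is not a physiological model and does not represent any particular pump or robot design.

```python
# Toy 1-D illustration of peristaltic propulsion: a contraction wave sweeps
# along a chain of segments, and the "bolus" is nudged one segment ahead of
# the wave each time the wave reaches it. Purely schematic; all parameters
# are illustrative assumptions.

def peristaltic_step(bolus_position, wave_position, n_segments):
    """Advance the bolus when the contraction wave reaches its segment."""
    if wave_position == bolus_position and bolus_position < n_segments - 1:
        return bolus_position + 1  # contraction behind pushes contents forward
    return bolus_position

def simulate(n_segments=10, bolus_start=2):
    bolus = bolus_start
    for wave in range(n_segments):           # wave travels from segment 0 to the end
        bolus = peristaltic_step(bolus, wave, n_segments)
        row = ["*" if i == wave else "." for i in range(n_segments)]
        row[bolus] = "O"                      # bolus marker (may overwrite the wave marker)
        print("".join(row))

if __name__ == "__main__":
    simulate()  # the 'O' is carried toward the last segment as the '*' wave passes it
```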
Le Fort fracture of skull
A Le Fort fracture of the skull is a classic transfacial fracture of the midface, involving the maxillary bone and surrounding structures in either a horizontal, pyramidal or transverse direction. The hallmark of Le Fort fractures is traumatic pterygomaxillary separation, which signifies fractures between the pterygoid plates, horseshoe-shaped bony protuberances which extend from the inferior margin of the maxilla, and the maxillary sinuses. Continuity of this structure is a keystone for stability of the midface, involvement of which impacts surgical management of trauma victims, as it requires fixation to a horizontal bar of the frontal bone. The pterygoid plates lie posterior to the upper dental row, or alveolar ridge, when viewing the face from an anterior view. The fractures are named after the French surgeon René Le Fort (1869–1951), who discovered the fracture patterns by examining crush injuries in cadavers. Signs and symptoms Le Fort I – Slight swelling of the upper lip, ecchymosis in the buccal sulcus beneath each zygomatic arch, malocclusion, and mobility of the teeth. Impacted types of fractures may be almost immobile, and it is only by grasping the maxillary teeth and applying a little firm pressure that a characteristic grate can be felt which is diagnostic of the fracture. Percussion of the upper teeth results in a cracked pot sound. Guérin's sign is present, characterised by ecchymosis in the region of the greater palatine vessels. Le Fort II and Le Fort III (common) – Gross edema of soft tissue over the middle third of the face, bilateral circumorbital ecchymosis, bilateral subconjunctival hemorrhage, epistaxis, CSF rhinorrhoea, dish face deformity, diplopia, enophthalmos, and cracked pot sound. Le Fort II – Step deformity at the infraorbital margin, mobile midface, anesthesia or paresthesia of the cheek. Le Fort III – Tenderness and separation at the frontozygomatic suture, lengthening of the face, depression of ocular levels (enophthalmos), hooding of the eyes, and tilting of the occlusal plane, an imaginary curved plane between the edges of the incisors and the tips of the posterior teeth. As a result, there is gagging on the side of the injury. Diagnosis Diagnosis is suspected by physical exam and history, in which, classically, the hard and soft palate of the midface are mobile with respect to the remainder of the facial structures. This finding can be inconsistent due to the midfacial bleeding and swelling that typically accompany such injuries, and so confirmation is usually needed by radiograph or CT. Classification There are three types of Le Fort fractures. As the classification increases, the anatomic level of the maxillary fracture ascends from inferior to superior with respect to the maxilla: Le Fort I fracture (horizontal), otherwise known as a floating palate, may result from a force of injury directed low on the maxillary alveolar rim, or upper dental row, in a downward direction. The essential component of these fractures, in addition to pterygoid plate involvement, is involvement of the lateral bony margin of the nasal opening. They also involve the medial and lateral buttresses, or walls, of the maxillary sinus, traveling through the face just above the alveolar ridge of the upper dental row. At the midline, the inferior nasal septum is involved. Historically, it has also been referred to as a Guérin fracture, although this name is less commonly used in practice. Le Fort II fracture (pyramidal) may result from a blow to the lower or mid maxillary area.
In addition to pterygoid plate disruption, their distinguishing component is involvement of inferior orbital rim. When viewed from the front, the fracture is classically shaped like a pyramid. It extends from the nasal bridge at or below the nasofrontal suture through the superior medial wall of the maxilla, inferolaterally through the lacrimal bones which contain the tear ducts, and inferior orbital floor through or near the infraorbital foramen. Le Fort III fracture (transverse), otherwise known as craniofacial dissociation, may follow impact to the nasal bridge or upper maxilla. The salient feature of these fractures, beyond pterygoid plate involvement, is that they invariably involve the zygomatic arch, or cheek bone. These fractures begin at the nasofrontal and frontomaxillary sutures and extend posteriorly along the medial wall of the orbit, through the nasolacrimal groove and ethmoid air cells. The sphenoid is thickened posteriorly, limiting fracture extension into the optic canal. Instead, the fracture continues along the orbital floor and infraorbital fissure, continuing through the lateral orbital wall to the zygomaticofrontal junction and zygomatic arch. Within the nose, the fracture extends through the base of the perpendicular plate of the ethmoid air cells, the vomer, which are both part of the nasal septum. As with the other fractures, it also involves the junction of the pterygoids with the maxillary sinuses. CSF rhinorrhea, or leakage of the nutrient laden fluid that bathes the brain, is more commonly seen with these injuries due to ethmoid air cell disruption, as the air cells are located immediately beneath the skull base. Treatment Treatment is surgical, and usually is able to be performed once life-threatening injuries are stabilized, to allow the patient to survive the general anesthesia needed for maxillofacial surgery. First a frontal bar is used, which refers to the thickened frontal bone above the frontonasal sutures and the superior orbital rim. The facial bones are suspended from the bar by open reduction and internal fixation with titanium plates and screws, and each fracture is fixed, first at its superior attachment to the bar, then at the inferior attachment to the displaced bone. For stability, the zygomaticofrontal suture is usually replaced first, and the palate and alveolar ridge are usually fixed last. Finally, after the horizontal and vertical maxillary buttresses are stabilized, the orbital fractures are fixed last. See also Oral and maxillofacial surgery References External links eMedicine - Facial Trauma, Maxillary and Le Fort Fractures - LeFort Fractures
Classified
Classified may refer to: General Classified information, material that a government body deems to be sensitive Classified advertising or "classifieds" Music Classified (rapper) (born 1977), Canadian rapper The Classified, a 1980s American rock band featuring Steve Vai Classified Records, an American record label Albums Classified (Bond album), 2004 Classified (Classified album), 2013 Classified (Sweetbox album), 2001 Classified, by James Booker, 1982 Songs "Classified", by C. W. McCall from Wolf Creek Pass, 1975 "Classified", by the Orb from Metallic Spheres, 2010 "Classified", by Pete Townshend from the compilation Glastonbury Fayre, 1972 "Classifieds", by the Academy Is... from Almost Here, 2005 Other media Classified (1925 film), an American silent film Classified: The Edward Snowden Story, a 2014 Canadian film Classified: The Sentinel Crisis, a 2005 video game for the Xbox See also Classification (disambiguation) Classifier (disambiguation)
Footedness
Footedness is the natural preference of one's left or right foot for various purposes. It is the foot equivalent of handedness. While purposes vary, such as which foot is used to apply the greatest force in a kick as opposed to a stomp, footedness is most commonly associated with the preference of a particular foot in the leading position while engaging in foot- or kicking-related sports, such as association football and kickboxing. A person may thus be left-footed, right-footed or ambipedal (able to use both feet equally well). Ball games In association football, the ball is predominantly struck by the foot. Footedness may refer to the foot a player uses to kick with the greatest force and skill. Most people are right-footed, kicking with the right leg. Capable left-footed footballers are rare and therefore quite sought after. Just as rare are "two-footed" players, who are equally capable with both feet. Such players make up only one sixth of players in the top professional leagues in Europe. Two-footedness can be learnt, a notable case being the England international Tom Finney, but can only be properly developed in the early years. In Australian rules football, several players are equally adept at using both feet to kick the ball, such as Sam Mitchell and the retired Charles Bushnell. In basketball, a sport composed almost solely of right-handed players, it is common for most athletes to have a dominant left leg, which they use when jumping to complete a right-handed layup. Hence, left-handed basketball players tend to use their right leg more as they finish a left-handed layup (although both right- and left-handed players are usually able to use both hands when finishing near the basket). In the National Football League, a disproportionate, and increasing, number of punters punt with their left leg; the punter is the player who receives the snapped ball and kicks it away downfield. At the end of the 2017 NFL season, 10 of the league's 32 punters were left-footed, up from four out of 31 (not counting dual-footed punter Chris Hanson, who left the league in 2009) at the beginning of the millennium; in contrast, placekickers were almost exclusively right-footed. The only apparent advantage to punting with the left foot is that, because it is less common, return specialists are not as experienced at handling a ball spinning in the opposite direction. Boardsports In boardsports (e.g., surfing, skateboarding and snowboarding), one stands erect on a single, lightweight board that slides along the ground or on water. The need for balance causes one to position the body perpendicular to the direction of motion, with one foot leading the other. As with handedness, when this task is repetitively performed, one tends to naturally choose a particular foot for the leading position. Goofy stance vs. regular stance Boardsport riders are "footed" in one of two stances, generally called "regular" and "goofy". Riders generally settle quickly on a preferred stance, which then becomes permanent. A "regular" stance indicates the left foot leading on the board with the right foot pushing, while a "goofy" stance leads with the right foot on the board, pushing with the left. Professionals seem to be evenly distributed between the stances. Practice can yield a high level of ambidexterity between the two stances, such that even seasoned participants of a boardsport have difficulty discerning the footedness of an unfamiliar rider in action.
To increase the difficulty, variety, and aesthetic value of tricks, riders can ride "switch stance" (abbreviated to "switch"). For example, a goofy-footed skateboarder normally performs an ollie with the right foot forward, but a "switch ollie" would have the rider standing with the left foot at the front of the board. In sports where switch riding is common and expected, like street skateboarding, riders have the goal of appearing natural at, and performing the same tricks in, both regular and goofy stances. Some sports like kitesurfing and windsurfing generally require the rider to be able to switch stance depending on the wind or travel direction rather than rider preference. Each time direction is changed, the stance changes. Snowboarders who ride switch may adopt a "duck stance", where the feet are mounted turned out, or pointed away from the mid-line of the body, typically at a roughly 15-degree angle. In this position, the rider will have the leading foot facing forward in either regular or switch stance. Switch, fakie and nollie When a rider rolls backwards, this is called "riding fakie". A "fakie" trick is performed while riding backwards but taking off on the front foot. Although it is the same foot that jumps in ones traditional stance, it is normally the back foot. A rider can also land in the fakie position. While there are some parallels between switch stance and fakie, riding switch implies opening the shoulders more to face the direction headed, though not as much as in traditional stance, while fakie stance implies a slightly more backwards facing, closed shoulder posture. "Nollie" (nose ollie) is when the front foot takes off when one is riding in their normal stance, the same foot that jumps when doing tricks switch. In nollie position, the body and shoulders are facing forward as much as when riding in normal stance. Generally fakie and nollie are done off the nose, whereas normal and switch are done off the tail. In skateboarding, most tricks that are performed riding backwards — with respect to the riders preferred stance — are exclusively categorized as "switch" (in a switch stance) or as fakie, with the general rule that tricks off the tail are almost always described as fakie, and those off the nose are nollie. For example, a jump using the tail rolling backwards is a "fakie ollie" (not a "switch nollie"), and a jump off the nose is a "nollie" (not a "fakie nollie"). Mongo foot Mongo foot refers to the use of the riders front foot for pushing. Normally, a skateboarder will feel more comfortable using their back foot to push, while their front foot remains on the board. In the minority case of mongo-footed skateboarders, the opposite is true. Some skateboarders who do not push mongo in their regular stance may still push mongo when riding in switch stance, rather than push with their weaker back foot. Some well-known skaters who change between mongo and normal when pushing switch include Jacob Vance, Stevie Williams, and Eric Koston. Although its origins remain uncertain, it is widely believed that the term derives from the pejorative use of "mongoloid". BMX In BMX, there is a de facto relationship between footedness and preferences of grinding position and of mid-air turning direction. The terms "regular" and "goofy" do not indicate a foot preference as in boardsports, but rather whether the riders footedness has the usual relationship with their grinding and mid-air turning preferences. 
For example, consider the following classes of riders: right-footed riders who prefer turning counter-clockwise in the air and grinding on their right, and left-footed riders who prefer turning clockwise in the air and grinding on their left. Both classes are of equal size and would be considered "regular". "Goofy" would describe riders whose trick preferences do not match their footedness: a rider who prefers to grind on the opposite side from most riders is considered a "goofy grinder"; one who prefers to turn in the opposite direction in mid-air from most riders is considered a "goofy spinner". Few riders have either goofy trait, but some riders may have both. See also Handedness Laterality Orthodox stance Southpaw stance Surefootedness References
Lupus
Lupus, technically known as systemic lupus erythematosus (SLE), is an autoimmune disease in which the bodys immune system mistakenly attacks healthy tissue in many parts of the body. Symptoms vary among people and may be mild to severe. Common symptoms include painful and swollen joints, fever, chest pain, hair loss, mouth ulcers, swollen lymph nodes, feeling tired, and a red rash which is most commonly on the face. Often there are periods of illness, called flares, and periods of remission during which there are few symptoms.The cause of SLE is not clear. It is thought to involve a mixture of genetics combined with environmental factors. Among identical twins, if one is affected there is a 24% chance the other one will also develop the disease. Female sex hormones, sunlight, smoking, vitamin D deficiency, and certain infections are also believed to increase a persons risk. The mechanism involves an immune response by autoantibodies against a persons own tissues. These are most commonly anti-nuclear antibodies and they result in inflammation. Diagnosis can be difficult and is based on a combination of symptoms and laboratory tests. There are a number of other kinds of lupus erythematosus including discoid lupus erythematosus, neonatal lupus, and subacute cutaneous lupus erythematosus.There is no cure for SLE, but there are experimental and symptomatic treatments. Treatments may include NSAIDs, corticosteroids, immunosuppressants, hydroxychloroquine, and methotrexate. Although corticosteroids are rapidly effective, long-term use results in side effects. Alternative medicine has not been shown to affect the disease. Life expectancy is lower among people with SLE, but with modern treatment, 80-90% of patients can have a normal life span. SLE significantly increases the risk of cardiovascular disease with this being the most common cause of death. While women with lupus have higher risk pregnancies, most are successful.Rate of SLE varies between countries from 20 to 70 per 100,000. Women of childbearing age are affected about nine times more often than men. While it most commonly begins between the ages of 15 and 45, a wide range of ages can be affected. Those of African, Caribbean, and Chinese descent are at higher risk than those of European descent. Rates of disease in the developing world are unclear. Lupus is Latin for "wolf": the disease was so-named in the 13th century as the rash was thought to appear like a wolfs bite. Signs and symptoms SLE is one of several diseases known as "the great imitator" because it often mimics or is mistaken for other illnesses. SLE is a classical item in differential diagnosis, because SLE symptoms vary widely and come and go unpredictably. Diagnosis can thus be elusive, with some people having unexplained symptoms of SLE for years.Common initial and chronic complaints include fever, malaise, joint pains, muscle pains, and fatigue. Because these symptoms are so often seen in association with other diseases, these signs and symptoms are not part of the diagnostic criteria for SLE. When occurring in conjunction with other signs and symptoms, however, they are considered suggestive.While SLE can occur in both males and females, it is found far more often in women, and the symptoms associated with each sex are different. Females tend to have a greater number of relapses, a low white blood cell count, more arthritis, Raynauds phenomenon, and psychiatric symptoms. 
Males tend to have more seizures, kidney disease, serositis (inflammation of tissues lining the lungs and heart), skin problems, and peripheral neuropathy. Skin As many as 70% of people with lupus have some skin symptoms. The three main categories of lesions are chronic cutaneous (discoid) lupus, subacute cutaneous lupus, and acute cutaneous lupus. People with discoid lupus may exhibit thick, red scaly patches on the skin. Similarly, subacute cutaneous lupus manifests as red, scaly patches of skin but with distinct edges. Acute cutaneous lupus manifests as a rash. Some have the classic malar rash (commonly known as the butterfly rash) associated with the disease. This rash occurs in 30 to 60% of people with SLE.Hair loss, mouth and nasal ulcers, and lesions on the skin are other possible manifestations. Muscles and bones The most commonly sought medical attention is for joint pain, with the small joints of the hand and wrist usually affected, although all joints are at risk. More than 90 percent of those affected will experience joint or muscle pain at some time during the course of their illness. Unlike rheumatoid arthritis, lupus arthritis is less disabling and usually does not cause severe destruction of the joints. Fewer than ten percent of people with lupus arthritis will develop deformities of the hands and feet. People with SLE are at particular risk of developing osteoarticular tuberculosis.A possible association between rheumatoid arthritis and SLE has been suggested, and SLE may be associated with an increased risk of bone fractures in relatively young women. Blood Anemia is common in children with SLE and develops in about 50% of cases. Low platelet count and white blood cell count may be due to the disease or a side effect of pharmacological treatment. People with SLE may have an association with antiphospholipid antibody syndrome (a thrombotic disorder), wherein autoantibodies to phospholipids are present in their serum. Abnormalities associated with antiphospholipid antibody syndrome include a paradoxical prolonged partial thromboplastin time (which usually occurs in hemorrhagic disorders) and a positive test for antiphospholipid antibodies; the combination of such findings have earned the term "lupus anticoagulant-positive". Another autoantibody finding in SLE is the anti-cardiolipin antibody, which can cause a false positive test for syphilis. Heart SLE may cause pericarditis—inflammation of the outer lining surrounding the heart, myocarditis—inflammation of the heart muscle, or endocarditis—inflammation of the inner lining of the heart. The endocarditis of SLE is non-infectious, and is also called Libman–Sacks endocarditis. It involves either the mitral valve or the tricuspid valve. Atherosclerosis also occurs more often and advances more rapidly than in the general population.Steroids are sometimes prescribed as an anti-inflammatory treatment for lupus; however, they can increase ones risk for heart disease, high cholesterol, and atherosclerosis. Lungs SLE can cause pleuritic pain as well as inflammation of the pleurae known as pleurisy, which can rarely give rise to shrinking lung syndrome involving a reduced lung volume. Other associated lung conditions include pneumonitis, chronic diffuse interstitial lung disease, pulmonary hypertension, pulmonary emboli, and pulmonary hemorrhage. Kidneys Painless passage of blood or protein in the urine may often be the only presenting sign of kidney involvement. 
Acute or chronic renal impairment may develop with lupus nephritis, leading to acute or end-stage kidney failure. Because of early recognition and management of SLE with immunosuppressive drugs or corticosteroids, end-stage renal failure occurs in less than 5% of cases; except in the black population, where the risk is many times higher. The histological hallmark of SLE is membranous glomerulonephritis with "wire loop" abnormalities. This finding is due to immune complex deposition along the glomerular basement membrane, leading to a typical granular appearance in immunofluorescence testing. Neuropsychiatric Neuropsychiatric syndromes can result when SLE affects the central or peripheral nervous system. The American College of Rheumatology defines 19 neuropsychiatric syndromes in systemic lupus erythematosus. The diagnosis of neuropsychiatric syndromes concurrent with SLE (now termed as NPSLE), is one of the most difficult challenges in medicine, because it can involve so many different patterns of symptoms, some of which may be mistaken for signs of infectious disease or stroke.A common neurological disorder people with SLE have is headache, although the existence of a specific lupus headache and the optimal approach to headache in SLE cases remains controversial. Other common neuropsychiatric manifestations of SLE include cognitive dysfunction, mood disorder, cerebrovascular disease, seizures, polyneuropathy, anxiety disorder, psychosis, depression, and in some extreme cases, personality disorders. Steroid psychosis can also occur as a result of treating the disease. It can rarely present with intracranial hypertension syndrome, characterized by an elevated intracranial pressure, papilledema, and headache with occasional abducens nerve paresis, absence of a space-occupying lesion or ventricular enlargement, and normal cerebrospinal fluid chemical and hematological constituents.More rare manifestations are acute confusional state, Guillain–Barré syndrome, aseptic meningitis, autonomic disorder, demyelinating syndrome, mononeuropathy (which might manifest as mononeuritis multiplex), movement disorder (more specifically, chorea), myasthenia gravis, myelopathy, cranial neuropathy and plexopathy.Neurological disorders contribute to a significant percentage of morbidity and mortality in people with lupus. As a result, the neural side of lupus is being studied in hopes of reducing morbidity and mortality rates. One aspect of this disease is severe damage to the epithelial cells of the blood–brain barrier. In certain regions, depression affects up to 60% of women with SLE. Eyes Eye involvement is seen in up to one-third of people. The most common diseases are dry eye syndrome and secondary Sjögrens syndrome, but episcleritis, scleritis, retinopathy (more often affecting both eyes than one), ischemic optic neuropathy, retinal detachment, and secondary angle-closure glaucoma may occur. In addition, the medications used to treat SLE can cause eye disease: long-term glucocorticoid use can cause cataracts and secondary open-angle glaucoma, and long-term hydroxychloroquine treatment can cause vortex keratopathy and maculopathy. Reproductive While most pregnancies have positive outcomes, there is a greater risk of adverse events occurring during pregnancy. SLE causes an increased rate of fetal death in utero and spontaneous abortion (miscarriage). The overall live-birth rate in people with SLE has been estimated to be 72%. 
Pregnancy outcome appears to be worse in people with SLE whose disease flares up during pregnancy.Neonatal lupus is the occurrence of SLE symptoms in an infant born from a mother with SLE, most commonly presenting with a rash resembling discoid lupus erythematosus, and sometimes with systemic abnormalities such as heart block or enlargement of the liver and spleen. Neonatal lupus is usually benign and self-limited.Medications for treatment of SLE can carry severe risks for female and male reproduction. Cyclophosphamide (also known as Cytoxan), can lead to infertility by causing premature ovarian insufficiency (POI), the loss of normal function of ones ovaries prior to age forty. Methotrexate can cause termination or deformity in fetuses and is a common abortifacient, and for men taking a high dose and planning to father, a discontinuation period of 6 months is recommended before insemination. Systemic Fatigue in SLE is probably multifactorial and has been related to not only disease activity or complications such as anemia or hypothyroidism, but also to pain, depression, poor sleep quality, poor physical fitness and lack of social support. Causes SLE is presumably caused by a genetic susceptibility coupled with an environmental trigger which results in defects in the immune system. One of the factors associated with SLE is vitamin D deficiency. Genetics SLE does run in families, but no single causal gene has been identified. Instead, multiple genes appear to influence a persons chance of developing lupus when triggered by environmental factors. HLA class I, class II, and class III genes are associated with SLE, but only classes I and II contribute independently to increased risk of SLE. Other genes which contain risk variants for SLE are IRF5, PTPN22, STAT4, CDKN1A, ITGAM, BLK, TNFSF4 and BANK1.Some of the susceptibility genes may be population specific. Genetic studies of the rates of disease in families supports the genetic basis of this disease with a heritability of >66%. Identical (monozygotic) twins were found to share susceptibility to the disease at >35% rate compared to fraternal (dizygotic) twins and other full siblings who only showed a 2–5% concordance in shared inheritance.Since SLE is associated with many genetic regions, it is likely an oligogenic trait, meaning that there are several genes that control susceptibility to the disease.SLE is regarded as a prototype disease due to the significant overlap in its symptoms with other autoimmune diseases. Drug reactions Drug-induced lupus erythematosus is a (generally) reversible condition that usually occurs in people being treated for a long-term illness. Drug-induced lupus mimics SLE. However, symptoms of drug-induced lupus generally disappear once the medication that triggered the episode is stopped. More than 38 medications can cause this condition, the most common of which are procainamide, isoniazid, hydralazine, quinidine, and phenytoin. Non-systemic forms of lupus Discoid (cutaneous) lupus is limited to skin symptoms and is diagnosed by biopsy of rash on the face, neck, scalp or arms. Approximately 5% of people with DLE progress to SLE. Pathophysiology SLE is triggered by environmental factors that are unknown. In SLE, the bodys immune system produces antibodies against self-protein, particularly against proteins in the cell nucleus. These antibody attacks are the immediate cause of SLE.SLE is a chronic inflammatory disease believed to be a type III hypersensitivity response with potential type II involvement. 
Reticulate and stellate acral pigmentation should be considered a possible manifestation of SLE and high titers of anti-cardiolipin antibodies, or a consequence of therapy.People with SLE have intense polyclonal B-cell activation, with a population shift towards immature B cells. Memory B cells with increased CD27+/IgD—are less susceptible to immunosuppression. CD27-/IgD- memory B cells are associated with increased disease activity and renal lupus. T cells, which regulate B-cell responses and infiltrate target tissues, have defects in signaling, adhesion, co-stimulation, gene transcription, and alternative splicing. The cytokines B-lymphocyte stimulator (BLyS), also known as B-cell activating factor (BAFF), interleukin 6, interleukin 17, interleukin 18, type I interferons, and tumor necrosis factor α (TNFα) are involved in the inflammatory process and are potential therapeutic targets.SLE is associated with low C3 levels in the complement system. Cell death signaling Apoptosis is increased in monocytes and keratinocytes Expression of Fas by B cells and T cells is increased There are correlations between the apoptotic rates of lymphocytes and disease activity. Necrosis is increased in T lymphocytes.Tingible body macrophages (TBMs) – large phagocytic cells in the germinal centers of secondary lymph nodes – express CD68 protein. These cells normally engulf B cells that have undergone apoptosis after somatic hypermutation. In some people with SLE, significantly fewer TBMs can be found, and these cells rarely contain material from apoptotic B cells. Also, uningested apoptotic nuclei can be found outside of TBMs. This material may present a threat to the tolerization of B cells and T cells. Dendritic cells in the germinal center may endocytose such antigenic material and present it to T cells, activating them. Also, apoptotic chromatin and nuclei may attach to the surfaces of follicular dendritic cells and make this material available for activating other B cells that may have randomly acquired self-protein specificity through somatic hypermutation. Necrosis, a pro-inflammatory form of cell death, is increased in T lymphocytes, due to mitochondrial dysfunction, oxidative stress, and depletion of ATP. Clearance deficiency Impaired clearance of dying cells is a potential pathway for the development of this systemic autoimmune disease. This includes deficient phagocytic activity, impaired lysosomal degradation, and scant serum components in addition to increased apoptosis. SLE is associated with defects in apoptotic clearance, and the damaging effects caused by apoptotic debris. Early apoptotic cells express “eat-me” signals, of cell-surface proteins such as phosphatidylserine, that prompt immune cells to engulf them. Apoptotic cells also express find-me signals to attract macrophages and dendritic cells. When apoptotic material is not removed correctly by phagocytes, they are captured instead by antigen-presenting cells, which leads to the development of antinuclear antibodies.Monocytes isolated from whole blood of people with SLE show reduced expression of CD44 surface molecules involved in the uptake of apoptotic cells. Most of the monocytes and tingible body macrophages (TBMs), which are found in the germinal centres of lymph nodes, even show a definitely different morphology; they are smaller or scarce and die earlier. Serum components like complement factors, CRP, and some glycoproteins are, furthermore, decisively important for an efficiently operating phagocytosis. 
With SLE, these components are often missing, diminished, or inefficient. Macrophages during SLE fail to mature their lysosomes and as a result have impaired degradation of internalized apoptotic debris, which results in chronic activation of Toll-like receptors and permeabilization of the phagolysosomal membrane, allowing activation of cytosolic sensors. In addition, intact apoptotic debris recycles back to the cell membrane and accumulate on the surface of the cell.Recent research has found an association between certain people with lupus (especially those with lupus nephritis) and an impairment in degrading neutrophil extracellular traps (NETs). These were due to DNAse1 inhibiting factors, or NET protecting factors in peoples serum, rather than abnormalities in the DNAse1 itself. DNAse1 mutations in lupus have so far only been found in some Japanese cohorts.The clearance of early apoptotic cells is an important function in multicellular organisms. It leads to a progression of the apoptosis process and finally to secondary necrosis of the cells if this ability is disturbed. Necrotic cells release nuclear fragments as potential autoantigens, as well as internal danger signals, inducing maturation of dendritic cells (DCs) since they have lost their membranes integrity. Increased appearance of apoptotic cells also stimulates inefficient clearance. That leads to the maturation of DCs and also to the presentation of intracellular antigens of late apoptotic or secondary necrotic cells, via MHC molecules.Autoimmunity possibly results from the extended exposure to nuclear and intracellular autoantigens derived from late apoptotic and secondary necrotic cells. B and T cell tolerance for apoptotic cells is abrogated, and the lymphocytes get activated by these autoantigens; inflammation and the production of autoantibodies by plasma cells is initiated. A clearance deficiency in the skin for apoptotic cells has also been observed in people with cutaneous lupus erythematosus (CLE). Germinal centers In healthy conditions, apoptotic lymphocytes are removed in germinal centers (GC) by specialized phagocytes, the tingible body macrophages (TBM), which is why no free apoptotic and potential autoantigenic material can be seen. In some people with SLE, a buildup of apoptotic debris can be observed in GC because of an ineffective clearance of apoptotic cells. Close to TBM, follicular dendritic cells (FDC) are localised in GC, which attach antigen material to their surface and, in contrast to bone marrow-derived DC, neither take it up nor present it via MHC molecules. Autoreactive B cells can accidentally emerge during somatic hypermutation and migrate into the germinal center light zone. Autoreactive B cells, maturated coincidentally, normally do not receive survival signals by antigen planted on follicular dendritic cells and perish by apoptosis. In the case of clearance deficiency, apoptotic nuclear debris accumulates in the light zone of GC and gets attached to FDC. This serves as a germinal centre survival signal for autoreactive B-cells. After migration into the mantle zone, autoreactive B cells require further survival signals from autoreactive helper T cells, which promote the maturation of autoantibody-producing plasma cells and B memory cells. In the presence of autoreactive T cells, a chronic autoimmune disease may be the consequence. Anti-nRNP autoimmunity Anti-nRNP autoantibodies to nRNP A and nRNP C initially targeted restricted, proline-rich motifs. 
Antibody binding subsequently spread to other epitopes. The similarity and cross-reactivity between the initial targets of nRNP and Sm autoantibodies identifies a likely commonality in cause and a focal point for intermolecular epitope spreading. Others Elevated expression of HMGB1 was found in the sera of people and mice with systemic lupus erythematosus, high mobility group box 1 (HMGB1) is a nuclear protein participating in chromatin architecture and transcriptional regulation. Recently, there is increasing evidence HMGB1 contributes to the pathogenesis of chronic inflammatory and autoimmune diseases due to its inflammatory and immune stimulating properties. Diagnosis Laboratory tests Antinuclear antibody (ANA) testing and anti-extractable nuclear antigen (anti-ENA) form the mainstay of serologic testing for SLE. If ANA is negative the disease can be ruled out.Several techniques are used to detect ANAs. The most widely used is indirect immunofluorescence (IF). The pattern of fluorescence suggests the type of antibody present in the peoples serum. Direct immunofluorescence can detect deposits of immunoglobulins and complement proteins in peoples skin. When skin not exposed to the sun is tested, a positive direct IF (the so-called lupus band test) is evidence of systemic lupus erythematosus.ANA screening yields positive results in many connective tissue disorders and other autoimmune diseases, and may occur in normal individuals. Subtypes of antinuclear antibodies include anti-Smith and anti-double stranded DNA (dsDNA) antibodies (which are linked to SLE) and anti-histone antibodies (which are linked to drug-induced lupus). Anti-dsDNA antibodies are highly specific for SLE; they are present in 70% of cases, whereas they appear in only 0.5% of people without SLE.Laboratory tests can also help distinguish between closely related connective tissue diseases. A multianalyte panel (MAP) of autoantibodies, including ANA, anti-dsDNA, and anti-Smith in combination with the measurement of cell-bound complement activation products (CB-CAPs) with an integrated algorithm has demonstrated 80% diagnostic sensitivity and 86% specificity in differentiating diagnosed SLE from other autoimmune connective tissue diseases. The MAP approach has been further studied in over 40,000 patients tested with either the MAP or traditional ANA testing strategy (tANA), demonstrating patients who test MAP positive are at up to 6-fold increased odds of receiving a new SLE diagnosis and up to 3-fold increased odds of starting a new SLE medication regimen as compared to patients testing positive with the tANA approach.The anti-dsDNA antibody titers also tend to reflect disease activity, although not in all cases. Other ANA that may occur in people with SLE are anti-U1 RNP (which also appears in systemic sclerosis and mixed connective tissue disease), SS-A (or anti-Ro) and SS-B (or anti-La; both of which are more common in Sjögrens syndrome). SS-A and SS-B confer a specific risk for heart conduction block in neonatal lupus.Other tests routinely performed in suspected SLE are complement system levels (low levels suggest consumption by the immune system), electrolytes and kidney function (disturbed if the kidney is involved), liver enzymes, and complete blood count. The lupus erythematosus (LE) cell test was commonly used for diagnosis, but it is no longer used because the LE cells are only found in 50–75% of SLE cases and they are also found in some people with rheumatoid arthritis, scleroderma, and drug sensitivities. 
Because of this, the LE cell test is now performed only rarely and is mostly of historical significance. Diagnostic criteria Some physicians make a diagnosis based on the American College of Rheumatology (ACR) classification criteria. However, these criteria were primarily established for use in scientific research, including selection for randomized controlled trials, which require higher confidence levels. As a result, many people with SLE may not meet the full ACR criteria. Criteria The American College of Rheumatology (ACR) established eleven criteria in 1982, which were revised in 1997 as a classificatory instrument to operationalise the definition of SLE in clinical trials. They were not intended to be used to diagnose individuals and do not do well in that capacity. For the purpose of identifying people for clinical studies, a person has SLE if any 4 out of 11 symptoms are present simultaneously or serially on two separate occasions. Malar rash (rash on cheeks); sensitivity = 57%; specificity = 96%. Discoid rash (red, scaly patches on skin that cause scarring); sensitivity = 18%; specificity = 99%. Serositis: Pleurisy (inflammation of the membrane around the lungs) or pericarditis (inflammation of the membrane around the heart); sensitivity = 56%; specificity = 86% (pleural is more sensitive; cardiac is more specific). Oral ulcers (includes oral or nasopharyngeal ulcers); sensitivity = 27%; specificity = 96%. Arthritis: nonerosive arthritis of two or more peripheral joints, with tenderness, swelling, or effusion; sensitivity = 86%; specificity = 37%. Photosensitivity (exposure to ultraviolet light causes rash, or other symptoms of SLE flareups); sensitivity = 43%; specificity = 96%. Blood—hematologic disorder—hemolytic anemia (low red blood cell count), leukopenia (white blood cell count<4000/µl), lymphopenia (<1500/µl), or low platelet count (<100000/µl) in the absence of offending drug; sensitivity = 59%; specificity = 89%. Hypocomplementemia is also seen, due to either consumption of C3 and C4 by immune complex-induced inflammation or to congenitally complement deficiency, which may predispose to SLE. Renal disorder: More than 0.5 g per day protein in urine or cellular casts seen in urine under a microscope; sensitivity = 51%; specificity = 94%. Antinuclear antibody test positive; sensitivity = 99%; specificity = 49%. Immunologic disorder: Positive anti-Smith, anti-ds DNA, antiphospholipid antibody, or false positive serological test for syphilis; sensitivity = 85%; specificity = 93%. Presence of anti-ss DNA in 70% of cases (though also positive with rheumatic disease and healthy persons). Neurologic disorder: Seizures or psychosis; sensitivity = 20%; specificity = 98%.Other than the ACR criteria, people with lupus may also have: Fever (over 100 °F/ 37.7 °C) Extreme fatigue Hair loss Fingers turning white or blue when cold (Raynauds phenomenon) Criteria for individual diagnosis Some people, especially those with antiphospholipid syndrome, may have SLE without four of the above criteria, and also SLE may present with features other than those listed in the criteria.Recursive partitioning has been used to identify more parsimonious criteria. This analysis presented two diagnostic classification trees: Simplest classification tree: SLE is diagnosed if a person has an immunologic disorder (anti-DNA antibody, anti-Smith antibody, false positive syphilis test, or LE cells) or m
alar rash. It has sensitivity = 92% and specificity = 92%. Full classification tree: Uses 6 criteria. It has sensitivity = 97% and specificity = 95%.Other alternative criteria have been suggested, e.g. the St. Thomas Hospital "alternative" criteria in 1998. Treatment The treatment of SLE involves preventing flares and reducing their severity and duration when they occur. Treatment can include corticosteroids and anti-malarial drugs. Certain types of lupus nephritis such as diffuse proliferative glomerulonephritis require intermittent cytotoxic drugs. These drugs include cyclophosphamide and mycophenolate. Cyclophosphamide increases the risk of developing infections, pancreas problems, high blood sugar, and high blood pressure.Hydroxychloroquine was approved by the FDA for lupus in 1955. Some drugs approved for other diseases are used for SLE off-label. In November 2010, an FDA advisory panel recommended approving belimumab (Benlysta) as a treatment for the pain and flare-ups common in lupus. The drug was approved by the FDA in March 2011.In terms of healthcare utilization and costs, one study found that "patients from the US with SLE, especially individuals with moderate or severe disease, utilize significant healthcare resources and incur high medical costs." Medications Due to the variety of symptoms and organ system involvement with SLE, its severity in an individual must be assessed to successfully treat SLE. Mild or remittent disease may, sometimes, be safely left untreated. If required, nonsteroidal anti-inflammatory drugs and antimalarials may be used. Medications such as prednisone, mycophenolic acid and tacrolimus have been used in the past. Disease-modifying antirheumatic drugs Disease-modifying antirheumatic drugs (DMARDs) are used preventively to reduce the incidence of flares, the progress of the disease, and the need for steroid use; when flares occur, they are treated with corticosteroids. DMARDs commonly in use are antimalarials such as hydroxychloroquine and immunosuppressants (e.g. methotrexate and azathioprine). Hydroxychloroquine is an FDA-approved antimalarial used for constitutional, cutaneous, and articular manifestations. Hydroxychloroquine has relatively few side effects, and there is evidence that it improves survival among people who have SLE.Cyclophosphamide is used for severe glomerulonephritis or other organ-damaging complications. Mycophenolic acid is also used for the treatment of lupus nephritis, but it is not FDA-approved for this indication, and FDA is investigating reports that it may be associated with birth defects when used by pregnant women. Immunosuppressive drugs In more severe cases, medications that modulate the immune system (primarily corticosteroids and immunosuppressants) are used to control the disease and prevent recurrence of symptoms (known as flares). Depending on the dosage, people who require steroids may develop Cushings syndrome, symptoms of which may include obesity, puffy round face, diabetes mellitus, increased appetite, difficulty sleeping, and osteoporosis. These may subside if and when the large initial dosage is reduced, but long-term use of even low doses can cause elevated blood pressure and cataracts. Numerous new immunosuppressive drugs are being actively tested for SLE. Rather than broadly suppressing the immune system, as corticosteroids do, they target the responses of specific types of immune cells. 
Some of these drugs are already FDA-approved for treatment of rheumatoid arthritis, however due to high-toxicity, their use remains limited. Analgesia Since a large percentage of people with SLE have varying amounts of chronic pain, stronger prescription analgesics (painkillers) may be used if over-the-counter drugs (mainly nonsteroidal anti-inflammatory drugs) do not provide effective relief. Potent NSAIDs such as indomethacin and diclofenac are relatively contraindicated for people with SLE because they increase the risk of kidney failure and heart failure.Pain is typically treated with opioids, varying in potency based on the severity of symptoms. When opioids are used for prolonged periods, drug tolerance, chemical dependency, and addiction may occur. Opiate addiction is not typically a concern since the condition is not likely to ever completely disappear. Thus, lifelong treatment with opioids is fairly common for chronic pain symptoms, accompanied by periodic titration that is typical of any long-term opioid regimen. Intravenous immunoglobulins (IVIGs) Intravenous immunoglobulins may be used to control SLE with organ involvement, or vasculitis. It is believed that they reduce antibody production or promote the clearance of immune complexes from the body, even though their mechanism of action is not well understood. Unlike immunosuppressives and corticosteroids, IVIGs do not suppress the immune system, so there is less risk of serious infections with these drugs. Lifestyle changes Avoiding sunlight in SLE is critical since sunlight is known to exacerbate skin manifestations of the disease. Avoiding activities that induce fatigue is also important since those with SLE fatigue easily and it can be debilitating. These two problems can lead to people becoming housebound for long periods of time. Drugs unrelated to SLE should be prescribed only when known not to exacerbate the disease. Occupational exposure to silica, pesticides, and mercury can also worsen the disease. Kidney transplantation Kidney transplants are the treatment of choice for end-stage kidney disease, which is one of the complications of lupus nephritis, but the recurrence of the full disease is common in up to 30% of people. Antiphospholipid syndrome Approximately 20% of people with SLE have clinically significant levels of antiphospholipid antibodies, which are associated with antiphospholipid syndrome. Antiphospholipid syndrome is also related to the onset of neural lupus symptoms in the brain. In this form of the disease, the cause is very different from lupus: thromboses (blood clots or "sticky blood") form in blood vessels, which prove to be fatal if they move within the bloodstream. If the thromboses migrate to the brain, they can potentially cause a stroke by blocking the blood supply to the brain. If this disorder is suspected in people, brain scans are usually required for early detection. These scans can show localized areas of the brain where blood supply has not been adequate. The treatment plan for these people requires anticoagulation. Often, low-dose aspirin is prescribed for this purpose, although for cases involving thrombosis anticoagulants such as warfarin are used. Management of pregnancy While most infants born to mothers who have SLE are healthy, pregnant mothers with SLE should remain under medical care until delivery. Neonatal lupus is rare, but identification of mothers at the highest risk for complications allows for prompt treatment before or after birth. 
In addition, SLE can flare up during pregnancy, and proper treatment can maintain the health of the mother longer. Women pregnant and known to have anti-Ro (SSA) or anti-La antibodies (SSB) often have echocardiograms during the 16th and 30th weeks of pregnancy to monitor the health of the heart and surrounding vasculature.Contraception and other reliable forms of pregnancy prevention are routinely advised for women with SLE since getting pregnant during active disease was found to be harmful. Lupus nephritis was the most common manifestation. Prognosis No cure is available for SLE but there are many treatments for the disease.In the 1950s, most people diagnosed with SLE lived fewer than five years. Today, over 90% now survive for more than ten years, and many live relatively symptom-free. 80–90% can expect to live a normal lifespan. Mortality rates are however elevated compared to people without SLE.Prognosis is typically worse for men and children than for women; however, if symptoms are present after age 60, the disease tends to run a more benign course. Early mortality, within 5 years, is due to organ failure or overwhelming infections, both of which can be altered by early diagnosis and treatment. The mortality risk is fivefold when compared to the normal population in the late stages, which can be attributed to cardiovascular disease from accelerated atherosclerosis, the leading cause of death for people with SLE. To reduce the potential for cardiovascular issues, high blood pressure and high cholesterol should be prevented or treated aggressively. Steroids should be used at the lowest dose for the shortest possible period, and other drugs that can reduce symptoms should be used whenever possible. Epidemiology The global rates of SLE are approximately 20–70 per 100,000 people. In females, the rate is highest between 45 and 64 years of age. The lowest overall rate exists in Iceland and Japan. The highest rates exist in the US and France. However, there is not sufficient evidence to conclude why SLE is less common in some countries compared to others; it could be the environmental variability in these countries. For example, different countries receive different levels of sunlight, and exposure to UV rays affects dermatological symptoms of SLE.Certain studies hypothesize that a genetic connection exists between race and lupus which affects disease prevalence. If this is true, the racial composition of countries affects disease and will cause the incidence in a country to change as the racial makeup changes. To understand if this is true, countries with largely homogenous and racially stable populations should be studied to better understand incidence. Rates of disease in the developing world are unclear.The rate of SLE varies between countries, ethnicity, and sex, and changes over time. In the United States, one estimate of the rate of SLE is 53 per 100,000; another estimate places the total affected population at 322,000 to over 1 million (98 to over 305 per 100,000). In Northern Europe the rate is about 40 per 100,000 people. SLE occurs more frequently and with greater severity among those of non-European descent. That rate has been found to be as high as 159 per 100,000 among those of Afro-Caribbean descent. Childhood-onset systemic lupus erythematosus generally presents between the ages of 3 and 15 and is four times more common in girls.While the onset and persistence of SLE can show disparities between genders, socioeconomic status also plays a major role. 
Women with SLE and of lower socioeconomic status have been shown to have higher depression scores, higher body mass index, and more restricted access to medical care than women of higher socioeconomic statuses with the illness. People with SLE had more self-reported anxiety and depression scores if they were from a lower socioeconomic status. Ethnicity There are assertions that race affects the rate of SLE. However, a 2010 review of studies that correlate race and SLE identified several sources of systematic and methodological error, indicating that the connection between race and SLE may be spurious. For example, studies show that social support is a modulating factor which buffers against SLE-related damage and maintains physiological functionality. Studies have not been conducted to determine whether people of different racial backgrounds receive differing levels of social support.If there is a difference, this could act as a confounding variable in studies correlating race and SLE. Another caveat to note when examining studies about SLE is that symptoms are often self-reported. This process introduces additional sources of methodological error. Studies have shown that self-reported data is affected by more than just the patients experience with the disease- social support, the level of helplessness, and abnormal illness-related behaviors also factor into a self-assessment. Additionally, other factors like the degree of social support that a person receives, socioeconomic status, health insurance, and access to care can contribute to an individuals disease progression.Racial differences in lupus progression have not been found in studies that control for the socioeconomic status [SES] of participants. Studies that control for the SES of its participants have found that non-white people have more abrupt disease onset compared to white people and that their disease progresses more quickly. Non-white patients often report more hematological, serosal, neurological, and renal symptoms. However, the severity of symptoms and mortality are both similar in white and non-white patients. Studies that report different rates of disease progression in late-stage SLE are most likely reflecting differences in socioeconomic status and the corresponding access to care. The people who receive medical care have often accrued less disease-related damage and are less likely to be below the poverty line. Additional studies have found that education, marital status, occupation, and income create a social context that affects disease progression. Sex SLE, like many autoimmune diseases, affects females more frequently than males, at a rate of about 9 to 1. The X chromosome carries immunological related genes, which can mutate and contribute to the onset of SLE. The Y chromosome has no identified mutations associated with autoimmune disease.Hormonal mechanisms could explain the increased incidence of SLE in females. The onset of SLE could be attributed to the elevated hydroxylation of estrogen and the abnormally decreased levels of androgens in females. In addition, differences in GnRH signalling have also been shown to contribute to the onset of SLE. While females are more likely to relapse than males, the intensity of these relapses is the same for both sexes.In addition to hormonal mechanisms, specific genetic influences found on the X chromosome may also contribute to the development of SLE. Studies indicate that the X chromosome can determine the levels of sex hormones. 
A study has shown an association between Klinefelter syndrome and SLE. XXY males with SLE have an abnormal X–Y translocation resulting in the partial triplication of the PAR1 gene region. Changing rate of disease The rate of SLE in the United States increased from 1.0 in 1955 to 7.6 in 1974. Whether the increase is due to better diagnosis or an increased frequency of the disease is unknown. History The history of SLE can be divided into three periods: classical, neoclassical, and modern. In each period, research and documentation advanced the understanding and diagnosis of SLE, leading to its classification as an autoimmune disease in 1851, and to the various diagnostic options and treatments now available to people with SLE. The advances made by medical science in the diagnosis and treatment of SLE have dramatically improved the life expectancy of a person diagnosed with SLE. Etymology There are several explanations ventured for the term lupus erythematosus. Lupus is Latin for "wolf", and "erythro" is derived from ερυθρός, Greek for "red". All explanations originate with the reddish, butterfly-shaped malar rash that the disease classically exhibits across the nose and cheeks. The reason the term lupus was used to describe this disease comes from the mid-19th century. Many diseases that caused ulceration or necrosis were given the term "lupus" due to the wound being reminiscent of a wolfs bite. This is similar to the naming of lupus vulgaris or chronic facial tuberculosis, where the lesions are ragged and punched out and are said to resemble the bite of a wolf. Classical period The classical period began when the disease was first recognized in the Middle Ages. The term lupus is attributed to 12th-century Italian physician Rogerius Frugard, who used it to describe ulcerating sores on the legs of people. No formal treatment for the disease existed and the resources available to physicians to help people were limited. Neoclassical period The neoclassical period began in 1851 when the skin disease which is now known as discoid lupus was documented by the French physician, Pierre Cazenave. Cazenave termed the illness lupus and added the word erythematosus to distinguish this disease from other illnesses that affected the skin except they were infectious. Cazenave observed the disease in several people and made very detailed notes to assist others in its diagnosis. He was one of the first to document that lupus affected adults from adolescence into the early thirties and that facial rash is its most distinguishing feature.Research and documentation of the disease continued in the neoclassical period with the work of Ferdinand von Hebra and his son-in-law, Moritz Kaposi. They documented the physical effects of lupus as well as some insights into the possibility that the disease caused internal trauma. Von Hebra observed that lupus symptoms could last many years and that the disease could go "dormant" after years of aggressive activity and then re-appear with symptoms following the same general pattern. These observations led Hebra to term lupus a chronic disease in 1872.Kaposi observed that lupus assumed two forms: the skin lesions (now known as discoid lupus) and a more aggravated form that affected not only the skin but also caused fever, arthritis, and other systemic disorders in people. The latter also presented a rash confined to the face, appearing on the cheeks and across the bridge of the nose; he called this the "butterfly rash". 
Kaposi also observed that patients who developed the butterfly rash were often afflicted with another disease such as tuberculosis, anemia, or chlorosis, which often caused death. Kaposi was one of the first people to recognize what is now termed systemic lupus erythematosus in his documentation of the remitting and relapsing nature of the disease and the relationship of skin and systemic manifestations during disease activity. The 19th century's research into lupus continued with the work of Sir William Osler who, in 1895, published the first of his three papers about the internal complications of erythema exudativum multiforme. Not all the patient cases in his paper had SLE, but Osler's work expanded the knowledge of systemic diseases and documented extensive and critical visceral complications for several diseases including lupus. Noting that many people with lupus had a disease that affected not only the skin but many other organs in the body as well, Osler added the word "systemic" to the term lupus erythematosus to distinguish this type of disease from discoid lupus erythematosus. Osler's second paper noted that recurrence is a special feature of the disease and that attacks can be sustained for months or even years. Further study of the disease led to a third paper, published in 1903, documenting afflictions such as arthritis, pneumonia, the inability to form coherent ideas, delirium, and central nervous system damage as all affecting patients diagnosed with SLE. Modern period The modern period, beginning in 1920, saw major developments in research into the cause and treatment of discoid and systemic lupus. Research conducted in the 1920s and 1930s led to the first detailed pathologic descriptions of lupus and demonstrated how the disease affected the kidney, heart, and lung tissue. A breakthrough was made in 1948 with the discovery of the LE cell (the lupus erythematosus cell—a misnomer, as it occurs with other diseases as well). A team of researchers at the Mayo Clinic discovered that some white blood cells contained the nucleus of another cell pressing against the white cell's own nucleus. Noting that the invading nucleus was coated with antibody that allowed it to be ingested by a phagocytic or scavenger cell, they named the antibody that causes one cell to ingest another the LE factor, and the resulting two-nucleus cell the LE cell. The LE cell, it was determined, was a part of an anti-nuclear antibody (ANA) reaction; the body produces antibodies against its own tissue. This discovery led to one of the first definitive tests for lupus, since LE cells are found in approximately 60% of all people diagnosed with lupus. The LE cell test is rarely performed as a definitive lupus test today, as LE cells do not always occur in people with SLE and can occur in individuals with other autoimmune diseases. Their presence can help establish a diagnosis but no longer indicates a definitive SLE diagnosis. The discovery of the LE cell led to further research, and this resulted in more definitive tests for lupus. Building on the knowledge that those with SLE had auto-antibodies that would attach themselves to the nuclei of normal cells, causing the immune system to send white blood cells to fight off these "invaders", a test was developed to look for the anti-nuclear antibody (ANA) rather than the LE cell specifically. This ANA test was easier to perform and led not only to a definitive diagnosis of lupus but also to the diagnosis of many other related diseases.
This discovery led to the understanding of what is now known as autoimmune diseases.To ensure that the person has lupus and not another autoimmune disease, the American College of Rheumatology (ACR) established a list of clinical and immunologic criteria that, in any combination, point to SLE. The criteria include symptoms that the person can identify (e.g. pain) and things that a physician can detect in a physical examination and through laboratory test results. The list was originally compiled in 1971, initially revised in 1982, and further revised and improved in 2009.Medical historians have theorized that people with porphyria (a disease that shares many symptoms with SLE) generated folklore stories of vampires and werewolves, due to the photosensitivity, scarring, hair growth, and porphyrin brownish-red stained teeth in severe recessive forms of porphyria (or combinations of the disorder, known as dual, homozygous, or compound heterozygous porphyrias).Useful medication for the disease was first found in 1894 when quinine was first reported as an effective therapy. Four years later, the use of salicylates in conjunction with quinine was noted to be of still greater benefit. This was the best available treatment until the middle of the twentieth century when Hench discovered the efficacy of corticosteroids in the treatment of SLE. Research A study called BLISS-76 tested the drug belimumab, a fully human monoclonal anti-BAFF (or anti-BLyS) antibody. BAFF stimulates and extends the life of B lymphocytes, which produce antibodies against foreign and self-protein. It was approved by the FDA in March 2011. Genetically engineered immune cells are also being studied in animal models of the disease as of 2019.In September 2022, researchers at the University of Erlangen-Nuremberg published promising results using genetically altered immune cells to treat severely ill patients. Four women and one man received transfusions of CAR T cells modified to attack their B cells, eliminating the aberrant ones. The therapy drove the disease into remission in all five patients, who have been off lupus medication for several months after the treatment ended. See also Canine discoid lupus erythematosus in dogs List of people with lupus References External links Lupus at Curlie Systemic Lupus Erythematosus at the National Institute of Arthritis and Musculoskeletal and Skin Diseases
Types of volcanic eruptions
Several types of volcanic eruptions—during which lava, tephra (ash, lapilli, volcanic bombs and volcanic blocks), and assorted gases are expelled from a volcanic vent or fissure—have been distinguished by volcanologists. These are often named after famous volcanoes where that type of behavior has been observed. Some volcanoes may exhibit only one characteristic type of eruption during a period of activity, while others may display an entire sequence of types all in one eruptive series. There are three different types of eruptions: Magmatic eruptions are the most well-observed type of eruption. They involve the decompression of gas within magma that propels it forward. Phreatic eruptions are driven by the superheating of steam due to the close proximity of magma. This type exhibits no magmatic release, instead causing the granulation of existing rock. Phreatomagmatic eruptions are driven by the direct interaction of magma and water, as opposed to phreatic eruptions, where no fresh magma reaches the surface. Within these broadly defined eruptive types are several subtypes. The weakest are Hawaiian and submarine, then Strombolian, followed by Vulcanian and Surtseyan. The stronger eruptive types are Peléan eruptions, followed by Plinian eruptions; the strongest eruptions are called Ultra-Plinian. Subglacial and phreatic eruptions are defined by their eruptive mechanism, and vary in strength. An important measure of eruptive strength is the Volcanic Explosivity Index (VEI), an order-of-magnitude scale ranging from 0 to 8 that often correlates with eruptive type. Eruption mechanisms Volcanic eruptions arise through three main mechanisms: Gas release under decompression, causing magmatic eruptions Ejection of entrained particles during steam eruptions, causing phreatic eruptions Thermal contraction from chilling on contact with water, causing phreatomagmatic eruptions There are two types of eruptions in terms of activity: explosive eruptions and effusive eruptions. Explosive eruptions are characterized by gas-driven explosions that propel magma and tephra. Effusive eruptions, meanwhile, are characterized by the outpouring of lava without significant explosive eruption. Volcanic eruptions vary widely in strength. At one extreme there are effusive Hawaiian eruptions, which are characterized by lava fountains and fluid lava flows, which are typically not very dangerous. At the other extreme, Plinian eruptions are large, violent, and highly dangerous explosive events. Volcanoes are not bound to one eruptive style, and frequently display many different types, both passive and explosive, even in the span of a single eruptive cycle. Volcanoes do not always erupt vertically from a single crater near their peak, either. Some volcanoes exhibit lateral and fissure eruptions. Notably, many Hawaiian eruptions start from rift zones. Scientists had believed that pulses of magma mixed together in the magma chamber before climbing upward—a process estimated to take several thousands of years. However, Columbia University volcanologists found that the eruption of Costa Rica's Irazú Volcano in 1963 was likely triggered by magma that took a nonstop route from the mantle over just a few months. Volcanic explosivity index The volcanic explosivity index (commonly shortened to VEI) is a scale, from 0 to 8, for measuring the strength of eruptions. It is used by the Smithsonian Institution's Global Volcanism Program in assessing the impact of historic and prehistoric lava flows. 
It operates in a way similar to the Richter scale for earthquakes, in that each interval in value represents a tenfold increase in magnitude (it is logarithmic). The vast majority of volcanic eruptions are of VEIs between 0 and 2. Magmatic eruptions Magmatic eruptions produce juvenile clasts during explosive decompression from gas release. They range in intensity from the relatively small lava fountains on Hawaii to catastrophic Ultra-Plinian eruption columns more than 30 km (19 mi) high, bigger than the eruption of Mount Vesuvius in 79 that buried Pompeii. Hawaiian Hawaiian eruptions are a type of volcanic eruption named after the Hawaiian volcanoes of which this eruptive type is the hallmark. Hawaiian eruptions are the calmest type of volcanic event, characterized by the effusive eruption of very fluid basalt-type lavas with low gaseous content. The volume of ejected material from Hawaiian eruptions is less than half of that found in other eruptive types. Steady production of small amounts of lava builds up the large, broad form of a shield volcano. Eruptions are not centralized at the main summit as with other volcanic types, and often occur at vents around the summit and from fissure vents radiating out of the center. Hawaiian eruptions often begin as a line of vent eruptions along a fissure vent, a so-called "curtain of fire." These die down as the lava begins to concentrate at a few of the vents. Central-vent eruptions, meanwhile, often take the form of large lava fountains (both continuous and sporadic), which can reach heights of hundreds of meters or more. The particles from lava fountains usually cool in the air before hitting the ground, resulting in the accumulation of cindery scoria fragments; however, when the air is especially thick with clasts, they cannot cool off fast enough due to the surrounding heat, and hit the ground still hot, the accumulation of which forms spatter cones. If eruptive rates are high enough, they may even form spatter-fed lava flows. Hawaiian eruptions are often extremely long lived; Puʻu ʻŌʻō, a volcanic cone on Kilauea, erupted continuously for over 35 years. Another Hawaiian volcanic feature is the formation of active lava lakes, self-maintaining pools of raw lava with a thin crust of semi-cooled rock. Flows from Hawaiian eruptions are basaltic, and can be divided into two types by their structural characteristics. Pahoehoe lava is a relatively smooth lava flow that can be billowy or ropey. It can move as one sheet, by the advancement of "toes," or as a snaking lava column. Aa lava flows are denser and more viscous than pahoehoe, and tend to move more slowly. Flows can measure 2 to 20 m (7 to 66 ft) thick. Aa flows are so thick that the outer layer cools into a rubble-like mass, insulating the still-hot interior and preventing it from cooling. Aa lava moves in a peculiar way—the front of the flow steepens due to pressure from behind until it breaks off, after which the general mass behind it moves forward. Pahoehoe lava can sometimes become Aa lava due to increasing viscosity or increasing rate of shear, but Aa lava never turns into pahoehoe flow. Hawaiian eruptions are responsible for several unique volcanological objects. Small volcanic particles are carried and formed by the wind, chilling quickly into teardrop-shaped glassy fragments known as Pele's tears (after Pele, the Hawaiian volcano deity). During especially high winds these chunks may even take the form of long drawn-out strands, known as Pele's hair. 
Sometimes basalt aerates into reticulite, the lowest density rock type on earth.Although Hawaiian eruptions are named after the volcanoes of Hawaii, they are not necessarily restricted to them; the highest lava fountain recorded was during the 23 November 2013 eruption of Mount Etna in Italy, which reached a stable height of around 2,500 m (8,200 ft) for 18 minutes, briefly peaking at a height of 3,400 m (11,000 ft).Volcanoes known to have Hawaiian activity include: Puʻu ʻŌʻō, a parasitic cinder cone located on Kilauea on the island of Hawaiʻi which erupted continuously from 1983 to 2018. The eruptions began with a 6 km (4 mi)-long fissure-based "curtain of fire" on 3 January 1983. These gave way to centralized eruptions on the site of Kilaueas east rift, eventually building up the cone. For a list of all of the volcanoes of Hawaii, see List of volcanoes in the Hawaiian – Emperor seamount chain. Mount Etna, Italy. Mount Mihara in 1986 (see above paragraph) Strombolian Strombolian eruptions are a type of volcanic eruption named after the volcano Stromboli, which has been erupting nearly continuously for centuries. Strombolian eruptions are driven by the bursting of gas bubbles within the magma. These gas bubbles within the magma accumulate and coalesce into large bubbles, called gas slugs. These grow large enough to rise through the lava column. Upon reaching the surface, the difference in air pressure causes the bubble to burst with a loud pop, throwing magma in the air in a way similar to a soap bubble. Because of the high gas pressures associated with the lavas, continued activity is generally in the form of episodic explosive eruptions accompanied by the distinctive loud blasts. During eruptions, these blasts occur as often as every few minutes.The term "Strombolian" has been used indiscriminately to describe a wide variety of volcanic eruptions, varying from small volcanic blasts to large eruptive columns. In reality, true Strombolian eruptions are characterized by short-lived and explosive eruptions of lavas with intermediate viscosity, often ejected high into the air. Columns can measure hundreds of meters in height. The lavas formed by Strombolian eruptions are a form of relatively viscous basaltic lava, and its end product is mostly scoria. The relative passivity of Strombolian eruptions, and its non-damaging nature to its source vent allow Strombolian eruptions to continue unabated for thousands of years, and also makes it one of the least dangerous eruptive types. Strombolian eruptions eject volcanic bombs and lapilli fragments that travel in parabolic paths before landing around their source vent. The steady accumulation of small fragments builds cinder cones composed completely of basaltic pyroclasts. This form of accumulation tends to result in well-ordered rings of tephra.Strombolian eruptions are similar to Hawaiian eruptions, but there are differences. Strombolian eruptions are noisier, produce no sustained eruptive columns, do not produce some volcanic products associated with Hawaiian volcanism (specifically Peles tears and Peles hair), and produce fewer molten lava flows (although the eruptive material does tend to form small rivulets).Volcanoes known to have Strombolian activity include: Parícutin, Mexico, which erupted from a fissure in a cornfield in 1943. Two years into its life, pyroclastic activity began to wane, and the outpouring of lava from its base became its primary mode of activity. Eruptions ceased in 1952, and the final height was 424 m (1,391 ft). 
This was the first time that scientists were able to observe the complete life cycle of a volcano. Mount Etna, Italy, which has displayed Strombolian activity in recent eruptions, for example in 1981, 1999, 2002–2003, and 2009. Mount Erebus in Antarctica, the southernmost active volcano in the world, having been observed erupting since 1972. Eruptive activity at Erebus consists of frequent Strombolian activity. Mount Batutara, Indonesia, which has exhibited continuous Strombolian eruptions since 2014. Stromboli itself, the namesake of this mild explosive activity, which has been active throughout historical time; essentially continuous Strombolian eruptions, occasionally accompanied by lava flows, have been recorded at Stromboli for more than a millennium. Vulcanian Vulcanian eruptions are a type of volcanic eruption named after the volcano Vulcano. It was so named following Giuseppe Mercalli's observations of its 1888–1890 eruptions. In Vulcanian eruptions, magma of intermediate viscosity within the volcano makes it difficult for vesiculated gases to escape. Similar to Strombolian eruptions, this leads to the buildup of high gas pressure, eventually popping the cap holding the magma down and resulting in an explosive eruption. However, unlike Strombolian eruptions, ejected lava fragments are not aerodynamic; this is due to the higher viscosity of Vulcanian magma and the greater incorporation of crystalline material broken off from the former cap. They are also more explosive than their Strombolian counterparts, with eruptive columns often reaching between 5 and 10 km (3 and 6 mi) high. Lastly, Vulcanian deposits are andesitic to dacitic rather than basaltic. Initial Vulcanian activity is characterized by a series of short-lived explosions, lasting a few minutes to a few hours and typified by the ejection of volcanic bombs and blocks. These eruptions wear down the lava dome holding the magma down, and it disintegrates, leading to much more quiet and continuous eruptions. Thus an early sign of future Vulcanian activity is lava dome growth, and its collapse generates an outpouring of pyroclastic material down the volcano's slope. Deposits near the source vent consist of large volcanic blocks and bombs, with so-called "bread-crust bombs" being especially common. These deeply cracked volcanic chunks form when the exterior of ejected lava cools quickly into a glassy or fine-grained shell, but the inside continues to cool and vesiculate. The center of the fragment expands, cracking the exterior. However, the bulk of Vulcanian deposits are fine-grained ash. The ash is only moderately dispersed, and its abundance indicates a high degree of fragmentation, the result of high gas contents within the magma. In some cases these have been found to be the result of interaction with meteoric water, suggesting that Vulcanian eruptions are partially hydrovolcanic. Volcanoes that have exhibited Vulcanian activity include: Sakurajima, Japan, which has been the site of Vulcanian activity near-continuously since 1955. Tavurvur, Papua New Guinea, one of several volcanoes in the Rabaul Caldera. Irazú Volcano in Costa Rica, which exhibited Vulcanian activity in its 1965 eruption. Anak Krakatoa, Indonesia, which has shown repeated Vulcanian activity since its emergence in 1930. Vulcanian eruptions are estimated to make up at least half of all known Holocene eruptions. 
Peléan Peléan eruptions (or nuée ardente) are a type of volcanic eruption named after the volcano Mount Pelée in Martinique, the site of a Peléan eruption in 1902 that is one of the worst natural disasters in history. In Peléan eruptions, a large amount of gas, dust, ash, and lava fragments are blown out of the volcano's central crater, driven by the collapse of rhyolite, dacite, and andesite lava domes, which often creates large eruptive columns. An early sign of a coming eruption is the growth of a so-called Peléan or lava spine, a bulge in the volcano's summit preempting its total collapse. The material collapses upon itself, forming a fast-moving pyroclastic flow (known as a block-and-ash flow) that moves down the side of the mountain at tremendous speeds, often over 150 km (93 mi) per hour. These landslides make Peléan eruptions one of the most dangerous in the world, capable of tearing through populated areas and causing serious loss of life. The 1902 eruption of Mount Pelée caused tremendous destruction, killing more than 30,000 people and completely destroying St. Pierre, the worst volcanic event in the 20th century. Peléan eruptions are characterized most prominently by the incandescent pyroclastic flows that they drive. The mechanics of a Peléan eruption are very similar to those of a Vulcanian eruption, except that in Peléan eruptions the volcano's structure is able to withstand more pressure, hence the eruption occurs as one large explosion rather than several smaller ones. Volcanoes known to have Peléan activity include: Mount Pelée, Martinique. The 1902 eruption of Mount Pelée completely devastated the island, destroying St. Pierre and leaving only 3 survivors. The eruption was directly preceded by lava dome growth. Mayon Volcano, the Philippines' most active volcano. It has been the site of many different types of eruptions, Peléan included. Approximately 40 ravines radiate from the summit and provide pathways for frequent pyroclastic flows and mudflows to the lowlands below. Mayon's most violent eruption occurred in 1814 and was responsible for over 1,200 deaths. The 1951 eruption of Mount Lamington. Prior to this eruption the peak had not even been recognized as a volcano. Over 3,000 people were killed, and it has become a benchmark for studying large Peléan eruptions. Mount Sinabung, Indonesia. Its eruptive history since 2013 shows the volcano emitting pyroclastic flows with frequent collapses of its lava dome. Plinian Plinian eruptions (or Vesuvian eruptions) are a type of volcanic eruption named for the historical eruption of Mount Vesuvius in 79 AD that buried the Roman towns of Pompeii and Herculaneum and, specifically, for its chronicler Pliny the Younger. The process powering Plinian eruptions starts in the magma chamber, where dissolved volatile gases are stored in the magma. The gases vesiculate and accumulate as they rise through the magma conduit. These bubbles agglutinate and once they reach a certain size (about 75% of the total volume of the magma conduit) they explode. The narrow confines of the conduit force the gases and associated magma up, forming an eruptive column. Eruption velocity is controlled by the gas contents of the column, and low-strength surface rocks commonly crack under the pressure of the eruption, forming a flared outgoing structure that pushes the gases even faster. These massive eruptive columns are the distinctive feature of a Plinian eruption, and reach 2 to 45 km (1 to 28 mi) into the atmosphere. 
The densest part of the plume, directly above the volcano, is driven internally by gas expansion. As it reaches higher into the air the plume expands and becomes less dense; convection and thermal expansion of volcanic ash then drive it even further up into the stratosphere. At the top of the plume, powerful prevailing winds drive the plume away from the volcano. These highly explosive eruptions are usually associated with volatile-rich dacitic to rhyolitic lavas, and occur most typically at stratovolcanoes. Eruptions can last anywhere from hours to days, with longer eruptions being associated with more felsic volcanoes. Although they are usually associated with felsic magma, Plinian eruptions can occur at basaltic volcanoes, if the magma chamber differentiates with upper portions rich in silicon dioxide, or if magma ascends rapidly. Plinian eruptions are similar to both Vulcanian and Strombolian eruptions, except that rather than creating discrete explosive events, Plinian eruptions form sustained eruptive columns. They are also similar to Hawaiian lava fountains in that both eruptive types produce sustained eruption columns maintained by the growth of bubbles that move up at about the same speed as the magma surrounding them. Regions affected by Plinian eruptions are subjected to heavy pumice airfall affecting an area 0.5 to 50 km3 (0 to 12 cu mi) in size. The material in the ash plume eventually finds its way back to the ground, covering the landscape in a thick layer of many cubic kilometers of ash. However, the most dangerous eruptive features are the pyroclastic flows generated by material collapse, which move down the side of the mountain at extreme speeds of up to 700 km (435 mi) per hour and have the ability to extend the reach of the eruption hundreds of kilometers. The ejection of hot material from the volcano's summit melts snowbanks and ice deposits on the volcano; the meltwater mixes with tephra to form lahars, fast-moving mudflows with the consistency of wet concrete that move at the speed of a river rapid. Major Plinian eruptive events include: The AD 79 eruption of Mount Vesuvius buried the Roman towns of Pompeii and Herculaneum under a layer of ash and tephra. It is the model Plinian eruption. Mount Vesuvius has erupted several times since then. Its last eruption was in 1944 and caused problems for the Allied armies as they advanced through Italy. It was the contemporary report by Pliny the Younger that led scientists to refer to Vesuvian eruptions as "Plinian". The 1980 eruption of Mount St. Helens in Washington, which ripped apart the volcano's summit, was a Plinian eruption of Volcanic Explosivity Index (VEI) 5. The strongest types of eruptions, with a VEI of 8, are so-called "Ultra-Plinian" eruptions, such as the one at Lake Toba 74 thousand years ago, which put out 2,800 times the material erupted by Mount St. Helens in 1980. Hekla in Iceland, an example of basaltic Plinian volcanism, as in its 1947–48 eruption. The past 800 years have seen a pattern of violent initial eruptions of pumice followed by prolonged extrusion of basaltic lava from the lower part of the volcano. Pinatubo in the Philippines on 15 June 1991, which produced 5 km3 (1 cu mi) of dacitic magma, a 40 km (25 mi) high eruption column, and released 17 megatons of sulfur dioxide. Kelud, Indonesia, which erupted in 2014 and ejected around 120,000,000 to 160,000,000 cubic metres (4.2×10^9 to 5.7×10^9 cu ft) of volcanic ash, causing economic disruption across Java. 
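The VEI values quoted above can be related back to the index's logarithmic definition. The short Python sketch below is illustrative only: the ejecta-volume thresholds are commonly cited approximate figures rather than an official definition (sources differ, especially for VEI 0–2), and the function name and threshold table are assumptions made for this example.

    # Illustrative only: approximate, commonly cited bulk-ejecta thresholds for
    # the Volcanic Explosivity Index (VEI); exact boundaries vary by source.
    VEI_THRESHOLDS_KM3 = [
        (8, 1000.0),   # "Ultra-Plinian", e.g. the Lake Toba eruption ~74,000 years ago
        (7, 100.0),
        (6, 10.0),
        (5, 1.0),      # e.g. Mount St. Helens, 1980 (on the order of 1 km3 of ejecta)
        (4, 0.1),
        (3, 0.01),
        (2, 0.001),
        (1, 0.00001),
    ]

    def approximate_vei(ejecta_km3):
        """Return the approximate VEI for a bulk ejecta volume in cubic kilometres."""
        for vei, threshold_km3 in VEI_THRESHOLDS_KM3:
            if ejecta_km3 >= threshold_km3:
                return vei
        return 0

    # Above VEI 2, each whole step corresponds to roughly ten times more ejecta,
    # so an eruption ejecting about 2,800 times the material of a VEI 5 event
    # sits three steps higher, at VEI 8.
    print(approximate_vei(1.0))      # 5
    print(approximate_vei(2800.0))   # 8

By these approximate thresholds, the roughly 0.12 to 0.16 cubic kilometres of ash from the 2014 Kelud eruption listed above would correspond to about VEI 4.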
Phreatomagmatic eruptions Phreatomagmatic eruptions are eruptions that arise from interactions between water and magma. They are driven by thermal contraction of magma when it comes in contact with water (as distinguished from magmatic eruptions, which are driven by thermal expansion). This temperature difference between the two causes violent water-lava interactions that make up the eruption. The products of phreatomagmatic eruptions are believed to be more regular in shape and finer grained than the products of magmatic eruptions because of the differences in eruptive mechanisms.There is debate about the exact nature of phreatomagmatic eruptions, and some scientists believe that fuel-coolant reactions may be more critical to the explosive nature than thermal contraction. Fuel coolant reactions may fragment the volcanic material by propagating stress waves, widening cracks and increasing surface area that ultimately leads to rapid cooling and explosive contraction-driven eruptions. Surtseyan A Surtseyan (or hydrovolcanic) eruption is a type of volcanic eruption characterized by shallow-water interactions between water and lava, named after its most famous example, the eruption and formation of the island of Surtsey off the coast of Iceland in 1963. Surtseyan eruptions are the "wet" equivalent of ground-based Strombolian eruptions, but because they take place in water they are much more explosive. As water is heated by lava, it flashes into steam and expands violently, fragmenting the magma it contacts into fine-grained ash. Surtseyan eruptions are typical of shallow-water volcanic oceanic islands, but they are not confined to seamounts. They can happen on land as well, where rising magma that comes into contact with an aquifer (water-bearing rock formation) at shallow levels under the volcano can cause them. The products of Surtseyan eruptions are generally oxidized palagonite basalts (though andesitic eruptions do occur, albeit rarely), and like Strombolian eruptions Surtseyan eruptions are generally continuous or otherwise rhythmic.A defining feature of a Surtseyan eruption is the formation of a pyroclastic surge (or base surge), a ground hugging radial cloud that develops along with the eruption column. Base surges are caused by the gravitational collapse of a vaporous eruptive column, one that is denser overall than a regular volcanic column. The densest part of the cloud is nearest to the vent, resulting in a wedge shape. Associated with these laterally moving rings are dune-shaped depositions of rock left behind by the lateral movement. These are occasionally disrupted by bomb sags, rock that was flung out by the explosive eruption and followed a ballistic path to the ground. Accumulations of wet, spherical ash known as accretionary lapilli are another common surge indicator.Over time Surtseyan eruptions tend to form maars, broad low-relief volcanic craters dug into the ground, and tuff rings, circular structures built of rapidly quenched lava. These structures are associated with single vent eruptions. However, if eruptions arise along fracture zones, rift zones may be dug out. Such eruptions tend to be more violent than those which form tuff rings or maars, an example being the 1886 eruption of Mount Tarawera. Littoral cones are another hydrovolcanic feature, generated by the explosive deposition of basaltic tephra (although they are not truly volcanic vents). 
They form when lava accumulates within cracks in lava, superheats and explodes in a steam explosion, breaking the rock apart and depositing it on the volcanos flank. Consecutive explosions of this type eventually generate the cone.Volcanoes known to have Surtseyan activity include: Surtsey, Iceland. The volcano built itself up from depth and emerged above the Atlantic Ocean off the coast of Iceland in 1963. Initial hydrovolcanics were highly explosive, but as the volcano grew, rising lava interacted less with water and more with air, until finally Surtseyan activity waned and became more Strombolian. Ukinrek Maars in Alaska, 1977, and Capelinhos in the Azores, 1957, both examples of above-water Surtseyan activity. Mount Tarawera in New Zealand erupted along a rift zone in 1886, killing 150 people. Ferdinandea, a seamount in the Mediterranean Sea, breached sea level in July 1831 and caused a sovereignty dispute between Italy, France, and Great Britain. The volcano did not build tuff cones strong enough to withstand erosion and soon disappeared back below the waves. The underwater volcano Hunga Tonga in Tonga breached sea level in 2009. Both of its vents exhibited Surtseyan activity for much of the time. It was also the site of an earlier eruption in May 1988. Submarine Submarine eruptions occur underwater. An estimated 75% of volcanic eruptive volume is generated by submarine eruptions near mid ocean ridges alone, however problems detecting deep sea volcanics meant they remained virtually unknown until advances in the 1990s made it possible to observe them.Submarine eruptions may produce seamounts, which may break the surface and form volcanic islands. Submarine volcanism is driven by various processes. Volcanoes near plate boundaries and mid-ocean ridges are built by the decompression melting of mantle rock that rises on an upwelling portion of a convection cell to the crustal surface. Eruptions associated with subducting zones, meanwhile, are driven by subducting plates that add volatiles to the rising plate, lowering its melting point. Each process generates different rock; mid-ocean ridge volcanics are primarily basaltic, whereas subduction flows are mostly calc-alkaline, and more explosive and viscous.Spreading rates along mid-ocean ridges vary widely, from 2 cm (0.8 in) per year at the Mid-Atlantic Ridge, to up to 16 cm (6 in) along the East Pacific Rise. Higher spreading rates are a probable cause for higher levels of volcanism. The technology for studying seamount eruptions did not exist until advancements in hydrophone technology made it possible to "listen" to acoustic waves, known as T-waves, released by submarine earthquakes associated with submarine volcanic eruptions. The reason for this is that land-based seismometers cannot detect sea-based earthquakes below a magnitude of 4, but acoustic waves travel well in water and over long periods of time. A system in the
North Pacific, maintained by the United States Navy and originally intended for the detection of submarines, has detected an event on average every 2 to 3 years.The most common underwater flow is pillow lava, a rounded lava flow named for its unusual shape. Less common are glassy, marginal sheet flows, indicative of larger-scale flows. Volcaniclastic sedimentary rocks are common in shallow-water environments. As plate movement starts to carry the volcanoes away from their eruptive source, eruption rates start to die down, and water erosion grinds the volcano down. The final stages of eruption cap the seamount in alkalic flows. There are about 100,000 deepwater volcanoes in the world, although most are beyond the active stage of their life. Some exemplary seamounts are Kamaʻehuakanaloa (formerly Loihi), Bowie Seamount, Davidson Seamount, and Axial Seamount. Subglacial Subglacial eruptions are a type of volcanic eruption characterized by interactions between lava and ice, often under a glacier. The nature of glaciovolcanism dictates that it occurs at areas of high latitude and high altitude. It has been suggested that subglacial volcanoes that are not actively erupting often dump heat into the ice covering them, producing meltwater. This meltwater mix means that subglacial eruptions often generate dangerous jökulhlaups (floods) and lahars.The study of glaciovolcanism is still a relatively new field. Early accounts described the unusual flat-topped steep-sided volcanoes (called tuyas) in Iceland that were suggested to have formed from eruptions below ice. The first English-language paper on the subject was published in 1947 by William Henry Mathews, describing the Tuya Butte field in northwest British Columbia, Canada. The eruptive process that builds these structures, originally inferred in the paper, begins with volcanic growth below the glacier. At first the eruptions resemble those that occur in the deep sea, forming piles of pillow lava at the base of the volcanic structure. Some of the lava shatters when it comes in contact with the cold ice, forming a glassy breccia called hyaloclastite. After a while the ice finally melts into a lake, and the more explosive eruptions of Surtseyan activity begins, building up flanks made up of mostly hyaloclastite. Eventually the lake boils off from continued volcanism, and the lava flows become more effusive and thicken as the lava cools much more slowly, often forming columnar jointing. Well-preserved tuyas show all of these stages, for example Hjorleifshofdi in Iceland.Products of volcano-ice interactions stand as various structures, whose shape is dependent on complex eruptive and environmental interactions. Glacial volcanism is a good indicator of past ice distribution, making it an important climatic marker. Since they are embedded in ice, as glacial ice retreats worldwide there are concerns that tuyas and other structures may destabilize, resulting in mass landslides. Evidence of volcanic-glacial interactions are evident in Iceland and parts of British Columbia, and it is even possible that they play a role in deglaciation. Glaciovolcanic products have been identified in Iceland, the Canadian province of British Columbia, the U.S. states of Hawaii and Alaska, the Cascade Range of western North America, South America and even on the planet Mars. Volcanoes known to have subglacial activity include: Mauna Kea in tropical Hawaii. There is evidence of past subglacial eruptive activity on the volcano in the form of a subglacial deposit on its summit. 
The eruptions originated about 10,000 years ago, during the last ice age, when the summit of Mauna Kea was covered in ice. In 2008, the British Antarctic Survey reported a volcanic eruption under the Antarctic ice sheet 2,200 years ago. It is believed that this was the biggest eruption in Antarctica in the last 10,000 years. Volcanic ash deposits from the volcano were identified through an airborne radar survey, buried under later snowfalls in the Hudson Mountains, close to Pine Island Glacier. Iceland, well known for both glaciers and volcanoes, is often a site of subglacial eruptions. An example is the eruption under the Vatnajökull ice cap in 1996, which occurred under an estimated 2,500 ft (762 m) of ice. As part of the search for life on Mars, scientists have suggested that there may be subglacial volcanoes on the red planet. Several potential sites of such volcanism have been reviewed, and compared extensively with similar features in Iceland: Viable microbial communities have been found living in deep (−2800 m) geothermal groundwater at 349 K and pressures >300 bar. Furthermore, microbes have been postulated to exist in basaltic rocks in rinds of altered volcanic glass. All of these conditions could exist in polar regions of Mars today where subglacial volcanism has occurred. Phreatic eruptions Phreatic eruptions (or steam-blast eruptions) are a type of eruption driven by the expansion of steam. When cold ground water or surface water comes into contact with hot rock or magma, it superheats and explodes, fracturing the surrounding rock and thrusting out a mixture of steam, water, ash, volcanic bombs, and volcanic blocks. The distinguishing feature of phreatic explosions is that they only blast out fragments of pre-existing solid rock from the volcanic conduit; no new magma is erupted. Because they are driven by the cracking of rock strata under pressure, phreatic activity does not always result in an eruption; if the rock face is strong enough to withstand the explosive force, outright eruptions may not occur, although cracks in the rock will probably develop and weaken it, furthering future eruptions. Often a precursor of future volcanic activity, phreatic eruptions are generally weak, although there have been exceptions. Some phreatic events may be triggered by earthquake activity, another volcanic precursor, and they may also travel along dike lines. Phreatic eruptions form base surges, lahars, avalanches, and volcanic block "rain." They may also release deadly toxic gas able to suffocate anyone in range of the eruption. Volcanoes known to exhibit phreatic activity include: Mount St. Helens, which exhibited phreatic activity just prior to its catastrophic 1980 eruption (which was itself Plinian). Taal Volcano, Philippines, 1965 and 2020. La Soufrière of Guadeloupe (Lesser Antilles), 1975–1976 activity. Soufrière Hills volcano on Montserrat, West Indies, 1995–2012. Poás Volcano, which has frequent geyser-like phreatic eruptions from its crater lake. Mount Bulusan, well known for its sudden phreatic eruptions. Mount Ontake, all of whose historical eruptions have been phreatic, including the deadly 2014 eruption. Mount Kerinci, Indonesia, which produces almost annual phreatic eruptions. See also List of volcanic eruptions in the 21st century List of Quaternary volcanic eruptions Prediction of volcanic activity – Research to predict volcanic activity Timeline of volcanism on Earth References == Further reading ==
Patient
A patient is any recipient of health care services that are performed by healthcare professionals. The patient is most often ill or injured and in need of treatment by a physician, nurse, optometrist, dentist, veterinarian, or other health care provider. Etymology The word patient originally meant one who suffers. This English noun comes from the Latin word patiens, the present participle of the deponent verb patior, meaning "I am suffering", and akin to the Greek verb πάσχειν (paskhein, to suffer) and its cognate noun πάθος (pathos). This language has been construed as meaning that the role of patients is to passively accept and tolerate the suffering and treatments prescribed by the healthcare providers, without engaging in shared decision-making about their care. Outpatients and inpatients An outpatient (or out-patient) is a patient who attends an outpatient clinic with no plan to stay beyond the duration of the visit. Even if the patient will not be formally admitted with a note as an outpatient, their attendance is still registered, and the provider will usually give a note explaining the reason for the visit, tests, or procedure/surgery, which should include the names and titles of the participating personnel, the patient's name and date of birth, signature of informed consent, estimated pre- and post-service time for history and exam (before and after), any anesthesia, medications or future treatment plans needed, and estimated time of discharge absent any (further) complications. Treatment provided in this fashion is called ambulatory care. Sometimes surgery is performed without the need for a formal hospital admission or an overnight stay, and this is called outpatient surgery or day surgery, which has many benefits including lowered healthcare cost, reducing the amount of medication prescribed, and using the physician's or surgeon's time more efficiently. Outpatient surgery is best suited for healthier patients undergoing minor or intermediate procedures (limited urinary-tract, eye, or ear, nose, and throat procedures and procedures involving superficial skin and the extremities). More procedures are being performed in a surgeon's office, termed office-based surgery, rather than in a hospital-based operating room. An inpatient (or in-patient), on the other hand, is "admitted" to stay in a hospital overnight or for an indeterminate time, usually several days or weeks, though in some extreme cases, such as with coma or persistent vegetative state, patients can stay in hospitals for years, sometimes until death. Treatment provided in this fashion is called inpatient care. The admission to the hospital involves the production of an admission note. The leaving of the hospital is officially termed discharge, and involves a corresponding discharge note, and sometimes an assessment process to consider ongoing needs. In the English National Health Service this may take the form of "Discharge to Assess", where the assessment takes place after the patient has gone home. Misdiagnosis is the leading cause of medical error in outpatient facilities. When the U.S. Institute of Medicine's groundbreaking 1999 report, To Err Is Human, found up to 98,000 hospital patients die from preventable medical errors in the U.S. each year, early efforts focused on inpatient safety. 
While patient safety efforts have focused on inpatient hospital settings for more than a decade, medical errors are even more likely to happen in a doctor's office or outpatient clinic or center. Day patient A day patient (or day-patient) is a patient who is using the full range of services of a hospital or clinic but is not expected to stay the night. The term was originally used by psychiatric hospital services that used this type of care to support people making the transition from in-patient to out-patient care. However, the term is now also heavily used for people attending hospitals for day surgery. Alternative terminology Because of concerns such as dignity, human rights and political correctness, the term "patient" is not always used to refer to a person receiving health care. Other terms that are sometimes used include health consumer, healthcare consumer, customer or client. However, such terminology may be offensive to those receiving public health care, as it implies a business relationship. In veterinary medicine, the client is the owner or guardian of the patient. These may be used by governmental agencies, insurance companies, patient groups, or health care facilities. Individuals who use or have used psychiatric services may alternatively refer to themselves as consumers, users, or survivors. In nursing homes and assisted living facilities, the term resident is generally used in lieu of patient. Similarly, those receiving home health care are called clients. Patient-centered healthcare The doctor–patient relationship has sometimes been characterized as silencing the voice of patients. It is now widely agreed that putting patients at the centre of healthcare by trying to provide a consistent, informative and respectful service to patients will improve both outcomes and patient satisfaction. When patients are not at the centre of healthcare, when institutional procedures and targets eclipse local concerns, then patient neglect is possible. Incidents such as the Stafford Hospital scandal, the Winterbourne View hospital abuse scandal, and the Veterans Health Administration controversy of 2014 have shown the dangers of prioritizing cost control over the patient experience. Investigations into these and other scandals have recommended that healthcare systems put patient experience at the center, and especially that patients themselves are heard loud and clear within health services. There are many reasons why health services should listen more to patients. Patients spend more time in healthcare services than regulators or quality controllers, and can recognize problems such as service delays, poor hygiene, and poor conduct. Patients are particularly good at identifying soft problems, such as attitudes, communication, and caring neglect, that are difficult to capture with institutional monitoring. One important way in which patients can be placed at the centre of healthcare is for health services to be more open about patient complaints. Each year many hundreds of thousands of patients complain about the care they have received, and these complaints contain valuable information for any health services which want to learn about and improve patient experience. See also References External links Jadad AR, Rizo CA, Enkin MW (June 2003). "I am a good patient, believe it or not". BMJ. 326 (7402): 1293–5. doi:10.1136/bmj.326.7402.1293. PMC 1126181. 
PMID 12805157.a peer-reviewed article published in the British Medical Journals (BMJ) first issue dedicated to patients in its 160-year history Sokol DK (21 February 2004). "How (not) to be a good patient". BMJ. 328 (7437): 471. doi:10.1136/bmj.328.7437.471. PMC 344286.review article with views on the meaning of the words "good doctor" vs. "good patient" "Time Magazines Dr. Scott Haig Proves that Patients Need to Be Googlers!" – Mary Shomons response to the Time Magazine article "When the Patient is a Googler"
Penile fracture
Penile fracture is rupture of one or both of the tunica albuginea, the fibrous coverings that envelop the peniss corpora cavernosa. It is caused by rapid blunt force to an erect penis, usually during vaginal intercourse, or aggressive masturbation. It sometimes also involves partial or complete rupture of the urethra or injury to the dorsal nerves, veins and arteries. Signs and symptoms A popping or cracking sound, significant pain, swelling, immediate loss of erection leading to flaccidity, and skin hematoma of various sizes are commonly associated with the sexual event. Causes Penile fracture is a relatively uncommon clinical condition. Vaginal intercourse and aggressive masturbation are the most common causes. A 2014 study of accident and emergency records at three hospitals in Campinas, Brazil, showed that woman on top positions caused the greatest risk with the missionary position being the safest. The research conjectured that when the receptive partner is on top, they usually control the movement and are not able to interrupt movement when the penis suffers a misaligned penetration. Conversely, when the penetrative partner is controlling the movement, they have better chances of stopping in response to pain from misalignment, minimizing harm.The practice of taqaandan (also taghaandan) also puts men at risk of penile fracture. Taqaandan, which comes from a Kurdish word meaning "to click", involves bending the top part of the erect penis while holding the lower part of the shaft in place, until a click is heard and felt. Taqaandan is said to be painless and has been compared to cracking ones knuckles, but the practice of taqaandan has led to an increase in the prevalence of penile fractures in western Iran. Taqaandan may be performed to achieve detumescence. Diagnosis Imaging studies Ultrasound examination is able to depict the tunica albuginea tear in the majority of cases (as a hypoechoic discontinuity in the normally echogenic tunica). In a study on 25 patients, Zare Mehrjardi et al. concluded that ultrasound is unable to find the tear just when it is located at the penile base. In their study magnetic resonance imaging (MRI) accurately diagnosed all of the tears (as a discontinuity in the normally low signal tunica on both T1- and T2-weighted sequences). They concluded that ultrasound should be considered as the initial imaging method, and MRI can be helpful in cases that ultrasound does not depict any tear but clinical suspicions for fracture are still high. In the same study, authors investigated accuracy of ultrasound and MRI for determining the tear location (mapping of fracture) in order to perform a tailored surgical repair. MRI was more accurate than ultrasound for this purpose, but ultrasound mapping was well correlated with surgical results in cases where the tear was clearly visualized on ultrasound exam. The advantage of ultrasound in the diagnosis of penile fracture is unrivaled when its noninvasive, cost-effective, and nonionising nature are considered.Penile trauma can result from a blunt or penetrating injury, the latter being rarely investigated by imaging methods, almost always requiring immediate surgical exploration. 
In the erect penis, trauma results from stretching and narrowing of the tunica albuginea, which can undergo segmental rupture of one or both of the corpora cavernosa, constituting a penile fracture.In the ultrasound examination, a lesion of the tunica albuginea presents as an interruption in (loss of continuity of) the echoic line representing it (Figure 4). Small, moderate, or broad hematomas demonstrate the extent of that discontinuity. Intracavernous hematomas, sometimes without the presence of a tunica albuginea fracture, can be observed when there is a lesion of the smooth muscle of the trabeculae surrounding the sinusoid spaces or the subtunical venular plexus. In 10-15% of penile traumas, there can be an accompanying urethral lesion. When blood is observed in the urethral meatus, contrast-enhanced evaluation of the urethra is necessary. In cases in which the ultrasound findings are inconclusive, the use of magnetic resonance imaging can facilitate the diagnosis and is recommended by various authors. Treatment Penile fracture is a medical emergency, and emergency surgical repair is the usual treatment. Delay in seeking treatment increases the complication rate. Non-surgical approaches result in 10–50% complication rates including erectile dysfunction, permanent penile curvature, damage to the urethra and pain during sexual intercourse, while operatively treated patients experience an 11% complication rate.In some cases, retrograde urethrogram may be performed to rule out concurrent urethral injury. Legal issues In the United States, the case of Doe v. Moe, 63 Mass. App. Ct. 516, 827 N.E.2d 240 (2005), tested liability for a penile fracture injury caused during sexual intercourse. The court declined to find duty as between two consensual adults. The plaintiff in this case, a man who suffered a fractured penis, complained that the defendant, his ex-girlfriend, had caused his injury while she was on top of him during sexual intercourse. The court ruled in her favor, determining that her conduct was neither legally wanton nor reckless. References Further reading Ouch! Can You Really Break Your Penis? 2009 Scientific American featuring interview with Hunter Wessells, chair of the urology department at the University of Washington School of Medicine in Seattle Jack GS, Garraway I, Reznichek R, Rajfer J (2004). "Current treatment options for penile fractures". Rev Urol. 6 (3): 114–20. PMC 1472832. PMID 16985591. == External links ==
46
46 may refer to: 46 (number) 46 (album), a 1983 album by Kino "Forty Six", a song by Karma to Burn from the album Appalachian Incantation, 2010 One of the years 46 BC, AD 46, 1946, 2046
Hyperkalemia
Hyperkalemia is an elevated level of potassium (K+) in the blood. Normal potassium levels are between 3.5 and 5.0 mmol/L (3.5 and 5.0 mEq/L), with levels above 5.5 mmol/L defined as hyperkalemia. Typically, hyperkalemia does not cause symptoms. Occasionally, when severe, it can cause palpitations, muscle pain, muscle weakness, or numbness. Hyperkalemia can cause an abnormal heart rhythm which can result in cardiac arrest and death. Common causes of hyperkalemia include kidney failure, hypoaldosteronism, and rhabdomyolysis. A number of medications can also cause high blood potassium, including spironolactone, NSAIDs, and angiotensin converting enzyme inhibitors. The severity is divided into mild (5.5–5.9 mmol/L), moderate (6.0–6.4 mmol/L), and severe (>6.5 mmol/L). High levels can be detected on an electrocardiogram (ECG). Pseudohyperkalemia, due to breakdown of cells during or after taking the blood sample, should be ruled out. Initial treatment in those with ECG changes is calcium salts, such as calcium gluconate or calcium chloride. Other medications used to rapidly reduce blood potassium levels include insulin with dextrose, salbutamol, and sodium bicarbonate. Medications that might worsen the condition should be stopped and a low-potassium diet should be started. Measures to remove potassium from the body include diuretics such as furosemide, potassium-binders such as polystyrene sulfonate and sodium zirconium cyclosilicate, and hemodialysis. Hemodialysis is the most effective method. Hyperkalemia is rare among those who are otherwise healthy. Among those who are hospitalized, rates are between 1% and 2.5%. It is associated with an increased mortality, whether due to hyperkalemia itself or as a marker of severe illness, especially in those without chronic kidney disease. The word hyperkalemia comes from hyper- meaning "high" + kalium meaning "potassium" + -emia meaning "blood condition". Signs and symptoms The symptoms of an elevated potassium level are generally few and nonspecific. Nonspecific symptoms may include feeling tired, numbness, and weakness. Occasionally palpitations and shortness of breath may occur. Hyperventilation may indicate a compensatory response to metabolic acidosis, which is one of the possible causes of hyperkalemia. Often, however, the problem is detected during screening blood tests for a medical disorder, or after hospitalization for complications such as cardiac arrhythmia or sudden cardiac death. High levels of potassium (> 5.5 mmol/L) have been associated with cardiovascular events. Causes Ineffective elimination Decreased kidney function is a major cause of hyperkalemia. This is especially pronounced in acute kidney injury, where the glomerular filtration rate and tubular flow are markedly decreased, characterized by reduced urine output. This can lead to dramatically elevated potassium in conditions of increased cell breakdown, as the potassium is released from the cells and cannot be eliminated by the kidney. In chronic kidney disease, hyperkalemia occurs as a result of reduced aldosterone responsiveness and reduced sodium and water delivery in distal tubules. Medications that interfere with urinary excretion by inhibiting the renin–angiotensin system are among the most common causes of hyperkalemia. Examples of medications that can cause hyperkalemia include ACE inhibitors, angiotensin receptor blockers, non-selective beta blockers, and calcineurin inhibitor immunosuppressants such as ciclosporin and tacrolimus. 
Potassium-sparing diuretics, such as amiloride and triamterene, block epithelial sodium channels in the collecting tubules, thereby preventing potassium excretion into the urine. Spironolactone acts by competitively inhibiting the action of aldosterone. NSAIDs such as ibuprofen, naproxen, or celecoxib inhibit prostaglandin synthesis, leading to reduced production of renin and aldosterone, causing potassium retention. The antibiotic trimethoprim and the antiparasitic medication pentamidine inhibit potassium excretion through a mechanism similar to that of amiloride and triamterene. Mineralocorticoid (aldosterone) deficiency or resistance can also cause hyperkalemia. Causes of primary adrenal insufficiency include Addison's disease and congenital adrenal hyperplasia (CAH) (including enzyme deficiencies such as 21α hydroxylase, 17α hydroxylase, 11β hydroxylase, or 3β dehydrogenase). Aldosterone resistance occurs in type IV renal tubular acidosis (aldosterone resistance of the kidney's tubules) and in Gordon's syndrome (pseudohypoaldosteronism type II, "familial hypertension with hyperkalemia"), a rare genetic disorder caused by defective modulators of salt transporters, including the thiazide-sensitive Na-Cl cotransporter. Excessive release from cells Metabolic acidosis can cause hyperkalemia as the elevated hydrogen ions in the cells can displace potassium, causing the potassium ions to leave the cell and enter the bloodstream. However, in respiratory acidosis or organic acidosis such as lactic acidosis, the effect on serum potassium is much less significant, although the mechanisms are not completely understood. Insulin deficiency can cause hyperkalemia as the hormone insulin increases the uptake of potassium into the cells. Hyperglycemia can also contribute to hyperkalemia by causing hyperosmolality in extracellular fluid, increasing water diffusion out of the cells and causing potassium to move out of the cells along with the water. The co-existence of insulin deficiency, hyperglycemia, and hyperosmolality is often seen in those affected by diabetic ketoacidosis. Apart from diabetic ketoacidosis, other causes of reduced insulin levels, such as the use of the medication octreotide and fasting, can also cause hyperkalemia. Increased tissue breakdown, such as rhabdomyolysis, burns, or any cause of rapid tissue necrosis, including tumor lysis syndrome, can cause the release of intracellular potassium into the blood, causing hyperkalemia. Beta2-adrenergic agonists act on beta-2 receptors to drive potassium into the cells. Therefore, beta blockers can raise potassium levels by blocking beta-2 receptors. However, the rise in potassium levels is not marked unless there are other co-morbidities present. Examples of drugs that can raise the serum potassium are non-selective beta-blockers such as propranolol and labetalol. Beta-1 selective blockers such as metoprolol do not increase serum potassium levels. Exercise can cause a release of potassium into the bloodstream by increasing the number of potassium channels in the cell membrane. The degree of potassium elevation varies with the degree of exercise, ranging from 0.3 mEq/L in light exercise to 2 mEq/L in heavy exercise, with or without accompanying ECG changes or lactic acidosis. However, peak potassium levels can be reduced by prior physical conditioning, and potassium levels usually return to normal several minutes after exercise. 
High levels of adrenaline and noradrenaline have a protective effect on the cardiac electrophysiology because they bind to beta 2 adrenergic receptors, which, when activated, decrease the extracellular potassium concentration. Hyperkalemic periodic paralysis is an autosomal dominant clinical condition in which there is a mutation in the gene located at 17q23 that directs the production of the protein SCN4A. SCN4A is an important component of sodium channels in skeletal muscles. During exercise, sodium channels open to allow influx of sodium into the muscle cells for depolarization to occur. But in hyperkalemic periodic paralysis, sodium channels are slow to close after exercise, causing excessive influx of sodium and displacement of potassium out of the cells. Rare causes of hyperkalemia are discussed as follows. Acute digitalis overdose such as digoxin toxicity may cause hyperkalemia through the inhibition of the sodium-potassium-ATPase pump. Massive blood transfusion can cause hyperkalemia in infants due to leakage of potassium out of the red blood cells during storage. Giving succinylcholine to people with conditions such as burns, trauma, infection, or prolonged immobilisation can cause hyperkalemia due to widespread activation of acetylcholine receptors rather than a specific group of muscles. Arginine hydrochloride is used to treat refractory metabolic alkalosis. The arginine ions can enter cells and displace potassium out of the cells, causing hyperkalemia. Calcineurin inhibitors such as cyclosporine and tacrolimus, as well as the vasodilators diazoxide and minoxidil, can cause hyperkalemia. Box jellyfish venom can also cause hyperkalemia. Excessive intake Excessive intake of potassium is not a primary cause of hyperkalemia because the human body usually can adapt to the rise in potassium levels by increasing the excretion of potassium into urine through aldosterone hormone secretion and increasing the number of potassium secreting channels in the kidney tubules. Acute hyperkalemia is rare even in infants who accidentally ingest potassium salts or potassium medications, despite their small body volume. Hyperkalemia usually develops when there are other co-morbidities such as hypoaldosteronism and chronic kidney disease. Pseudohyperkalemia Pseudohyperkalemia occurs when the measured potassium level is falsely elevated. This condition is usually suspected when the patient is clinically well without any ECG changes. Mechanical trauma during blood drawing can cause potassium leakage out of the red blood cells due to haemolysis of the blood sample. Repeated fist clenching during the blood draw can cause a transient rise in potassium levels. Prolonged blood storage can also increase serum potassium levels. Hyperkalemia may become apparent when a person's platelet concentration is more than 500,000/microL in a clotted blood sample (serum blood sample). Potassium leaks out of platelets after clotting has occurred. A high white cell count (greater than 120,000/microL) in people with chronic lymphocytic leukemia increases the fragility of red blood cells, thus causing pseudohyperkalemia during blood processing. This problem can be avoided by processing serum samples, because clot formation protects the cells from haemolysis during processing. A familial form of pseudohyperkalemia, a benign condition characterised by increased serum potassium in whole blood stored at cold temperatures, also exists. This is due to increased potassium permeability in red blood cells. 
Mechanism Physiology Potassium is the most abundant intracellular cation and about 98% of the bodys potassium is found inside cells, with the remainder in the extracellular fluid including the blood. Membrane potential is maintained principally by the concentration gradient and membrane permeability to potassium with some contribution from the Na+/K+ pump. The potassium gradient is critically important for many physiological processes, including maintenance of cellular membrane potential, homeostasis of cell volume, and transmission of action potentials in nerve cells.Potassium is eliminated from the body through the gastrointestinal tract, kidney and sweat glands. In the kidneys, elimination of potassium is passive (through the glomeruli), and reabsorption is active in the proximal tubule and the ascending limb of the loop of Henle. There is active excretion of potassium in the distal tubule and the collecting duct; both are controlled by aldosterone. In sweat glands potassium elimination is quite similar to the kidney, its excretion is also controlled by aldosterone.Regulation of serum potassium is a function of intake, appropriate distribution between intracellular and extracellular compartments, and effective bodily excretion. In healthy individuals, homeostasis is maintained when cellular uptake and kidney excretion naturally counterbalance a patients dietary intake of potassium. When kidney function becomes compromised, the ability of the body to effectively regulate serum potassium via the kidney declines. To compensate for this deficit in function, the colon increases its potassium secretion as part of an adaptive response. However, serum potassium remains elevated as the colonic compensating mechanism reaches its limits. Elevated potassium Hyperkalemia develops when there is excess production (oral intake, tissue breakdown) or ineffective elimination of potassium. Ineffective elimination can be hormonal (in aldosterone deficiency) or due to causes in the kidney that impair excretion.Increased extracellular potassium levels result in depolarization of the membrane potentials of cells due to the increase in the equilibrium potential of potassium. This depolarization opens some voltage-gated sodium channels, but also increases the inactivation at the same time. Since depolarization due to concentration change is slow, it never generates an action potential by itself; instead, it results in accommodation. Above a certain level of potassium the depolarization inactivates sodium channels, opens potassium channels, thus the cells become refractory. This leads to the impairment of neuromuscular, cardiac, and gastrointestinal organ systems. Of most concern is the impairment of cardiac conduction, which can cause ventricular fibrillation and/or abnormally slow heart rhythms. Diagnosis To gather enough information for diagnosis, the measurement of potassium must be repeated, as the elevation can be due to hemolysis in the first sample. The normal serum level of potassium is 3.5 to 5 mmol/L. Generally, blood tests for kidney function (creatinine, blood urea nitrogen), glucose and occasionally creatine kinase and cortisol are performed. Calculating the trans-tubular potassium gradient can sometimes help in distinguishing the cause of the hyperkalemia.Also, electrocardiography (ECG) may be performed to determine if there is a significant risk of abnormal heart rhythms. Physicians taking a medical history may focus on kidney disease and medication use (e.g. 
potassium-sparing diuretics), both of which are known causes of hyperkalemia. Definitions Normal serum potassium levels are generally considered to be between 3.5 and 5.3 mmol/L. Levels above 5.5 mmol/L generally indicate hyperkalemia, and those below 3.5 mmol/L indicate hypokalemia. ECG findings With mild to moderate hyperkalemia, there is prolongation of the PR interval and development of peaked T waves. Severe hyperkalemia results in a widening of the QRS complex, and the ECG complex can evolve to a sinusoidal shape. There appears to be a direct effect of elevated potassium on some of the potassium channels that increases their activity and speeds membrane repolarisation. Also, as noted above, hyperkalemia causes an overall membrane depolarization that inactivates many sodium channels. The faster repolarisation of the cardiac action potential causes the tenting of the T waves, and the inactivation of sodium channels causes a sluggish conduction of the electrical wave around the heart, which leads to smaller P waves and widening of the QRS complex. Some of the potassium currents are sensitive to extracellular potassium levels, for reasons that are not well understood. As the extracellular potassium levels increase, potassium conductance is increased so that more potassium leaves the myocyte in any given time period. To summarize, classic ECG changes associated with hyperkalemia are seen in the following progression: peaked T wave, shortened QT interval, lengthened PR interval, increased QRS duration, and eventually absence of the P wave with the QRS complex becoming a sine wave. Bradycardia, junctional rhythms, and QRS widening are particularly associated with increased risk of adverse outcomes. The serum potassium concentration at which electrocardiographic changes develop is somewhat variable. Although the factors influencing the effect of serum potassium levels on cardiac electrophysiology are not entirely understood, the concentrations of other electrolytes, as well as levels of catecholamines, play a major role. ECG findings are not a reliable indicator of hyperkalemia. In a retrospective review, blinded cardiologists documented peaked T-waves in only 3 of 90 ECGs with hyperkalemia. Sensitivity of peaked T-waves for hyperkalemia ranged from 0.18 to 0.52, depending on the criteria used to define peaked T-waves. Prevention Preventing recurrence of hyperkalemia typically involves reduction of dietary potassium, removal of an offending medication, and/or the addition of a diuretic (such as furosemide or hydrochlorothiazide). Sodium polystyrene sulfonate and sorbitol (combined as Kayexalate) are occasionally used on an ongoing basis to maintain lower serum levels of potassium, though the safety of long-term use of sodium polystyrene sulfonate for this purpose is not well understood. High dietary sources of potassium include vegetables such as avocados, tomatoes, and potatoes; fruits such as bananas and oranges; and nuts. Treatment Emergency lowering of potassium levels is needed when new arrhythmias occur at any level of potassium in the blood, or when potassium levels exceed 6.5 mmol/L. Several agents are used to temporarily lower K+ levels. The choice depends on the degree and cause of the hyperkalemia, and other aspects of the person's condition. Myocardial excitability Calcium (calcium chloride or calcium gluconate) increases the threshold potential through a mechanism that is still unclear, thus restoring the normal gradient between threshold potential and resting membrane potential, which is abnormally elevated in hyperkalemia. 
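The membrane depolarisation underlying these changes follows from the Nernst relation for the potassium equilibrium potential, which at body temperature is approximately 61.5 × log10([K+]outside / [K+]inside) millivolts. The sketch below is illustrative only; the intracellular concentration of 140 mmol/L is an assumed typical value, and real resting potentials also depend on other ions and on the sodium–potassium pump.

import math

def potassium_equilibrium_potential_mv(k_out_mmol, k_in_mmol=140.0):
    # Nernst equation at roughly 37 degrees C: E_K (mV) ~ 61.5 * log10([K+]out / [K+]in)
    return 61.5 * math.log10(k_out_mmol / k_in_mmol)

for k_out in (4.0, 6.0, 8.0):  # serum potassium in mmol/L
    print(f"[K+]out = {k_out} mmol/L -> E_K ~ {potassium_equilibrium_potential_mv(k_out):.0f} mV")
# Roughly -95 mV at 4 mmol/L, -84 mV at 6 mmol/L, and -76 mV at 8 mmol/L:
# the equilibrium potential moves toward zero as serum potassium rises, i.e. the cell depolarises.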
A standard ampule of 10% calcium chloride is 10 mL and contains 6.8 mmol of calcium. A standard ampule of 10% calcium gluconate is also 10 mL but has only 2.26 mmol of calcium. Clinical practice guidelines recommend giving 6.8 mmol for typical EKG findings of hyperkalemia. This is 10 mL of 10% calcium chloride or 30 mL of 10% calcium gluconate. Though calcium chloride is more concentrated, it is caustic to the veins and should only be given through a central line. Onset of action is within one to three minutes and the effect lasts about 30–60 minutes. The goal of treatment is to normalise the EKG, and doses can be repeated if the EKG does not improve within a few minutes. Some textbooks suggest that calcium should not be given in digoxin toxicity as it has been linked to cardiovascular collapse in humans and increased digoxin toxicity in animal models. Recent literature questions the validity of this concern. Temporary measures Several medical treatments shift potassium ions from the bloodstream into the cellular compartment, thereby reducing the risk of complications. The effect of these measures tends to be short-lived, but may temporise the problem until potassium can be removed from the body. Insulin (e.g. intravenous injection of 10 units of regular insulin along with 50 mL of 50% dextrose to prevent the blood sugar from dropping too low) leads to a shift of potassium ions into cells, secondary to increased activity of the sodium-potassium ATPase. Its effects last a few hours, so it sometimes must be repeated while other measures are taken to suppress potassium levels more permanently. The insulin is usually given with an appropriate amount of glucose to help prevent hypoglycemia following the insulin administration, though hypoglycaemia remains common, especially in the context of acute or chronic renal impairment, and capillary blood glucose measurements should be taken regularly after administration to identify this. Salbutamol (albuterol), a β2-selective catecholamine, is administered by nebuliser (e.g. 10–20 mg). This medication also lowers blood levels of K+ by promoting its movement into cells, and will work within 30 minutes. It is recommended to use 20 mg for maximum potassium-lowering effect, but to use lower doses if the patient is tachycardic or has ischaemic heart disease. Note that 12–40% of patients do not respond to salbutamol therapy for reasons unknown, especially if on beta-blockers, so it should not be used as monotherapy. Sodium bicarbonate may be used with the above measures if it is believed the person has metabolic acidosis, though time to effectiveness is longer and its use is controversial. Elimination Severe cases require hemodialysis, which is the most rapid method of removing potassium from the body. It is typically used if the underlying cause cannot be corrected swiftly while temporising measures are instituted, or if there is no response to these measures. Loop diuretics (furosemide, bumetanide, torasemide) and thiazide diuretics (e.g., chlortalidone, hydrochlorothiazide, or chlorothiazide) can increase kidney potassium excretion in people with intact kidney function. Potassium can bind to a number of agents in the gastrointestinal tract. Sodium polystyrene sulfonate with sorbitol (Kayexalate) has been approved for this use and can be given by mouth or rectally. 
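As a rough numerical check of the figures quoted in this treatment section, the sketch below reproduces the calcium-salt equivalence (one 10 mL ampule of 10% calcium chloride, about 6.8 mmol, versus 10 mL ampules of 10% calcium gluconate at about 2.26 mmol each) and the glucose content of the dextrose given with insulin. It is an illustration of the arithmetic only, not a dosing guide.

# Calcium content per 10 mL ampule, as quoted above (mmol)
CALCIUM_CHLORIDE_10PCT = 6.8
CALCIUM_GLUCONATE_10PCT = 2.26

gluconate_ampules = CALCIUM_CHLORIDE_10PCT / CALCIUM_GLUCONATE_10PCT
print(f"{gluconate_ampules:.1f} ampules of calcium gluconate (~{gluconate_ampules * 10:.0f} mL) "
      f"match one 10 mL ampule of calcium chloride")
# -> about 3.0 ampules, i.e. roughly 30 mL, in line with the guideline figure above

# Dextrose given with 10 units of regular insulin: 50 mL of a 50% (0.5 g/mL) solution
dextrose_grams = 0.5 * 50
print(f"50 mL of 50% dextrose contains about {dextrose_grams:.0f} g of glucose")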
High-quality evidence demonstrating the effectiveness of sodium polystyrene sulfonate is lacking, however, and its use, particularly with high sorbitol content, is uncommonly but convincingly associated with colonic necrosis. There are no systematic studies (>6 months) looking at the long-term safety of this medication. Patiromer is taken by mouth and works by binding free potassium ions in the gastrointestinal tract and releasing calcium ions for exchange, thus lowering the amount of potassium available for absorption into the bloodstream and increasing the amount lost via the feces. The net effect is a reduction of potassium levels in the blood serum. Sodium zirconium cyclosilicate is a medication that binds potassium in the gastrointestinal tract in exchange for sodium and hydrogen ions. Onset of effects occurs in one to six hours. It is taken by mouth. Epidemiology Hyperkalemia is rare among those who are otherwise healthy. Among those who are in hospital, rates are between 1% and 2.5%. Society and culture In the United States, hyperkalemia is induced by lethal injection in capital punishment cases. Potassium chloride is the last of the three drugs administered and actually causes death. Injecting potassium chloride into the heart muscle disrupts the signal that causes the heart to beat. This same amount of potassium chloride would do no harm if taken orally and not injected directly into the blood. References External links USDA National Nutrient Database for Standard Reference, Release 26 Archived 1 March 2014 at the Wayback Machine List of foods rich in potassium National Kidney Foundation site on potassium content of foods
Hairy cell leukemia
Hairy cell leukemia is an uncommon hematological malignancy characterized by an accumulation of abnormal B lymphocytes. It is usually classified as a subtype of chronic lymphocytic leukemia (CLL). Hairy cell leukemia makes up about 2% of all leukemias, with fewer than 2,000 new cases diagnosed annually in North America and Western Europe combined. Hairy cell leukemia (HCL) was originally described as histiocytic leukemia, malignant reticulosis, or lymphoid myelofibrosis in publications dating back to the 1920s. The disease was formally named leukemic reticuloendotheliosis, and its characterization was significantly advanced by Bertha Bouroncle and colleagues at the Ohio State University College of Medicine in 1958. Its common name, which was coined in 1966, is derived from the "hairy" appearance of the malignant B cells under a microscope. Signs and symptoms In HCL, the "hairy cells" (malignant B lymphocytes) accumulate in the bone marrow, interfering with the production of normal white blood cells, red blood cells, and platelets. Consequently, patients may develop infections related to low white blood cell count, anemia and fatigue due to a lack of red blood cells, or easy bleeding due to a low platelet count. Leukemic cells may gather in the spleen and cause it to swell; this can have the side effect of making the person feel full even when he or she has not eaten much.Hairy cell leukemia is commonly diagnosed after a routine blood count shows unexpectedly low numbers of one or more kinds of normal blood cells, or after unexplained bruises or recurrent infections in an otherwise apparently healthy patient.Platelet function may be somewhat impaired in HCL patients, although this does not appear to have any significant practical effect. It may result in somewhat more mild bruises than would otherwise be expected for a given platelet count or a mildly increased bleeding time for a minor cut, likely the result of producing slightly abnormal platelets in the overstressed bone marrow tissue. Patients with a high tumor burden may also have somewhat reduced levels of cholesterol, especially in patients with an enlarged spleen. Cholesterol levels return to more normal values with successful treatment of HCL. Cause As with many cancers, the cause of HCL is unknown. Exposure to tobacco smoke, ionizing radiation, or industrial chemicals (with the possible exception of diesel) does not appear to increase the risk of developing it. Farming and gardening correlate with an increased risk of HCL development in some studies which does not necessarily imply causation.A 2011 study identified somatic BRAF V600E mutations in all 47 HCL patients studied, and no such mutations in the 193 peripheral B-cell lymphomas/leukemias other than HCL.The U.S. Institute of Medicine (IOM) found a correlation which permits an association between exposure to herbicides and later development of chronic B-cell leukemias and lymphomas in general. The IOM report emphasizes that neither animal nor human studies indicate an association of herbicides with HCL specifically. However, the IOM extrapolated data from chronic lymphocytic leukemia and non-Hodgkin lymphoma to conclude that HCL and other rare B-cell neoplasms may share this risk factor. As a result of the IOM report, the U.S. Department of Veterans Affairs considers HCL an illness presumed to be a service-related disabilityHuman T-lymphotropic virus 2 (HTLV-2) has been isolated in a small number of patients with the variant form of HCL. 
In the 1980s, HTLV-2 was identified in a patient with a T-cell lymphoproliferative disease; this patient later developed HCL, but HTLV-2 was not found in the hairy cell clones. There is no evidence that HTLV-2 causes any sort of hematological malignancy, including HCL. Pathophysiology Pancytopenia in HCL is caused primarily by marrow failure and splenomegaly. Bone-marrow failure is caused by the accumulation of hairy cells and reticulin fibrosis in the bone marrow, as well as by the detrimental effects of dysregulated cytokine production. Splenomegaly reduces blood counts through sequestration, marginalization, and destruction of healthy blood cells inside the spleen. Hairy cells are nearly mature B cells, which are activated clonal cells with signs of VH gene differentiation. They may be related to pre-plasma marginal zone B cells or memory cells. Cytokine production is disturbed in HCL. Hairy cells produce and thrive on TNF-alpha. This cytokine also suppresses normal production of healthy blood cells in the bone marrow. Unlike healthy B cells, hairy cells express and secrete an immune system protein called interleukin-2 receptor (IL-2R). In HCL-V, only part of this receptor is expressed. As a result, disease status can be monitored by measuring changes in the amount of IL-2R in the blood serum. The level increases as hairy cells proliferate, and decreases when they are killed. Although uncommonly used in North America and Northern Europe, this test correlates better with disease status and predicts relapse more accurately than any other test. Hairy cells respond to normal production of some cytokines by T cells with increased growth. Treatment with interferon-alpha suppresses the production of this pro-growth cytokine from T cells. A low level of T cells, which is commonly seen after treatment with cladribine or pentostatin, and the consequent reduction of these cytokines, is also associated with reduced levels of hairy cells. In June 2011, E. Tiacci et al. discovered that 100% of HCL samples analysed had the oncogenic BRAF mutation V600E, and proposed that this is the disease's driver mutation. Until this point, only a few genomic imbalances, such as trisomy 5, had been found in the hairy cells. The expression of genes is also dysregulated in a complex and specific pattern. The cells underexpress 3p24, 3p21, 3q13.3-q22, 4p16, 11q23, 14q22-q24, 15q21-q22, 15q24-q25, and 17q22-q24 and overexpress 13q31 and Xq13.3-q21. It has not yet been demonstrated that any of these changes have any practical significance to the patient. Diagnosis The diagnosis of HCL may be suggested by abnormal results on a complete blood count (CBC), but additional testing is necessary to confirm the diagnosis. A CBC normally shows low counts for white blood cells, red blood cells, and platelets in HCL patients. However, if large numbers of hairy cells are in the blood stream, then normal or even high lymphocyte counts may be found. On physical examination, 80–90% of patients have an enlarged spleen, which can be massive. This is less likely among patients who are diagnosed at an early stage. Peripheral lymphadenopathy (enlarged lymph nodes) is uncommon (less than 5% of patients), but abdominal lymphadenopathy is a relatively common finding on computed tomography scans. The most important laboratory finding is the presence of hairy cells in the bloodstream. 
Hairy cells are abnormal white blood cells with hair-like projections of cytoplasm; they can be seen by examining a blood smear or bone marrow biopsy specimen. The blood film examination is done by staining the blood cells with Wrights stain and looking at them under a microscope. Hairy cells are visible in this test in about 85% of cases.Most patients require a bone-marrow biopsy for final diagnosis. The bone marrow biopsy is used both to confirm the presence of HCL and also the absence of any additional diseases, such as Splenic marginal zone lymphoma or B-cell prolymphocytic leukemia. The diagnosis can be confirmed by viewing the cells with a special stain known as TRAP (tartrate resistant acid phosphatase). More recently, DB44 testing assures more accurate results. Definitively diagnosing HCL is also possible through flow cytometry on blood or bone marrow. The hairy cells are larger than normal and positive for CD19, CD20, CD22, CD11c, CD25, CD103, and FMC7 antigens. (CD103, CD22, and CD11c are strongly expressed.)Hairy cell leukemia-variant (HCL-V), which shares some characteristics with B cell prolymphocytic leukemia (B-PLL), does not show CD25 (also called the interleukin-2 receptor, alpha). The differential diagnoses include several kinds of anemia, including myelophthisis and aplastic anemia, and most kinds of blood neoplasms, including hypoplastic myelodysplastic syndrome, atypical chronic lymphocytic leukemia, B-cell prolymphocytic leukemia, or idiopathic myelofibrosis. Classification When not further specified, the "classic" form is often implied, but two variants have been described: Hairy cell leukemia-variant and a Japanese variant. The non-Japanese variant is more difficult to treat than either classic HCL or the Japanese HCL. HCL-V Hairy cell leukemia-variant (HCL-V) is usually described as a prolymphocytic variant of HCL. It was first formally described in 1980 by a paper from the University of Cambridges Hayhoe laboratory. About 10% of people with HCL have this variant form of the disease, representing about 60-75 new cases of HCL-V each year in the U.S. While classic HCL primarily affects men, HCL-V is more evenly divided between males and females. While the disease can appear at any age, the median age at diagnosis is over 70.Similar to B-cell prolymphocytic leukemia (B-PLL) in chronic lymphocytic leukemia, HCL-V is a more aggressive disease. Historically, it has been considered less likely to be treated successfully than is classic HCL, and remissions have tended to be shorter. The introduction of combination therapy with concurrent rituximab and cladribine therapy, though, has shown excellent results in early follow-up. As of 2016, this therapy is considered the first-line treatment of choice for many people with HCL-V.Many older treatment approaches, such as interferon-alpha, the combination chemotherapy regimen "CHOP", and common alkylating agents such as cyclophosphamide, showed very little benefit. Pentostatin and cladribine administered as monotherapy (without concurrent rituximab) provide some benefit to many people with HCL-V, but typically induce shorter remission periods and lower response rates than when they are used in classic HCL. More than half of people respond partially to splenectomy.In terms of B-cell development, the prolymphocytes are less developed than are lymphocytes or plasma cells, but are still more mature than their lymphoblastic precursors. 
HCL-V differs from classic HCL principally in these respects: higher white blood cell counts, sometimes exceeding 100,000 cells per microliter; a more aggressive course of disease requiring more frequent treatment; hairy cells with an unusually large nucleolus for their size; production of little of the excess fibronectin that classic hairy cells produce to interfere with bone marrow biopsies; and low or no cell-surface expression of CD25 (interleukin-2 [IL-2] receptor alpha chain, or p55). Low levels of CD25, a part of the receptor for a key immunoregulating hormone, may explain why HCL-V cases are generally much more resistant to treatment by immune-system hormones. HCL-V, which usually features a high proportion of hairy cells without a functional p53 tumor suppressor gene, is somewhat more likely to transform into a higher-grade malignancy. A typical transformation rate of 5-6% has been postulated in the UK, similar to the rate of Richter's transformation seen in splenic lymphoma with villous lymphocytes (SLVL) and CLL. Among HCL-V patients, the most aggressive cases normally have the least amount of p53 gene activity. Hairy cells without the p53 gene tend, over time, to displace the less aggressive p53(+) hairy cells. Some evidence suggests that a rearrangement of the immunoglobulin gene VH4-34, which is found in about 40% of HCL-V patients and 10% of classic HCL patients, may be a more important poor prognostic factor than variant status, with HCL-V patients without the VH4-34 rearrangement responding about as well as classic HCL patients. Hairy cell leukemia-Japanese variant The variant called hairy cell leukemia-Japanese variant, or HCL-J, is more easily treated. Treatment with cladribine has been reported. Prevention Because the cause is unknown, no effective preventive measures can be taken, but the U.S. Institute of Medicine permits an association with exposure to herbicides (atrazine). The U.S. Department of Veterans Affairs also considers it a service disability related to Agent Orange. Because the disease is rare, routine screening is not cost effective. Treatment Several treatments are available, and successful control of the disease is common. Not everyone needs treatment immediately. Treatment is usually given when the symptoms of the disease interfere with the patient's everyday life, or when white blood cell or platelet counts decline to dangerously low levels, such as an absolute neutrophil count below 1000 cells per microliter (1.0 K/μL). Not all patients need treatment immediately upon diagnosis. Treatment delays are less important than in solid tumors. Unlike most cancers, treatment success does not depend on treating the disease at an early stage. Because delays do not affect treatment success, no standards exist for how quickly a patient should receive treatment. Waiting too long, though, can cause its own problems, such as an infection that might have been avoided by proper treatment to restore immune-system function. Also, having a higher number of hairy cells at the time of treatment can make certain side effects somewhat worse, as some side effects are primarily caused by the body's natural response to the dying hairy cells. This can result in the hospitalization of a patient whose treatment would otherwise be carried out entirely at the hematologist's office. Single-drug treatment is typical. Unlike most cancers, only one drug is normally given to a patient at a time. 
While monotherapy is normal, combination therapy—typically using one first-line therapy and one second-line therapy—is being studied in current clinical trials, and is used more frequently for refractory cases. Combining rituximab with cladribine or pentostatin may or may not produce any practical benefit to the patient. Combination therapy is almost never used with a new patient. Because the success rates with purine analog monotherapy are already so high, the additional benefit from immediate treatment with a second drug in a treatment-naïve patient is assumed to be very low. For example, one round of either cladribine or pentostatin gives the median first-time patient a decade-long remission; the addition of rituximab, which gives the median patient only three or four years, might provide no additional value for this easily treated patient. In a more difficult case, however, the benefit from the first drug may be substantially reduced, so a combination may provide some benefit. First-line therapy Cladribine (2CDA) and pentostatin (DCF) are the two most common first-line therapies. They both belong to a class of medications called purine analogs, which have mild side effects compared to traditional chemotherapy regimens. Cladribine can be administered by injection under the skin, by infusion over a few hours into a vein (intravenous, IV), or by a pump worn by the patient that provides a slow drip into a vein, 24 hours a day for 7 days. Most patients receive cladribine by IV infusion once a day for five to seven days, but more patients are being given the option of taking this drug once a week for six weeks. The different dosing schedules used with cladribine are roughly equally effective and safe. Relatively few patients have significant side effects other than fatigue and a high fever caused by the cancer cells dying, although complications such as infection and acute kidney failure have been seen. Pentostatin is chemically similar to cladribine, and has a similar success rate and side effect profile, but it is always given over a much longer period of time, usually one dose by IV infusion every two weeks for 3–6 months. During the weeks following treatment, the patients' immune systems are severely weakened, but their bone marrow will begin to produce normal blood cells again. Treatment often results in long-term remission. About 85% of patients achieve a complete response from treatment with either cladribine or pentostatin, and another 10% receive some benefit from these drugs, although no permanent cure for this disease is known. If the cancer cells return, the treatment may be repeated and should again result in remission, although the odds of success decline with repeated treatment. Remission lengths vary significantly, from one year to more than twenty years. The median patient can expect a treatment-free interval of about ten years. 2-Chlorodeoxyadenosine (cladribine) induced complete responses in patients with hairy cell leukemia resistant to DCF, suggesting a lack of cross-resistance. Also, 2-CdA is not prohibitively toxic in patients intolerant of DCF (pentostatin). Second-line therapy If a patient is resistant to either cladribine or pentostatin, then second-line therapy is pursued. Monoclonal antibodies: The most common treatment for cladribine-resistant disease is infusing monoclonal antibodies that destroy cancerous B cells. Rituximab is by far the most commonly used. Most patients receive one IV infusion over several hours each week for 4–8 weeks. 
Two partial and 10 complete responses resulted from 15 patients with relapsed disease, for a total of 80% responding. The median patient (including nonresponders) did not require further treatment for more than 3 years. This eight-dose study had a higher response rate than a four-dose study at Scripps, which achieved only a 25% response rate. Rituximab has successfully induced a complete response in hairy cell leukemia-variant. Rituximab's major side effect is serum sickness, commonly described as an "allergic reaction", which can be severe, especially on the first infusion. Serum sickness is primarily caused by the antibodies clumping during infusion and triggering the complement cascade. Although most patients find that side effects are adequately controlled by antiallergy drugs, some severe, and even fatal, reactions have occurred. Consequently, the first dose is always given in a hospital setting, although subsequent infusions may be given in a physician's office. Remissions are usually shorter than with the preferred first-line drugs, but hematologic remissions of several years' duration are not uncommon. Other B cell-destroying monoclonal antibodies such as alemtuzumab, ibritumomab tiuxetan and I-131 tositumomab may be considered for refractory cases. Interferon-alpha is an immune system hormone that is very helpful to a relatively small number of patients, and somewhat helpful to most patients. In about 65% of patients, the drug helps stabilize the disease or produce a slow, minor improvement for a partial response. The typical dosing schedule injects at least 3 million units of interferon-alpha (not pegylated versions) three times a week, although the original protocol began with 6 months of daily injections. Some patients tolerate IFN-alpha very well after the first few weeks, while others find that its characteristic flu-like symptoms persist. About 10% of patients develop some degree of depression. By maintaining a steadier level of the hormone in the body, daily injections might cause fewer side effects in selected patients. Drinking at least 2 liters of water each day, while avoiding caffeine and alcohol, can reduce many of the side effects. A drop in blood counts is usually seen during the first 1–2 months of treatment. Most patients find that their blood counts get worse for a few weeks immediately after starting treatment, although some patients find their blood counts begin to improve within just 2 weeks. Typically, 6 months are needed to determine whether this therapy is useful. Common criteria for treatment success include normalization of hemoglobin levels (above 12.0 g/dL), a normal or somewhat low platelet count (above 100 K/µL), and a normal or somewhat low absolute neutrophil count (above 1.5 K/µL). If it is well tolerated, patients usually take the hormone for 12 to 18 months. An attempt may be made then to end the treatment, but most patients discover that they need to continue taking the drug for it to be successful. These patients often continue taking this drug indefinitely, until either the disease becomes resistant to this hormone, or the body produces an immune system response that limits the drug's ability to function. A few patients are able to achieve a sustained clinical remission after taking this drug for six months to one year. This may be more likely when IFN-alpha has been initiated shortly after another therapy. 
IFN-alpha is considered the drug of choice for pregnant women with active HCL, although it carries some risks, such as the potential for decreased blood flow to the placenta. IFN-alpha works by sensitizing the hairy cells to the killing effect of the immune-system hormone TNF-alpha, whose production it promotes. IFN-alpha works best on classic hairy cells that are not protectively adhered to vitronectin or fibronectin, which suggests that patients who encounter less fibrous tissue in their bone-marrow biopsies may be more likely to respond to IFN-alpha therapy. It also explains why unadhered hairy cells, such as those in the bloodstream, disappear during IFN-alpha treatment well before reductions are seen in adhered hairy cells, such as those in the bone marrow and spleen. Other treatments Splenectomy can produce long-term remissions in patients whose spleens seem to be heavily involved, but its success rate is noticeably lower than cladribine or pentostatin. Splenectomies are also performed for patients whose persistently enlarged spleens cause significant discomfort or in patients whose persistently low platelet counts suggest idiopathic thrombocytopenic purpura. Bone marrow transplants are usually shunned in this highly treatable disease because of the inherent risks in the procedure. They may be considered for refractory cases in younger, otherwise healthy individuals. "Minitransplants" are possible. People with low numbers of red blood cells or platelets may also receive red blood cells and platelets through blood transfusions. Blood transfusions are always irradiated to remove white blood cells and thereby reduce the risk of graft-versus-host disease. Affected people may also receive a hormone to stimulate production of red blood cells. These treatments may be medically necessary, but do not kill the hairy cells. People with low neutrophil counts may be given filgrastim or a similar hormone to stimulate production of white blood cells. However, a 1999 study indicates that routine administration of this expensive injected drug has no practical value for HCL patients after cladribine administration. In this study, patients who received filgrastim were just as likely to experience a high fever and to be admitted to the hospital as those who did not, even though the drug artificially inflated their white blood cell counts. This study leaves open the possibility that filgrastim may still be appropriate for patients who have symptoms of infection, or at times other than shortly after cladribine treatment. Although hairy cells are technically long-lived, instead of rapidly dividing, some late-stage patients are treated with broad-spectrum chemotherapy agents such as methotrexate that are effective at killing rapidly dividing cells. This is not typically attempted unless all other options have been exhausted and it is typically unsuccessful. Prognosis Treatment success More than 95% of new patients are treated well or at least adequately by cladribine or pentostatin. A majority of new patients can expect a disease-free remission time span of about ten years, or sometimes much longer after taking one of these drugs just once. If retreatment is necessary in the future, the drugs are normally effective again, although the average length of remission is somewhat shorter in subsequent treatments. There is also the risk of Shingles, and Peripheral Neuropathy after treatment with cladribine. 
As with B-cell chronic lymphocytic leukemia, mutations in the IGHV on hairy cells are associated with better responses to initial treatments and with prolonged survival.How soon after treatment a patient feels "normal" again depends on several factors, including: how advanced the disease was at the time of treatment; the patients underlying health status; whether the patient had a "complete response" or only a partial response to the treatment; whether the patient experienced any of the rare, but serious side effects such as kidney failure; how aggressive the individuals disease is; whether the patient is experiencing unusual psychological trauma from the "cancer" diagnosis; and how the patient perceived his or her pre-treatment energy level and daily functioning. Lifespan With appropriate treatment, the overall projected lifespan for patients is normal or near-normal. In all patients, the first two years after diagnosis have the highest risk for fatal outcome; generally, surviving five years predicts good control of the disease. After five years clinical remission, patients in the United States with normal blood counts can often qualify for private life insurance with some US companies.Accurately measuring survival for patients with the variant form of the disease (HCL-V) is complicated by the relatively high median age (70 years old) at diagnosis. However, HCL-V patients routinely survive for more than 10 years, and younger patients can likely expect a long life. Follow-up care Despite decade-long remissions and years of living very normal lives after treatment, hairy cell leukemia is officially considered an incurable disease. While survivors of solid tumors are commonly declared to be permanently cured after two, three, or five years, people who have hairy cell leukemia are never considered cured. Relapses of HCL have happened even after more than twenty years of continuous remission. Patients will require lifelong monitoring and should be aware that the disease can recur even after decades of good health. While most oncologists consider Hairy Cell Leukemia to be incurable, there is some evidence that some patients are in fact cured after treatments. Of the original 358-patient cohort treated with cladribine at the Scripps Clinic, 9 of 19 in continuous CR for a median of 16 years were free of HCL MRD by flow cytometry and IHC. This suggests that the disease of at least some patients may be cured.People in remission need regular follow-up examinations after their treatment is over. Most physicians insist on seeing patients at least once a year for the rest of the patients life, and getting blood counts about twice a year. Regular follow-up care ensures that patients are carefully monitored, any changes in health are discussed, and new or recurrent cancer can be detected and treated as soon as possible. Between regularly scheduled appointments, people who have hairy cell leukemia should report any health problems, especially viral or bacterial infections, as soon as they appear. HCL patients are also at a slightly higher than average risk for developing a second kind of cancer, such as colon cancer or lung cancer, at some point during their lives (including before their HCL diagnosis). This appears to relate best to the number of hairy cells, and not to different forms of treatment. 
On average, patients might reasonably expect to have as much as double the risk of developing another cancer, with a peak about two years after HCL diagnosis and falling steadily after that, assuming that the HCL was successfully treated. Aggressive surveillance and prevention efforts are generally warranted, although the lifetime odds of developing a second cancer after HCL diagnosis are still less than 50%. There is also a higher risk of developing an autoimmune disease. Polyarteritis nodosa has been associated with underlying hairy cell leukemia in certain cases. Autoimmune diseases may also go into remission after treatment of HCL. Epidemiology This disease is rare,
with fewer than 1 in 10,000 people being diagnosed with HCL during their lives. Men are four to five times more likely to develop hairy cell leukemia than women. In the United States, the annual incidence is approximately 3 cases per 1,000,000 men each year, and 0.6 cases per 1,000,000 women each year.Most patients are white males over the age of 50, although it has been diagnosed in at least one teenager. It is less common in people of African and Asian descent compared to people of European descent. It does not appear to be hereditary, although occasional familial cases that suggest a predisposition have been reported, usually showing a common Human Leukocyte Antigen (HLA) type. Research directions The Hairy Cell Leukemia Consortium was founded in 2008 to address researchers concerns about the long-term future of research on the disease. Partly because existing treatments are so successful, the field has attracted very few new researchers. In 2013 the Hairy Cell Leukemia Foundation was created when the Hairy Cell Leukemia Consortium and the Hairy Cell Leukemia Research Foundation joined. The HCLF is dedicated to improving outcomes for patients by advancing research into the causes and treatment of hairy cell leukemia, as well as by providing educational resources and comfort to all those affected by hairy cell leukemia.Three immunotoxin drugs have been studied in patients at the NIH National Cancer Institute in the U.S.: BL22, HA22 and LMB-2. All of these protein-based drugs combine part of an anti-B cell antibody with a bacterial toxin to kill the cells on internalization. BL22 and HA22 attack a common protein called CD22, which is present on hairy cells and healthy B cells. LMB-2 attacks a protein called CD25, which is not present in HCL-variant, so LMB-2 is only useful for patients with HCL-classic or the Japanese variant. HA-22, now renamed moxetumomab pasudotox, is being studied in patients with relapsed hairy cell leukemia at the National Cancer Institute in Bethesda, Maryland, MD Anderson Cancer Center in Houston, Texas, and Ohio State University in Columbus, Ohio. Other sites for the study are expected to start accepting patients in late 2014, including The Royal Marsden Hospital in London, England.Other clinical trials are studying the effectiveness of cladribine followed by rituximab in eliminating residual hairy cells that remain after treatment by cladribine or pentostatin. It is not currently known if the elimination of such residual cells will result in more durable remissions. BRAF mutation has been frequently detected in HCL (Tiacci et al. NEJM 2011) and some patients may respond to Vemurafenib. The major remaining research questions are identifying the cause of HCL and determining what prevents hairy cells from maturing normally. See also Annexin A1 List of cutaneous conditions References External links About HCL at the US National Cancer Institute History of HCL and the Godmother of HCL
Flaccid paralysis
Flaccid paralysis is a neurological condition characterized by weakness or paralysis and reduced muscle tone without other obvious cause (e.g., trauma). This abnormal condition may be caused by disease or by trauma affecting the nerves associated with the involved muscles. For example, if the somatic nerves to a skeletal muscle are severed, then the muscle will exhibit flaccid paralysis. When muscles enter this state, they become limp and cannot contract. This condition can become fatal if it affects the respiratory muscles, posing the threat of suffocation. It also occurs during the spinal shock stage of complete spinal cord transection, as can occur in injuries such as gunshot wounds. Causes Polio and other viruses The term acute flaccid paralysis (AFP) is often used to describe an instance with a sudden onset, as might be found with polio. AFP is the most common sign of acute polio, and is used for surveillance during polio outbreaks. AFP is also associated with a number of other pathogenic agents including enteroviruses other than polio, echoviruses, West Nile virus, and adenoviruses, among others. Botulism The Clostridium botulinum bacteria are the cause of botulism. Vegetative cells of C. botulinum may be ingested. Introduction of the bacteria may also occur via endospores in a wound. When the bacteria are in vivo, they induce flaccid paralysis. This happens because C. botulinum produces a toxin that blocks the release of acetylcholine. Botulism toxin blocks the exocytosis of presynaptic vesicles containing acetylcholine (ACh). When this occurs, the muscles are unable to contract. Other symptoms associated with infection from this neurotoxin include double vision, blurred vision, drooping eyelids, slurred speech, difficulty swallowing, dry mouth, and muscle weakness. Botulism prevents muscle contraction by blocking the release of acetylcholine, thereby halting postsynaptic activity of the neuromuscular junction. If its effects reach the respiratory muscles, then it can lead to respiratory failure, leading to death. Curare Curare is a plant poison derived from – among other species – Chondrodendron tomentosum and various species belonging to the genus Strychnos, which are native to the rainforests of South America. Certain peoples indigenous to the region – notably the Macushi – crush and cook the roots and stems of these and certain other plants and then mix the resulting decoction with various other plant poisons and animal venoms to create a syrupy liquid in which to dip their arrow heads and the tips of their blowgun darts. Curare has also been used medicinally by South Americans to treat madness, dropsy, edema, fever, kidney stones, and bruises. Curare acts as a neuromuscular blocking agent that induces flaccid paralysis. This poison binds to the acetylcholine (ACh) receptors on the muscle, blocking them from binding to ACh. As a result, ACh accumulates within the neuromuscular junction, but since ACh cannot bind to the receptors on the muscle, the muscle cannot be stimulated. This poison must enter the bloodstream for it to work. If curare affects the respiratory muscles, then its effects can become life-threatening, placing the victim at risk for suffocation. Other Flaccid paralysis can be associated with a lower motor neuron lesion. 
This is in contrast to an upper motor neuron lesion, which often presents with spasticity, although early on this may present with flaccid paralysis. Conditions included in the AFP list are poliomyelitis (polio), transverse myelitis, Guillain–Barré syndrome, enteroviral encephalopathy, traumatic neuritis, Reye's syndrome, etc. An AFP surveillance programme is conducted to increase the case yield of poliomyelitis. This includes collection of two stool samples within fourteen days of onset of paralysis and identification of the virus, as well as control of the outbreak and strengthening of immunization in that area. Historical records from the 1950s, modern CDC reports, and recent analysis of patterns in India suggest that flaccid paralysis may be caused in some cases by oral polio vaccinations. Venomous snakes that contain neurotoxic venom, such as kraits, mambas, and cobras, can also cause complete flaccid paralysis. Some chemical warfare nerve agents such as VX can also cause complete flaccid paralysis. In some situations, most prominently in people of East Asian descent, hyperthyroidism can affect the balance between cellular uptake and release of potassium ions, resulting in hypokalaemic paralysis. References Further reading External links WHO Programme for Immunization Preventable Diseases (IPD) A Collaboration between World Health Organization and Government of Nepal
Prothrombin G20210A
Prothrombin G20210A is a genetic condition that increases the risk of blood clots, including deep vein thrombosis and pulmonary embolism. One copy of the mutation increases the risk of a blood clot from 1 in 1,000 per year to 2.5 in 1,000. Two copies increase the risk to up to 20 in 1,000 per year. Most people never develop a blood clot in their lifetimes. It is due to a specific gene mutation in which a guanine (G) is changed to an adenine (A) at position 20210 of the DNA of the prothrombin gene. Other blood clotting pathway mutations that increase the risk of clots include factor V Leiden. Prothrombin G20210A was identified in the 1990s. About 2% of Caucasians carry the variant, while it is less common in other populations. It is estimated to have originated in Caucasians about 20,000 years ago. Signs and symptoms The variant causes elevated plasma prothrombin levels (hyperprothrombinemia), possibly due to increased pre-mRNA stability. Prothrombin is the precursor to thrombin, which plays a key role in causing blood to clot (blood coagulation). G20210A can thus contribute to a state of hypercoagulability, but it is not particularly associated with arterial thrombosis. A 2006 meta-analysis showed only a 1.3-fold increased risk for coronary disease. Deficiencies in the anticoagulants Protein C and Protein S further increase the risk five- to tenfold. Behind non-O blood type and factor V Leiden, prothrombin G20210A is one of the most common genetic risk factors for venous thromboembolism (VTE). Increased production of prothrombin heightens the risk of blood clotting. Moreover, individuals who carry the mutation can pass it on to their offspring. The mutation increases the risk of developing deep vein thrombosis (DVT), which can cause pain and swelling, and sometimes post-thrombotic syndrome, ulcers, or pulmonary embolism. Most individuals do not require treatment but do need to be cautious during periods when the possibility of blood clotting is increased; for example, during pregnancy, after surgery, or during long flights. Occasionally, blood-thinning medication may be indicated to reduce the risk of clotting. Heterozygous carriers who take combined birth control pills are at a 15-fold increased risk of VTE, while carriers also heterozygous with factor V Leiden have an approximately 20-fold higher risk. In a recommendation statement on VTE, genetic testing for G20210A in adults who developed unprovoked VTE was disadvised, as was testing in asymptomatic family members related to G20210A carriers who developed VTE. In those who develop VTE, the results of thrombophilia tests (wherein the variant can be detected) rarely play a role in the length of treatment. Cause The polymorphism is located in a noncoding region of the prothrombin gene (3' untranslated region, nucleotide 20210), replacing guanine with adenine. The position is at or near where the pre-mRNA will have the poly-A tail attached. Diagnosis Diagnosis of the prothrombin G20210A mutation is straightforward because the mutation involves a single base change (point mutation) that can be detected by genetic testing, which is unaffected by intercurrent illness or anticoagulant use. Measurement of an elevated plasma prothrombin level cannot be used to screen for the prothrombin G20210A mutation, because there is too great an overlap between the upper limit of normal and levels in affected patients. 
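To make the risk figures quoted at the start of this article concrete, the sketch below converts them into approximate absolute annual risks. The background rate of about 1 in 1,000 per year and the multipliers are taken from the text above; treating the 15-fold and roughly 20-fold figures as applying to that background rate is an assumption, since the comparison group is not stated explicitly, so this is an illustration of the arithmetic rather than a risk calculator.

BASELINE_ANNUAL_RISK = 1 / 1000  # approximate background risk of a blood clot per year, as quoted above

scenarios = {
    "one copy of G20210A": 2.5 / 1000,                                   # absolute figure quoted above
    "two copies of G20210A": 20 / 1000,                                  # upper estimate quoted above
    "heterozygous carrier on combined birth control pills": 15 * BASELINE_ANNUAL_RISK,
    "carrier also heterozygous for factor V Leiden": 20 * BASELINE_ANNUAL_RISK,
}

for label, annual_risk in scenarios.items():
    print(f"{label}: about {annual_risk * 1000:.1f} per 1,000 per year ({annual_risk:.2%})")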
Treatment Patients with the prothrombin mutation are treated similarly to those with other types of thrombophilia, with anticoagulation for at least three to six months. Continuing anticoagulation beyond three to six months depends on the circumstances surrounding the thrombosis; for example, if the patient experienced an unprovoked thromboembolic event, continuing anticoagulation would be recommended. The choice of anticoagulant (warfarin versus a direct oral anticoagulant) is based on a number of different factors (the severity of thrombosis, patient preference, adherence to therapy, and potential drug and dietary interactions). Patients with the prothrombin G20210A mutation who have not had a thromboembolic event are generally not treated with routine anticoagulation. However, counseling the patient is recommended in situations with increased thrombotic risk (pregnancy, surgery, and acute illness). Oral contraceptives should generally be avoided in women with the mutation as they increase the thrombotic risk. Terminology Because prothrombin is also known as factor II, the mutation is also sometimes referred to as the factor II mutation or simply the prothrombin mutation; in either case, the names may appear with or without the accompanying G20210A location specifier (unhelpfully, since prothrombin mutations other than G20210A are known). Notes References External links Mannucci, P. M. & Franchini, M. (2015). "Classic thrombophilic gene variants". Thrombosis and Haemostasis. 114 (5): 885–889. doi:10.1160/th15-02-0141. PMID 26018405. Archived from the original (review) on 10 June 2016. Retrieved 21 May 2016.
Medical procedure
A medical procedure is a course of action intended to achieve a result in the delivery of healthcare. A medical procedure with the intention of determining, measuring, or diagnosing a patient condition or parameter is also called a medical test. Other common kinds of procedures are therapeutic (i.e., intended to treat, cure, or restore function or structure), such as surgical and physical rehabilitation procedures. Definition "An activity directed at or performed on an individual with the object of improving health, treating disease or injury, or making a diagnosis." "The act or conduct of diagnosis, treatment, or operation." "A series of steps by which a desired result is accomplished." "The sequence of steps to be followed in establishing some course of action." List of medical procedures Propaedeutic Auscultation Medical inspection (body features) Palpation Percussion (medicine) Vital signs measurement, such as blood pressure, body temperature, or pulse (or heart rate) Diagnostic Lab tests Biopsy test Blood test Stool test Urinalysis Cardiac stress test Electrocardiography Electrocorticography Electroencephalography Electromyography Electroneuronography Electronystagmography Electrooculography Electroretinography Endoluminal capsule monitoring Endoscopy Colonoscopy Colposcopy Cystoscopy Gastroscopy Laparoscopy Laryngoscopy Ophthalmoscopy Otoscopy Sigmoidoscopy Esophageal motility study Evoked potential Magnetoencephalography Medical imaging Angiography Aortography Cerebral angiography Coronary angiography Lymphangiography Pulmonary angiography Ventriculography Chest photofluorography Computed tomography Echocardiography Electrical impedance tomography Fluoroscopy Magnetic resonance imaging Diffuse optical imaging Diffusion tensor imaging Diffusion-weighted imaging Functional magnetic resonance imaging Positron emission tomography Radiography Scintillography SPECT Ultrasonography Contrast-enhanced ultrasound Gynecologic ultrasonography Intravascular ultrasound Obstetric ultrasonography Thermography Virtual colonoscopy Neuroimaging Posturography Therapeutic Thrombosis prophylaxis Precordial thump Politzerization Hemodialysis Hemofiltration Plasmapheresis Apheresis Extracorporeal membrane oxygenation (ECMO) Cancer immunotherapy Cancer vaccine Cervical conization Chemotherapy Cytoluminescent therapy Insulin potentiation therapy Low-dose chemotherapy Monoclonal antibody therapy Photodynamic therapy Radiation therapy Targeted therapy Tracheal intubation Unsealed source radiotherapy Virtual reality therapy Physical therapy/Physiotherapy Speech therapy Phototherapy Hydrotherapy Heat therapy Shock therapy Insulin shock therapy Electroconvulsive therapy Symptomatic treatment Fluid replacement therapy Palliative care Hyperbaric oxygen therapy Oxygen therapy Gene therapy Enzyme replacement therapy Intravenous therapy Phage therapy Respiratory therapy Vision therapy Electrotherapy Transcutaneous electrical nerve stimulation (TENS) Laser therapy Combination therapy Occupational therapy Immunization Vaccination Immunosuppressive therapy Psychotherapy Drug therapy Acupuncture Antivenom Magnetic therapy Craniosacral therapy Chelation therapy Hormonal therapy Hormone replacement therapy Opiate replacement therapy Cell therapy Stem cell treatments Intubation Nebulization Inhalation therapy Particle therapy Proton therapy Fluoride therapy Cold compression therapy Animal-Assisted Therapy Negative Pressure Wound Therapy Nicotine replacement therapy Oral rehydration therapy Surgical Ablation Amputation Biopsy 
Cardiopulmonary resuscitation (CPR) Cryosurgery Endoscopic surgery Facial rejuvenation General surgery Hand surgery Hemilaminectomy Image-guided surgery Knee cartilage replacement therapy Laminectomy Laparoscopic surgery Lithotomy Lithotriptor Lobotomy Neovaginoplasty Radiosurgery Stereotactic surgery Vaginoplasty Xenotransplantation Anesthesia Dissociative anesthesia General anesthesia Local anesthesia Topical anesthesia (surface) Epidural (extradural) block Spinal anesthesia (subarachnoid block) Regional anesthesia Other Interventional radiology Screening (medicine) See also Algorithm (medical) Autopsy Complication (medicine) Consensus (medical) Contraindication Course (medicine) Drug interaction Extracorporeal Guideline (medical) Iatrogenesis Invasive (medical) List of surgical instruments Medical error Medical prescription Medical test Minimally invasive Nocebo Non-invasive Physical examination Responsible drug use Surgical instruments Vital signs == References ==
Cardiac catheterization
Cardiac catheterization (heart cath) is the insertion of a catheter into a chamber or vessel of the heart. This is done both for diagnostic and interventional purposes. A common example of cardiac catheterization is coronary catheterization, which involves catheterization of the coronary arteries for coronary artery disease and myocardial infarctions ("heart attacks"). Catheterization is most often performed in special laboratories with fluoroscopy and highly maneuverable tables. These "cath labs" are often equipped with cabinets of catheters, stents, balloons, etc. of various sizes to increase efficiency. Monitors show the fluoroscopy imaging, electrocardiogram (ECG), pressure waves, and more. Uses Coronary angiography is a diagnostic procedure that allows visualization of the coronary vessels. Fluoroscopy is used to visualize the lumens of the arteries as a 2-D projection. Should these arteries show narrowing or blockage, then techniques exist to open these arteries. Percutaneous coronary intervention is a blanket term that involves the use of mechanical stents, balloons, etc. to increase blood flow to previously blocked (or occluded) vessels. Measuring pressures in the heart is also an important aspect of catheterization. The catheters are fluid-filled conduits that can transmit pressures to outside the body to pressure transducers. This allows measuring pressure in any part of the heart that a catheter can be maneuvered into. Measuring blood flow is also possible through several methods. Most commonly, flows are estimated using the Fick principle and thermodilution. These methods have drawbacks, but give invasive estimations of the cardiac output, which can be used to make clinical decisions (e.g., cardiogenic shock, heart failure) to improve the person's condition. Cardiac catheterization can be used as part of a therapeutic regimen to improve outcomes for survivors of out-of-hospital cardiac arrest. Cardiac catheterization often requires the use of fluoroscopy to visualize the path of the catheter as it enters the heart or as it enters the coronary arteries. The coronary arteries are known as "epicardial vessels" as they are located in the epicardium, the outermost layer of the heart. The use of fluoroscopy requires radiopaque contrast, which in rare cases can lead to contrast-induced kidney injury (see Contrast-induced nephropathy). People are constantly exposed to low doses of ionizing radiation during procedures. Ideal table positioning between the x-ray source and receiver, and radiation monitoring via thermoluminescent dosimetry, are two main ways of reducing a person's exposure to radiation. People with certain comorbidities (people who have more than one condition at the same time) have a higher risk of adverse events during the cardiac catheterization procedure. These comorbid conditions include aortic aneurysm, aortic stenosis, extensive three-vessel coronary artery disease, diabetes, uncontrolled hypertension, obesity, chronic kidney disease, and unstable angina. Left heart catheterization Left heart catheterization (LHC) is an ambiguous term and sometimes clarification is required: LHC can mean measuring the pressures of the left side of the heart. LHC can be synonymous with coronary angiography. The technique is also used to assess the amount of occlusion (or blockage) in a coronary artery, often described as a percentage of occlusion.
A thin, flexible wire is inserted into either the femoral artery or the radial artery and threaded toward the heart until it is in the ascending aorta. Radial access is not associated with an increased risk of stroke over femoral access. At this point, a catheter is guided over the wire into the ascending aorta, where it can be maneuvered into the coronary arteries through the coronary ostia. In this position, the interventional cardiologist can inject contrast and visualize the flow through the vessel. If necessary, the physician can utilize percutaneous coronary intervention techniques, including the use of a stent (either bare-metal or drug-eluting) to open the blocked vessel and restore appropriate blood flow. In general, occlusions greater than 70% of the width of the vessel lumen are thought to require intervention. However, in cases where multiple vessels are blocked (so-called "three-vessel disease"), the interventional cardiologist may opt instead to refer the patient to a cardiothoracic surgeon for coronary artery bypass graft (CABG; see Coronary artery bypass surgery) surgery. Right heart catheterization Right heart catheterization (RHC) allows the physician to determine the pressures within the heart (intracardiac pressures). The heart is most often accessed via the internal jugular or femoral vein; arteries are not used. Values are commonly obtained for the right atrium, right ventricle, pulmonary artery, and pulmonary capillary "wedge" pressures. Right heart catheterizations also allow the physician to estimate the cardiac output, the amount of blood that flows from the heart each minute, and the cardiac index, a hemodynamic parameter that relates the cardiac output to a patient's body size. Determination of cardiac output can be done by releasing a small amount of saline solution (either chilled or at room temperature) in one area of the heart and measuring the change in blood temperature over time in another area of the heart. Right heart catheterization is often done for pulmonary hypertension, heart failure, and cardiogenic shock. The pulmonary artery catheter can be placed, used, and removed, or it can be placed and left in place for continuous monitoring. The latter can be done in an intensive care unit (ICU) to permit frequent measurement of the hemodynamic parameters in response to interventions. Parameters obtainable from a right heart catheterization include: right atrial pressure, right ventricular pressure, pulmonary artery pressure, pulmonary capillary wedge pressure, systemic vascular resistance, pulmonary vascular resistance, cardiac output, and blood oxygenation. Implantation of a CardioMEMS is done during a right heart catheterization. This device is implanted into the pulmonary artery to permit real-time measurement of the pulmonary artery pressure over time. Coronary catheterization Coronary catheterization is an invasive process and comes with risks that include stroke, heart attack, and death. Like any procedure, the benefits should outweigh the risks and so this procedure is reserved for those with symptoms of serious heart disease and is never used for screening purposes.
Other, non-invasive tests are better used when the diagnosis or certainty of the diagnosis is not as clear. Indications for cardiac catheterization include the following: acute coronary syndromes (ST elevation MI (STEMI), non-ST elevation MI (NSTEMI), and unstable angina); evaluation of coronary artery disease as indicated by an abnormal stress test; as part of the pre-op evaluation for other cardiac procedures (e.g., valve replacement), as coronary artery bypass grafting may be done at the same time; risk stratification for high cardiac risk surgeries (e.g., endovascular aneurysm repair); persistent chest pain despite medical therapy thought to be cardiac in origin; new-onset unexplained heart failure; survival of sudden cardiac death or dangerous cardiac arrhythmias; and workup of suspected Prinzmetal angina (coronary vasospasm). Right heart catheterization, along with pulmonary function testing and other testing, should be done to confirm pulmonary hypertension prior to having vasoactive pharmacologic treatments approved and initiated. Cardiac catheterization can also be used to measure intracardiac and intravascular blood pressures; to take tissue samples for biopsy; to inject various agents for measuring blood flow in the heart, and to detect and quantify the presence of an intracardiac shunt; and to inject contrast agents in order to study the shape of the heart vessels and chambers and how they change as the heart beats. Pacemakers and defibrillators Placement of internal pacemakers and defibrillators is done through catheterization as well. An exception to this is placement of electrodes on the outer surface of the heart (called epicardial electrodes). Otherwise, electrodes are placed through the venous system into the heart and left there permanently. Typically, these devices are placed in the left upper chest and enter the left subclavian vein, and electrodes are placed in the right atrium, right ventricle, and coronary sinus (for left ventricular stimulation). Valve assessment Echocardiography is a non-invasive method to evaluate the heart valves. However, sometimes the valve pressure gradients need to be measured directly because echo is equivocal for the severity of valve disease. Invasive assessment of the valve can be done with catheterization by placing a catheter across the valve and measuring the pressures simultaneously on each side of the valve to obtain the pressure gradient. In conjunction with a right heart catheterization, the valve area can be estimated. For example, in aortic valve area calculation, the Gorlin equation can be used to calculate the area if the cardiac output, pressure gradient, systolic period, and heart rate are known. Pulmonary angiography Evaluation of the blood flow to the lungs can be done invasively through catheterization. Contrast is injected into the pulmonary trunk, left or right pulmonary artery, or a segment of the pulmonary artery. Shunt evaluation Cardiac shunts can be evaluated through catheterization. Using oxygen as a marker, the oxygen saturation of blood can be sampled at various locations in and around the heart. For example, a left-to-right atrial septal defect will show a marked increase in oxygen saturation in the right atrium, ventricle, and pulmonary artery, compared to the mixed venous oxygen saturation, because oxygenated blood from the lungs mixes into the venous return to the heart. Utilizing the Fick principle, the ratio of blood flow through the lungs (Qp) to blood flow through the systemic circulation (Qs), the Qp:Qs ratio, can be calculated, as sketched below.
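As an illustrative sketch (not part of the source article), the saturations obtained during a shunt run can be substituted into a simplified form of the Fick principle. Because the same oxygen consumption and hemoglobin terms appear in the expressions for both pulmonary and systemic flow, they cancel, leaving only oxygen saturations, where SaO2 is systemic arterial, SmvO2 mixed venous, SpvO2 pulmonary venous, and SpaO2 pulmonary arterial saturation:

\[
\frac{Q_p}{Q_s} = \frac{S_{aO_2} - S_{mvO_2}}{S_{pvO_2} - S_{paO_2}}
\]

With hypothetical values of 95% systemic arterial, 65% mixed venous, 98% pulmonary venous (a value commonly assumed when the pulmonary veins are not sampled directly), and 83% pulmonary arterial saturation, Qp:Qs = (95 - 65) / (98 - 83) = 2.0.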
Elevation of the Qp:Qs ratio above 1.5 to 2.0 suggests that there is a hemodynamically significant left-to-right shunt (such that the blood flow through the lungs is 1.5 to 2.0 times more than the systemic circulation). This ratio can also be evaluated non-invasively with echocardiography, however. A "shunt run" is often done when evaluating for a shunt by taking blood samples from the superior vena cava (SVC), inferior vena cava (IVC), right atrium, right ventricle, pulmonary artery, and a systemic arterial site. Abrupt increases in oxygen saturation support a left-to-right shunt, and lower than normal systemic arterial oxygen saturation supports a right-to-left shunt. Samples from the SVC and IVC are used to calculate the mixed venous oxygen saturation. Ventriculography By injecting contrast into the left ventricle, the outline of the ventricle can be measured in both systole and diastole to estimate the ejection fraction (a marker of heart function). Due to the high contrast volumes and injection pressures, this is often not performed unless other, non-invasive methods are not acceptable, not possible, or conflicting. Percutaneous aortic valve replacement Advancements in cardiac catheterization have permitted replacement of heart valves by way of the blood vessels. This method allows valve replacement without open heart surgery and can be performed on people who are at high risk for such a surgery. Balloon septostomy Catheterization can also be used to perform balloon septostomy, which is the widening of a foramen ovale, patent foramen ovale (PFO), or atrial septal defect (ASD) using a balloon catheter. This can be done in certain congenital heart diseases in which mechanical shunting is required to sustain life, such as in transposition of the great vessels. Alcohol septal ablation Hypertrophic cardiomyopathy is a disease in which the myocardium is thickened and can cause blood flow obstruction. If hemodynamically significant, this excess muscle can be removed to improve blood flow. Surgically, this can be done with septal myectomy. However, it can also be done through catheterization, by injecting ethanol to destroy the tissue in an alcohol septal ablation. This is done by selecting an appropriate septal artery supplying the intended area and, essentially, causing a localized, controlled myocardial infarction of the area with ethanol. Complications Complications of cardiac catheterization and the tools used during catheterization include, but are not limited to: death, stroke, heart attack, ventricular ectopy and ventricular arrhythmias, pericardial effusion, bleeding (internal and external), infection, radiation burn, and contrast-induced nephropathy from contrast use. The likelihood of these risks depends on many factors, including the procedure being performed, the overall health state of the patient, the situation (elective vs. emergent), medications (e.g., anticoagulation), and more. Procedure "Cardiac catheterization" is a general term for a group of procedures. Access to the heart is obtained through a peripheral artery or vein. Commonly, this includes the radial artery, internal jugular vein, and femoral artery/vein. Each blood vessel has its advantages and disadvantages. Once access is obtained, plastic catheters (tiny hollow tubes) and flexible wires are used to navigate to and around the heart. Catheters come in numerous shapes, lengths, diameters, numbers of lumens, and other special features such as electrodes and balloons. Once in place, they are used to measure or intervene.
Imaging is an important aspect to catheterization and commonly includes fluoroscopy but can also include forms of echocardiography (TTE, TEE, ICE) and ultrasound (IVUS).Obtaining access uses the Seldinger technique by puncturing the vessel with a needle, placing a wire through the needle into the lumen of the vessel, and then exchanging the needle for a larger plastic sheath. Finding the vessel with a needle can be challenging and both ultrasound and fluoroscopy can be used to aid in finding and confirming access. Sheaths typically have a side port that can be used to withdraw blood or injection fluids/medications, and they also have an end hole that permits introducing the catheters, wires, etc. coaxially into the blood vessel.Once access is obtained, what is introduced into the vessel depends on the procedure being performed. Some catheters are formed to a particular shape and can really only be manipulated by inserting/withdrawing the catheter in the sheath and rotating the catheter. Others may include internal structures that permit internal manipulation (e.g., intracardiac echocardiography).Finally, when the procedure is completed, the catheters are removed and the sheath is removed. With time, the hole made in the blood vessel will heal. Vascular closure devices can be used to speed along hemostasis. Equipment Much equipment is required for a facility to perform the numerous possible procedures for cardiac catheterization. General: Catheters Film or Digital Camera Electrocardiography monitors External defibrillator Fluoroscopy Pressure transducers SheathsPercutaneous coronary intervention: Coronary stents: bare-metal stent (BMS) and drug-eluting stent (DES) Angioplasty balloons Atherectomy lasers and rotational devices Left atrial appendage occlusion devicesElectrophysiology: Ablation catheters: radiofrequency (RF) and cryo Pacemakers Defibrillators History The history of cardiac catheterization dates back to Stephen Hales (1677-1761) and Claude Bernard (1813-1878), who both used it on animal models. Clinical application of cardiac catheterization begins with Dr. Werner Forssmann in 1929, who inserted a catheter into the vein of his own forearm, guided it fluoroscopically into his right atrium, and took an X-ray picture of it. However, even after this achievement, hospital administrators removed Forssmann from his position owing to his unorthodox methods. During World War II, André Frédéric Cournand, a physician at NewYork-Presbyterian/Columbia, then Columbia-Bellevue, opened the first catheterization lab. In 1956, Forssmann and Cournand were co-recipients of the Nobel Prize in Physiology or Medicine for the development of cardiac catheterization. Dr. Eugene A. Stead performed research in the 1940s, which paved the way for cardiac catheterization in the USA. References External links MedlinePlus Medical Encyclopedia: Cardiac catheterization eMedicine: Cardiac Catheterization (Left Heart)
GM2 gangliosidoses
The GM2 gangliosidoses are a group of three related genetic disorders that result from a deficiency of the enzyme beta-hexosaminidase. This enzyme catalyzes the biodegradation of fatty acid derivatives known as gangliosides. The diseases are better known by their individual names: Tay–Sachs disease, AB variant, and Sandhoff disease. Beta-hexosaminidase is a vital hydrolytic enzyme, found in the lysosomes, that breaks down lipids. When beta-hexosaminidase is no longer functioning properly, the lipids accumulate in the nervous tissue of the brain and cause problems. Gangliosides are made and biodegraded rapidly in early life as the brain develops. Except in some rare, late-onset forms, the GM2 gangliosidoses are fatal.All three disorders are rare in the general population. Tay–Sachs disease has become famous as a public health model because an enzyme assay test for TSD was discovered and developed in the late 1960s and early 1970s, providing one of the first "mass screening" tools in medical genetics. It became a research and public health model for understanding and preventing all autosomal genetic disorders.Tay–Sachs disease, AB variant, and Sandhoff disease might easily have been defined together as a single disease, because the three disorders are associated with failure of the same metabolic pathway and have the same outcome. Classification and naming for many genetic disorders reflects history, because most diseases were first observed and classified based on biochemistry and pathophysiology before genetic diagnosis was available. However, the three GM2 gangliosidoses were discovered and named separately. Each represents a distinct molecular point of failure in a subunit that is required for activation of the enzyme. Tay–Sachs disease Tay–Sachs disease is a rare autosomal recessive genetic disorder that causes a progressive deterioration of nerve cells and of mental and physical abilities that begins around six months of age and usually results in death by the age of four. It is the most common of the GM2 gangliosidoses. The disease occurs when harmful quantities of cell membrane gangliosides accumulate in the brains nerve cells, eventually leading to the premature death of the cells. Sandhoff disease Sandhoff disease is a rare, autosomal recessive metabolic disorder that causes progressive destruction of nerve cells in the brain and spinal cord. The disease results from mutations on chromosome 5 in the HEXB gene, critical for the lysosomal enzymes beta-N-acetylhexosaminidase A and B. Sandhoff disease is clinically indistinguishable from Tay–Sachs disease. The most common form, infantile Sandhoff disease, is usually fatal by early childhood. AB variant GM2-gangliosidosis, AB variant is a rare, autosomal recessive metabolic disorder that causes progressive destruction of nerve cells in the brain and spinal cord. Mutations in the GM2A gene cause AB variant. The GM2A gene provides instructions for making a protein called the GM2 activator. This protein is a cofactor that is required for the normal function of beta-hexosaminidase A. The disease is usually fatal by early childhood. Treatment There are no authorized therapies for the treatment of the GM2 Gangliosidosis (Tay-Sachs and Sandhoff disease). The current standard of care for GM2 Gangliosidosis disease is limited to supportive care and aimed at providing adequate nutrition and hydration.This supportive care may substantially improve the quality of life of people affected by GM2. 
The therapeutic team may include specialists in neurology, pulmonology, gastroenterology, psychiatry, orthopaedics, nutrition, physical therapy and occupational therapy. N-Acetyl-Leucine N-Acetyl-Leucine is an orally administered, modified amino acid that is being developed as a novel treatment for multiple rare and common neurological disorders by IntraBio Inc (Oxford, United Kingdom). N-Acetyl-Leucine has been granted multiple orphan drug designations from the U.S. Food & Drug Administration (FDA) and the European Medicines Agency (EMA) for the treatment of various genetic diseases, including GM2 Gangliosidosis (Tay-Sachs and Sandhoff). The US FDA has granted IntraBio a Rare Pediatric Disease Designation for N-Acetyl-Leucine for the treatment of GM2 Gangliosidosis. Compassionate use studies in both Tay-Sachs and Sandhoff patients have demonstrated the positive clinical effects of treatment with N-Acetyl-Leucine for GM2 Gangliosidosis. These studies further demonstrated that the treatment is well tolerated, with a good safety profile. A multinational clinical trial investigating N-Acetyl-L-Leucine for the treatment of GM2 Gangliosidosis (Tay-Sachs and Sandhoff) began in 2019; recruitment is ongoing. IntraBio is also conducting parallel clinical trials with N-Acetyl-L-Leucine for the treatment of Niemann-Pick disease type C and Ataxia-Telangiectasia. Future opportunities to develop N-Acetyl-Leucine include Lewy body dementia, amyotrophic lateral sclerosis, restless leg syndrome, multiple sclerosis, and migraine. See also GM2 (ganglioside) References External links GeneReview/NIH/UW entry on Hexosaminidase A Deficiency
Acrophobia
Acrophobia is an extreme or irrational fear or phobia of heights, especially when one is not particularly high up. It belongs to a category of specific phobias, called space and motion discomfort, that share both similar causes and options for treatment. Most people experience a degree of natural fear when exposed to heights, known as the fear of falling. On the other hand, those who have little fear of such exposure are said to have a head for heights. A head for heights is advantageous for those hiking or climbing in mountainous terrain and also in certain jobs such as steeplejacks or wind turbine mechanics. People with acrophobia can experience a panic attack in high places and become too agitated to get themselves down safely. Approximately 2–5% of the general population has acrophobia, with twice as many women affected as men. The term is from the Greek: ἄκρον, ákron, meaning "peak, summit, edge" and φόβος, phóbos, "fear". Confusion with vertigo "Vertigo" is often used to describe a fear of heights, but it is more accurately a spinning sensation that occurs when one is not actually spinning. It can be triggered by looking down from a high place, by looking straight up at a high place or tall object, or even by watching something (i.e. a car or a bird) go past at high speed, but this alone does not describe vertigo. True vertigo can be triggered by almost any type of movement (e.g. standing up, sitting down, walking) or change in visual perspective (e.g. squatting down, walking up or down stairs, looking out of the window of a moving car or train). Vertigo is called height vertigo when the sensation of vertigo is triggered by heights. Height vertigo is caused by a conflict between vision, vestibular and somatosensory senses. This occurs when vestibular and somatosensory systems sense a body movement that is not detected by the eyes. More research indicate that this conflict leads to both motion sickness and anxiety. Causes Traditionally, acrophobia has been attributed, like other phobias, to conditioning or a traumatic experience. Recent studies have cast doubt on this explanation. Individuals with acrophobia are found to be lacking in traumatic experiences. Nevertheless, this may be due to the failure to recall the experiences, as memory fades as time passes. To address the problems of self report and memory, a large cohort study with 1000 participants was conducted from birth; the results showed that participants with less fear of heights had more injuries because of falling. Psychologists Richie Poulton, Simon Davies, Ross G. Menzies, John D. Langley, and Phil A. Silva sampled subjects from the Dunedin Multidisciplinary Health and Development Study who had been injured in a fall between the ages of 5 and 9, compared them to children who had no similar injury, and found that at age 18, acrophobia was present in only 2 percent of the subjects who had an injurious fall but was present among 7 percent of subjects who had no injurious fall (with the same sample finding that typical basophobia was 7 times less common in subjects at age 18 who had injurious falls as children than subjects that did not).More studies have suggested a possible explanation for acrophobia is that it emerges through accumulation of non-traumatic experiences of falling that are not memorable but can influence behaviours in the future. Also, fear of heights may be acquired when infants learn to crawl. If they fell, they would learn the concepts about surfaces, posture, balance, and movement. 
Cognitive factors may also contribute to the development of acrophobia. People tend to wrongly interpret visuo-vestibular discrepancies as dizziness and nausea and associate them with a forthcoming fall. A traumatic conditional event of falling may not be necessary at this point. A fear of falling, along with a fear of loud noises, is one of the most commonly suggested inborn or "non-associative" fears. The newer non-association theory is that a fear of heights is an evolved adaptation to a world where falls posed a significant danger. If this fear is inherited, it is possible that people can get rid of it by frequent exposure of heights in habituation. In other words, acrophobia could be attributed to the lack of exposure in early times. The degree of fear varies and the term phobia is reserved for those at the extreme end of the spectrum. Researchers have argued that a fear of heights is an instinct found in many mammals, including domestic animals and humans. Experiments using visual cliffs have shown human infants and toddlers, as well as other animals of various ages, to be reluctant in venturing onto a glass floor with a view of a few meters of apparent fall-space below it. Although human infants initially experienced fear when crawling on the visual cliff, most of them overcame the fear through practice, exposure and mastery and retained a level of healthy cautiousness. While an innate cautiousness around heights is helpful for survival, an extreme fear can interfere with the activities of everyday life, such as standing on a ladder or chair, or even walking up a flight of stairs. Still, it is uncertain if acrophobia is related to the failure to reach a certain developmental stage. Besides associative accounts, a diathetic-stress model is also very appealing for considering both vicarious learning and hereditary factors such as personality traits (i.e., neuroticism). Another possible contributing factor is a dysfunction in maintaining balance. In this case the anxiety is both well founded and secondary. The human balance system integrates proprioceptive, vestibular and nearby visual cues to reckon position and motion. As height increases, visual cues recede and balance becomes poorer even in normal people. However, most people respond by shifting to more reliance on the proprioceptive and vestibular branches of the equilibrium system. Some people are known to be more dependent on visual signals than others. People who rely more on visual cues to control body movements are less physically stable. An acrophobic, however, continues to over-rely on visual signals whether because of inadequate vestibular function or incorrect strategy. Locomotion at a high elevation requires more than normal visual processing. The visual cortex becomes overloaded, resulting in confusion. Some proponents of the alternative view of acrophobia warn that it may be ill-advised to encourage acrophobics to expose themselves to height without first resolving the vestibular issues. Research is underway at several clinics. Recent studies found that participants experienced increased anxiety not only during elevation in height, but also when they were required to move sideways in a fixed height.A recombinant model of the development of acrophobia is very possible, in which learning factors, cognitive factors (e.g. interpretations), perceptual factors (e.g. visual dependence), and biological factors (e.g. heredity) interact to provoke fear or habituation. Assessment ICD-10 and DSM-5 are used to diagnose acrophobia. 
The Acrophobia Questionnaire (AQ) is a self-report that contains 40 items, assessing anxiety level on a 0–6 point scale and degree of avoidance on a 0–2 point scale. The Attitude Towards Heights Questionnaire (ATHQ) and Behavioural Avoidance Tests (BAT) are also used. However, acrophobic individuals tend to have biases in self-reporting. They often overestimate the danger and question their ability to address height-relevant issues. The Height Interpretation Questionnaire (HIQ) is a self-report to measure these height-relevant judgements and interpretations. The Depression Scale of the Depression Anxiety Stress Scales short form (DASS21-DS) is a self-report used to examine the validity of the HIQ. Treatment Traditional treatment of phobias is still in use today. Its underlying theory states that phobic anxiety is conditioned and triggered by a conditioned stimulus. By avoiding phobic situations, anxiety is reduced. However, avoidance behaviour is reinforced through negative reinforcement. Wolpe developed a technique called systematic desensitization to help participants avoid "avoidance". Research results have suggested that even with a decrease in therapeutic contact, desensitization is still very effective. However, other studies have shown that therapists play an essential role in acrophobia treatment. Treatments like reinforced practice and self-efficacy treatments have also emerged. There have been a number of studies into using virtual reality therapy for acrophobia. Botella and colleagues and Schneider were the first to use VR in treatment. Specifically, Schneider utilised inverted lenses in binoculars to "alter" the reality. Later in the mid-1990s, VR became computer-based and was widely available for therapists. Cheap VR equipment uses a normal PC with a head-mounted display (HMD). In contrast, VRET uses an advanced cave automatic virtual environment (CAVE). VR has several advantages over in vivo treatment: (1) the therapist can control the situation better by manipulating the stimuli, in terms of their quality, intensity, duration and frequency; (2) VR can help participants avoid public embarrassment and protect their confidentiality; (3) the therapist's office can be well-maintained; (4) VR encourages more people to seek treatment; (5) VR saves time and money, as participants do not need to leave the consulting room. Many different types of medications are used in the treatment of phobias like fear of heights, including traditional anti-anxiety drugs such as benzodiazepines, and newer options such as antidepressants and beta-blockers. Prognosis Some desensitization treatments produce short-term improvements in symptoms. Long-term treatment success has been elusive. Epidemiology Approximately 2–5% of the general population has acrophobia, with twice as many women affected as men. A related, milder form of visually triggered fear or anxiety is called visual height intolerance (vHI). Up to one-third of people may have some level of visual height intolerance. Pure vHI usually has a smaller impact on individuals than acrophobia in terms of symptom intensity, social life, and overall quality of life. However, few people with visual height intolerance seek professional help. Society and culture In the Alfred Hitchcock film Vertigo, John "Scottie" Ferguson, played by James Stewart, has to resign from the police force after an incident which causes him to develop both acrophobia and vertigo. The word "vertigo" is only mentioned once, while "acrophobia" is mentioned several times.
Early on in the film, Ferguson faints while climbing a stepladder. There are numerous references throughout the film to fear of heights and falling. See also Acclimatization Fear of falling Head for heights List of phobias Citations General and cited sources Sartorius, N.; Henderson, A.S.; Strotzka, H.; Lipowski, Z.; Yu-cun, S.; You-xin, X.; Strömgren, E.; Glatzel, J.; et al. "The ICD-10 Classification of Mental and Behavioural Disorders Clinical descriptions and diagnostic guidelines" (PDF). World Health Organization. p. 114. Retrieved 23 June 2021. External links "The scariest path in the world?", a direct test, video shot on El Camino del Rey, approaching Makinodromo "Fear of Heights"—A comprehensive guide with useful resources on Acrophobia known as Fear of Heights.
Rectocele
In gynecology, a rectocele ( REK-tə-seel) or posterior vaginal wall prolapse results when the rectum bulges (herniates) into the vagina. Two common causes of this defect are childbirth and hysterectomy. Rectocele also tends to occur with other forms of pelvic organ prolapse, such as enterocele, sigmoidocele and cystocele.Although the term applies most often to this condition in females, males can also develop it. Rectoceles in men are uncommon, and associated with prostatectomy. Signs and symptoms Mild cases may simply produce a sense of pressure or protrusion within the vagina, and the occasional feeling that the rectum has not been completely emptied after a bowel movement. Moderate cases may involve difficulty passing stool (because the attempt to evacuate pushes the stool into the rectocele instead of out through the anus), discomfort or pain during evacuation or intercourse, constipation, and a general sensation that something is "falling down" or "falling out" within the pelvis. Severe cases may cause vaginal bleeding, intermittent fecal incontinence, or even the prolapse of the bulge through the mouth of the vagina, or rectal prolapse through the anus. Digital evacuation, or, manual pushing, on the posterior wall of the vagina helps to aid in bowel movement in a majority of cases of rectocele. Rectocele can be a cause of symptoms of obstructed defecation. Causes Rectoceles result from the weakening of the pelvic floor also called pelvic organ prolapse. Weakened pelvic structures occur as a result of an episiotomy during previous births, even decades later. Other causes of pelvic floor prolapse can be advanced age, multiple vaginal deliveries, and birthing trauma. Birthing trauma includes vacuum delivery, forceps delivery, and perineal tear. In addition, a history of chronic constipation and excessive straining with bowel movements are thought to play a role in rectocele. Multiple gynecological or rectal surgeries can also lead to weakening of the pelvic floor. Births that involve babies over nine pounds in weight, or rapid births can contribute to the development of rectocele.A hysterectomy or other pelvic surgery can be a cause, as can chronic constipation and straining to pass bowel movements. It is more common in older women than in younger ones; estrogen which helps to keep the pelvic tissues elastic decreases after menopause. Treatment Non-surgical Treatment depends on the severity of the problem, and may include non-surgical methods such as changes in diet (increase in fiber and water intake), pelvic floor exercises such as Kegel exercises, use of stool softeners, hormone replacement therapy for post-menopausal women and insertion of a pessary into the vagina. A high fiber diet, consisting of 25–30 grams of fiber daily, as well as increased water intake (typically 6–8 glasses daily), help to avoid constipation and straining with bowel movements, and can relieve symptoms of rectocele. Surgical Surgery can be done to correct rectocele when symptoms continue despite the use of non-surgical management, and are significant enough to interfere with activities of daily living.Surgery to correct the rectocele may involve the reattachment of the muscles that previously supported the pelvic floor. Another procedure is posterior colporrhaphy, which involves suturing of vaginal tissue. Surgery may also involve insertion of a supporting mesh (that is, a patch). 
There are also surgical techniques directed at repairing or strengthening the rectovaginal septum, rather than simple excision or plication of vaginal skin, which provides no support. Both gynecologists and colorectal surgeons can address this problem. Potential complications of surgical correction of a rectocele include bleeding, infection, dyspareunia (pain during intercourse), as well as recurrence or even worsening of the rectocele symptoms. The use of synthetic or biologic grafts has been questioned. References External links Rectocele details, description, video (in Russian)
Pallor
Pallor is a pale color of the skin that can be caused by illness, emotional shock or stress, stimulant use, or anemia. It is the result of a reduced amount of oxyhaemoglobin and may also be visible as pallor of the conjunctivae of the eyes on physical examination. Pallor is more evident on the face and palms. It can develop suddenly or gradually, depending on the cause. It is not usually clinically significant unless it is accompanied by a general pallor (pale lips, tongue, palms, mouth and other regions with mucous membranes). It is distinguished from similar presentations such as hypopigmentation (lack or loss of skin pigment) or simply a fair complexion. Causes migraine attack or headache excess estradiol and/or estrone osteoporosis emotional response, due to fear, embarrassment, grief, rage anorexia anemia, due to blood loss, poor nutrition, or underlying disease such as sickle cell anemia iron deficiency vitamin B12 deficiency shock, a medical emergency caused by illness or injury acute compartment syndrome frostbite common cold cancer hypoglycaemia bradycardia panic attack medications ketorolac amphetamines ethanol cannabis lead poisoning motion sickness heart disease Peripheral vascular disease hypothyroidism hypopituitarism scurvy tuberculosis sleep deprivation pheochromocytoma squeamishness visceral larva migrans Orthostatic hypotension methyldopa loss of appetite Space adaptation syndrome fibromyalgia Buerger's disease Hypovolemia References == External links ==
Pruritus scroti
Pruritus scroti is itchiness of the scrotum that may be secondary to an infectious cause. See also Pruritus vulvae Pruritus ani Pruritus References == External links ==
Gastrointestinal bleeding
Gastrointestinal bleeding (GI bleed), also called gastrointestinal hemorrhage (GIB), is all forms of bleeding in the gastrointestinal tract, from the mouth to the rectum. When there is significant blood loss over a short time, symptoms may include vomiting red blood, vomiting black blood, bloody stool, or black stool. Small amounts of bleeding over a long time may cause iron-deficiency anemia resulting in feeling tired or heart-related chest pain. Other symptoms may include abdominal pain, shortness of breath, pale skin, or passing out. Sometimes in those with small amounts of bleeding no symptoms may be present.Bleeding is typically divided into two main types: upper gastrointestinal bleeding and lower gastrointestinal bleeding. Causes of upper GI bleeds include: peptic ulcer disease, esophageal varices due to liver cirrhosis and cancer, among others. Causes of lower GI bleeds include: hemorrhoids, cancer, and inflammatory bowel disease among others. Diagnosis typically begins with a medical history and physical examination, along with blood tests. Small amounts of bleeding may be detected by fecal occult blood test. Endoscopy of the lower and upper gastrointestinal tract may locate the area of bleeding. Medical imaging may be useful in cases that are not clear.Initial treatment focuses on resuscitation which may include intravenous fluids and blood transfusions. Often blood transfusions are not recommended unless the hemoglobin is less than 70 or 80 g/L. Treatment with proton pump inhibitors, octreotide, and antibiotics may be considered in certain cases. If other measures are not effective, an esophageal balloon may be attempted in those with presumed esophageal varices. Endoscopy of the esophagus, stomach, and duodenum or endoscopy of the large bowel are generally recommended within 24 hours and may allow treatment as well as diagnosis.An upper GI bleed is more common than lower GI bleed. An upper GI bleed occurs in 50 to 150 per 100,000 adults per year. A lower GI bleed is estimated to occur in 20 to 30 per 100,000 per year. It results in about 300,000 hospital admissions a year in the United States. Risk of death from a GI bleed is between 5% and 30%. Risk of bleeding is more common in males and increases with age. Classification Gastrointestinal bleeding can be roughly divided into two clinical syndromes: upper gastrointestinal bleeding and lower gastrointestinal bleeding. About 2/3 of all GI bleeds are from upper sources and 1/3 from lower sources. Common causes of gastrointestinal bleeding include infections, cancers, vascular disorders, adverse effects of medications, and blood clotting disorders. Obscure gastrointestinal bleeding (OGIB) is when a source is unclear following investigation. Upper gastrointestinal Upper gastrointestinal bleeding is from a source between the pharynx and the ligament of Treitz. An upper source is characterised by hematemesis (vomiting up blood) and melena (tarry stool containing altered blood). About half of cases are due to peptic ulcer disease (gastric or duodenal ulcers). Esophageal inflammation and erosive disease are the next most common causes. In those with liver cirrhosis, 50–60% of bleeding is due to esophageal varices. Approximately half of those with peptic ulcers have an H. pylori infection. Other causes include Mallory-Weiss tears, cancer, and angiodysplasia.A number of medications are found to cause upper GI bleeds. NSAIDs or COX-2 inhibitors increase the risk about fourfold. 
SSRIs, corticosteroids, and anticoagulants may also increase the risk. The risk with dabigatran is 30% greater than that with warfarin. Lower gastrointestinal Lower gastrointestinal bleeding is typically from the colon, rectum or anus. Common causes of lower gastrointestinal bleeding include hemorrhoids, cancer, angiodysplasia, ulcerative colitis, Crohn's disease, and aortoenteric fistula. It may be indicated by the passage of fresh red blood rectally, especially in the absence of bloody vomiting. Lower gastrointestinal bleeding could also lead to melena if the bleeding occurs in the small intestine or proximal colon. Signs and symptoms Gastrointestinal bleeding can range from small non-visible amounts, which are only detected by laboratory testing, to massive bleeding where bright red blood is passed and shock develops. Rapid bleeding may cause syncope. The presence of bright red blood in stool, known as hematochezia, typically indicates lower gastrointestinal bleeding. Digested blood from the upper gastrointestinal tract may appear black rather than red, resulting in "coffee ground" vomit or melena. Other signs and symptoms include feeling tired, dizziness, and pale skin color. A number of foods and medications can turn the stool either red or black in the absence of bleeding. Bismuth, found in many antacids, may turn stools black, as may activated charcoal. Blood from the vagina or urinary tract may also be confused with blood in the stool. Diagnosis Diagnosis is often based on direct observation of blood in the stool or vomit. Although fecal occult blood testing has been used in an emergency setting, this use is not recommended as the test has only been validated for colon cancer screening. Differentiating between upper and lower bleeding in some cases can be difficult. The severity of an upper GI bleed can be judged based on the Blatchford score or Rockall score. The Rockall score is the more accurate of the two. As of 2008 there is no scoring system useful for lower GI bleeds. Clinical Gastric aspiration and/or lavage, in which a tube is inserted into the stomach via the nose in an attempt to determine if there is blood in the stomach, does not rule out an upper GI bleed if negative, but if positive is useful for ruling one in. Clots in the stool indicate a lower GI source, while melena stools indicate an upper one. Laboratory testing Recommended laboratory blood testing includes: cross-matching blood, hemoglobin, hematocrit, platelets, coagulation time, and electrolytes. If the ratio of blood urea nitrogen to creatinine is greater than 30, the source is more likely from the upper GI tract. Imaging CT angiography is useful for determining the exact location of the bleeding within the gastrointestinal tract. Nuclear scintigraphy is a sensitive test for detecting occult gastrointestinal bleeding when direct imaging with upper and lower endoscopies is negative. Direct angiography allows for embolization of a bleeding source, but requires a bleeding rate faster than 1 mL/minute. Prevention In those with significant varices or cirrhosis, nonselective β-blockers reduce the risk of future bleeding. With a target heart rate of 55 beats per minute, they reduce the absolute risk of bleeding by 10%. Endoscopic band ligation (EBL) is also effective at improving outcomes. Either β-blockers or EBL are recommended as initial preventative measures. In those who have had a previous variceal bleed, both treatments are recommended. Some evidence supports the addition of isosorbide mononitrate.
Testing for and treating those who are positive for H. pylori is recommended. Transjugular intrahepatic portosystemic shunting (TIPS) may be used to prevent bleeding in people who re-bleed despite other measures. Among people admitted to the ICU who are at high risk, a PPI or H2RA appears useful. Treatment The initial focus is on resuscitation, beginning with airway management and fluid resuscitation using intravenous fluids and/or blood. A number of medications may improve outcomes depending on the source of the bleeding. Peptic ulcers Based on evidence from people with other health problems, crystalloids and colloids are believed to be equivalent for peptic ulcer bleeding. Proton pump inhibitor (PPI) treatment before endoscopy may decrease the need for endoscopic hemostatic treatment; however, it is not clear if this treatment reduces mortality, the risk of re-bleeding, or the need for surgery. Oral and intravenous formulations may be equivalent; however, the evidence to support this is suboptimal. In those with less severe disease and where endoscopy is rapidly available, they are of less immediate clinical importance. There is tentative evidence of benefit for tranexamic acid, which inhibits clot breakdown. Somatostatin and octreotide, while recommended for variceal bleeding, have not been found to be of general use for non-variceal bleeds. After endoscopic treatment of a high-risk bleeding ulcer, giving a PPI once a day rather than as an infusion appears to work just as well and is less expensive (the route may be either oral or intravenous). Variceal bleeding For initial fluid replacement, colloids or albumin is preferred in people with cirrhosis. Medications typically include octreotide or, if not available, vasopressin and nitroglycerin to reduce portal venous pressures. Terlipressin appears to be more effective than octreotide, but it is not available in many areas of the world. It is the only medication that has been shown to reduce mortality in acute variceal bleeding. This is in addition to endoscopic banding or sclerotherapy for the varices. If this is sufficient, then beta blockers and nitrates may be used for the prevention of re-bleeding. If bleeding continues, balloon tamponade with a Sengstaken-Blakemore tube or Minnesota tube may be used in an attempt to mechanically compress the varices. This may then be followed by a transjugular intrahepatic portosystemic shunt. In those with cirrhosis, antibiotics decrease the chance of bleeding again, shorten the length of time spent in hospital, and decrease mortality. Octreotide reduces the need for blood transfusions and may decrease mortality. No trials of vitamin K have been conducted. Blood products The evidence for benefit of blood transfusions in GI bleed is poor, with some evidence finding harm. In those in shock, O-negative packed red blood cells are recommended. If large amounts of packed red blood cells are used, additional platelets and fresh frozen plasma (FFP) should be administered to prevent coagulopathies. In alcoholics, FFP is suggested before confirmation of a coagulopathy due to presumed blood clotting problems. Evidence supports holding off on blood transfusions in those who have a hemoglobin greater than 7 to 8 g/dL and moderate bleeding, including in those with preexisting coronary artery disease. If the INR is greater than 1.5 to 1.8, correction with fresh frozen plasma or prothrombin complex may decrease mortality.
Evidence of a harm or benefit of recombinant activated factor VII in those with liver diseases and gastrointestinal bleeding is not determined. A massive transfusion protocol may be used, but there is a lack of evidence for this indication. Procedures The benefits versus risks of placing a nasogastric tube in those with upper GI bleeding are not determined. Endoscopy within 24 hours is recommended, in addition to medical management. A number of endoscopic treatments may be used, including: epinephrine injection, band ligation, sclerotherapy, and fibrin glue depending on what is found. Prokinetic agents such as erythromycin before endoscopy can decrease the amount of blood in the stomach and thus improve the operators view. They also decrease the amount of blood transfusions required. Early endoscopy decreases hospital and the amount of blood transfusions needed. A second endoscopy within a day is routinely recommended by some but by others only in specific situations. Proton pump inhibitors, if they have not been started earlier, are recommended in those in whom high risk signs for bleeding are found. High and low dose PPIs appear equivalent at this point. It is also recommended that people with high risk signs are kept in hospital for at least 72 hours. Those at low risk of re-bleeding may begin eating typically 24 hours following endoscopy. If other measures fail or are not available, esophageal balloon tamponade may be attempted. While there is a success rate up to 90%, there are some potentially significant complications including aspiration and esophageal perforation.Colonoscopy is useful for the diagnosis and treatment of lower GI bleeding. A number of techniques may be employed including clipping, cauterizing, and sclerotherapy. Preparation for colonoscopy takes a minimum of six hours which in those bleeding briskly may limit its applicability. Surgery, while rarely used to treat upper GI bleeds, is still commonly used to manage lower GI bleeds by cutting out the part of the intestines that is causing the problem. Angiographic embolization may be used for both upper and lower GI bleeds. Transjugular intrahepatic portosystemic shunting (TIPS) may also be considered. Prognosis Death in those with a GI bleed is more commonly due to other illnesses (some of which may have contributed to the bleed, such as cancer or cirrhosis) than the bleeding itself. Of those admitted to a hospital because of a GI bleed, death occurs in about 7%. Despite treatment, re-bleeding occurs in about 7–16% of those with upper GI bleeding. In those with esophageal varices, bleeding occurs in about 5–15% a year and if they have bled once, there is a higher risk of further bleeding within six weeks. Testing and treating H. pylori if found can prevent re-bleeding in those with peptic ulcers. The benefits versus risks of restarting blood thinners such as aspirin or warfarin and anti-inflammatories such as NSAIDs need to be carefully considered. If aspirin is needed for cardiovascular disease prevention, it is reasonable to restart it within seven days in combination with a PPI for those with nonvariceal upper GI bleeding. Epidemiology Gastrointestinal bleeding from the upper tract occurs in 50 to 150 per 100,000 adults per year. It is more common than lower gastrointestinal bleeding which is estimated to occur at the rate of 20 to 30 per 100,000 per year. Risk of bleeding is more common in males and increases with age. References External links "Gastrointestinal Bleeding". MedlinePlus. U.S. 
National Library of Medicine.
Diastematomyelia
Diastematomyelia (occasionally diastomyelia) is a congenital disorder in which a part of the spinal cord is split, usually at the level of the upper lumbar vertebra in the longitudinal (sagittal) direction. Females are affected much more commonly than males. This condition occurs in the presence of an osseous, cartilaginous or fibrous septum in the central portion of the spinal canal which then produces a complete or incomplete sagittal division of the spinal cord into two hemicords. When the split does not reunite distally to the spur, the condition is referred to as diplomyelia, which is true duplication of the spinal cord. Signs and symptoms The signs and symptoms of diastematomyelia may appear at any time of life, although the diagnosis is usually made in childhood. Cutaneous lesions (or stigmata), such as a hairy patch, dimple, Hemangioma, subcutaneous mass, Lipoma or Teratoma over the affected area of the spine is found in more than half of cases. Neurological symptoms are nonspecific, indistinguishable from other causes of cord tethering. The symptoms are caused by tissue attachments that limit the movement of the spinal cord within the spinal column. These attachments cause an abnormal stretching of the spinal cord. The course of the disorder is progressive. In children, symptoms may include the "stigmata" mentioned above and/or foot and spinal deformities; weakness in the legs; low back pain; scoliosis; and incontinence. In adulthood, the signs and symptoms often include progressive sensory and motor problems and loss of bowel and bladder control. This delayed presentation of symptoms is related to the degree of strain placed on the spinal cord over time. Tethered spinal cord syndrome appears to be the result of improper growth of the neural tube during fetal development, and is closely linked to spina bifida. Tethering may also develop after spinal cord injury and scar tissue can block the flow of fluids around the spinal cord. Fluid pressure may cause cysts to form in the spinal cord, a condition called syringomyelia. This can lead to additional loss of movement, feeling or the onset of pain or autonomic symptoms. Cervical diastematomyelia can become symptomatic as a result of acute trauma, and can cause major neurological deficits, like hemiparesis, to result from otherwise mild trauma.The following definitions may help to understand some of the related entities: Diastematomyelia (di·a·stem·a·to·my·elia) is a congenital anomaly, often associated with spina bifida, in which the spinal cord is split into halves by a bony spicule or fibrous band, each half being surrounded by a dural sac. Myeloschisis (my·elos·chi·sis) is a developmental anomaly characterized by a cleft spinal cord, owing to failure of the neural plate to form a complete neural tube or to rupture of the neural tube after closure. Diplomyelia (diplo.my.elia) is a true duplication of spinal cord in which these are two dural sacs with two pairs of anterior and posterior nerve roots. Pathophysiology Diastematomyelia is a "dysraphic state" of unknown embryonic origin, but is probably initiated by an accessory neurenteric canal (an additional embryonic spinal canal.) This condition may be an isolated phenomenon or may be associated with other segmental anomalies of the vertebral bodies such as spina bifida, kyphoscoliosis, butterfly vertebra, hemivertebra and block vertebrae which are observed in most of the cases. Scoliosis is identified in more than half of these patients. 
In most of the symptomatic patients, the spinal cord is split into halves by a bony spicule or fibrous band, each half being surrounded by a dural sac. Other conditions, such as intramedullary tumors, tethered cord, dermoids, lipoma, syringomyelia, hydromyelia and Arnold–Chiari malformations, have been described in the medical literature, but they are exceptionally rare. Diastematomyelia usually occurs between the 9th thoracic and 1st sacral levels of the spinal column, with most cases at the level of the upper lumbar vertebrae. Cervical diastematomyelia is a very rare entity. The extent (or length of spinal cord involved) varies from one affected individual to another. In approximately 60% of patients with diastematomyelia, the two hemicords, each covered by an intact layer of pia arachnoid, travel through a single subarachnoid space surrounded by a single dural sac. Each hemicord has its own anterior spinal artery. This form of diastematomyelia is not accompanied by any bony spur or fibrous band and is rarely symptomatic unless hydromyelia or tethering is present. The other 40% of patients have a bony spur or a fibrous band that passes between the two hemicords. In these cases, the dura and arachnoid are split into two separate dural and arachnoidal sacs, each surrounding the corresponding hemicord; the hemicords are not necessarily symmetric. Each hemicord contains a central canal, one dorsal horn (giving rise to a dorsal nerve root), and one ventral horn (giving rise to a ventral nerve root). One study showed that the bony spur is typically situated at the most inferior aspect of the dural cleft. The authors advised that if the imaging appears to show otherwise, a second spur (present in about 5% of patients with diastematomyelia) is likely to be present. The conus medullaris is situated below the L2 level in more than 75% of these diastematomyelia patients. Thickening of the filum terminale is seen in over half of the cases. While the level of the cleft is variable, it is most commonly found in the lumbar region. The two hemicords usually reunite caudal to the cleft. Occasionally, however, the cleft will extend unusually low and the cord will end with two separate coni medullares and two fila terminalia ("diplomyelia").
Diagnosis
Adult presentation of diastematomyelia is unusual. With modern imaging techniques, various types of spinal dysraphism are being diagnosed in adults with increasing frequency. The most common location of the lesion is at the first to third lumbar vertebrae. Lumbosacral adult diastematomyelia is even rarer. Bony malformations and dysplasias are generally recognized on plain x-rays. MRI scanning is often the first choice for screening and diagnosis. MRI generally gives an adequate analysis of the spinal cord deformities, although it has some limitations in showing detailed bone anatomy. Combined myelographic and post-myelographic CT scanning is the most effective diagnostic tool for demonstrating the detailed bone, intradural and extradural pathological anatomy of the affected and adjacent spinal canal levels and of the bony spur. Prenatal ultrasound diagnosis of this anomaly is usually possible in the early to mid third trimester. An extra posterior echogenic focus between the fetal spinal laminae is seen, with splaying of the posterior elements, thus allowing for early surgical intervention and a favorable prognosis. Prenatal ultrasound can also detect whether the diastematomyelia is isolated, with the skin intact, or associated with any serious neural tube defects.
Progressive neurological lesions may result from tethered cord syndrome (fixation of the spinal cord) caused by the diastematomyelia itself or by any of the associated disorders, such as myelodysplasia or dysraphism of the spinal cord.
Treatment
Surgery
Surgical intervention is warranted in patients who present with new-onset neurological signs and symptoms or have a history of progressive neurological manifestations that can be related to this abnormality. The surgical procedure required for the effective treatment of diastematomyelia includes decompression of the neural elements and removal of the bony spur. This may be accomplished with or without resection and repair of the duplicated dural sacs. Resection and repair of the duplicated dural sacs is preferred, since the dural abnormality may partly contribute to the "tethering" process responsible for the symptoms of this condition. Post-myelographic CT scanning provides individualized detailed maps that enable surgical treatment of cervical diastematomyelia, first performed in 1983.
Observation
Asymptomatic patients do not require surgical treatment. These patients should have regular neurological examinations, since it is known that the condition can deteriorate. If any progression is identified, then a resection should be performed.
Heredofamilial amyloidosis
Heredofamilial amyloidosis is an inherited condition that may be characterized by systemic or localized deposition of amyloid in body tissues.
See also
Amyloidosis
List of cutaneous conditions
H
H, or h, is the eighth letter in the Latin alphabet, used in the modern English alphabet, the alphabets of other western European languages and others worldwide. Its name in English is aitch (pronounced /eɪtʃ/, plural aitches), or regionally haitch (/heɪtʃ/).
History
The original Semitic letter Heth most likely represented the voiceless pharyngeal fricative (ħ). The form of the letter probably stood for a fence or posts. The Greek Eta Η in archaic Greek alphabets, before coming to represent a long vowel, /ɛː/, still represented a similar sound, the voiceless glottal fricative /h/. In this context, the letter eta is also known as Heta to underline this fact. Thus, in the Old Italic alphabets, the letter Heta of the Euboean alphabet was adopted with its original sound value /h/. While Etruscan and Latin had /h/ as a phoneme, almost all Romance languages lost the sound; Romanian later re-borrowed the /h/ phoneme from its neighbouring Slavic languages, and Spanish developed a secondary /h/ from /f/, before losing it again; various Spanish dialects have developed [h] as an allophone of /s/ or /x/ in most Spanish-speaking countries, and various dialects of Portuguese use it as an allophone of /ʀ/. H is also used in many spelling systems in digraphs and trigraphs, such as ch, which represents /tʃ/ in Spanish, Galician, and Old Portuguese; /ʃ/ in French and modern Portuguese; and /k/ in Italian and French.
Name in English
For most English speakers, the name for the letter is pronounced /eɪtʃ/ and spelled "aitch" or occasionally "eitch". The pronunciation /heɪtʃ/ and the associated spelling "haitch" are often considered to be h-adding and are considered non-standard in England. It is, however, a feature of Hiberno-English, and occurs sporadically in various other dialects. The perceived name of the letter affects the choice of indefinite article before initialisms beginning with H: for example "an H-bomb" or "a H-bomb". The pronunciation /heɪtʃ/ may be a hypercorrection formed by analogy with the names of the other letters of the alphabet, most of which include the sound they represent. The haitch pronunciation of h has spread in England, being used by approximately 24% of English people born since 1982, and polls continue to show this pronunciation becoming more common among younger native speakers. Despite this increasing number, the pronunciation without the /h/ sound is still considered to be standard in England, although the pronunciation with /h/ is also attested as a legitimate variant. In Northern Ireland, the pronunciation of the letter has been used as a shibboleth, with Catholics typically pronouncing it with the /h/ and Protestants pronouncing the letter without it. Authorities disagree about the history of the letter's name. The Oxford English Dictionary says the original name of the letter was [ˈaha] in Latin; this became [ˈaka] in Vulgar Latin, passed into English via Old French [atʃ], and by Middle English was pronounced [aːtʃ]. The American Heritage Dictionary of the English Language derives it from French hache, from Latin haca or hic. Anatoly Liberman suggests a conflation of two obsolete orderings of the alphabet, one with H immediately followed by K and the other without any K: reciting the former's ..., H, K, L, ... as [... (h)a ka el ...], when reinterpreted for the latter ..., H, L, ..., would imply a pronunciation [(h)a ka] for H.
Use in writing systems
English
In English, ⟨h⟩ occurs as a single-letter grapheme (being either silent or representing the voiceless glottal fricative /h/) and in various digraphs, such as ⟨ch⟩ (/tʃ/, /ʃ/, /k/, or /x/), ⟨gh⟩ (silent, /ɡ/, /k/, /p/, or /f/), ⟨ph⟩ (/f/), ⟨rh⟩ (/r/), ⟨sh⟩ (/ʃ/), ⟨th⟩ (/θ/ or /ð/), ⟨wh⟩ (/hw/). The letter is silent in a syllable rime, as in ah, ohm, dahlia, cheetah, pooh-poohed, as well as in certain other words (mostly of French origin) such as hour, honest, herb (in American but not British English) and vehicle (in certain varieties of English). Initial /h/ is often not pronounced in the weak form of some function words including had, has, have, he, her, him, his, and in some varieties of English (including most regional dialects of England and Wales) it is often omitted in all words (see ⟨h⟩-dropping). It was formerly common for an rather than a to be used as the indefinite article before a word beginning with /h/ in an unstressed syllable, as in "an historian", but use of a is now more usual (see English articles § Indefinite article). In English, the pronunciation of ⟨h⟩ as /h/ can be analyzed as a voiceless vowel. That is, when the phoneme /h/ precedes a vowel, /h/ may be realized as a voiceless version of the subsequent vowel. For example, the word ⟨hit⟩, /hɪt/, is realized as [ɪ̥ɪt]. H is the eighth most frequently used letter in the English language (after S, N, I, O, A, T, and E), with a frequency of about 4.2% in words. When h is placed after certain other consonants, it modifies their pronunciation in various ways, e.g. for ch, gh, ph, sh, and th.
Other languages
In the German language, the name of the letter is pronounced /haː/. Following a vowel, it often silently indicates that the vowel is long: in the word erhöhen (heighten), the second ⟨h⟩ is mute for most speakers outside of Switzerland. In 1901, a spelling reform eliminated the silent ⟨h⟩ in nearly all instances of ⟨th⟩ in native German words such as thun (to do) or Thür (door). It has been left unchanged in words derived from Greek, such as Theater (theater) and Thron (throne), which continue to be spelled with ⟨th⟩ even after the last German spelling reform. In Spanish and Portuguese, ⟨h⟩ ("hache" in Spanish, pronounced [atʃe], or agá in Portuguese, pronounced [aˈɣa] or [ɐˈɡa]) is a silent letter with no pronunciation, as in hijo [ˈixo] (son) and húngaro [ˈũɡaɾu] (Hungarian). The spelling reflects an earlier pronunciation of the sound /h/. In words where the ⟨h⟩ is derived from a Latin /f/, it is still sometimes pronounced with the value [h] in some regions of Andalusia, Extremadura, Canarias, Cantabria, and the Americas. Some words beginning with [je] or [we], such as hielo (ice) and huevo (egg), were given an initial ⟨h⟩ to avoid confusion between their initial semivowels and the consonants ⟨j⟩ and ⟨v⟩. This is because ⟨j⟩ and ⟨v⟩ used to be considered variants of ⟨i⟩ and ⟨u⟩ respectively. ⟨h⟩ also appears in the digraph ⟨ch⟩, which represents /tʃ/ in Spanish and northern Portugal, and /ʃ/ in varieties that have merged both sounds (the latter originally represented by ⟨x⟩ instead), such as most of the Portuguese language and some Spanish dialects, prominently Chilean Spanish. In French, the name of the letter is written as "ache" and pronounced /aʃ/. French orthography classifies words that begin with this letter in two ways, one of which can affect the pronunciation, even though it is a silent letter either way.
The H muet, or "mute" ⟨h⟩, is considered as though the letter were not there at all, so for example the singular definite article le or la, which is elided to l' before a vowel, elides before an H muet followed by a vowel. For example, le + hébergement becomes l'hébergement (the accommodation). The other kind of ⟨h⟩ is called h aspiré ("aspirated ⟨h⟩", though it is not normally aspirated phonetically), and does not allow elision or liaison. For example, in le homard (the lobster) the article le remains unelided, and may be separated from the noun by a slight glottal stop. Most words that begin with an H muet come from Latin (honneur, homme) or from Greek through Latin (hécatombe), whereas most words beginning with an H aspiré come from Germanic (harpe, hareng) or non-Indo-European languages (harem, hamac, haricot); in some cases, an orthographic ⟨h⟩ was added to disambiguate the [v] and semivowel [ɥ] pronunciations before the introduction of the distinction between the letters ⟨v⟩ and ⟨u⟩: huit (from uit, ultimately from Latin octo), huître (from uistre, ultimately from Greek through Latin ostrea). In Italian, ⟨h⟩ has no phonological value. Its most important uses are in the digraphs ch /k/ and gh /ɡ/, as well as to differentiate the spellings of certain short words that are homophones, for example some present-tense forms of the verb avere (to have) (such as hanno, "they have", vs. anno, "year"), and in short interjections (oh, ehi). Some languages, including Czech, Slovak, Hungarian, Finnish, and Estonian, use ⟨h⟩ as a breathy voiced glottal fricative [ɦ], often as an allophone of otherwise voiceless /h/ in a voiced environment. In Hungarian, the letter has no fewer than five pronunciations, with three additional uses as a productive and non-productive element of digraphs. The letter h may represent /h/ as in the name of the Székely town Hargita; intervocalically it represents /ɦ/ as in tehén; it represents /x/ in the word doh; it represents /ç/ in ihlet; and it is silent in cseh. As part of a digraph, it represents, in archaic spelling, /t͡ʃ/ with the letter c as in the name Széchenyi; it represents, again with the letter c, /x/ in pech (which is pronounced [pɛxː]); in certain environments it breaks palatalization of a consonant, as in the name Beöthy, which is pronounced [bøːti] (without the intervening h, the name Beöty could be pronounced [bøːc]); and finally, it acts as a silent component of a digraph, as in the name Vargha, pronounced [vɒrgɒ]. In Ukrainian and Belarusian, when written in the Latin alphabet, ⟨h⟩ is also commonly used for /ɦ/, which is otherwise written with the Cyrillic letter ⟨г⟩. In Irish, ⟨h⟩ is not considered an independent letter, except for a very few non-native words; however, ⟨h⟩ placed after a consonant is known as a "séimhiú" and indicates lenition of that consonant. ⟨h⟩ began to replace the original form of a séimhiú, a dot placed above the consonant, after the introduction of typewriters. In most dialects of Polish, both ⟨h⟩ and the digraph ⟨ch⟩ always represent /x/. In Basque, during the 20th century ⟨h⟩ was not used in the orthography of the Basque dialects in Spain, but it marked aspiration in the north-eastern dialects. During the standardization of Basque in the 1970s, the compromise was reached that h would be accepted if it were the first consonant in a syllable. Hence, herri ("people") and etorri ("to come") were accepted instead of erri (Biscayan) and ethorri (Souletin). Speakers could pronounce the h or not.
For the dialects lacking the aspiration, this meant a complication added to the standardized spelling.
Other systems
As a phonetic symbol in the International Phonetic Alphabet (IPA), ⟨h⟩ is used mainly for the so-called aspirations (fricatives or trills), and variations of the plain letter are used to represent two sounds: the lowercase form ⟨h⟩ represents the voiceless glottal fricative, and the small capital form ⟨ʜ⟩ represents the voiceless epiglottal fricative (or trill). With a bar, minuscule ⟨ħ⟩ is used for a voiceless pharyngeal fricative. Specific to the IPA, a hooked ⟨ɦ⟩ is used for a voiced glottal fricative, and a superscript ⟨ʰ⟩ is used to represent aspiration.
Related characters
Descendants and related characters in the Latin alphabet
H with diacritics: Ĥ ĥ Ȟ ȟ Ħ ħ Ḩ ḩ Ⱨ ⱨ ẖ ẖ Ḥ ḥ Ḣ ḣ Ḧ ḧ Ḫ ḫ ꞕ Ꜧ ꜧ
IPA-specific symbols related to H: ʜ ɦ ʰ ʱ ɥ ᶣ ɧ
Superscript IPA symbols related to H: 𐞖 𐞕
ꟸ : Modifier letter capital H with stroke is used in VoQS to represent faucalized voice.
ᴴ : Modifier letter H is used in the Uralic Phonetic Alphabet.
ₕ : Subscript small h was used in the Uralic Phonetic Alphabet prior to its formal standardization in 1902.
ʰ : Modifier letter small h is used in Indo-European studies.
ʮ and ʯ : Turned H with fishhook and turned H with fishhook and tail are used in Sino-Tibetanist linguistics.
Ƕ ƕ : Latin letter hwair, derived from a ligature of the digraph hv, and used to transliterate the Gothic letter 𐍈 (which represented the sound [hʷ]).
Ⱶ ⱶ : Claudian letters.
Ꟶ ꟶ : Reversed half h, used in Roman inscriptions from the Roman provinces of Gaul.
Ancestors, siblings, and descendants in other alphabets
𐤇 : Semitic letter Heth, from which the following symbols derive.
Η η : Greek letter Eta, from which the following symbols derive.
𐌇 : Old Italic H, the ancestor of modern Latin H.
ᚺ, ᚻ : Runic letter haglaz, which is probably a descendant of Old Italic H.
Һ һ : Cyrillic letter Shha, which derives from Latin H.
И и : Cyrillic letter И, which derives from the Greek letter Eta.
𐌷 : Gothic letter haal.
Armenian letter ho (Հ).
Derived signs, symbols, and abbreviations
h : Planck constant
ℏ : reduced Planck constant
ℍ : blackboard bold capital H, used in quaternion notation
Computing codes
In Unicode, the capital letter H is U+0048 and the lowercase h is U+0068; the corresponding decimal code points are 72 and 104. The same values apply in ASCII and in all encodings based on ASCII, including the DOS, Windows, ISO-8859, and Macintosh families of encodings (see the short sketch after the external links below).
See also
American Sign Language grammar
List of Egyptian hieroglyphs#H
External links
The dictionary definition of H at Wiktionary
The dictionary definition of h at Wiktionary
Lubliner, Coby. 2008. "The Story of H." (essay on origins and uses of the letter "h")
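The code-point values listed under "Computing codes" above can be checked directly. The following minimal Python sketch (not part of the original article; an illustration only) uses the built-in ord(), chr(), and str.encode() functions; the variable names are purely illustrative.

for letter in ("H", "h"):
    code_point = ord(letter)  # Unicode/ASCII code point as an integer
    # Prints e.g.: H U+0048 72 b'H'  and  h U+0068 104 b'h'
    print(letter, f"U+{code_point:04X}", code_point, letter.encode("ascii"))

assert chr(0x48) == "H" and chr(0x68) == "h"  # hexadecimal 48 and 68 map back to H and h

Because the DOS, Windows, ISO-8859, and Macintosh encodings mentioned above are ASCII-based, encoding "H" or "h" in any of them yields the same single byte as the ASCII value shown.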