Gestational trophoblastic disease
Gestational trophoblastic disease (GTD) is a term used for a group of pregnancy-related tumours. These tumours are rare, and they appear when cells in the womb start to proliferate uncontrollably. The cells that form gestational trophoblastic tumours are called trophoblasts and come from tissue that grows to form the placenta during pregnancy.
There are several different types of GTD. A hydatidiform mole, also known as a molar pregnancy, is the most common and is usually benign. Sometimes it may develop into an invasive mole, or, more rarely, into a choriocarcinoma. A choriocarcinoma is likely to spread quickly, but is very sensitive to chemotherapy, and has a very good prognosis. Trophoblasts are of particular interest to cell biologists because, like cancer, they can invade tissue (the uterus), but unlike cancer, they usually "know" when to stop. GTD can simulate pregnancy, because the uterus may contain fetal tissue, albeit abnormal. This tissue may grow at the same rate as a normal pregnancy, and produces chorionic gonadotropin, a hormone which is measured to monitor fetal well-being. While GTD overwhelmingly affects women of child-bearing age, it may rarely occur in postmenopausal women.
Types
GTD is the common name for five closely related tumours (one benign tumour, and four malignant tumours):
The benign tumour
Hydatidiform mole
Here, a fertilised egg first implants into the uterus, but some cells around the fetus (the chorionic villi) do not develop properly. The pregnancy is not viable, and the normal pregnancy process turns into a benign tumour. There are two subtypes of hydatidiform mole: complete hydatidiform mole, and partial hydatidiform mole.
The four malignant tumours
Invasive mole
Choriocarcinoma
Placental site trophoblastic tumour
Epithelioid trophoblastic tumour
All five closely related tumours develop in the placenta. All five arise from trophoblast cells that form the outer layer of the blastocyst in the early development of the fetus. In a normal pregnancy, trophoblasts aid the implantation of the fertilised egg into the uterine wall. But in GTD, they develop into tumour cells.
Cause
Two main risk factors increase the likelihood of developing GTD: 1) maternal age under 20 years or over 35 years, and 2) previous GTD.
Although molar pregnancies affect women of all ages, women under 16 and over 45 years of age have an increased risk of developing a molar pregnancy. Being from Asia/of Asian ethnicity is an important risk factor. Hydatidiform moles are abnormal conceptions with excessive placental development. Conception takes place, but placental tissue grows very fast, rather than supporting the growth of a fetus. Complete hydatidiform moles have no fetal tissue and no maternal DNA, as a result of a maternal ovum with no functional DNA. Most commonly, a single spermatozoon duplicates and fertilises an empty ovum. Less commonly, two separate spermatozoa fertilise an empty ovum (dispermic fertilisation).
Partial hydatidiform moles have a fetus or fetal cells. They are triploid in origin, containing one set of maternal haploid genes and two sets of paternal haploid genes. They almost always occur following dispermic fertilisation of a normal ovum. Malignant forms of GTD are very rare. About 50% of malignant forms of GTD develop from a hydatidiform mole.
Diagnosis
Cases of GTD can be diagnosed through routine tests given during pregnancy, such as blood tests and ultrasound, or through tests done after miscarriage or abortion. Vaginal bleeding, enlarged uterus, pelvic pain or discomfort, and excessive vomiting (hyperemesis) are the most common symptoms of GTD. But GTD also leads to elevated serum hCG (human chorionic gonadotropin hormone). Since pregnancy is by far the most common cause of elevated serum hCG, clinicians generally first suspect a pregnancy with a complication. However, in GTD, the beta subunit of hCG (beta hCG) is also always elevated. Therefore, if GTD is clinically suspected, serum beta hCG is also measured. The initial clinical diagnosis of GTD should be confirmed histologically, which can be done after the evacuation of pregnancy (see Treatment below) in women with hydatidiform mole. However, malignant GTD is highly vascular. If malignant GTD is suspected clinically, biopsy is contraindicated, because biopsy may cause life-threatening haemorrhage.
Women with persistent abnormal vaginal bleeding after any pregnancy, and women developing acute respiratory or neurological symptoms after any pregnancy, should also undergo hCG testing, because these may be signs of a hitherto undiagnosed GTD.
Some patients may also show signs and symptoms of hyperthyroidism, along with increased levels of thyroid hormones. The proposed mechanism is that hCG binds to TSH receptors and acts as a weak TSH agonist.
Differential diagnosis
These are not GTD, and they are not tumours:
Exaggerated placental site
Placental site nodule
Both are composed of intermediate trophoblast, but their morphological features and clinical presentation can differ significantly.
Exaggerated placental site is a benign, non-cancerous lesion with an increased number of implantation-site intermediate trophoblastic cells that infiltrate the endometrium and the underlying myometrium. An exaggerated placental site may occur with normal pregnancy, or after an abortion. No specific treatment or follow-up is necessary.
Placental site nodules are usually small lesions of chorionic-type intermediate trophoblast. 40 to 50% of placental site nodules are found in the cervix. They are almost always incidental findings after a surgical procedure. No specific treatment or follow-up is necessary.
Treatment
Treatment is always necessary. The treatment for hydatidiform mole consists of the evacuation of pregnancy. Evacuation will lead to the relief of symptoms, and also prevent later complications. Suction curettage is the preferred method of evacuation. Hysterectomy is an alternative if the patient wishes no further pregnancies. Hydatidiform mole has also been treated successfully with systemic (intravenous) methotrexate. The treatment for invasive mole or choriocarcinoma is generally the same. Both are usually treated with chemotherapy. Methotrexate and dactinomycin are among the chemotherapy drugs used in GTD. In women with low-risk gestational trophoblastic neoplasia, a review found that dactinomycin (actinomycin D) is probably more effective as a treatment, and more likely to achieve a cure in the first instance, than methotrexate. Only a few women with GTD have poor-prognosis metastatic gestational trophoblastic disease. Their treatment usually includes chemotherapy. Radiotherapy can also be given to places where the cancer has spread, e.g. the brain. Women who undergo chemotherapy are advised not to conceive for one year after completion of treatment. These women are also likely to have an earlier menopause. The Royal College of Obstetricians and Gynaecologists has estimated that the age at menopause is advanced by one year for women who receive single-agent chemotherapy, and by three years for women who receive multi-agent chemotherapy.
Follow up
Follow up is necessary in all women with gestational trophoblastic disease, because of the possibility of persistent disease, and because, in some women with certain risk factors, malignant uterine invasion or malignant metastatic disease may develop even after treatment. The use of a reliable contraception method is very important during the entire follow-up period, as patients are strongly advised against pregnancy at that time. If a reliable contraception method is not used during the follow-up, it could initially be unclear to clinicians whether a rising hCG level is caused by the patient becoming pregnant again, or by the continued presence of GTD.
In women who have a malignant form of GTD, hCG concentrations stay the same (plateau) or rise. Persistent elevation of serum hCG levels after a non-molar pregnancy (i.e., a normal term pregnancy, preterm pregnancy, ectopic pregnancy [a pregnancy taking place in the wrong place, usually in the fallopian tube], or abortion) always indicates persistent GTD (very frequently due to choriocarcinoma or placental site trophoblastic tumour). This is not common, however, because treatment is mostly successful.
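As a rough illustration of the plateau-or-rise pattern described above, serial hCG measurements can be classified by their net change. The function name and the 10% tolerance below are arbitrary illustrative choices, not the formal clinical criteria used to define persistent GTD:

```python
# Illustrative trend check over serial hCG measurements.
# A plateau (values staying roughly constant) or a rise after a
# pregnancy suggests persistent GTD; falling values are expected
# after successful treatment. The 10% tolerance is an arbitrary
# illustrative value, not a clinical cut-off.

def hcg_trend(values, tolerance=0.10):
    """Classify serial hCG values as 'falling', 'plateau', or 'rising'."""
    first, last = values[0], values[-1]
    change = (last - first) / first  # fractional net change
    if change > tolerance:
        return "rising"
    if change < -tolerance:
        return "falling"
    return "plateau"

print(hcg_trend([1200, 1150, 1180, 1210]))  # small net change -> plateau
print(hcg_trend([1200, 600, 150, 40]))      # -> falling
```

A real monitoring protocol compares several consecutive values over defined intervals rather than only the endpoints; this sketch only conveys the idea of the three trend categories.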
In rare cases, a previous GTD may be reactivated after a subsequent pregnancy, even after several years. Therefore, hCG tests should also be performed after any subsequent pregnancy in all women who have had a previous GTD (at 6 and 10 weeks after the end of any subsequent pregnancy).
Prognosis
Women with a hydatidiform mole have an excellent prognosis. Women with a malignant form of GTD usually have a very good prognosis. Choriocarcinoma, for example, is an uncommon, yet almost always curable cancer. Although choriocarcinoma is a highly malignant tumour and a life-threatening disease, it is very sensitive to chemotherapy. Virtually all women with non-metastatic disease are cured and retain their fertility; the prognosis is also very good for those with metastatic (spreading) cancer in the early stages, but fertility may be lost. Hysterectomy (surgical removal of the uterus) can also be offered to patients over 40 years of age, or to those for whom sterilisation is not an obstacle. Only a few women with GTD have a poor prognosis, e.g. some forms of stage IV GTN. The FIGO staging system is used. The risk can be estimated by scoring systems such as the Modified WHO Prognostic Scoring System, wherein scores from various parameters are summed together:
In this scoring system, women with a score of 7 or greater are considered at high risk.
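The mechanics of such a sum-and-threshold system can be sketched as follows. The parameter names and individual score values below are hypothetical placeholders, not the actual entries of the WHO table; only the summing step and the threshold of 7 come from the text above:

```python
# Illustrative sketch of a sum-and-threshold prognostic score.
# The parameter names and per-parameter scores below are hypothetical
# placeholders; consult the actual Modified WHO Prognostic Scoring
# System for the real scoring criteria.

def total_score(parameter_scores):
    """Sum the scores assigned to each prognostic parameter."""
    return sum(parameter_scores.values())

def risk_group(score):
    """A total score of 7 or greater is classified as high risk."""
    return "high risk" if score >= 7 else "low risk"

# Hypothetical example: four parameters, each scored independently.
example = {"parameter_a": 1, "parameter_b": 2, "parameter_c": 4, "parameter_d": 1}
print(risk_group(total_score(example)))  # 1 + 2 + 4 + 1 = 8 -> high risk
```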
It is very important for malignant forms of GTD to be discovered in time. In Western countries, women with molar pregnancies are followed carefully; for instance, in the UK, all women who have had a molar pregnancy are registered at the National Trophoblastic Screening Centre. There are efforts in this direction in developing countries too, and these have led to improvements in the early detection of choriocarcinoma, significantly reducing the mortality rate in those countries as well.
Becoming pregnant again
Most women with GTD can become pregnant again and can have children again. The risk of a further molar pregnancy is low. More than 98% of women who become pregnant following a molar pregnancy will not have a further hydatidiform mole or be at increased risk of complications.
In the past, it was seen as important not to get pregnant straight away after a GTD. Specialists recommended a waiting period of six months after the hCG levels become normal. Recently, this standpoint has been questioned. New medical data suggest that a significantly shorter waiting period after the hCG levels become normal is reasonable for approximately 97% of the patients with hydatidiform mole.
Risk of a repeat GTD
The risk of a repeat GTD is approximately 1 in 100, compared with a risk of approximately 1 in 1000 in the general population. Women whose hCG levels remain significantly elevated are especially at risk of developing a repeat GTD.
Persistent trophoblastic disease
The term "persistent trophoblastic disease" (PTD) is used when, after treatment of a molar pregnancy, some molar tissue is left behind and again starts growing into a tumour. Although PTD can spread within the body like a malignant cancer, the overall cure rate is nearly 100%. In the vast majority of patients, treatment of PTD consists of chemotherapy. Only about 10% of patients with PTD can be treated successfully with a second curettage.
GTD coexisting with a normal fetus, also called "twin pregnancy"
In some very rare cases, a GTD can coexist with a normal fetus. This is called a "twin pregnancy". These cases should be managed only by experienced clinics, after extensive consultation with the patient. Because successful term delivery might be possible, the pregnancy should be allowed to proceed if the mother wishes, following appropriate counselling. The probability of achieving a healthy baby is approximately 40%, but there is a risk of complications, e.g. pulmonary embolism and pre-eclampsia. Compared with women who simply had a GTD in the past, there is no increased risk of developing persistent GTD after such a twin pregnancy. In a few cases, a GTD has coexisted with a normal pregnancy, but this was discovered only incidentally after a normal birth.
Epidemiology
Overall, GTD is a rare disease. Nevertheless, the incidence of GTD varies greatly between different parts of the world. The reported incidence of hydatidiform mole ranges from 23 to 1299 cases per 100,000 pregnancies. The incidence of the malignant forms of GTD is much lower, only about 10% of the incidence of hydatidiform mole. The reported incidence of GTD from Europe and North America is significantly lower than the reported incidence from Asia and South America. One proposed reason for this great geographical variation is dietary differences between the different parts of the world (e.g., carotene deficiency). However, the incidence of rare diseases (such as GTD) is difficult to measure, because epidemiologic data on rare diseases is limited. Not all cases will be reported, and some cases will not be recognised. This is especially difficult in GTD, because one would need to know all gestational events in the total population. Yet, it seems very likely that the estimated number of births that occur at home or outside of a hospital has been inflated in some reports.
Terminology
Gestational trophoblastic disease (GTD) may also be called gestational trophoblastic tumour (GTT). Hydatidiform mole (one type of GTD) may also be called molar pregnancy. Persistent disease; persistent GTD: if there is any evidence of persistence of GTD, usually defined as persistent elevation of beta hCG (see "Diagnosis" above), the condition may also be referred to as gestational trophoblastic neoplasia (GTN).
See also
Trophoblastic neoplasms
References
External links
Giardiasis
Giardiasis is a parasitic disease caused by Giardia duodenalis (also known as G. lamblia and G. intestinalis). Infected individuals who experience symptoms (about 10% have no symptoms) may have diarrhea, abdominal pain, and weight loss. Less common symptoms include vomiting and blood in the stool. Symptoms usually begin 1 to 3 weeks after exposure and, without treatment, may last two to six weeks or longer. Giardiasis usually spreads when Giardia duodenalis cysts within feces contaminate food or water that is later consumed orally. The disease can also spread between people and through other animals. Cysts may survive for nearly three months in cold water. Giardiasis is diagnosed via stool tests. Prevention may be improved through proper hygiene practices. Asymptomatic cases often do not need treatment. When symptoms are present, treatment is typically provided with either tinidazole or metronidazole. Infection may cause a person to become lactose intolerant, so it is recommended to temporarily avoid lactose following an infection. Resistance to treatment may occur in some patients. Giardiasis occurs worldwide. It is one of the most common parasitic human diseases. Infection rates are as high as 7% in the developed world and 30% in the developing world. In 2013, there were approximately 280 million people worldwide with symptomatic cases of giardiasis. The World Health Organization classifies giardiasis as a neglected disease. It is popularly known as beaver fever in North America.
Signs and symptoms
Symptoms vary from none to severe diarrhea with poor absorption of nutrients. The cause of this wide range in severity of symptoms is not fully known, but the intestinal flora of the infected host may play a role. Diarrhea is less likely to occur in people from developing countries. Symptoms typically develop 9–15 days after exposure, but may occur as early as one day. The most common and prominent symptom is chronic diarrhea, which can occur for weeks or months if untreated. Diarrhea is often greasy and foul-smelling, with a tendency to float. This characteristic diarrhea is often accompanied by a number of other symptoms, including gas, abdominal cramps, and nausea or vomiting. Some people also experience symptoms outside of the gastrointestinal tract, such as itchy skin, hives, and swelling of the eyes and joints, although these are less common. Fever occurs in only about 15% of people, in spite of the nickname "beaver fever". Prolonged disease is often characterized by diarrhea, along with malabsorption of nutrients in the intestine. This malabsorption results in fatty stools, substantial weight loss, and fatigue. Additionally, those with giardiasis often have difficulty absorbing lactose, vitamin A, folate, and vitamin B12. In children, prolonged giardiasis can cause failure to thrive and may impair mental development. Symptomatic infections are well recognized as causing lactose intolerance, which, while usually temporary, may become permanent.
Cause
Giardiasis is caused by the protozoan Giardia duodenalis. The infection occurs in many animals, including beavers, other rodents, cows, and sheep. Animals are believed to play a role in keeping infections present in an environment. G. duodenalis has been sub-classified into eight genetic assemblages (designated A–H). Genotyping of G. duodenalis isolated from various hosts has shown that assemblages A and B infect the largest range of host species, and appear to be the main, and possibly only, G. duodenalis assemblages that infect humans.
Risk factors
According to the United States Centers for Disease Control and Prevention (CDC), people at greatest risk of infection are:
People in childcare settings
People who are in close contact with someone who has the disease
Travelers within areas that have poor sanitation
People who have contact with feces during sexual activity
Backpackers or campers who drink untreated water from springs, lakes, or rivers
Swimmers who swallow water from swimming pools, hot tubs, interactive fountains, or untreated recreational water from springs, lakes, or rivers
People who get their household water from a shallow well
People with weakened immune systems
People who have contact with infected animals or animal environments contaminated with feces
Factors that increase infection risk for people from developed countries include changing diapers, consuming raw food, owning a dog, and travelling in the developing world. However, 75% of infections in the United Kingdom are acquired in the UK, not through travel elsewhere. In the United States, giardiasis occurs more often in summer, which is believed to be due to a greater amount of time spent on outdoor activities and traveling in the wilderness.
Transmission
Giardiasis is transmitted via the fecal-oral route with the ingestion of cysts. Primary routes are personal contact and contaminated water and food. The cysts can stay infectious for up to three months in cold water. Many people with Giardia infections have no or few symptoms. They may, however, still spread the disease.
Pathophysiology
The life cycle of Giardia consists of a cyst form and a trophozoite form. The cyst form is infectious and, once it has found a host, transforms into the trophozoite form. This trophozoite attaches to the intestinal wall and replicates within the gut. As trophozoites continue along the gastrointestinal tract, they convert back to their cyst form, which is then excreted with feces. Ingestion of only a few of these cysts is needed to generate infection in another host. Infection with Giardia results in decreased expression of brush border enzymes, morphological changes to the microvilli, increased intestinal permeability, and programmed cell death of small intestinal epithelial cells. Both trophozoites and cysts are contained within the gastrointestinal tract and do not invade beyond it. The attachment of trophozoites causes villous flattening and inhibition of enzymes that break down disaccharide sugars in the intestines. Ultimately, the community of microorganisms that lives in the intestine may overgrow and may be the cause of further symptoms, though this idea has not been fully investigated. The alteration of the villi leads to an inability to absorb nutrients and water from the intestine, resulting in diarrhea, one of the predominant symptoms. In the case of asymptomatic giardiasis, there can be malabsorption with or without histological changes to the small intestine. The degree to which malabsorption occurs in symptomatic and asymptomatic cases is highly varied. The species Giardia intestinalis uses enzymes that break down proteins to attack the villi of the brush border, and appears to increase the proliferation and length of the crypt cells on the sides of the villi. On an immunological level, activated host T lymphocytes attack epithelial cells that have been injured, in order to remove them. This occurs after the disruption of the proteins that connect brush border epithelial cells to one another; the result is increased intestinal permeability.
There appears to be a further increase in programmed enterocyte cell death caused by Giardia intestinalis, which further damages the intestinal barrier and increases permeability. There is significant upregulation of the programmed cell death cascade by the parasite and, furthermore, substantial downregulation of the anti-apoptotic protein Bcl-2 and upregulation of the pro-apoptotic protein Bax. These connections suggest a role for caspase-dependent apoptosis in the pathogenesis of giardiasis. Giardia protects its own growth by reducing the formation of the gas nitric oxide: it consumes all local arginine, the amino acid necessary to make nitric oxide. Arginine starvation is known to be a cause of programmed cell death, and local removal of arginine is a strong apoptotic agent.
Host defense
Host defense against Giardia consists of natural barriers, production of nitric oxide, and activation of the innate and adaptive immune systems.
Natural barriers
Natural barriers defend against parasites entering the host's body. They consist of mucus layers, bile salts, proteases, and lipases. Additionally, peristalsis and the renewal of enterocytes provide further protection against parasites.
Nitric oxide production
Nitric oxide does not kill the parasite, but it inhibits the growth of trophozoites as well as excystation and encystation.
Innate immune system
Lectin pathway of complement
The lectin pathway of complement is activated by mannose-binding lectin (MBL) which binds to N-acetylglucosamine. N-acetylglucosamine is a ligand for MBL and is present on the surface of Giardia.
The classical pathway of complement
The classical pathway of complement is activated by antibodies specific against Giardia.
Adaptive immune system
Antibodies
Antibodies inhibit parasite replication and also induce parasite death via the classical pathway of complement. Infection with Giardia typically results in a strong antibody response against the parasite. While IgG is made in significant amounts, IgA is believed to be more important in parasite control. IgA is the most abundant isotype in intestinal secretions, and it is also the dominant isotype in mother's milk. Antibodies in mother's milk protect children against giardiasis (passive immunization).
T cells
The major aspect of adaptive immune responses is the T cell response. Because Giardia is an extracellular pathogen, CD4+ helper T cells are primarily responsible for this protective effect. One role of helper T cells is to promote antibody production and isotype switching. Other roles include cytokine production (IL-4, IL-9) to help recruit other effector cells of the immune response.
Diagnosis
According to the CDC, detection of antigens on the surface of organisms in stool specimens is the current test of choice for diagnosis of giardiasis and provides increased sensitivity over more common microscopy techniques.
A trichrome stain of preserved stool is another method used to detect Giardia.
Microscopic examination of the stool can be performed for diagnosis. This method is not preferred, however, due to inconsistent shedding of trophozoites and cysts in infected hosts. Multiple samples over a period of time, typically one week, must be examined.
The Entero-Test uses a gelatin capsule with an attached thread. One end is attached to the inner aspect of the host's cheek, and the capsule is swallowed. Later, the thread is withdrawn and shaken in saline to release trophozoites, which can be detected with a microscope. The sensitivity of this test is low, however, and it is not routinely used for diagnosis.
Immunologic enzyme-linked immunosorbent assay (ELISA) testing may be used for diagnosis. These tests are capable of a 90% detection rate or more. Although hydrogen breath tests indicate poorer rates of carbohydrate absorption in those asymptomatically infected, such tests are not diagnostic of infection. Serological tests are not helpful in diagnosis.
Prevention
The CDC recommends hand-washing and avoiding potentially contaminated food and untreated water. Boiling water contaminated with Giardia effectively kills infectious cysts. Chemical disinfectants or filters may be used. Iodine-based disinfectants are preferred over chlorination, as the latter is ineffective at destroying cysts. Although the evidence linking the drinking of water in the North American wilderness and giardiasis has been questioned, a number of studies raise concern. Most, if not all, CDC-verified backcountry giardiasis outbreaks have been attributed to water. Surveillance data (for 2013 and 2014) report six outbreaks (96 cases) of waterborne giardiasis contracted from rivers, streams, or springs; less than 1% of reported giardiasis cases are associated with outbreaks. Person-to-person transmission accounts for the majority of Giardia infections, and is usually associated with poor hygiene and sanitation. Giardia is often found on the surface of the ground, in the soil, in undercooked foods, in water, and on hands that have not been properly cleaned after handling infected feces. Water-borne transmission is associated with the ingestion of contaminated water. In the U.S., outbreaks typically occur in small water systems using inadequately treated surface water. Venereal transmission happens through fecal-oral contamination. Additionally, diaper changing and inadequate handwashing are risk factors for transmission from infected children. Lastly, food-borne epidemics of Giardia have developed through the contamination of food by infected food-handlers.
Vaccine
There are no vaccines for humans yet; however, several vaccine candidates are in development. They target recombinant proteins, DNA vaccines, variant-specific surface proteins (VSP), cyst wall proteins (CWP), giardins, and enzymes. At present, one commercially available vaccine exists – GiardiaVax, made from G. lamblia whole trophozoite lysate. It is a vaccine for veterinary use only, in dogs and cats. GiardiaVax should promote production of specific antibodies.
Treatment
Treatment is not always necessary, as the infection usually resolves on its own. However, if the illness is acute, or if symptoms persist and medications are needed, a nitroimidazole medication is used, such as metronidazole, tinidazole, secnidazole, or ornidazole. The World Health Organization and the Infectious Diseases Society of America recommend metronidazole as first-line therapy. The US CDC lists metronidazole, tinidazole, and nitazoxanide as effective first-line therapies; of these three, only nitazoxanide and tinidazole are approved for the treatment of giardiasis by the US FDA. A meta-analysis by the Cochrane Collaboration found that, compared to the standard of metronidazole, albendazole had equivalent efficacy with fewer side effects, such as gastrointestinal or neurologic issues. Other meta-analyses have reached similar conclusions. Both medications require a five- to ten-day course; albendazole is taken once a day, while metronidazole needs to be taken three times a day. The evidence for comparing metronidazole to other alternatives, such as mebendazole, tinidazole, or nitazoxanide, was felt to be of very low quality. While tinidazole has side effects and efficacy similar to those of metronidazole, it is administered as a single dose. Resistance has been seen clinically to both nitroimidazoles and albendazole, but not to nitazoxanide, though nitazoxanide resistance has been induced in research laboratories. The exact mechanism of resistance to all of these medications is not well understood. In the case of nitroimidazole-resistant strains of Giardia, other drugs are available which have shown efficacy in treatment, including quinacrine, nitazoxanide, bacitracin zinc, furazolidone, and paromomycin.
Mepacrine may also be used for refractory cases. Probiotics, when given in combination with the standard treatment, have been shown to assist with clearance of Giardia. During pregnancy, paromomycin is the preferred treatment drug because of its poor intestinal absorption, resulting in less exposure to the fetus. Alternatively, metronidazole can be used after the first trimester, as there has been wide experience in its use for trichomoniasis in pregnancy.
Prognosis
In people with a properly functioning immune system, infection may resolve without medication. A small portion, however, develop a chronic infection. People with an impaired immune system are at higher risk of chronic infection. Medication is an effective cure for nearly all people, although there is growing drug resistance. Children with chronic giardiasis are at risk for failure to thrive, as well as more long-lasting sequelae such as growth stunting. Up to half of infected people develop a temporary lactose intolerance, leading to symptoms that may mimic a chronic infection. Some people experience post-infectious irritable bowel syndrome after the infection has cleared. Giardiasis has also been implicated in the development of food allergies, which is thought to be due to its effect on intestinal permeability.
Epidemiology
In some developing countries Giardia is present in 30% of the population. In the United States it is estimated to be present in 3–7% of the population. The number of reported cases in the United States in 2018 was 15,584. All states that classify giardiasis as a notifiable disease had cases of giardiasis. The states of Illinois, Kentucky, Mississippi, North Carolina, Oklahoma, Tennessee, Texas, and Vermont did not notify the Centers for Disease Control and Prevention regarding cases in 2018. The states with the highest number of cases in 2018 were California, New York, Florida, and Wisconsin. There are seasonal trends associated with giardiasis: July, August, and September are the months with the highest incidence in the United States. In the ECDC's (European Centre for Disease Prevention and Control) annual epidemiological report containing 2014 data, 17,278 confirmed giardiasis cases were reported by 23 of the 31 countries that are members of the EU/EEA. Germany reported the highest number, at 4,011 cases. Following Germany, the UK reported 3,628 confirmed giardiasis cases. Together, these account for 44% of total reported cases.
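The 44% figure follows directly from the counts quoted above; a quick arithmetic check (not part of the ECDC report itself) confirms it:

```python
# Check that Germany (4,011 cases) and the UK (3,628 cases) together
# account for roughly 44% of the 17,278 confirmed EU/EEA giardiasis
# cases in the ECDC's 2014 data.
germany, uk, total = 4011, 3628, 17278
share = (germany + uk) / total  # 7639 / 17278 ≈ 0.442
print(f"{share:.0%}")  # prints "44%"
```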
Research
Some intestinal parasitic infections may play a role in irritable bowel syndrome and other long-term sequelae such as chronic fatigue syndrome. The mechanism of transformation from cyst to trophozoite has not been characterized, but understanding it may help in developing drug targets for treatment-resistant Giardia. The interaction between Giardia and host immunity, internal flora, and other pathogens is not well understood. The main congress about giardiasis is the "International Giardia and Cryptosporidium Conference" (IGCC). A summary of results presented at the most recent edition (2019, in Rouen, France) is available.
Other animals
In both dogs and cats, giardiasis usually responds to metronidazole and fenbendazole. Metronidazole in pregnant cats can cause developmental malformations. Many cats dislike the taste of fenbendazole. Giardiasis has been shown to decrease weight in livestock.
References
External links
Giardiasis Fact Sheet |
Gingivitis | Gingivitis is a non-destructive disease that causes inflammation of the gums. The most common form of gingivitis, and the most common form of periodontal disease overall, occurs in response to bacterial biofilm (also called plaque) attached to tooth surfaces, and is termed plaque-induced gingivitis. Most forms of gingivitis are plaque-induced. While some cases of gingivitis never progress to periodontitis, periodontitis is always preceded by gingivitis. Gingivitis is reversible with good oral hygiene; however, without treatment, it can progress to periodontitis, in which the inflammation of the gums results in tissue destruction and bone resorption around the teeth. Periodontitis can ultimately lead to tooth loss.
Signs and symptoms
The symptoms of gingivitis are somewhat non-specific and manifest in the gum tissue as the classic signs of inflammation:
Swollen gums
Bright red or purple gums
Gums that are tender or painful to the touch
Bleeding gums or bleeding after brushing and/or flossing
Bad breath (halitosis)
Additionally, the stippling that normally exists in the gum tissue of some individuals will often disappear, and the gums may appear shiny when the gum tissue becomes swollen and stretched over the inflamed underlying connective tissue. The accumulation may also emit an unpleasant odor. When the gingiva are swollen, the epithelial lining of the gingival crevice becomes ulcerated and the gums bleed more easily, even with gentle brushing and especially when flossing.
Complications
Recurrence of gingivitis
Periodontitis
Infection or abscess of the gingiva or the jaw bones
Trench mouth (bacterial infection and ulceration of the gums)
Swollen lymph nodes
Associated with premature birth and low birth weight
Alzheimer's disease and dementia
A 2018 study found evidence that gingivitis bacteria may be linked to Alzheimer's disease. Scientists agree that more research is needed to prove a cause-and-effect link. "Studies have also found that the bacteria P. gingivalis – which are responsible for many forms of gum disease – can migrate from the mouth to the brain in mice. And on entry to the brain, P. gingivalis can reproduce all of the characteristic features of Alzheimer's disease."
Cause
The cause of plaque-induced gingivitis is bacterial plaque, which acts to initiate the body's host response. This, in turn, can lead to destruction of the gingival tissues, which may progress to destruction of the periodontal attachment apparatus. The plaque accumulates in the small gaps between teeth, in the gingival grooves, and in areas known as plaque traps: locations that serve to accumulate and maintain plaque. Examples of plaque traps include bulky and overhanging restorative margins, clasps of removable partial dentures, and calculus (tartar) that forms on teeth. Although these accumulations may be tiny, the bacteria in them produce chemicals, such as degradative enzymes, and toxins, such as lipopolysaccharide (LPS, otherwise known as endotoxin) or lipoteichoic acid (LTA), that promote an inflammatory response in the gum tissue. This inflammation can cause an enlargement of the gingiva and subsequent pocket formation. Early plaque in health consists of a relatively simple bacterial community dominated by Gram-positive cocci and rods. As plaque matures and gingivitis develops, the communities become increasingly complex, with higher proportions of Gram-negative rods, fusiforms, filaments, spirilla, and spirochetes. Later experimental gingivitis studies, using culture, provided more information regarding the specific bacterial species present in plaque. Taxa associated with gingivitis included Fusobacterium nucleatum subspecies polymorphum, Lachnospiraceae [G-2] species HOT100, Lautropia species HOTA94, and Prevotella oulorum (a species of Prevotella bacterium), whilst Rothia dentocariosa was associated with periodontal health. Further study of these taxa is warranted and may lead to new therapeutic approaches to prevent periodontal disease.
Risk factors
Risk factors associated with gingivitis include the following:
age
osteoporosis
low dental care utilization
poor oral hygiene
overly aggressive oral hygiene such as brushing with stiff bristles
mouth breathing during sleep
orthodontic braces
medications and conditions that dry the mouth
cigarette smoking
genetic factors
stress
mental health issues such as depression
pre-existing conditions such as diabetes
Diagnosis
Gingivitis is a category of periodontal disease in which there is no loss of bone but inflammation and bleeding are present.
Each tooth is divided into four gingival units (mesial, distal, buccal, and lingual) and given a score from 0–3 based on the gingival index. The four scores are then averaged to give each tooth a single score.
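The per-tooth averaging described above can be sketched as follows (a minimal illustration; the function and parameter names are hypothetical, not a standard dental-software API):

```python
# Sketch of the per-tooth gingival index calculation described above.
# Each tooth has four gingival units (mesial, distal, buccal, lingual),
# each scored 0-3 on the gingival index; the tooth's single score is
# the average of the four unit scores.

def tooth_score(mesial: int, distal: int, buccal: int, lingual: int) -> float:
    scores = (mesial, distal, buccal, lingual)
    if any(not 0 <= s <= 3 for s in scores):
        raise ValueError("each gingival unit is scored 0-3")
    return sum(scores) / 4

# Example: moderate inflammation (score 2) on two of the four surfaces.
print(tooth_score(2, 0, 2, 0))  # 1.0
```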
The diagnosis of the periodontal disease gingivitis is done by a dentist. The diagnosis is based on clinical assessment data acquired during a comprehensive periodontal exam. Either a registered dental hygienist or a dentist may perform the comprehensive periodontal exam but the data interpretation and diagnosis are done by the dentist. The comprehensive periodontal exam consists of a visual exam, a series of radiographs, probing of the gingiva, determining the extent of current or past damage to the periodontium and a comprehensive review of the medical and dental histories.
Current research shows that activity levels of the following enzymes in saliva samples are associated with periodontal destruction: aspartate aminotransferase (AST), alanine aminotransferase (ALT), gamma glutamyl transferase (GGT), alkaline phosphatase (ALP), and acid phosphatase (ACP). Therefore, these enzyme biomarkers may be used to aid in the diagnosis and treatment of gingivitis and periodontitis.
A dental hygienist or dentist will check for the symptoms of gingivitis, and may also examine the amount of plaque in the oral cavity. A dental hygienist or dentist will also look for signs of periodontitis using X-rays or periodontal probing as well as other methods.
If gingivitis is not responsive to treatment, referral to a periodontist (a specialist in diseases of the gingiva and bone around teeth and dental implants) for further treatment may be necessary.
Classification
1999 Classification
As defined by the 1999 World Workshop in Clinical Periodontics, there are two primary categories of gingival diseases, each with numerous subgroups:
Dental plaque-induced gingival diseases.
Gingivitis associated with plaque only
Gingival diseases modified by systemic factors
Gingival diseases modified by medications
Gingival diseases modified by malnutrition
Non-plaque-induced gingival lesions
Gingival diseases of specific bacterial origin
Gingival diseases of viral origin
Gingival diseases of fungal origin
Gingival diseases of genetic origin
Gingival manifestations of systemic conditions
Traumatic lesions
Foreign body reactions
Not otherwise specified
2017 Classification
As defined by the 2017 World Workshop, periodontal health and gingival diseases/conditions have been categorised into the following:
Periodontal health and gingival health
Clinical gingival health on an intact periodontium
Clinical gingival health on a reduced periodontium
Stable periodontitis patient
Non-periodontitis patient
Gingivitis – dental biofilm-induced
Associated with dental biofilm alone
Mediated by systemic or local risk factors
Drug-influenced gingival enlargement
Gingival diseases – non-dental biofilm induced
Genetic/developmental disorders
Specific infections
Inflammatory and immune conditions
Reactive processes
Neoplasms
Endocrine, nutritional & metabolic diseases
Traumatic lesions
Gingival pigmentation
Prevention
Gingivitis can be prevented through regular oral hygiene that includes daily brushing and flossing. Hydrogen peroxide, saline, alcohol, or chlorhexidine mouthwashes may also be employed. A 2004 clinical study highlighted the beneficial effect of hydrogen peroxide on gingivitis. The use of oscillating-type brushes might reduce the risk of gingivitis compared to manual brushing. Rigorous plaque control programs along with periodontal scaling and curettage have also proved helpful, although according to the American Dental Association, periodontal scaling and root planing are considered a treatment for periodontal disease, not a preventive measure. In a 1997 review of effectiveness data, the U.S. Food and Drug Administration (FDA) found clear evidence that toothpaste containing triclosan was effective in preventing gingivitis. In 2017 the FDA banned triclosan in many consumer products but allowed it to remain in toothpaste because of its effectiveness against gingivitis. In 2019, Colgate, under pressure from health advocates, removed triclosan from the last toothpaste on the market containing it, Colgate Total.
Treatment
The focus of treatment is to remove plaque. Therapy is aimed at the reduction of oral bacteria and may take the form of regular periodic visits to a dental professional together with adequate oral hygiene home care. Thus, several of the methods used in the prevention of gingivitis can also be used for the treatment of manifest gingivitis, such as scaling, root planing, curettage, mouth washes containing chlorhexidine or hydrogen peroxide, and flossing. Interdental brushes also help remove any causative agents.
Powered toothbrushes work better than manual toothbrushes in reducing the disease. The active ingredients that "reduce plaque and demonstrate effective reduction of gingival inflammation over a period of time" are triclosan, chlorhexidine digluconate, and a combination of thymol, menthol, eucalyptol, and methyl salicylate. These ingredients are found in toothpaste and mouthwash. Hydrogen peroxide was long considered a suitable over-the-counter agent to treat gingivitis, and there is evidence of a positive effect on controlling gingivitis in short-term use. One study indicates that a fluoridated hydrogen peroxide-based mouth rinse can remove tooth stains and reduce gingivitis. Based on limited evidence, mouthwashes with essential oils may also be useful, as they contain ingredients with anti-inflammatory properties, such as thymol, menthol, and eucalyptol. The bacteria that cause gingivitis can be controlled by using an oral irrigator daily with a mouthwash containing an antibiotic; either amoxicillin, cephalexin, or minocycline in 500 grams of a non-alcoholic fluoride mouthwash is an effective mixture. Overall, intensive oral hygiene care has been shown to improve gingival health in individuals with well-controlled type 2 diabetes, and periodontal destruction is also slowed by such extensive oral care. Intensive oral hygiene care (oral health education plus supra-gingival scaling) without any periodontal therapy improves gingival health, and may prevent progression of gingivitis in well-controlled diabetes.
See also
Pericoronitis
"Full width gingivitis" of orofacial granulomatosis
Desquamative gingivitis
References
== External links == |
Glioblastoma | Glioblastoma, previously known as glioblastoma multiforme (GBM), is one of the most aggressive types of cancer that begin within the brain. Initially, signs and symptoms of glioblastoma are nonspecific. They may include headaches, personality changes, nausea, and symptoms similar to those of a stroke. Symptoms often worsen rapidly and may progress to unconsciousness. The cause of most cases of glioblastoma is not known. Uncommon risk factors include genetic disorders, such as neurofibromatosis and Li–Fraumeni syndrome, and previous radiation therapy. Glioblastomas represent 15% of all brain tumors. They can either start from normal brain cells or develop from an existing low-grade astrocytoma. The diagnosis typically is made by a combination of a CT scan, MRI scan, and tissue biopsy. There is no known method of preventing the cancer. Treatment usually involves surgery, after which chemotherapy and radiation therapy are used. The medication temozolomide is frequently used as part of chemotherapy. High-dose steroids may be used to help reduce swelling and decrease symptoms. Surgical removal (decompression) of the tumor is linked to increased survival, but only by some months. Despite maximum treatment, the cancer almost always recurs. The typical duration of survival following diagnosis is 10–13 months, with fewer than 5–10% of people surviving longer than five years. Without treatment, survival is typically three months. It is the most common cancer that begins within the brain and the second-most common brain tumor, after meningioma. About 3 in 100,000 people develop the disease per year. The average age at diagnosis is 64, and the disease occurs more commonly in males than females.
Signs and symptoms
Common symptoms include seizures, headaches, nausea and vomiting, memory loss, changes to personality, mood or concentration, and localized neurological problems. The kind of symptoms produced depends more on the location of the tumor than on its pathological properties. The tumor can start producing symptoms quickly, but occasionally remains asymptomatic until it reaches an enormous size.
Risk factors
The cause of most cases is unclear. About 5% develop from another type of brain tumor known as a low-grade astrocytoma.
Genetics
Uncommon risk factors include genetic disorders such as neurofibromatosis, Li–Fraumeni syndrome, tuberous sclerosis, or Turcot syndrome. Previous radiation therapy is also a risk. For unknown reasons, it occurs more commonly in males.
Environmental
Other associations include exposure to smoking, pesticides, and working in petroleum refining or rubber manufacturing.Glioblastoma has been associated with the viruses SV40, HHV-6, and cytomegalovirus.
Other
Research has been done to see if consumption of cured meat is a risk factor. No risk had been confirmed as of 2013. Similarly, exposure to radiation during medical imaging, formaldehyde, and residential electromagnetic fields, such as from cell phones and electrical wiring within homes, have been studied as risk factors. As of 2015, they had not been shown to cause GBM.
Pathogenesis
The cellular origin of glioblastoma is unknown. Because of the similarities in immunostaining of glial cells and glioblastoma, gliomas such as glioblastoma have long been assumed to originate from glial-type cells. More recent studies suggest that astrocytes, oligodendrocyte progenitor cells, and neural stem cells could all serve as the cell of origin. Glioblastomas are characterized by the presence of small areas of necrotizing tissue surrounded by anaplastic cells. This characteristic, as well as the presence of hyperplastic blood vessels, differentiates the tumor from grade 3 astrocytomas, which do not have these features. GBMs usually form in the cerebral white matter, grow quickly, and can become very large before producing symptoms. Fewer than 10% form more slowly following degeneration of a low-grade astrocytoma or anaplastic astrocytoma. These are called secondary GBMs and are more common in younger patients (mean age 45 versus 62 years). The tumor may extend into the meninges or ventricular wall, leading to high protein content in the cerebrospinal fluid (CSF) (>100 mg/dl), as well as an occasional pleocytosis of 10 to 100 cells, mostly lymphocytes. Malignant cells carried in the CSF may spread (rarely) to the spinal cord or cause meningeal gliomatosis. However, metastasis of GBM beyond the central nervous system is extremely unusual. About 50% of GBMs occupy more than one lobe of a hemisphere or are bilateral. Tumors of this type usually arise from the cerebrum and may exhibit the classic infiltration across the corpus callosum, producing a butterfly (bilateral) glioma.
Glioblastoma classification
Brain tumor classification has traditionally been based on histopathology at the macroscopic level, measured in hematoxylin-eosin sections. The World Health Organization published the first standard classification in 1979 and has revised it periodically since. The 2007 WHO Classification of Tumors of the Central Nervous System was the last classification based mainly on microscopy features. The 2016 WHO Classification of Tumors of the Central Nervous System was a paradigm shift: some tumors were now defined by their genetic composition as well as their cell morphology.
The grading of gliomas changed significantly, and glioblastoma is now classified mainly according to the status of isocitrate dehydrogenase (IDH) mutation: IDH-wildtype or IDH-mutant.
Molecular alterations
Four subtypes of glioblastoma have been identified based on gene expression:
Classical: Around 97% of tumors in this subtype carry extra copies of the epidermal growth factor receptor (EGFR) gene, and most have higher than normal expression of EGFR, whereas the gene TP53 (p53), which is often mutated in glioblastoma, is rarely mutated in this subtype. Loss of heterozygosity in chromosome 10 is also frequently seen in the classical subtype alongside chromosome 7 amplification.
The proneural subtype often has high rates of alterations in TP53 (p53), and in PDGFRA, the gene encoding a-type platelet-derived growth factor receptor, and in IDH1, the gene encoding isocitrate dehydrogenase-1.
The mesenchymal subtype is characterized by high rates of mutations or other alterations in NF1, the gene encoding neurofibromin 1 and fewer alterations in the EGFR gene and less expression of EGFR than other types.
The neural subtype was typified by the expression of neuron markers such as NEFL, GABRA1, SYT1, and SLC12A5, while often presenting as normal cells upon pathological assessment. Many other genetic alterations have been described in glioblastoma, and the majority of them are clustered in two pathways, RB and PI3K/AKT; glioblastomas have alterations in 68–78% and 88% of these pathways, respectively. Another important alteration is methylation of MGMT, a "suicide" DNA repair enzyme. Methylation impairs DNA transcription and expression of the MGMT gene. Since the MGMT enzyme can repair only one DNA alkylation due to its suicide repair mechanism, reserve capacity is low and methylation of the MGMT gene promoter greatly affects DNA-repair capacity. MGMT methylation is associated with an improved response to treatment with DNA-damaging chemotherapeutics, such as temozolomide.
Cancer stem cells
Glioblastoma cells with properties similar to progenitor cells (glioblastoma cancer stem cells) have been found in glioblastomas. Their presence, coupled with the glioblastoma's diffuse nature, makes them difficult to remove completely by surgery, and is therefore believed to be a possible cause of the resistance to conventional treatments and the high recurrence rate. Glioblastoma cancer stem cells share some resemblance with neural progenitor cells, both expressing the surface receptor CD133. CD44 can also be used as a cancer stem cell marker in a subset of glioblastoma tumour cells. Glioblastoma cancer stem cells appear to exhibit enhanced resistance to radiotherapy and chemotherapy, mediated, at least in part, by up-regulation of the DNA damage response.
Metabolism
The IDH1 gene encodes the enzyme isocitrate dehydrogenase 1 and is uncommonly mutated in glioblastoma (primary GBM: 5%; secondary GBM: >80%). By producing very high concentrations of the oncometabolite D-2-hydroxyglutarate and dysregulating the function of the wild-type IDH1 enzyme, the mutation induces profound changes to the metabolism of IDH1-mutated glioblastoma compared with IDH1 wild-type glioblastoma or healthy astrocytes. Among other effects, it increases the glioblastoma cells' dependence on glutamine or glutamate as an energy source. IDH1-mutated glioblastomas are thought to have a very high demand for glutamate and use this amino acid and neurotransmitter as a chemotactic signal. Since healthy astrocytes excrete glutamate, IDH1-mutated glioblastoma cells do not favor dense tumor structures, but instead migrate, invade, and disperse into healthy parts of the brain where glutamate concentrations are higher. This may explain the invasive behavior of these IDH1-mutated glioblastomas.
Ion channels
Furthermore, GBM exhibits numerous alterations in genes that encode for ion channels, including upregulation of gBK potassium channels and ClC-3 chloride channels. By upregulating these ion channels, glioblastoma tumor cells are hypothesized to facilitate increased ion movement over the cell membrane, thereby increasing H2O movement through osmosis, which aids glioblastoma cells in changing cellular volume very rapidly. This is helpful in their extremely aggressive invasive behavior because quick adaptations in cellular volume can facilitate movement through the sinuous extracellular matrix of the brain.
MicroRNA
As of 2012, RNA interference, usually microRNA, was under investigation in tissue culture, pathology specimens, and preclinical animal models of glioblastoma. Additionally, experimental observations suggest that microRNA-451 is a key regulator of LKB1/AMPK signaling in cultured glioma cells and that miRNA clustering controls epigenetic pathways in the disease.
Tumor vasculature
GBM is characterized by abnormal vessels that present disrupted morphology and functionality. The high permeability and poor perfusion of the vasculature result in a disorganized blood flow within the tumor and can lead to increased hypoxia, which in turn facilitates cancer progression by promoting processes such as immunosuppression.
Diagnosis
When viewed with MRI, glioblastomas often appear as ring-enhancing lesions. The appearance is not specific, however, as other lesions such as abscess, metastasis, tumefactive multiple sclerosis, and other entities may have a similar appearance. Definitive diagnosis of a suspected GBM on CT or MRI requires a stereotactic biopsy or a craniotomy with tumor resection and pathologic confirmation. Because the tumor grade is based upon the most malignant portion of the tumor, biopsy or subtotal tumor resection can result in undergrading of the lesion. Imaging of tumor blood flow using perfusion MRI and measuring tumor metabolite concentration with MR spectroscopy may add diagnostic value to standard MRI in select cases by showing increased relative cerebral blood volume and an increased choline peak, respectively, but pathology remains the gold standard for diagnosis and molecular characterization. Distinguishing primary glioblastoma from secondary glioblastoma is important: these tumors occur spontaneously (de novo) or have progressed from a lower-grade glioma, respectively. Primary glioblastomas have a worse prognosis and different tumor biology, and may have a different response to therapy, which makes this a critical evaluation in determining patient prognosis and therapy. Over 80% of secondary glioblastomas carry a mutation in IDH1, whereas this mutation is rare in primary glioblastoma (5–10%). Thus, IDH1 mutations are a useful tool to distinguish primary and secondary glioblastomas, since histopathologically they are very similar and the distinction without molecular biomarkers is unreliable.
Prevention
There are no known methods to prevent glioblastoma. As with most gliomas, and unlike some other forms of cancer, it arises without previous warning.
Treatment
Treating glioblastoma is difficult due to several complicating factors:
The tumor cells are resistant to conventional therapies.
The brain is susceptible to damage from conventional therapy.
The brain has a limited capacity to repair itself.
Many drugs cannot cross the blood–brain barrier to act on the tumor.
Treatment of primary brain tumors consists of palliative (symptomatic) care and therapies intended to improve survival.
Symptomatic therapy
Supportive treatment focuses on relieving symptoms and improving the patient's neurologic function. The primary supportive agents are anticonvulsants and corticosteroids.
Historically, around 90% of patients with glioblastoma underwent anticonvulsant treatment, although only an estimated 40% of patients required it. Recently, it has been recommended that neurosurgeons not administer anticonvulsants prophylactically, but instead wait until a seizure occurs before prescribing the medication. Those receiving phenytoin concurrently with radiation may have serious skin reactions such as erythema multiforme and Stevens–Johnson syndrome.
Corticosteroids, usually dexamethasone, can reduce peritumoral edema (through rearrangement of the blood–brain barrier), diminishing mass effect and lowering intracranial pressure, with a decrease in headache or drowsiness.
Surgery
Surgery is the first stage of treatment of glioblastoma. An average GBM tumor contains 10¹¹ cells, which on average is reduced to 10⁹ cells after surgery (a reduction of 99%). Benefits of surgery include resection for a pathological diagnosis, alleviation of symptoms related to mass effect, and potentially removing disease before secondary resistance to radiotherapy and chemotherapy occurs. The greater the extent of tumor removal, the better. In retrospective analyses, removal of 98% or more of the tumor has been associated with a significantly longer healthier time than if less than 98% of the tumor is removed. The chances of near-complete initial removal of the tumor may be increased if the surgery is guided by a fluorescent dye known as 5-aminolevulinic acid. GBM cells are widely infiltrative through the brain at diagnosis, and despite a "total resection" of all obvious tumor, most people with GBM later develop recurrent tumors either near the original site or at more distant locations within the brain. Other modalities, typically radiation and chemotherapy, are used after surgery in an effort to suppress and slow recurrent disease.
Radiotherapy
Subsequent to surgery, radiotherapy becomes the mainstay of treatment for people with glioblastoma. It is typically performed along with temozolomide. A pivotal clinical trial carried out in the early 1970s showed that among 303 GBM patients randomized to radiation or nonradiation therapy, those who received radiation had a median survival more than double that of those who did not. Subsequent clinical research has attempted to build on the backbone of surgery followed by radiation. On average, radiotherapy after surgery can reduce the tumor burden to 10⁷ cells. Whole-brain radiotherapy does not improve outcomes compared with the more precise and targeted three-dimensional conformal radiotherapy. A total radiation dose of 60–65 Gy has been found to be optimal for treatment. GBM tumors are well known to contain zones of tissue exhibiting hypoxia, which are highly resistant to radiotherapy. Various approaches to chemotherapy radiosensitizers have been pursued, with limited success as of 2016. As of 2010, newer research approaches included preclinical and clinical investigations into the use of oxygen diffusion-enhancing compounds such as trans sodium crocetinate as radiosensitizers, and as of 2015 a clinical trial was underway. Boron neutron capture therapy has been tested as an alternative treatment for glioblastoma, but is not in common use.
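The cell-count figures quoted in the surgery and radiotherapy paragraphs above amount to roughly a two-log kill (a factor of 100) at each stage. A small arithmetic sketch, using only the numbers stated in this section:

```python
import math

# Approximate tumor burden at each treatment stage, as quoted above.
cells_at_diagnosis = 10**11   # average GBM at diagnosis
cells_after_surgery = 10**9   # after surgical resection
cells_after_radio = 10**7     # after post-surgical radiotherapy

# Surgery removes ~99% of cells: (10^11 - 10^9) / 10^11 = 0.99
surgical_reduction = 1 - cells_after_surgery / cells_at_diagnosis
print(f"surgical reduction: {surgical_reduction:.0%}")  # 99%

# Each stage is roughly a 2-log kill (a factor of 100).
log_kill_surgery = math.log10(cells_at_diagnosis / cells_after_surgery)
log_kill_radio = math.log10(cells_after_surgery / cells_after_radio)
print(log_kill_surgery, log_kill_radio)  # 2.0 2.0
```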
Chemotherapy
Most studies show no benefit from the addition of chemotherapy. However, a large clinical trial of 575 participants randomized to standard radiation versus radiation plus temozolomide chemotherapy showed that the group receiving temozolomide survived a median of 14.6 months as opposed to 12.1 months for the group receiving radiation alone. This treatment regimen is now standard for most cases of glioblastoma where the person is not enrolled in a clinical trial. Temozolomide seems to work by sensitizing the tumor cells to radiation, and appears more effective for tumors with MGMT promoter methylation. High doses of temozolomide in high-grade gliomas yield low toxicity, but the results are comparable to the standard doses. Antiangiogenic therapy with medications such as bevacizumab control symptoms, but do not appear to affect overall survival in those with glioblastoma. The overall benefit of anti-angiogenic therapies as of 2019 is unclear. In elderly people with newly diagnosed glioblastoma who are reasonably fit, concurrent and adjuvant chemoradiotherapy gives the best overall survival but is associated with a greater risk of haematological adverse events than radiotherapy alone.
Other procedures
Alternating electric field therapy is an FDA-approved therapy for newly diagnosed and recurrent glioblastoma. In 2015, initial results from a phase-III randomized clinical trial of alternating electric field therapy plus temozolomide in newly diagnosed glioblastoma reported a three-month improvement in progression-free survival and a five-month improvement in overall survival compared to temozolomide therapy alone, representing the first large trial in a decade to show a survival improvement in this setting. Despite these results, the efficacy of this approach remains controversial among medical experts. However, increasing understanding of the mechanistic basis through which alternating electric field therapy exerts anti-cancer effects, together with results from ongoing phase-III clinical trials in extracranial cancers, may help facilitate increased clinical acceptance to treat glioblastoma in the future. A Tel Aviv University study showed that pharmacological and molecular inhibition of the P-selectin protein leads to reduced tumor growth and increased survival in mouse models of glioblastoma. The results of this research could open the door to therapies with drugs that inhibit this protein, such as crizanlizumab.
Prognosis
The most common length of survival following diagnosis is 10 to 13 months, with fewer than 1 to 3% of people surviving longer than five years. In the United States between 2012 and 2016, five-year survival was 6.8%. Without treatment, survival is typically 3 months. Complete cures are extremely rare, but have been reported. Increasing age (>60 years) carries a worse prognostic risk. Death is usually due to widespread tumor infiltration with cerebral edema and increased intracranial pressure. A good initial Karnofsky performance score (KPS) and MGMT methylation are associated with longer survival. A DNA test can be conducted on glioblastomas to determine whether or not the promoter of the MGMT gene is methylated. Patients with a methylated MGMT promoter have longer survival than those with an unmethylated MGMT promoter, due in part to increased sensitivity to temozolomide. Another positive prognostic marker for glioblastoma patients is mutation of the IDH1 gene, which can be tested by DNA-based methods or by immunohistochemistry using an antibody against the most common mutation, namely IDH1-R132H. More prognostic power can be obtained by combining the mutational status of IDH1 and the methylation status of MGMT into a two-gene predictor. Patients with both IDH1 mutations and MGMT methylation have the longest survival; patients with an IDH1 mutation or MGMT methylation alone, an intermediate survival; and patients without either genetic event, the shortest survival. Long-term benefits have also been associated with those patients who receive surgery, radiotherapy, and temozolomide chemotherapy. However, much remains unknown about why some patients survive longer with glioblastoma. Age under 50 is linked to longer survival in GBM, as are 98%+ resection, use of temozolomide chemotherapy, and better KPSs.
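The two-gene predictor described above amounts to a simple decision rule over IDH1 and MGMT status. A minimal sketch (the function name and return labels are illustrative, not a clinical tool):

```python
# Sketch of the two-gene prognostic grouping described above.
# Combines IDH1 mutation status and MGMT promoter methylation status
# into three survival groups, as reported for glioblastoma patients.

def prognostic_group(idh1_mutated: bool, mgmt_methylated: bool) -> str:
    if idh1_mutated and mgmt_methylated:
        return "longest survival"
    if idh1_mutated or mgmt_methylated:
        return "intermediate survival"
    return "shortest survival"

print(prognostic_group(True, True))    # longest survival
print(prognostic_group(False, False))  # shortest survival
```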
A recent study confirms that younger age is associated with a much better prognosis, with a small fraction of patients under 40 years of age achieving a population-based cure. Cure is thought to occur when a person's risk of death returns to that of the normal population, and in GBM this is thought to occur after 10 years. UCLA Neuro-oncology publishes real-time survival data for patients with this diagnosis. According to a 2003 study, GBM prognosis can be divided into three subgroups dependent on KPS, the age of the patient, and treatment.
Epidemiology
About three per 100,000 people develop the disease per year, although regional frequency may be much higher. The frequency in England doubled between 1995 and 2015. It is the second-most common central nervous system cancer after meningioma, and occurs more commonly in males than females. Although the average age at diagnosis is 64, in 2014 the broad category of brain cancers was second only to leukemia among people in the United States under 20 years of age.
History
The term glioblastoma multiforme was introduced in 1926 by Percival Bailey and Harvey Cushing, based on the idea that the tumor originates from primitive precursors of glial cells (glioblasts), and the highly variable appearance due to the presence of necrosis, hemorrhage, and cysts (multiform).
Research
Gene therapy
Gene therapy has been explored as a method to treat glioblastoma, and while animal models and early-phase clinical trials have been successful, as of 2017, all gene-therapy drugs tested in phase-III clinical trials for glioblastoma had failed. Scientists have developed the core–shell nanostructure LPLNP-PPT (long-persistent-luminescence nanoparticles coated with polyetherimide, PEG, and the trans-activator of transcription peptide) for effective gene delivery and tracking, with positive results. The particles deliver a gene encoding TRAIL, the human tumor necrosis factor-related apoptosis-inducing ligand, to induce apoptosis in cancer cells, specifically glioblastoma cells. Although this work was still in clinical trials in 2017, it has shown both diagnostic and therapeutic functionality, and has generated great interest for clinical applications in stem-cell-based therapy.
Oncolytic virotherapy
Oncolytic virotherapy is an emerging novel treatment that is under investigation at both preclinical and clinical stages. Several viruses, including herpes simplex virus, adenovirus, poliovirus, and reovirus, are currently being tested in phase I and II clinical trials for glioblastoma therapy and have been shown to improve overall survival.
Intranasal drug delivery
Direct nose-to-brain drug delivery is being explored as a means to achieve higher, and hopefully more effective, drug concentrations in the brain. A clinical phase-I/II study with glioblastoma patients in Brazil investigated the natural compound perillyl alcohol for intranasal delivery as an aerosol. The results were encouraging and, as of 2016, a similar trial had been initiated in the United States.
Cannabinoids
The efficacy of cannabinoids (cannabis derivatives) in oncology is established, through capsules of tetrahydrocannabinol (THC) or the synthetic analogue nabilone: on the one hand to combat nausea and vomiting induced by chemotherapy, on the other to stimulate appetite and lessen the sense of anguish, or actual pain.
Their ability to inhibit growth and angiogenesis in malignant gliomas in mouse models has been demonstrated.
The results of a pilot study on the use of THC in end-stage patients with recurrent glioblastoma appeared worthy of further study.
A potential avenue for future research rests on the discovery that, in mouse models, cannabinoids can attack the neoplastic stem cells of glioblastoma, on the one hand inducing their differentiation into more mature, possibly more "treatable" cells, and on the other inhibiting tumorigenesis.
See also
Adegramotide
List of people with brain tumors
References
External links
Information about Glioblastoma Multiforme (GBM) from the American Brain Tumor Association
AFIP Course Syllabus – Astrocytoma WHO Grading Lecture Handout
Gold is a chemical element with the symbol Au (from Latin aurum) and atomic number 79. This makes it one of the higher atomic number elements that occur naturally. In pure form, it is a bright, slightly orange-yellow, dense, soft, malleable, and ductile metal. Chemically, gold is a transition metal and a group 11 element. It is one of the least reactive chemical elements and is solid under standard conditions. Gold often occurs in free elemental (native) form, as nuggets or grains, in rocks, veins, and alluvial deposits. It occurs in a solid solution series with the native element silver (as electrum), naturally alloyed with other metals like copper and palladium, and as mineral inclusions, such as within pyrite. Less commonly, it occurs in minerals as gold compounds, often with tellurium (gold tellurides).
Gold is resistant to most acids, though it does dissolve in aqua regia (a mixture of nitric acid and hydrochloric acid), forming a soluble tetrachloroaurate anion. Gold is insoluble in nitric acid alone, which dissolves silver and base metals, a property long used to refine gold and confirm the presence of gold in metallic substances, giving rise to the term acid test. Gold dissolves in alkaline solutions of cyanide, which are used in mining and electroplating. Gold also dissolves in mercury, forming amalgam alloys, and as the gold acts simply as a solute, this is not a chemical reaction.
A relatively rare element, gold is a precious metal that has been used for coinage, jewelry, and other arts throughout recorded history. In the past, a gold standard was often implemented as a monetary policy. Gold coins ceased to be minted as a circulating currency in the 1930s, and the world gold standard was abandoned for a fiat currency system after the Nixon shock measures of 1971.
In 2020, the world's largest gold producer was China, followed by Russia and Australia. As of 2020, a total of around 201,296 tonnes of gold exists above ground. This is equal to a cube with each side measuring roughly 21.7 meters (71 ft). The world consumption of new gold produced is about 50% in jewelry, 40% in investments, and 10% in industry. Gold's high malleability, ductility, resistance to corrosion and most other chemical reactions, and conductivity of electricity have led to its continued use in corrosion-resistant electrical connectors in all types of computerized devices (its chief industrial use). Gold is also used in infrared shielding, the production of colored glass, gold leafing, and tooth restoration. Certain gold salts are still used as anti-inflammatories in medicine.
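The cube figure quoted above follows directly from gold's density; a quick back-of-the-envelope check (Python used purely for the arithmetic):

```python
# Express the ~201,296 tonnes of above-ground gold (2020) as a single cube.
DENSITY_KG_M3 = 19_300           # density of gold, kg/m^3
mass_kg = 201_296 * 1000         # tonnes -> kg

volume_m3 = mass_kg / DENSITY_KG_M3   # ~1.04e4 cubic metres
side_m = volume_m3 ** (1 / 3)         # edge length of the equivalent cube

print(f"{side_m:.1f} m")  # ~21.8 m, in line with the ~21.7 m quoted above
```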
Characteristics
Gold is the most malleable of all metals. It can be drawn into a wire of single-atom width, and then stretched considerably before it breaks. Such nanowires distort via formation, reorientation and migration of dislocations and crystal twins without noticeable hardening. A single gram of gold can be beaten into a sheet of 1 square metre (11 sq ft), and an avoirdupois ounce into 300 square feet (28 m2). Gold leaf can be beaten thin enough to become semi-transparent. The transmitted light appears greenish-blue, because gold strongly reflects yellow and red. Such semi-transparent sheets also strongly reflect infrared light, making them useful as infrared (radiant heat) shields in visors of heat-resistant suits, and in sun-visors for spacesuits. Gold is a good conductor of heat and electricity.
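The one-gram-to-one-square-metre figure implies a leaf thickness of only tens of nanometres, thin enough for the semi-transparency described above; a quick check:

```python
# Thickness of gold leaf when one gram is beaten over one square metre.
DENSITY_KG_M3 = 19_300   # density of gold, kg/m^3
mass_kg = 0.001          # one gram
area_m2 = 1.0            # one square metre

thickness_m = mass_kg / (DENSITY_KG_M3 * area_m2)
print(f"{thickness_m * 1e9:.0f} nm")  # ~52 nm
```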
Gold has a density of 19.3 g/cm3, almost identical to that of tungsten at 19.25 g/cm3; as such, tungsten has been used in counterfeiting of gold bars, such as by plating a tungsten bar with gold, or taking an existing gold bar, drilling holes, and replacing the removed gold with tungsten rods. By comparison, the density of lead is 11.34 g/cm3, and that of the densest element, osmium, is 22.588±0.015 g/cm3.
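The near-identical densities of gold and tungsten explain why a simple density measurement cannot expose a tungsten-cored counterfeit, while it easily catches lead; a minimal sketch (the 0.5% tolerance is an illustrative assumption, not a standard):

```python
GOLD, TUNGSTEN, LEAD = 19.30, 19.25, 11.34  # densities in g/cm^3

def passes_density_check(measured: float, tolerance: float = 0.005) -> bool:
    """True if a bar's measured density lies within `tolerance` of pure gold's."""
    return abs(measured - GOLD) / GOLD <= tolerance

print(passes_density_check(TUNGSTEN))  # True  -> density alone cannot expose it
print(passes_density_check(LEAD))      # False -> a lead core is caught at once
```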
Color
Whereas most metals are gray or silvery white, gold is slightly reddish-yellow. This color is determined by the frequency of plasma oscillations among the metal's valence electrons, in the ultraviolet range for most metals but in the visible range for gold due to relativistic effects affecting the orbitals around gold atoms. Similar effects impart a golden hue to metallic caesium.
Common colored gold alloys include the distinctive eighteen-karat rose gold created by the addition of copper. Alloys containing palladium or nickel are also important in commercial jewelry as these produce white gold alloys. Fourteen-karat gold-copper alloy is nearly identical in color to certain bronze alloys, and both may be used to produce police and other badges. Fourteen- and eighteen-karat gold alloys with silver alone appear greenish-yellow and are referred to as green gold. Blue gold can be made by alloying with iron, and purple gold can be made by alloying with aluminium. Less commonly, addition of manganese, indium, and other elements can produce more unusual colors of gold for various applications. Colloidal gold, used by electron microscopists, is red if the particles are small; larger particles of colloidal gold are blue.
Isotopes
Gold has only one stable isotope, 197Au, which is also its only naturally occurring isotope, so gold is both a mononuclidic and monoisotopic element. Thirty-six radioisotopes have been synthesized, ranging in atomic mass from 169 to 205. The most stable of these is 195Au with a half-life of 186.1 days. The least stable is 171Au, which decays by proton emission with a half-life of 30 µs. Most of gold's radioisotopes with atomic masses below 197 decay by some combination of proton emission, α decay, and β+ decay. The exceptions are 195Au, which decays by electron capture, and 196Au, which decays most often by electron capture (93%) with a minor β− decay path (7%). All of gold's radioisotopes with atomic masses above 197 decay by β− decay. At least 32 nuclear isomers have also been characterized, ranging in atomic mass from 170 to 200. Within that range, only 178Au, 180Au, 181Au, 182Au, and 188Au do not have isomers. Gold's most stable isomer is 198m2Au with a half-life of 2.27 days. Gold's least stable isomer is 177m2Au with a half-life of only 7 ns. 184m1Au has three decay paths: β+ decay, isomeric transition, and alpha decay. No other isomer or isotope of gold has three decay paths.
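Half-lives such as the 186.1 days of 195Au translate into remaining fractions through simple exponential decay; a small sketch:

```python
HALF_LIFE_DAYS = 186.1  # 195Au, gold's most stable radioisotope

def fraction_remaining(days: float, half_life: float = HALF_LIFE_DAYS) -> float:
    """Fraction of an initial 195Au sample still undecayed after `days`."""
    return 0.5 ** (days / half_life)

print(f"{fraction_remaining(365):.3f}")  # ~0.257 left after one year
```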
Synthesis
The possible production of gold from a more common element, such as lead, has long been a subject of human inquiry, and the ancient and medieval discipline of alchemy often focused on it; however, the transmutation of the chemical elements did not become possible until the understanding of nuclear physics in the 20th century. The first synthesis of gold was conducted by Japanese physicist Hantaro Nagaoka, who synthesized gold from mercury in 1924 by neutron bombardment. An American team, working without knowledge of Nagaoka's prior study, conducted the same experiment in 1941, achieving the same result and showing that the isotopes of gold produced by it were all radioactive. In 1980, Glenn Seaborg transmuted several thousand atoms of bismuth into gold at the Lawrence Berkeley Laboratory. Gold can be manufactured in a nuclear reactor, but doing so is highly impractical and would cost far more than the value of the gold that is produced.
Chemistry
Although gold is the most noble of the noble metals, it still forms many diverse compounds. The oxidation state of gold in its compounds ranges from −1 to +5, but Au(I) and Au(III) dominate its chemistry. Au(I), referred to as the aurous ion, is the most common oxidation state with soft ligands such as thioethers, thiolates, and organophosphines. Au(I) compounds are typically linear. A good example is Au(CN)2−, the soluble form of gold encountered in mining. The binary gold halides, such as AuCl, form zigzag polymeric chains, again featuring linear coordination at Au. Most drugs based on gold are Au(I) derivatives. Au(III), referred to as auric, is a common oxidation state, illustrated by gold(III) chloride, Au2Cl6. The gold atom centers in Au(III) complexes, like other d8 compounds, are typically square planar, with chemical bonds that have both covalent and ionic character. Gold(I,III) chloride is also known, an example of a mixed-valence complex.
Gold does not react with oxygen at any temperature and, up to 100 °C, is resistant to attack from ozone.
Au + O2 → no reaction
Au + O3 → no reaction (t < 100 °C)
Some free halogens react with gold. Gold is strongly attacked by fluorine at dull-red heat to form gold(III) fluoride AuF3. Powdered gold reacts with chlorine at 180 °C to form gold(III) chloride AuCl3. Gold reacts with bromine at 140 °C to form gold(III) bromide AuBr3, but reacts only very slowly with iodine to form gold(I) iodide AuI.
2 Au + 3 F2 → 2 AuF3
2 Au + 3 Cl2 → 2 AuCl3
2 Au + 2 Br2 → AuBr3 + AuBr
2 Au + I2 → 2 AuI
Gold does not react with sulfur directly, but gold(III) sulfide can be made by passing hydrogen sulfide through a dilute solution of gold(III) chloride or chloroauric acid.
Gold readily dissolves in mercury at room temperature to form an amalgam, and forms alloys with many other metals at higher temperatures. These alloys can be produced to modify the hardness and other metallurgical properties, to control melting point, or to create exotic colors. Gold is unaffected by most acids. It does not react with hydrofluoric, hydrochloric, hydrobromic, hydriodic, sulfuric, or nitric acid. It does react with selenic acid, and is dissolved by aqua regia, a 1:3 mixture of nitric acid and hydrochloric acid. Nitric acid oxidizes the metal to +3 ions, but only in minute amounts, typically undetectable in the pure acid because of the chemical equilibrium of the reaction. However, the ions are removed from the equilibrium by hydrochloric acid, forming AuCl4− ions, or chloroauric acid, thereby enabling further oxidation.
2 Au + 6 H2SeO4 → Au2(SeO4)3 + 3 H2SeO3 + 3 H2O (at 200 °C)
Au + 4 HCl + HNO3 → H[AuCl4] + NO↑ + 2 H2O
Gold is similarly unaffected by most bases. It does not react with aqueous, solid, or molten sodium or potassium hydroxide. It does, however, react with sodium or potassium cyanide under alkaline conditions when oxygen is present, forming soluble complexes. Common oxidation states of gold include +1 (gold(I) or aurous compounds) and +3 (gold(III) or auric compounds). Gold ions in solution are readily reduced and precipitated as metal by adding any other metal as the reducing agent. The added metal is oxidized and dissolves, allowing the gold to be displaced from solution and recovered as a solid precipitate.
Rare oxidation states
Less common oxidation states of gold include −1, +2, and +5.
The −1 oxidation state occurs in aurides, compounds containing the Au− anion. Caesium auride (CsAu), for example, crystallizes in the caesium chloride motif; rubidium, potassium, and tetramethylammonium aurides are also known. Gold has the highest electron affinity of any metal, at 222.8 kJ/mol, making Au− a stable species, analogous to the halides.
Gold also has a −1 oxidation state in covalent complexes with the group 4 transition metals, such as in titanium tetraauride and the analogous zirconium and hafnium compounds. These chemicals are expected to form gold-bridged dimers in a manner similar to titanium(IV) hydride. Gold(II) compounds are usually diamagnetic with Au–Au bonds, such as [Au(CH2)2P(C6H5)2]2Cl2. The evaporation of a solution of Au(OH)3 in concentrated H2SO4 produces red crystals of gold(II) sulfate, Au2(SO4)2. Originally thought to be a mixed-valence compound, it has been shown to contain Au₂⁴⁺ cations, analogous to the better-known mercury(I) ion, Hg₂²⁺. A gold(II) complex, the tetraxenonogold(II) cation, which contains xenon as a ligand, occurs in [AuXe4](Sb2F11)2. Gold pentafluoride, along with its derivative anion, AuF6−, and its difluorine complex, gold heptafluoride, is the sole example of gold(V), the highest verified oxidation state. Some gold compounds exhibit aurophilic bonding, which describes the tendency of gold ions to interact at distances that are too long for a conventional Au–Au bond but shorter than van der Waals bonding. The interaction is estimated to be comparable in strength to that of a hydrogen bond.
Well-defined cluster compounds are numerous. In some cases, gold has a fractional oxidation state. A representative example is the octahedral species {Au[P(C6H5)3]}₆²⁺.
Origin
Gold production in the universe
Gold is thought to have been produced in supernova nucleosynthesis, and from the collision of neutron stars, and to have been present in the dust from which the Solar System formed. Traditionally, gold in the universe is thought to have formed by the r-process (rapid neutron capture) in supernova nucleosynthesis, but more recently it has been suggested that gold and other elements heavier than iron may also be produced in quantity by the r-process in the collision of neutron stars. In both cases, satellite spectrometers at first only indirectly detected the resulting gold. However, in August 2017, the spectroscopic signatures of heavy elements, including gold, were observed by electromagnetic observatories in the GW170817 neutron star merger event, after gravitational wave detectors confirmed the event as a neutron star merger. Current astrophysical models suggest that this single neutron star merger event generated between 3 and 13 Earth masses of gold. This amount, along with estimations of the rate of occurrence of these neutron star merger events, suggests that such mergers may produce enough gold to account for most of the abundance of this element in the universe.
Asteroid origin theories
Because the Earth was molten when it formed, almost all of the gold present in the early Earth probably sank into the planetary core. Therefore, most of the gold in the Earth's crust and mantle is, in one model, thought to have been delivered to Earth later by asteroid impacts during the Late Heavy Bombardment, about 4 billion years ago. Gold which is reachable by humans has, in one case, been associated with a particular asteroid impact. The asteroid that formed the Vredefort impact structure 2.020 billion years ago is often credited with seeding the Witwatersrand basin in South Africa with the richest gold deposits on Earth. However, this scenario is now questioned. The gold-bearing Witwatersrand rocks were laid down between 700 and 950 million years before the Vredefort impact. These gold-bearing rocks had furthermore been covered by a thick layer of Ventersdorp lavas and the Transvaal Supergroup of rocks before the meteor struck, so the gold did not actually arrive in the asteroid/meteorite. What the Vredefort impact did achieve, however, was to distort the Witwatersrand basin in such a way that the gold-bearing rocks were brought to the present erosion surface in Johannesburg, on the Witwatersrand, just inside the rim of the original 300 km (190 mi) diameter crater caused by the meteor strike. The discovery of the deposit in 1886 launched the Witwatersrand Gold Rush. Some 22% of all the gold ascertained to exist today on Earth has been extracted from these Witwatersrand rocks.
Mantle return theories
Notwithstanding the impact above, much of the rest of the gold on Earth is thought to have been incorporated into the planet since its very beginning, as planetesimals formed the planet's mantle early in Earth's creation. In 2017, an international group of scientists established that gold "came to the Earth's surface from the deepest regions of our planet", the mantle, as evidenced by their findings at the Deseado Massif in Argentinian Patagonia.
Occurrence
On Earth, gold is found in ores in rock formed from the Precambrian time onward. It most often occurs as a native metal, typically in a metal solid solution with silver (i.e. as a gold/silver alloy). Such alloys usually have a silver content of 8–10%. Electrum is elemental gold with more than 20% silver, and is commonly known as white gold. Electrum's color runs from golden-silvery to silvery, depending upon the silver content: the more silver, the lower the specific gravity.
Native gold occurs as very small to microscopic particles embedded in rock, often together with quartz or sulfide minerals such as "fool's gold", which is pyrite. These are called lode deposits. The metal in a native state is also found in the form of free flakes, grains, or larger nuggets that have been eroded from rocks and end up in alluvial deposits called placer deposits. Such free gold is always richer at the exposed surface of gold-bearing veins, owing to the oxidation of accompanying minerals followed by weathering, and by washing of the dust into streams and rivers, where it collects and can be welded by water action to form nuggets.
Gold sometimes occurs combined with tellurium as the minerals calaverite, krennerite, nagyagite, petzite and sylvanite (see telluride minerals), and as the rare bismuthide maldonite (Au2Bi) and antimonide aurostibite (AuSb2). Gold also occurs in rare alloys with copper, lead, and mercury: the minerals auricupride (Cu3Au), novodneprite (AuPb3) and weishanite ((Au,Ag)3Hg2).
Recent research suggests that microbes can sometimes play an important role in forming gold deposits, transporting and precipitating gold to form grains and nuggets that collect in alluvial deposits. Another recent study has claimed that water in faults vaporizes during an earthquake, depositing gold. When an earthquake strikes, it moves along a fault. Water often lubricates faults, filling in fractures and jogs. About 10 kilometres (6.2 mi) below the surface, under very high temperatures and pressures, the water carries high concentrations of carbon dioxide, silica, and gold. During an earthquake, the fault jog suddenly opens wider. The water inside the void instantly vaporizes, flashing to steam and forcing silica, which forms the mineral quartz, and gold out of the fluids and onto nearby surfaces.
Seawater
The world's oceans contain gold. Measured concentrations of gold in the Atlantic and Northeast Pacific are 50–150 femtomol/L, or 10–30 parts per quadrillion (about 10–30 g/km3). In general, gold concentrations for south Atlantic and central Pacific samples are the same (~50 femtomol/L) but less certain. Mediterranean deep waters contain slightly higher concentrations of gold (100–150 femtomol/L), attributed to wind-blown dust and/or rivers. At 10 parts per quadrillion, the Earth's oceans would hold 15,000 tonnes of gold. These figures are three orders of magnitude less than reported in the literature prior to 1988, indicating contamination problems with the earlier data.
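The ~15,000-tonne figure can be sanity-checked from the quoted concentration; a rough sketch in which the total ocean mass (~1.4×10²¹ kg) is an assumed round value not taken from the text:

```python
OCEAN_MASS_KG = 1.4e21       # assumed total mass of the world's oceans
CONCENTRATION = 10e-15       # 10 parts per quadrillion, by mass

gold_tonnes = OCEAN_MASS_KG * CONCENTRATION / 1000  # kg -> tonnes
print(f"{gold_tonnes:,.0f} tonnes")  # ~14,000, in line with the ~15,000 quoted
```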
A number of people have claimed to be able to economically recover gold from sea water, but they were either mistaken or acted in an intentional deception. Prescott Jernegan ran a gold-from-seawater swindle in the United States in the 1890s, as did an English fraudster in the early 1900s. Fritz Haber did research on the extraction of gold from sea water in an effort to help pay Germany's reparations following World War I. Based on the published values of 2 to 64 ppb of gold in seawater, a commercially successful extraction seemed possible. After analysis of 4,000 water samples yielding an average of 0.004 ppb, it became clear that extraction would not be possible, and he ended the project.
History
The earliest recorded metal employed by humans appears to be gold, which can be found free or "native". Small amounts of natural gold have been found in Spanish caves used during the late Paleolithic period, c. 40,000 BC. The oldest gold artifacts in the world are from Bulgaria, dating back to the 5th millennium BC (4,600 BC to 4,200 BC), such as those found in the Varna Necropolis near Lake Varna and the Black Sea coast, thought to be the earliest "well-dated" finds of gold artifacts in history. Several prehistoric Bulgarian finds are considered no less old: the golden treasures of Hotnitsa and Durankulak, artifacts from the Kurgan settlement of Yunatsite near Pazardzhik, the golden Sakar treasure, and beads and gold jewelry found in the Kurgan settlement of Provadia–Solnitsata ("salt pit"). However, the Varna gold is most often called the oldest, since this treasure is the largest and most diverse. Gold artifacts probably made their first appearance in Ancient Egypt at the very beginning of the pre-dynastic period, at the end of the fifth millennium BC and the start of the fourth, and smelting was developed during the course of the 4th millennium; gold artifacts appear in the archeology of Lower Mesopotamia during the early 4th millennium. As of 1990, gold artifacts found at the Wadi Qana cave cemetery of the 4th millennium BC in the West Bank were the earliest from the Levant. Gold artifacts such as the golden hats and the Nebra disk appeared in Central Europe from the 2nd millennium BC Bronze Age.
The oldest known map of a gold mine was drawn in the 19th Dynasty of Ancient Egypt (1320–1200 BC), whereas the first written reference to gold was recorded in the 12th Dynasty around 1900 BC. Egyptian hieroglyphs from as early as 2600 BC describe gold, which King Tushratta of the Mitanni claimed was "more plentiful than dirt" in Egypt. Egypt and especially Nubia had the resources to make them major gold-producing areas for much of history. One of the earliest known maps, known as the Turin Papyrus Map, shows the plan of a gold mine in Nubia together with indications of the local geology. The primitive working methods are described by both Strabo and Diodorus Siculus, and included fire-setting. Large mines were also present across the Red Sea in what is now Saudi Arabia.
Gold is mentioned in the Amarna letters numbered 19 and 26, from around the 14th century BC. Gold is mentioned frequently in the Old Testament, starting with Genesis 2:11 (at Havilah), the story of the golden calf, and many parts of the temple including the Menorah and the golden altar. In the New Testament, it is included with the gifts of the magi in the first chapters of Matthew. The Book of Revelation 21:21 describes the city of New Jerusalem as having streets "made of pure gold, clear as crystal". Exploitation of gold in the south-east corner of the Black Sea is said to date from the time of Midas, and this gold was important in the establishment of what is probably the world's earliest coinage, in Lydia around 610 BC. The legend of the golden fleece, dating from the eighth century BC, may refer to the use of fleeces to trap gold dust from placer deposits in the ancient world. From the 6th or 5th century BC, the state of Chu circulated the Ying Yuan, a kind of square gold coin.
In Roman metallurgy, new methods for extracting gold on a large scale were developed by introducing hydraulic mining methods, especially in Hispania from 25 BC onwards and in Dacia from 106 AD onwards. One of their largest mines was at Las Medulas in León, where seven long aqueducts enabled them |
to sluice most of a large alluvial deposit. The mines at Roşia Montană in Transylvania were also very large, and until very recently, still mined by opencast methods. They also exploited smaller deposits in Britain, such as placer and hard-rock deposits at Dolaucothi. The various methods they used are well described by Pliny the Elder in his encyclopedia Naturalis Historia written towards the end of the first century AD.
During his hajj to Mecca in 1324, Mansa Musa (ruler of the Mali Empire from 1312 to 1337) passed through Cairo in July of that year, reportedly accompanied by a camel train that included thousands of people and nearly a hundred camels. He gave away so much gold that it depressed its price in Egypt for over a decade, causing high inflation. A contemporary Arab historian remarked:
Gold was at a high price in Egypt until they came in that year. The mithqal did not go below 25 dirhams and was generally above, but from that time its value fell and it cheapened in price and has remained cheap till now. The mithqal does not exceed 22 dirhams or less. This has been the state of affairs for about twelve years until this day by reason of the large amount of gold which they brought into Egypt and spent there [...].
The European exploration of the Americas was fueled in no small part by reports of the gold ornaments displayed in great profusion by Native American peoples, especially in Mesoamerica, Peru, Ecuador, and Colombia. The Aztecs regarded gold as the product of the gods, calling it literally "god excrement" (teocuitlatl in Nahuatl), and after Moctezuma II was killed, most of this gold was shipped to Spain. However, for the indigenous peoples of North America, gold was considered useless; they saw much greater value in other minerals directly related to their utility, such as obsidian, flint, and slate. El Dorado was the term used by the Spanish Empire to describe a mythical tribal chief (zipa) of the Muisca native people in Colombia, who, as an initiation rite, covered himself with gold dust and submerged in Lake Guatavita. The name was later applied to a legendary story in which precious stones were found in fabulous abundance along with gold coins. The concept of El Dorado underwent several transformations, and eventually accounts of the earlier myth were combined with those of a legendary lost city: El Dorado went from being a man, to a city, to a kingdom, and finally to an empire.
Beginning in the early modern period, European exploration and colonization of West Africa was driven in large part by reports of gold deposits in the region, which was eventually referred to by Europeans as the "Gold Coast". From the late 15th to early 19th centuries, European trade in the region was primarily focused on gold, along with ivory and slaves. The gold trade in West Africa was dominated by the Ashanti Empire, who initially traded with the Portuguese before branching out and trading with British, French, Spanish, and Danish merchants. British desires to secure control of West African gold deposits played a role in the Anglo-Ashanti wars of the late 19th century, which saw the Ashanti Empire annexed by Britain. Gold played a role in Western culture as a cause of desire and of corruption, as told in children's fables such as Rumpelstiltskin, where Rumpelstiltskin turns hay into gold for the peasant's daughter in return for her child when she becomes a princess, and in the stealing of the hen that lays golden eggs in Jack and the Beanstalk.
The top prize at the Olympic Games and many other sports competitions is the gold medal.
75% of the presently accounted for gold has been extracted since 1910, two-thirds since 1950.
One main goal of the alchemists was to produce gold from other substances, such as lead, presumably by interaction with a mythical substance called the philosopher's stone. Trying to produce gold led the alchemists to systematically find out what can be done with substances, and this laid the foundation for today's chemistry, which can produce gold (albeit uneconomically) by nuclear transmutation. Their symbol for gold was the circle with a point at its center (☉), which was also the astrological symbol and the ancient Chinese character for the Sun.
The Dome of the Rock is covered with an ultra-thin layer of gold. The Sikh Golden Temple, the Harmandir Sahib, is a building covered with gold. Similarly, the Wat Phra Kaew emerald Buddhist temple (wat) in Thailand has ornamental gold-leafed statues and roofs. Some European kings' and queens' crowns were made of gold, and gold has been used for the bridal crown since antiquity. An ancient Talmudic text of circa 100 AD describes Rachel, wife of Rabbi Akiva, receiving a "Jerusalem of Gold" (diadem). A Greek burial crown made of gold was found in a grave of circa 370 BC.
Etymology
"Gold" is cognate with similar words in many Germanic languages, deriving via Proto-Germanic *gulþą from Proto-Indo-European *ǵʰelh₃- ("to shine, to gleam; to be yellow or green"). The symbol Au is from aurum, the Latin word for "gold". The Proto-Indo-European ancestor of aurum was *h₂é-h₂us-o-, meaning "glow". This word is derived from the same root (Proto-Indo-European *h₂u̯es- "to dawn") as *h₂éu̯sōs, the ancestor of the Latin word Aurora, "dawn". This etymological relationship is presumably behind the frequent claim in scientific publications that aurum meant "shining dawn".
Culture
In popular culture gold is a high standard of excellence, often used in awards. Great achievements are frequently rewarded with gold, in the form of gold medals, gold trophies and other decorations. Winners of athletic events and other graded competitions are usually awarded a gold medal. Many awards such as the Nobel Prize are made from gold as well. Other award statues and prizes are depicted in gold or are gold plated (such as the Academy Awards, the Golden Globe Awards, the Emmy Awards, the Palme d'Or, and the British Academy Film Awards).
Aristotle in his ethics used gold symbolism when referring to what is now known as the golden mean. Similarly, gold is associated with perfect or divine principles, such as in the case of the golden ratio and the golden rule. Gold is further associated with the wisdom of aging and fruition. The fiftieth wedding anniversary is golden. A person's most valued or most successful latter years are sometimes considered "golden years". The height of a civilization is referred to as a golden age.
Religion
In some forms of Christianity and Judaism, gold has been associated with both the sacred and evil. In the Book of Exodus, the Golden Calf is a symbol of idolatry, while in the Book of Genesis, Abraham was said to be rich in gold and silver, and Moses was instructed to cover the Mercy Seat of the Ark of the Covenant with pure gold. In Byzantine iconography the halos of Christ, the Virgin Mary and the saints are often golden.
In Islam, gold (along with silk) is often cited as being forbidden for men to wear. Abu Bakr al-Jazaeri, quoting a hadith, said that "[t]he wearing of silk and gold are forbidden on the males of my nation, and they are lawful to their women". This, however, has not been enforced consistently throughout history, e.g. in the Ottoman Empire. Further, small gold accents on clothing, such as in embroidery, may be permitted.
In ancient Greek religion and mythology, Theia was seen as the goddess of gold, silver and other gems.
According to Christopher Columbus, those who had something of gold were in possession of something of great value on Earth and a substance to even help souls to paradise.
Wedding rings are typically made of gold. It is long lasting and unaffected by the passage of time, and may aid in the ring symbolism of eternal vows before God and the perfection the marriage signifies. In Orthodox Christian wedding ceremonies, the wedded couple is adorned with a golden crown (though some opt for wreaths, instead) during the ceremony, an amalgamation of symbolic rites.
On 24 August 2020, Israeli archaeologists discovered a trove of early Islamic gold coins near the central city of Yavne. Analysis of the extremely rare collection of 425 gold coins indicated that they dated from the late 9th century, around 1,100 years ago, during the Abbasid Caliphate.
Production
According to the United States Geological Survey in 2016, about 5,726,000,000 troy ounces (178,100 t) of gold has been accounted for, of which 85% remains in active use.
Mining and prospecting
Since the 1880s, South Africa has been the source of a large proportion of the world's gold supply, and about 22% of the gold presently accounted for comes from South Africa. South African production in 1970 accounted for 79% of the world supply, about 1,480 tonnes. In 2007 China (with 276 tonnes) overtook South Africa as the world's largest gold producer, the first time since 1905 that South Africa had not been the largest.
In 2020, China was the world's leading gold-mining country, followed in order by Russia, Australia, the United States, Canada, and Ghana.
In South America, the controversial project Pascua Lama aims at exploitation of rich fields in the high mountains of Atacama Desert, at the border between Chile and Argentina.
It has been estimated that up to one-quarter of the yearly global gold production originates from artisanal or small-scale mining.
The city of Johannesburg in South Africa was founded as a result of the Witwatersrand Gold Rush, which led to the discovery of some of the largest natural gold deposits in recorded history. The gold fields are confined to the northern and north-western edges of the Witwatersrand Basin, a 5–7 km (3.1–4.3 mi) thick layer of Archean rocks located, in most places, deep under the Free State, Gauteng and surrounding provinces. These Witwatersrand rocks are exposed at the surface on the Witwatersrand, in and around Johannesburg, but also in isolated patches to the south-east and south-west of Johannesburg, as well as in an arc around the Vredefort Dome, which lies close to the center of the Witwatersrand Basin. From these surface exposures the basin dips extensively, requiring some of the mining to occur at depths of nearly 4,000 m (13,000 ft), making them, especially the Savuka and TauTona mines to the south-west of Johannesburg, the deepest mines on Earth. The gold is found only in six areas where Archean rivers from the north and north-west formed extensive pebbly braided-river deltas before draining into the "Witwatersrand sea", where the rest of the Witwatersrand sediments were deposited.
The Second Boer War of 1899–1902 between the British Empire and the Afrikaner Boers was at least partly fought over the rights of miners and possession of the gold wealth in South Africa.
During the 19th century, gold rushes occurred whenever large gold deposits were discovered. The first documented discovery of gold in the United States was at the Reed Gold Mine near Georgeville, North Carolina in 1803. The first major gold strike in the United States occurred in a small north Georgia town called Dahlonega. Further gold rushes occurred in California, Colorado, the Black Hills, Otago in New Zealand, a number of locations across Australia, Witwatersrand in South Africa, and the Klondike in Canada.
Grasberg mine located in Papua, Indonesia is the largest gold mine in the world.
Extraction and refining
Gold extraction is most economical in large, easily mined deposits. Ore grades as little as 0.5 parts per million (ppm) can be economical. Typical ore grades in open-pit mines are 1–5 ppm; ore grades in underground or hard rock mines are usually at least 3 ppm. Because ore grades of 30 ppm are usually needed before gold is visible to the naked eye, in most gold mines the gold is invisible.
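Since an ore grade in ppm is numerically equal to grams of gold per tonne of ore, the economics sketched above can be illustrated with a few lines of Python. The price used here is an assumed illustration, not a figure from this section:

```python
# An ore grade of 1 ppm corresponds to 1 gram of gold per tonne of ore.
TROY_OZ_G = 31.1034768  # grams per troy ounce

def gold_value_per_tonne(grade_ppm: float, price_per_troy_oz: float) -> float:
    """In-situ value (USD) of the gold contained in one tonne of ore."""
    grams_per_tonne = grade_ppm           # 1 ppm == 1 g/t
    ounces = grams_per_tonne / TROY_OZ_G  # convert grams to troy ounces
    return ounces * price_per_troy_oz

# A typical 1 ppm open-pit grade at an assumed $1,300/oz gold price:
print(round(gold_value_per_tonne(1.0, 1300), 2))  # ≈ 41.8 USD per tonne of ore
```

A figure this small per tonne makes clear why only large, cheaply mined deposits are economical at grades near 1 ppm.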
The average gold mining and extraction costs were about $317 per troy ounce in 2007, but these can vary widely depending on mining type and ore quality; global mine production amounted to 2,471.1 tonnes.
After initial production, gold is often refined industrially by the Wohlwill process, which is based on electrolysis, or by the Miller process, which is chlorination in the melt. The Wohlwill process results in higher purity, but is more complex and is only applied in small-scale installations. Other methods of assaying and purifying smaller amounts of gold include parting and inquartation, as well as cupellation and refining methods based on the dissolution of gold in aqua regia.
As of 2020, mining a kilogram of gold produces about 16 tonnes of CO2, while recycling a kilogram of gold produces 53 kilograms of CO2 equivalent. Approximately 30 percent of the global gold supply was recycled rather than mined as of 2020. Corporations are starting to adopt gold recycling, including jewelry companies such as Generation Collection and computer companies including Dell.
Consumption
The consumption of gold produced in the world is about 50% in jewelry, 40% in investments, and 10% in industry.
According to the World Gold Council, China was the world's largest single consumer of gold in 2013, overtaking India.
Pollution
Gold production is associated with hazardous pollution. Low-grade gold ore may contain less than one ppm gold metal; such ore is ground and mixed with sodium cyanide to dissolve the gold. Cyanide is a highly poisonous chemical which can kill living creatures exposed to minute quantities. Many cyanide spills from gold mines have occurred in both developed and developing countries, killing aquatic life in long stretches of affected rivers. Environmentalists consider these events major environmental disasters.
Up to thirty tons of used ore can be dumped as waste for producing one troy ounce of gold. Gold ore dumps are the source of many heavy elements such as cadmium, lead, zinc, copper, arsenic, selenium and mercury. When sulfide-bearing minerals in these ore dumps are exposed to air and water, the sulfide transforms into sulfuric acid, which in turn dissolves these heavy metals, facilitating their passage into surface water and groundwater. This process is called acid mine drainage. These gold ore dumps are long-term, highly hazardous wastes second only to nuclear waste dumps.
It was once common to use mercury to recover gold from ore, but today the use of mercury is largely limited to small-scale individual miners. Minute quantities of mercury compounds can reach water bodies, causing heavy metal contamination. Mercury can then enter the human food chain in the form of methylmercury. Mercury poisoning in humans causes incurable damage to brain function and severe cognitive impairment.
Gold extraction is also highly energy-intensive: extracting ore from deep mines and grinding the large quantity of ore for further chemical extraction requires nearly 25 kWh of electricity per gram of gold produced.
Monetary use
Gold has been widely used throughout the world as money, for efficient indirect exchange (versus barter), and to store wealth in hoards. For exchange purposes, mints produce standardized gold bullion coins, bars and other units of fixed weight and purity.
The first known coins containing gold were struck in Lydia, Asia Minor, around 600 BC. The gold talent coin in use during the periods of Grecian history both before and during Homer's lifetime weighed between 8.42 and 8.75 grams. From an earlier preference for silver, European economies re-established the minting of gold as coinage during the thirteenth and fourteenth centuries.
Bills (that mature into gold coin) and gold certificates (convertible into gold coin at the issuing bank) added to the circulating stock of gold standard money in most 19th-century industrial economies.
In preparation for World War I the warring nations moved to fractional gold standards, inflating their currencies to finance the war effort.
Post-war, the victorious countries, most notably Britain, gradually restored gold-convertibility, but international flows of gold via bills of exchange remained embargoed; international shipments were made exclusively for bilateral trades or to pay war reparations.
After World War II gold was replaced by a system of nominally convertible currencies related by fixed exchange rates following the Bretton Woods system. Gold standards and the direct convertibility of currencies to gold have been abandoned by world governments, led in 1971 by the United States' refusal to redeem its dollars in gold. Fiat currency now fills most monetary roles. Switzerland was the last country to tie its currency to gold; this was ended by a referendum in 1999.
Central banks continue to keep a portion of their liquid reserves as gold in some form, and metals exchanges such as the London Bullion Market Association still clear transactions denominated in gold, including future delivery contracts. Today, gold mining output is declining. With the sharp growth of economies in the 20th century, and increasing foreign exchange, the world's gold reserves and their trading market have become a small fraction of all markets, and fixed exchange rates of currencies to gold have been replaced by floating prices for gold and gold futures contracts. Though the gold stock grows by only 1% or 2% per year, very little metal is irretrievably consumed. Inventory above ground would satisfy many decades of industrial and even artisan uses at current prices.
The gold proportion (fineness) of alloys is measured by karat (k). Pure gold (commercially termed fine gold) is designated as 24 karat, abbreviated 24k. English gold coins intended for circulation from 1526 into the 1930s were typically a standard 22k alloy called crown gold, for hardness (American gold coins for circulation after 1837 contain an alloy of 0.900 fine gold, or 21.6k).
Although the prices of some platinum group metals can be much higher, gold has long been considered the most desirable of precious metals, and its value has been used as the standard for many currencies. Gold has been used as a symbol for purity, value, royalty, and particularly roles that combine these properties. Gold as a sign of wealth and prestige was ridiculed by Thomas More in his treatise Utopia. On that imaginary island, gold is so abundant that it is used to make chains for slaves, tableware, and lavatory seats. When ambassadors from other countries arrive, dressed in ostentatious gold jewels and badges, the Utopians mistake them for menial servants, paying homage instead to the most modestly dressed of their party.
The ISO 4217 currency code of gold is XAU. Many holders of gold store it in the form of bullion coins or bars as a hedge against inflation or other economic disruptions, though its efficacy as such has been questioned; historically, it has not proven itself reliable as a hedging instrument. Modern bullion coins for investment or collector purposes do not require good mechanical wear properties; they are typically fine gold at 24k, although the American Gold Eagle and the British gold sovereign continue to be minted in 22k (0.92) metal in historical tradition, and the South African Krugerrand, first released in 1967, is also 22k (0.92).
The special-issue Canadian Gold Maple Leaf coin contains the highest purity gold of any bullion coin, at 99.999% or 0.99999, while the popular-issue Canadian Gold Maple Leaf coin has a purity of 99.99%. In 2006, the United States Mint began producing the American Buffalo gold bullion coin with a purity of 99.99%. The Australian Gold Kangaroos were first coined in 1986 as the Australian Gold Nugget but changed the reverse design in 1989. Other modern coins include the Austrian Vienna Philharmonic bullion coin and the Chinese Gold Panda.
Price
As of September 2017, gold is valued at around $42 per gram ($1,300 per troy ounce).
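The per-gram and per-troy-ounce figures quoted here are mutually consistent, as a quick conversion shows (1 troy ounce = 31.1034768 g):

```python
TROY_OZ_G = 31.1034768  # grams per troy ounce

def price_per_gram(price_per_troy_oz: float) -> float:
    """Convert a USD/troy-ounce gold price to USD/gram."""
    return price_per_troy_oz / TROY_OZ_G

print(round(price_per_gram(1300)))  # 42, matching the ~$42/gram quoted
```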
Like other precious metals, gold is measured by troy weight and by grams. The proportion of gold in the alloy is measured by karat (k), with 24 karat (24k) being pure gold (100%), and lower karat numbers proportionally less (18k = 75%). The purity of a gold bar or coin can also be expressed as a decimal figure ranging from 0 to 1, known as the millesimal fineness, such as 0.995 being nearly pure.
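The karat and millesimal-fineness conventions described above both amount to a simple fraction out of 24, which can be sketched as:

```python
def karat_to_fraction(karat: float) -> float:
    """Gold fraction of an alloy: 24 karat is pure gold."""
    return karat / 24

def karat_to_millesimal(karat: float) -> float:
    """Millesimal fineness expressed as a decimal between 0 and 1."""
    return round(karat / 24, 3)

print(karat_to_fraction(18))    # 0.75, i.e. 18k is 75% gold
print(karat_to_millesimal(24))  # 1.0 for pure (fine) gold
```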
The price of gold is determined through trading in the gold and derivatives markets, but a procedure known as the Gold Fixing in London, originating in September 1919, provides a daily benchmark price to the industry. The afternoon fixing was introduced in 1968 to provide a price when US markets are open.
History
Historically gold coinage was widely used as currency; when paper money was introduced, it typically was a receipt redeemable for gold coin or bullion. In a monetary system known as the gold standard, a certain weight of gold was given the name of a unit of currency. For a long period, the United States government set the value of the US dollar so that one troy ounce was equal to $20.67 ($0.665 per gram), but in 1934 the dollar was devalued to $35.00 per troy ounce ($1.13/g). By 1961, it was becoming hard to maintain this price, and a pool of US and European banks agreed to manipulate the market to prevent further currency devaluation against increased gold demand.
On 17 March 1968, economic circumstances caused the collapse of the gold pool, and a two-tiered pricing scheme was established whereby gold was still used to settle international accounts at the old $35.00 per troy ounce ($1.13/g) but the price of gold on the private market was allowed to fluctuate; this two-tiered pricing system was abandoned in 1975, when the price of gold was left to find its free-market level. Central banks still hold historical gold reserves as a store of value, although the level has generally been declining. The largest gold depository in the world is that of the U.S. Federal Reserve Bank in New York, which holds about 3% of the gold known to exist and accounted for today, as does the similarly laden U.S. Bullion Depository at Fort Knox.
In 2005 the World Gold Council estimated total global gold supply to be 3,859 tonnes and demand to be 3,754 tonnes, giving a surplus of 105 tonnes.
After the Nixon shock of 15 August 1971, the price began to greatly increase, and between 1968 and 2000 the price of gold ranged widely, from a high of $850 per troy ounce ($27.33/g) on 21 January 1980 to a low of $252.90 per troy ounce ($8.13/g) on 21 June 1999 (London Gold Fixing). Prices increased rapidly from 2001, but the 1980 high was not exceeded until 3 January 2008, when a new maximum of $865.35 per troy ounce was set. Another record price was set on 17 March 2008, at $1,023.50 per troy ounce ($32.91/g).
In late 2009, gold markets experienced renewed upward momentum due to increased demand and a weakening US dollar. On 2 December 2009, gold reached a new high, closing at $1,217.23. Gold further rallied, hitting new highs in May 2010 after the European Union debt crisis prompted further purchase of gold as a safe asset. On 1 March 2011, gold hit a new all-time high of $1,432.57, based on investor concerns regarding ongoing unrest in North Africa as well as in the Middle East.
From April 2001 to August 2011, spot gold prices more than quintupled in value against the US dollar, hitting a new all-time high of $1,913.50 on 23 August 2011, prompting speculation that the long secular bear market had ended and a bull market had returned. However, the price then began a slow decline towards $1,200 per troy ounce in late 2014 and 2015.
In August 2020, the gold price picked up to US$2,060 per ounce after a cumulative rise of 59% from August 2018 to October 2020, a period during which it outpaced the Nasdaq total return of 54%.
Gold futures are traded on the COMEX exchange. These contracts are priced in USD per troy ounce (1 troy ounce = 31.1034768 grams).
Medicinal uses
Medicinal applications of gold and its complexes have a long history dating back thousands of years. Several gold complexes have been applied to treat rheumatoid arthritis, the most frequently used being aurothiomalate, aurothioglucose, and auranofin. Both gold(I) and gold(III) compounds have been investigated as possible anti-cancer drugs. For gold(III) complexes, reduction to gold(0/I) under physiological conditions has to be considered. Stable complexes can be generated using different types of bi-, tri-, and tetradentate ligand systems, and their efficacy has been demonstrated in vitro and in vivo.
Other applications
Jewelry
Because of the softness of pure (24k) gold, it is usually alloyed with base metals for use in jewelry, altering its hardness and ductility, melting point, color and other properties. Alloys with lower karat rating, typically 22k, 18k, 14k or 10k, contain higher percentages of copper, other base metals, silver or palladium in the alloy. Nickel is toxic, and its release from nickel white gold is controlled by legislation in Europe. Palladium-gold alloys are more expensive than those using nickel. High-karat white gold alloys are more resistant to corrosion than are either pure silver or sterling silver. The Japanese craft of Mokume-gane exploits the color contrasts between laminated colored gold alloys to produce decorative wood-grain effects.
By 2014, the gold jewelry industry was escalating despite a dip in gold prices. Demand in the first quarter of 2014 pushed turnover to $23.7 billion according to a World Gold Council report.
Gold solder is used for joining the components of gold jewelry by high-temperature hard soldering or brazing. If the work is to be of hallmarking quality, the gold solder alloy must match the fineness (purity) of the work, and alloy formulas are manufactured to color-match yellow and white gold. Gold solder is usually made in at least three melting-point ranges referred to as Easy, Medium and Hard. By using the hard, high-melting point solder first, followed by solders with progressively lower melting points, goldsmiths can assemble complex items with several separate soldered joints. Gold can also be made into thread and used in embroidery.
Electronics
Only 10% of the world consumption of new gold produced goes to industry, but by far the most important industrial use for new gold is in fabrication of corrosion-free electrical connectors in computers and other electrical devices. For example, according to the World Gold Council, a typical cell phone may contain 50 mg of gold, worth about 50 cents. But since nearly one billion cell phones are produced each year, a gold value of 50 cents in each phone adds up to $500 million in gold from just this application.
Though gold is attacked by free chlorine, its good conductivity and general resistance to oxidation and corrosion in other environments (including resistance to non-chlorinated acids) has led to its widespread industrial use in the electronic era as a thin-layer coating on electrical connectors, thereby ensuring good connection. For example, gold is used in the connectors of the more expensive electronics cables, such as audio, video and USB cables. The benefit of using gold over other connector metals such as tin in these applications has been debated; gold connectors are often criticized by audio-visual experts as unnecessary for most consumers and seen as simply a marketing ploy. However, the use of gold in other applications in electronic sliding contacts in highly humid or corrosive atmospheres, and for contacts with a very high failure cost (certain computers, communications equipment, spacecraft, jet aircraft engines), remains very common.
Besides sliding electrical contacts, gold is also used in electrical contacts because of its resistance to corrosion, electrical conductivity, ductility and lack of toxicity. Switch contacts are generally subjected to more intense corrosion stress than are sliding contacts. Fine gold wires are used to connect semiconductor devices to their packages through a process known as wire bonding.
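As a rough sanity check of the cell-phone figures quoted above (about 50 mg of gold per phone, roughly one billion phones per year, about 50 cents of gold per phone):

```python
GOLD_PER_PHONE_G = 0.050         # 50 mg of gold per phone
PHONES_PER_YEAR = 1_000_000_000  # ~1 billion phones produced annually
VALUE_PER_PHONE_USD = 0.50       # ~50 cents of gold per phone

gold_tonnes = GOLD_PER_PHONE_G * PHONES_PER_YEAR / 1_000_000  # grams -> tonnes
value_usd = VALUE_PER_PHONE_USD * PHONES_PER_YEAR

print(gold_tonnes)  # 50.0 tonnes of gold per year in this one application
print(value_usd)    # 500000000.0, the $500 million cited in the text
```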
The concentration of free electrons in gold metal is 5.91×10²² cm⁻³. Gold is highly conductive to electricity, and has been used for electrical wiring in some high-energy applications (only silver and copper are more conductive per volume, but gold has the advantage of corrosion resistance). For example, gold electrical wires were used during some of the Manhattan Project's atomic experiments, but large high-current silver wires were used in the calutron isotope separator magnets in the project.
It is estimated that 16% of the world's presently accounted-for gold and 22% of the world's silver is contained in electronic technology in Japan.
Medicine
Metallic gold and gold compounds have long been used for medicinal purposes. Gold, usually as the metal, is perhaps the most anciently administered medicine (apparently by shamanic practitioners) and was known to Dioscorides. In medieval times, gold was often seen as beneficial for health, in the belief that something so rare and beautiful could not be anything but healthy. Even some modern esotericists and forms of alternative medicine assign metallic gold a healing power.
In the 19th century gold had a reputation as an anxiolytic, a therapy for nervous disorders. Depression, epilepsy, migraine, and glandular problems such as amenorrhea and impotence were treated, and most notably alcoholism (Keeley, 1897).
The apparent paradox of the actual toxicology of the substance suggests the possibility of serious gaps in the understanding of the action of gold in physiology. Only salts and radioisotopes of gold are of pharmacological value, since elemental (metallic) gold is inert to all chemicals it encounters inside the body (e.g., ingested gold cannot be attacked by stomach acid). Some gold salts do have anti-inflammatory properties, and at present two are still used as pharmaceuticals in the treatment of arthritis and other similar conditions in the US (sodium aurothiomalate and auranofin). These drugs have been explored as a means to help reduce the pain and swelling of rheumatoid arthritis, and also (historically) against tuberculosis and some parasites.
Gold alloys are used in restorative dentistry, especially in tooth restorations such as crowns and permanent bridges. The gold alloys' slight malleability facilitates the creation of a superior molar mating surface with other teeth and produces results that are generally more satisfactory than those produced by the creation of porcelain crowns. The use of gold crowns in more prominent teeth such as incisors is favored in some cultures and discouraged in others.
Colloidal gold preparations (suspensions of gold nanoparticles) in water are intensely red-colored, and can be made with tightly controlled particle sizes up to a few tens of nanometers across by reduction of gold chloride with citrate or ascorbate ions. Colloidal gold is used in research applications in medicine, biology and materials science. The technique of immunogold labeling exploits the ability of the gold particles to adsorb protein molecules onto their surfaces. Colloidal gold particles coated with specific antibodies can be used as probes for the presence and position of antigens on the surfaces of cells. In ultrathin sections of tissues viewed by electron microscopy, the immunogold labels appear as extremely dense round spots at the position of the antigen.
Gold, or alloys of gold and palladium, are applied as a conductive coating to biological specimens and other non-conducting materials such as plastics and glass to be viewed in a scanning electron microscope. The coating, which is usually applied by sputtering with an argon plasma, has a triple role in this application. Gold's very high electrical conductivity drains electrical charge to earth, and its very high density provides stopping power for electrons in the electron beam, helping to limit the depth to which the electron beam penetrates the specimen. This improves definition of the position and topography of the specimen surface and increases the spatial resolution of the image. Gold also produces a high output of secondary electrons when irradiated by an electron beam, and these low-energy electrons are the most commonly used signal source in the scanning electron microscope.
The isotope gold-198 (half-life 2.7 days) is used in nuclear medicine, in some cancer treatments and for treating other diseases.
Cuisine
Gold can be used in food and has the E number 175. In 2016, the European Food Safety Authority published an opinion on the re-evaluation of gold as a food additive. Concerns included the possible presence of minute amounts of gold nanoparticles in the food additive, and that gold nanoparticles have been shown to be genotoxic in mammalian cells in vitro.
Gold leaf, flake or dust is used on and in some gourmet foods, notably sweets and drinks, as a decorative ingredient. Gold flake was used by the nobility in medieval Europe as a decoration in food and drinks, in the form of leaf, flakes or dust, either to demonstrate the host's wealth or in the belief that something so valuable and rare must be beneficial for one's health.
Danziger Goldwasser (German: Gold water of Danzig) or Goldwasser (English: Goldwater) is a traditional German herbal liqueur produced in what is today Gdańsk, Poland, and Schwabach, Germany, and contains flakes of gold leaf. There are also some expensive (c. $1000) cocktails which contain flakes of gold leaf. However, since metallic gold is inert to all body chemistry, it has no taste, it provides no nutrition, and it leaves the body unaltered.
Vark is a foil composed of a pure metal that is sometimes gold, and is used for garnishing sweets in South Asian cuisine.
Miscellanea
Gold produces a deep, intense red color when used as a coloring agent in cranberry glass.
In photography, gold toners are used to shift the color of silver bromide black-and-white prints towards brown or blue tones, or to increase their stability. Used on sepia-toned prints, gold toners produce red tones. Kodak published formulas for several types of gold toners, which use gold as the chloride.
Gold is a good reflector of electromagnetic radiation such as infrared and visible light, as well as radio waves. It is used for the protective coatings on many artificial satellites, in infrared protective faceplates in thermal-protection suits and astronauts helmets, and in electronic warfare planes such as the EA-6B Prowler.
Gold is used as the reflective layer on some high-end CDs.
Automobiles may use gold for heat shielding. McLaren uses gold foil in the engine compartment of its F1 model.
Gold can be manufactured so thin that it appears semi-transparent. It is used in some aircraft cockpit windows for de-icing or anti-icing by passing electricity through it. The heat produced by the resistance of the gold is enough to prevent ice from forming.
Gold is attacked by and dissolves in alkaline solutions of potassium or sodium cyanide, to form the salt gold cyanide—a technique that has been used in extracting metallic gold from ores in the cyanide process. Gold cyanide is the electrolyte used in commercial electroplating of gold onto base metals and electroforming.
Gold chloride (chloroauric acid) solutions are used to make colloidal gold by reduction with citrate or ascorbate ions. Gold chloride and gold oxide are used to make cranberry or red-colored glass, which, like colloidal gold suspensions, contains evenly sized spherical gold nanoparticles.
Gold, when dispersed in nanoparticles, can act as a heterogeneous catalyst of chemical reactions.
Toxicity
Pure metallic (elemental) gold is non-toxic and non-irritating when ingested and is sometimes used as a food decoration in the form of gold leaf. Metallic gold is also a component of the alcoholic drinks Goldschläger, Gold Strike, and Goldwasser. Metallic gold is approved as a food additive in the EU (E175 in the Codex Alimentarius). Although the gold ion is toxic, the acceptance of metallic gold as a food additive is due to its relative chemical inertness, and resistance to being corroded or transformed into soluble salts (gold compounds) by any known chemical process which would be encountered in the human body.
Soluble compounds (gold salts) such as gold chloride are toxic to the liver and kidneys. Common cyanide salts of gold such as potassium gold cyanide, used in gold electroplating, are toxic by virtue of both their cyanide and gold content. There are rare cases of lethal gold poisoning from potassium gold cyanide. Gold toxicity can be ameliorated with chelation therapy with an agent such as dimercaprol.
Gold metal was voted Allergen of the Year in 2001 by the American Contact Dermatitis Society; gold contact allergies affect mostly women. Despite this, gold is a relatively non-potent contact allergen in comparison with metals like nickel.
A sample of the fungus Aspergillus niger was found growing from gold-mining solution and was found to contain cyano-metal complexes of gold, silver, copper, iron and zinc. The fungus also plays a role in the solubilization of heavy metal sulfides.
See also
References
Further reading
Bachmann, H. G. The lure of gold : an artistic and cultural history (2006) online
Bernstein, Peter L. The Power of Gold: The History of an Obsession (2000) online
Brands, H.W. The Age of Gold: The California Gold Rush and the New American Dream (2003) excerpt
Buranelli, Vincent. Gold : an illustrated history (1979) online wide-ranging popular history
Cassel, Gustav. "The restoration of the gold standard." Economica 9 (1923): 171-185. online
Eichengreen, Barry. Golden Fetters: The Gold Standard and the Great Depression, 1919–1939 (Oxford UP, 1992).
Ferguson, Niall. The Ascent of Money: A Financial History of the World (2009) online
Hart, Matthew. Gold: The Race for the World's Most Seductive Metal. New York: Simon & Schuster, 2013. ISBN 9781451650020
Johnson, Harry G. "The gold rush of 1968 in retrospect and prospect." American Economic Review 59.2 (1969): 344-348. online
Kwarteng, Kwasi. War and Gold: A Five-Hundred-Year History of Empires, Adventures, and Debt (2014) online
Vilar, Pierre. A History of Gold and Money, 1450 to 1920 (1960). online
Vilches, Elvira. New World Gold: Cultural Anxiety and Monetary Disorder in Early Modern Spain (2010).
External links
"Gold". Encyclopædia Britannica. Vol. 11 (11th ed.). 1911.
Chemistry in its element podcast (MP3) from the Royal Society of Chemistry's Chemistry World: Gold www.rsc.org
Gold at The Periodic Table of Videos (University of Nottingham)
Getting Gold 1898 book, www.lateralscience.co.uk
Technical Document on Extraction and Mining of Gold at the Wayback Machine (archived 7 March 2008), www.epa.gov
Gold element information - rsc.org
Granuloma annulare | Granuloma annulare (GA) is a common, sometimes chronic skin condition which presents as reddish bumps on the skin arranged in a circle or ring. It can initially occur at any age, though two-thirds of patients are under 30 years old, and it is seen most often in children and young adults. Females are twice as likely to have it as males.
Signs and symptoms
Aside from the visible rash, granuloma annulare is usually asymptomatic. Sometimes the rash may burn or itch. People with GA usually notice a ring of small, firm bumps (papules) over the backs of the forearms, hands or feet, often centered on joints or knuckles. The bumps are caused by the clustering of T cells below the skin. These papules start as very small, pimple-like bumps and spread over time to the size of a dime, quarter, half-dollar and beyond. Occasionally, multiple rings may join into one. Rarely, GA may appear as a firm nodule under the skin of the arms or legs. It also occurs on the sides of the body and circumferentially around the waist, and without therapy can persist for many years. Outbreaks continue to develop at the edges of the aging rings.
Causes
The condition is usually seen in otherwise healthy people. Occasionally, it may be associated with diabetes or thyroid disease. It has also been associated with auto-immune diseases such as systemic lupus erythematosus, rheumatoid arthritis, Lyme disease and Addison's disease. To date, however, no conclusive causal connection has been established among patients.
Pathology
Granuloma annulare microscopically consists of dermal epithelioid histiocytes around a central zone of mucin—a so-called palisaded granuloma.
Pathogenesis
Granuloma annulare is an idiopathic condition, though many catalysts have been proposed, among them skin trauma, UV exposure, vaccinations, tuberculin skin testing, and Borrelia and viral infections. The mechanisms proposed at a molecular level vary even more. In 1977, Dahl et al. proposed that, since the lesions of GA often display thickening, occlusion, or other trauma to blood vessels, blood vessels may be responsible for GA. From their study of 58 patients, they found that immunoglobulin M (IgM), complement, and fibrinogen were present in the blood vessels of GA areas, suggesting that GA may share similarities with an immune-mediated type III reaction or that chronic immune vasculitis may be involved in the pathogenesis. Another study found evidence suggesting blood vessel involvement, with masses of intercellular fibrin and thickened basal lamina found around capillaries. Umbert et al. (1976) proposed an alternative pathogenesis: cell-mediated immunity. Their data suggest that lymphokines, such as macrophage migration inhibitory factor (MIF), lead to sequestration of macrophages and histiocytes in the dermis. Then, upon lysosomal enzyme release by these sequestered cells, connective tissue damage ensues, which results in GA. Later, these authors found data suggesting that activation of macrophages and fibroblasts is involved in the pathogenesis of GA, and that fibrin and the rare IgM and C3 deposition around vessels more likely reflect a delayed-type hypersensitivity with resulting tissue and vessel changes than an immune-complex-mediated disease. Further data have been collected supporting this finding.
Diagnosis
Types
Granuloma annulare may be divided into the following types:
Localized granuloma annulare
Generalized granuloma annulare
Patch-type granuloma annulare
Subcutaneous granuloma annulare
Perforating granuloma annulare
Treatment
Because granuloma annulare is usually asymptomatic and self-limiting, with a course of about two years, initial treatment is generally topical steroids or calcineurin inhibitors; if unimproved with topical treatments, it may be treated with intradermal injections of steroids. If local treatment fails, it may be treated with systemic corticosteroids. Treatment success varies widely, with most patients finding only brief success with the above-mentioned treatments. Most lesions of granuloma annulare disappear in pre-pubertal patients without treatment within two years, while older patients (50+) can have rings for upwards of 20 years. The appearance of new rings years later is not uncommon.
History
The disease was first described in 1895 by Thomas Colcott Fox as a "ringed eruption of the fingers", and it was named granuloma annulare by Henry Radcliffe Crocker in 1902.
See also
Granuloma
Necrobiosis lipoidica
References
External links
Granuloma annulare at DermNet
Helicobacter pylori | Helicobacter pylori, previously known as Campylobacter pylori, is a gram-negative, microaerophilic, spiral (helical) bacterium usually found in the stomach. Its helical shape (from which the genus name, Helicobacter, derives) is thought to have evolved to penetrate the mucoid lining of the stomach and thereby establish infection. The bacterium was first identified in 1982 by the Australian doctors Barry Marshall and Robin Warren. H. pylori has been associated with cancer of the mucosa-associated lymphoid tissue in the stomach, esophagus, colon, rectum, or tissues around the eye (termed extranodal marginal zone B-cell lymphoma of the cited organ), and of lymphoid tissue in the stomach (termed diffuse large B-cell lymphoma). H. pylori infection usually has no symptoms but sometimes causes gastritis (stomach inflammation) or ulcers of the stomach or first part of the small intestine. The infection is also associated with the development of certain cancers. Many investigators have suggested that H. pylori causes or prevents a wide range of other diseases, but many of these relationships remain controversial. Some studies suggest that H. pylori plays an important role in the natural stomach ecology, e.g. by influencing the type of bacteria that colonize the gastrointestinal tract. Other studies suggest that non-pathogenic strains of H. pylori may beneficially normalize stomach acid secretion and regulate appetite. In 2015, it was estimated that over 50% of the world's population had H. pylori in their upper gastrointestinal tracts, with this infection (or colonization) being more common in developing countries. In recent decades, however, the prevalence of H. pylori colonization of the gastrointestinal tract has declined in many countries.
Signs and symptoms
Up to 90% of people infected with H. pylori never experience symptoms or complications. However, individuals infected with H. pylori have a 10% to 20% lifetime risk of developing peptic ulcers. Acute infection may appear as an acute gastritis with abdominal pain (stomach ache) or nausea. Where this develops into chronic gastritis, the symptoms, if present, are often those of non-ulcer dyspepsia: stomach pains, nausea, bloating, belching, and sometimes vomiting. Pain typically occurs when the stomach is empty, between meals, and in the early morning hours, but it can also occur at other times. Less common ulcer symptoms include nausea, vomiting, and loss of appetite.
Bleeding in the stomach can also occur, as evidenced by the passage of black stools; prolonged bleeding may cause anemia leading to weakness and fatigue. If bleeding is heavy, hematemesis, hematochezia, or melena may occur. Inflammation of the pyloric antrum, which connects the stomach to the duodenum, is more likely to lead to duodenal ulcers, while inflammation of the corpus (i.e. body of the stomach) is more likely to lead to gastric ulcers. Individuals infected with H. pylori may also develop colorectal or gastric polyps, i.e. non-cancerous growths of tissue projecting from the mucous membranes of these organs. Usually, these polyps are asymptomatic, but gastric polyps may be the cause of dyspepsia, heartburn, bleeding from the upper gastrointestinal tract, and, rarely, gastric outlet obstruction, while colorectal polyps may be the cause of rectal bleeding, anemia, constipation, diarrhea, weight loss, and abdominal pain. Individuals with chronic H. pylori infection have an increased risk of acquiring a cancer that is directly related to this infection. These cancers are stomach adenocarcinoma, less commonly diffuse large B-cell lymphoma of the stomach, or extranodal marginal zone B-cell lymphomas of the stomach, or, more rarely, of the colon, rectum, esophagus, or ocular adnexa (i.e. orbit, conjunctiva, and/or eyelids). The signs, symptoms, pathophysiology, and diagnoses of these cancers are given in the cited linkages.
Microbiology
Morphology
Helicobacter pylori is a helix-shaped (classified as a curved rod, not spirochaete) Gram-negative bacterium about 3 μm long with a diameter of about 0.5 μm. H. pylori can be demonstrated in tissue by Gram stain, Giemsa stain, haematoxylin–eosin stain, Warthin–Starry silver stain, acridine orange stain, and phase-contrast microscopy. It is capable of forming biofilms and can convert from a spiral to a possibly viable but nonculturable coccoid form. Helicobacter pylori has four to six flagella at the same location; all gastric and enterohepatic Helicobacter species are highly motile owing to flagella. The characteristic sheathed flagellar filaments of Helicobacter are composed of two copolymerized flagellins, FlaA and FlaB.
Physiology
Helicobacter pylori is microaerophilic – that is, it requires oxygen, but at lower concentration than in the atmosphere. It contains a hydrogenase that can produce energy by oxidizing molecular hydrogen (H2) made by intestinal bacteria. It produces oxidase, catalase, and urease.
H. pylori possesses five major outer membrane protein families. The largest family includes known and putative adhesins. The other four families are porins, iron transporters, flagellum-associated proteins, and proteins of unknown function. Like other typical Gram-negative bacteria, the outer membrane of H. pylori consists of phospholipids and lipopolysaccharide (LPS). The O antigen of LPS may be fucosylated and mimic Lewis blood group antigens found on the gastric epithelium. The outer membrane also contains cholesterol glucosides, which are present in few other bacteria.
Genome
Helicobacter pylori consists of a large diversity of strains, and hundreds of genomes have been completely sequenced. The genome of the strain "26695" consists of about 1.7 million base pairs, with some 1,576 genes. The pan-genome, that is, the combined gene set of 30 sequenced strains, encodes 2,239 protein families (orthologous groups, OGs). Among them, 1,248 OGs are conserved in all 30 strains and represent the universal core. The remaining 991 OGs correspond to the accessory genome, in which 277 OGs are unique (i.e., OGs present in only one strain).
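The pan-genome bookkeeping above can be checked with a few lines of arithmetic. The counts come directly from the text; the variable names are illustrative:

```python
# Pan-genome arithmetic for the 30 sequenced H. pylori strains,
# using the orthologous-group (OG) counts reported above.
total_ogs = 2239      # protein families in the pan-genome
core_ogs = 1248       # OGs conserved in all 30 strains (universal core)
unique_ogs = 277      # OGs present in only one strain

accessory_ogs = total_ogs - core_ogs   # 991, as stated in the text
print(f"accessory genome: {accessory_ogs} OGs")
print(f"strain-unique share of accessory genome: {unique_ogs / accessory_ogs:.0%}")
```

About 28% of the accessory genome is strain-unique, which illustrates how much of the species' gene content varies between isolates.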
Transcriptome
In 2010, Sharma et al. presented a comprehensive analysis of transcription at single-nucleotide resolution by differential RNA-seq that confirmed the known acid induction of major virulence loci, such as the urease (ure) operon or the cag pathogenicity island (see below). More importantly, this study identified a total of 1,907 transcriptional start sites, 337 primary operons, 126 additional suboperons, and 66 monocistrons. Until 2010, only about 55 transcriptional start sites (TSSs) were known in this species. Notably, 27% of the primary TSSs are also antisense TSSs, indicating that – similar to E. coli – antisense transcription occurs across the entire H. pylori genome. At least one antisense TSS is associated with about 46% of all open reading frames, including many housekeeping genes. Most (about 50%) of the 5′ UTRs are 20–40 nucleotides (nt) in length and support the AAGGag motif located about 6 nt (median distance) upstream of start codons as the consensus Shine–Dalgarno sequence in H. pylori.
Genes involved in virulence and pathogenesis
Study of the H. pylori genome is centered on attempts to understand pathogenesis, the ability of this organism to cause disease. About 29% of the loci have a colonization defect when mutated. Two of the sequenced strains have an around 40 kb-long cag pathogenicity island (a common gene sequence believed responsible for pathogenesis) that contains over 40 genes. This pathogenicity island is usually absent from H. pylori strains isolated from humans who are carriers of H. pylori but remain asymptomatic. The cagA gene codes for one of the major H. pylori virulence proteins. Bacterial strains with the cagA gene are associated with an ability to cause ulcers. The cagA gene codes for a relatively long (1186-amino-acid) protein. The cag pathogenicity island (PAI) has about 30 genes, part of which code for a complex type IV secretion system. The low GC-content of the cag PAI relative to the rest of the Helicobacter genome suggests the island was acquired by horizontal transfer from another bacterial species. The serine protease HtrA also plays a major role in the pathogenesis of H. pylori. The HtrA protein enables the bacterium to transmigrate across the host epithelium and is also needed for the translocation of CagA. The vacA (Q48245) gene codes for another major H. pylori virulence protein. There are four main subtypes of vacA: s1/m1, s1/m2, s2/m1, and s2/m2. The s1/m1 and s1/m2 subtypes are known to cause increased risk of gastric cancer. This has been linked to the ability of toxigenic VacA to promote the generation of intracellular reservoirs of H. pylori via disruption of the calcium channel TRPML1.
Proteome
The proteins of H. pylori have been systematically analyzed by multiple studies. As a consequence, more than 70% of its proteome has been detected by mass spectrometry and other biochemical methods. In fact, about 50% of the proteome has been quantified, that is, we know how many copies of each protein are present in a typical cell. Furthermore, the interactome of H. pylori has been systematically studied, and more than 3,000 protein-protein interactions have been identified. The latter provide information about how proteins interact with each other, e.g. in stable protein complexes or in more dynamic, transient interactions. This in turn helps researchers to find out the function of uncharacterized proteins, e.g. when an uncharacterized protein interacts with several proteins of the ribosome, it is likely also involved in ribosome function. Nevertheless, about a third of all ~1,500 proteins in H. pylori remain uncharacterized, and their function is largely unknown.
Pathophysiology
Adaptation to the stomach
To avoid the acidic environment of the interior of the stomach (lumen), H. pylori uses its flagella to burrow into the mucus lining of the stomach to reach the epithelial cells underneath, where it is less acidic. H. pylori is able to sense the pH gradient in the mucus and move towards the less acidic region (chemotaxis). This also keeps the bacteria from being swept away into the lumen with the bacteria's mucus environment, which is constantly moving from its site of creation at the epithelium to its dissolution at the lumen interface.
H. pylori is found in the mucus, on the inner surface of the epithelium, and occasionally inside the epithelial cells themselves. It adheres to the epithelial cells by producing adhesins, which bind to lipids and carbohydrates in the epithelial cell membrane. One such adhesin, BabA, binds to the Lewis b antigen displayed on the surface of stomach epithelial cells. H. pylori adherence via BabA is acid-sensitive and can be fully reversed by decreased pH. It has been proposed that BabA's acid responsiveness enables adherence while also allowing an effective escape from an unfavorable environment at a pH that is harmful to the organism. Another such adhesin, SabA, binds to increased levels of sialyl-Lewis X (sLeX) antigen expressed on the gastric mucosa. In addition to using chemotaxis to avoid areas of low pH, H. pylori also neutralizes the acid in its environment by producing large amounts of urease, which breaks down the urea present in the stomach to carbon dioxide and ammonia. These react with the strong acids in the environment to produce a neutralized area around H. pylori. Urease knockout mutants are incapable of colonization. In fact, urease expression is not only required for establishing initial colonization but also for maintaining chronic infection.
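The neutralization described above can be written as a simple reaction scheme (a simplified sketch; carbamate and bicarbonate intermediates are omitted):

```latex
\underbrace{\mathrm{CO(NH_2)_2}}_{\text{urea}} + \mathrm{H_2O}
  \xrightarrow{\ \text{urease}\ } \mathrm{CO_2} + 2\,\mathrm{NH_3},
\qquad
\mathrm{NH_3} + \mathrm{H^+} \longrightarrow \mathrm{NH_4^+}
```

Each molecule of urea thus yields two molecules of ammonia, each of which can buffer one proton, consistent with the observation above that urease knockout mutants are incapable of colonization.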
Adaptation of H. pylori to high acidity of stomach
As mentioned above, H. pylori produces large amounts of urease, generating ammonia as one of its adaptations to stomach acidity. Helicobacter pylori arginase, a binuclear Mn(II) metalloenzyme of the ureohydrolase family that is crucial for the pathogenesis of the bacterium in the human stomach, catalyzes the conversion of L-arginine to L-ornithine and urea; ornithine is further converted into polyamines, which are essential for various critical metabolic processes. This provides acid resistance and is thus important for colonization of the bacterium in the gastric epithelium. Arginase of H. pylori also plays a role in evasion of the host immune system, mainly by competing with host-inducible nitric oxide (NO) synthase for their common substrate L-arginine, thereby reducing the synthesis of NO, an important component of innate immunity and an effective antimicrobial agent able to kill invading pathogens directly. Alterations in the availability of L-arginine and its metabolism into polyamines contribute significantly to the dysregulation of the host immune response to H. pylori infection.
Inflammation, gastritis and ulcer
Helicobacter pylori harms the stomach and duodenal linings by several mechanisms. The ammonia produced to regulate pH is toxic to epithelial cells, as are biochemicals produced by H. pylori such as proteases, vacuolating cytotoxin A (VacA) (this damages epithelial cells, disrupts tight junctions and causes apoptosis), and certain phospholipases. The cytotoxin-associated gene product CagA can also cause inflammation and is potentially a carcinogen. Colonization of the stomach by H. pylori can result in chronic gastritis, an inflammation of the stomach lining, at the site of infection. Helicobacter cysteine-rich proteins (Hcp), particularly HcpA (hp0211), are known to trigger an immune response, causing inflammation. H. pylori has been shown to increase the levels of COX2 in H. pylori-positive gastritis.
Chronic gastritis is likely to underlie H. pylori-related diseases. Ulcers in the stomach and duodenum result when the consequences of inflammation allow stomach acid and the digestive enzyme pepsin to overwhelm the mechanisms that protect the stomach and duodenal mucous membranes. The location of colonization of H. pylori, which affects the location of the ulcer, depends on the acidity of the stomach.
In people producing large amounts of acid, H. pylori colonizes near the pyloric antrum (exit to the duodenum) to avoid the acid-secreting parietal cells at the fundus (near the entrance to the stomach). In people producing normal or reduced amounts of acid, H. pylori can also colonize the rest of the stomach.
The inflammatory response caused by bacteria colonizing near the pyloric antrum induces G cells in the antrum to secrete the hormone gastrin, which travels through the bloodstream to parietal cells in the fundus. Gastrin stimulates the parietal cells to secrete more acid into the stomach lumen, and over time increases the number of parietal cells, as well. The increased acid load damages the duodenum, which may eventually result in ulcers forming in the duodenum.
When H. pylori colonizes other areas of the stomach, the inflammatory response can result in atrophy of the stomach lining and eventually ulcers in the stomach. This also may increase the risk of stomach cancer.
Cag pathogenicity island
The pathogenicity of H. pylori may be increased by genes of the cag pathogenicity island; about 50–70% of H. pylori strains in Western countries carry it. Western people infected with strains carrying the cag PAI have a stronger inflammatory response in the stomach and are at a greater risk of developing peptic ulcers or stomach cancer than those infected with strains lacking the island. Following attachment of H. pylori to stomach epithelial cells, the type IV secretion system expressed by the cag PAI "injects" the inflammation-inducing agent, peptidoglycan, from the bacterium's own cell wall into the epithelial cells. The injected peptidoglycan is recognized by the cytoplasmic pattern recognition receptor (immune sensor) Nod1, which then stimulates expression of cytokines that promote inflammation. The type IV secretion apparatus also injects the cag PAI-encoded protein CagA into the stomach's epithelial cells, where it disrupts the cytoskeleton, adherence to adjacent cells, intracellular signaling, cell polarity, and other cellular activities. Once inside the cell, the CagA protein is phosphorylated on tyrosine residues by a host cell membrane-associated tyrosine kinase (TK). CagA then allosterically activates protein tyrosine phosphatase/protooncogene Shp2. Pathogenic strains of H. pylori have been shown to activate the epidermal growth factor receptor (EGFR), a membrane protein with a TK domain. Activation of the EGFR by H. pylori is associated with altered signal transduction and gene expression in host epithelial cells that may contribute to pathogenesis. A C-terminal region of the CagA protein (amino acids 873–1002) has also been suggested to be able to regulate host cell gene transcription, independent of protein tyrosine phosphorylation. A great deal of diversity exists between strains of H. pylori, and the strain that infects a person can predict the outcome.
Cancer
Two related mechanisms by which H. pylori could promote cancer are under investigation. One mechanism involves the enhanced production of free radicals near H. pylori and an increased rate of host cell mutation. The other proposed mechanism has been called a "perigenetic pathway", and involves enhancement of the transformed host cell phenotype by means of alterations in cell proteins, such as adhesion proteins. H. pylori has been proposed to induce inflammation and locally high levels of TNF-α and/or interleukin 6 (IL-6). According to the proposed perigenetic mechanism, inflammation-associated signaling molecules, such as TNF-α, can alter gastric epithelial cell adhesion and lead to the dispersion and migration of mutated epithelial cells without the need for additional mutations in tumor suppressor genes, such as genes that code for cell adhesion proteins. The strain of H. pylori a person is exposed to may influence the risk of developing gastric cancer. Strains of H. pylori that produce high levels of two proteins, vacuolating toxin A (VacA) and the cytotoxin-associated gene A (CagA), appear to cause greater tissue damage than those that produce lower levels or that lack those genes completely. These proteins are directly toxic to cells lining the stomach and signal strongly to the immune system that an invasion is under way. As a result of the bacterial presence, neutrophils and macrophages set up residence in the tissue to fight the bacterial assault. H. pylori is a major source of worldwide cancer mortality. Although the data vary between different countries, overall about 1% to 3% of people infected with Helicobacter pylori develop gastric cancer in their lifetime, compared to 0.13% of individuals who have had no H. pylori infection. H. pylori infection is very prevalent. As evaluated in 2002, it is present in the gastric tissues of 74% of middle-aged adults in developing countries and 58% in developed countries.
Since 1% to 3% of infected individuals are likely to develop gastric cancer, H. pylori-induced gastric cancer is the third highest cause of worldwide cancer mortality as of 2018. Infection by H. pylori causes no symptoms in about 80% of those infected. About 75% of individuals infected with H. pylori develop gastritis. Thus, the usual consequence of H. pylori infection is chronic asymptomatic gastritis. Because of the usual lack of symptoms, when gastric cancer is finally diagnosed it is often fairly advanced. More than half of gastric cancer patients have lymph node metastasis when they are initially diagnosed. The gastritis caused by H. pylori is accompanied by inflammation, characterized by infiltration of neutrophils and macrophages to the gastric epithelium, which favors the accumulation of pro-inflammatory cytokines and reactive oxygen species/reactive nitrogen species (ROS/RNS). The substantial presence of ROS/RNS causes DNA damage including 8-oxo-2′-deoxyguanosine (8-OHdG). If the infecting H. pylori carry the cytotoxic cagA gene (present in about 60% of Western isolates and a higher percentage of Asian isolates), they can increase the level of 8-OHdG in gastric cells 8-fold, while if the H. pylori do not carry the cagA gene, the increase in 8-OHdG is about 4-fold. In addition to the oxidative DNA damage 8-OHdG, H. pylori infection causes other characteristic DNA damage, including DNA double-strand breaks. H. pylori also causes many epigenetic alterations linked to cancer development. These epigenetic alterations are due to H. pylori-induced methylation of CpG sites in promoters of genes and H. pylori-induced altered expression of multiple microRNAs. As reviewed by Santos and Ribeiro, H. pylori infection is associated with epigenetically reduced efficiency of the DNA repair machinery, which favors the accumulation of mutations and genomic instability as well as gastric carcinogenesis. In particular, Raza et al.
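The lifetime-risk figures quoted above imply a large relative risk. A rough, illustrative calculation using only the percentages given in the text:

```python
# Approximate relative risk of gastric cancer implied by the quoted
# lifetime incidences: 1-3% in H. pylori-infected individuals versus
# 0.13% in uninfected individuals. Illustrative arithmetic only.
p_uninfected = 0.0013
for p_infected in (0.01, 0.03):
    rr = p_infected / p_uninfected
    print(f"infected lifetime risk {p_infected:.0%} -> relative risk ~{rr:.0f}x")
```

By these figures, infection raises lifetime gastric-cancer risk roughly 8- to 23-fold, which is why eradication trials in high-risk populations (discussed under Prevention) are of such interest.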
showed that expression of two DNA repair proteins, ERCC1 and PMS2, was severely reduced once H. pylori infection had progressed to cause dyspepsia. Dyspepsia occurs in about 20% of infected individuals. In addition, as reviewed by Raza et al., human gastric infection with H. pylori causes epigenetically reduced protein expression of DNA repair proteins MLH1, MGMT and MRE11. Reduced DNA repair in the presence of increased DNA damage increases carcinogenic mutations and is likely a significant cause of H. pylori carcinogenesis.
Survival of Helicobacter pylori
The pathogenesis of H. pylori depends on its ability to survive in the harsh gastric environment characterized by acidity, peristalsis, and attack by phagocytes accompanied by release of reactive oxygen species. In particular, H. pylori elicits an oxidative stress response during host colonization. This oxidative stress response induces potentially lethal and mutagenic oxidative DNA adducts in the H. pylori genome. Vulnerability to oxidative stress and oxidative DNA damage occurs commonly in many studied bacterial pathogens, including Neisseria gonorrhoeae, Haemophilus influenzae, Streptococcus pneumoniae, S. mutans, and H. pylori. For each of these pathogens, surviving the DNA damage induced by oxidative stress appears to be supported by transformation-mediated recombinational repair. Thus, transformation and recombinational repair appear to contribute to successful infection.
Transformation (the transfer of DNA from one bacterial cell to another through the intervening medium) appears to be part of an adaptation for DNA repair. H. pylori is naturally competent for transformation. While many organisms are competent only under certain environmental conditions, such as starvation, H. pylori is competent throughout logarithmic growth. All organisms encode genetic programs for response to stressful conditions including those that cause DNA damage. In H. pylori, homologous recombination is required for repairing DNA double-strand breaks (DSBs). The AddAB helicase-nuclease complex resects DSBs and loads RecA onto single-strand DNA (ssDNA), which then mediates strand exchange, leading to homologous recombination and repair. The requirement of RecA plus AddAB for efficient gastric colonization suggests that, in the stomach, H. pylori is either exposed to double-strand DNA damage that must be repaired or requires some other recombination-mediated event. In particular, natural transformation is increased by DNA damage in H. pylori, and a connection exists between the DNA damage response and DNA uptake in H. pylori, suggesting natural competence contributes to persistence of H. pylori in its human host and explains the retention of competence in most clinical isolates.
RuvC protein is essential to the process of recombinational repair, since it resolves intermediates in this process termed Holliday junctions. H. pylori mutants that are defective in RuvC have increased sensitivity to DNA-damaging agents and to oxidative stress, exhibit reduced survival within macrophages, and are unable to establish successful infection in a mouse model. Similarly, RecN protein plays an important role in DSB repair in H. pylori. An H. pylori recN mutant displays an attenuated ability to colonize mouse stomachs, highlighting the importance of recombinational DNA repair in survival of H. pylori within its host.
Diagnosis
Colonization with H. pylori is not a disease in itself, but a condition associated with a number of disorders of the upper gastrointestinal tract. Testing is recommended if peptic ulcer disease or low-grade gastric MALT lymphoma (MALToma) is present, after endoscopic resection of early gastric cancer, for first-degree relatives with gastric cancer, and in certain cases of dyspepsia. Several methods of testing exist, including invasive and noninvasive testing methods.
Noninvasive tests for H. pylori infection may be suitable and include blood antibody tests, stool antigen tests, or the carbon urea breath test (in which the patient drinks 14C- or 13C-labelled urea, which the bacterium metabolizes, producing labelled carbon dioxide that can be detected in the breath). It is not known for sure which non-invasive test is most accurate for diagnosing H. pylori infection, but indirect comparison suggests the urea breath test has higher accuracy than the others. An endoscopic biopsy is an invasive means to test for H. pylori infection. Low-level infections can be missed by biopsy, so multiple samples are recommended. The most accurate method for detecting H. pylori infection is a histological examination from two sites after endoscopic biopsy, combined with either a rapid urease test or microbial culture.
Transmission
Helicobacter pylori is contagious, although the exact route of transmission is not known.
Person-to-person transmission by either the oral–oral (kissing, mouth feeding) or fecal–oral route is most likely. Consistent with these transmission routes, the bacteria have been isolated from feces, saliva, and dental plaque of some infected people. Findings suggest H. pylori is more easily transmitted by gastric mucus than saliva. Transmission occurs mainly within families in developed nations, yet can also be acquired from the community in developing countries. H. pylori may also be transmitted orally by means of fecal matter through the ingestion of waste-tainted water, so a hygienic environment could help decrease the risk of H. pylori infection.
Prevention
Due to H. pylori's role as a major cause of certain diseases (particularly cancers) and its consistently increasing antibiotic resistance, there is a clear need for new therapeutic strategies to prevent the bacterium from colonizing humans or to remove it. Much work has been done on developing viable vaccines aimed at providing an alternative strategy to control H. pylori infection and related diseases. Researchers are studying different adjuvants, antigens, and routes of immunization to ascertain the most appropriate system of immune protection; however, most of the research only recently moved from animal to human trials. An economic evaluation of the use of a potential H. pylori vaccine in babies found its introduction could, at least in the Netherlands, prove cost-effective for the prevention of peptic ulcer and stomach adenocarcinoma. A similar approach has also been studied for the United States. Notwithstanding this proof-of-concept (i.e. vaccination protects children from acquisition of infection with H. pylori), as of late 2019 there have been no advanced vaccine candidates and only one vaccine in a Phase I clinical trial. Furthermore, development of a vaccine against H. pylori has not been a current priority of major pharmaceutical companies.

Many investigations have attempted to prevent the development of Helicobacter pylori-related diseases by eradicating the bacterium during the early stages of its infection using antibiotic-based drug regimens. Studies find that such treatments, when they effectively eradicate H. pylori from the stomach, reduce the inflammation and some of the histopathological abnormalities associated with the infection. However, studies disagree on the ability of these treatments to alleviate the more serious histopathological abnormalities in H. pylori infections, e.g. gastric atrophy and metaplasia, both of which are precursors to gastric adenocarcinoma. There is similar disagreement on the ability of antibiotic-based regimens to prevent gastric adenocarcinoma. A meta-analysis (i.e. a statistical analysis that combines the results of multiple randomized controlled trials) published in 2014 found that these regimens did not appear to prevent development of this adenocarcinoma. However, two subsequent prospective cohort studies conducted on high-risk individuals in China and Taiwan found that eradication of the bacterium produced a significant decrease in the number of individuals developing the disease. These results agreed with a retrospective cohort study done in Japan and published in 2016, as well as a meta-analysis, also published in 2016, of 24 studies conducted on individuals with varying levels of risk for developing the disease. These more recent studies suggest that eradication of H. pylori infection reduces the incidence of H. pylori-related gastric adenocarcinoma in individuals at all levels of baseline risk.
Further studies will be required to clarify this issue. In any event, studies agree that antibiotic-based regimens effectively reduce the occurrence of metachronous H. pylori-associated gastric adenocarcinoma. (Metachronous cancers are cancers that recur six months or later after resection of the original cancer.) It is therefore suggested that antibiotic-based drug regimens be used after resecting H. pylori-associated gastric adenocarcinoma in order to reduce its metachronous recurrence.
Treatment
Gastritis
Superficial gastritis, either acute or chronic, is the most common manifestation of H. pylori infection. The signs and symptoms of this gastritis have been found to remit spontaneously in many individuals without resorting to Helicobacter pylori eradication protocols; the H. pylori bacterial infection persists after remission in these cases. Various antibiotic plus proton pump inhibitor drug regimens are used to eradicate the bacterium and thereby successfully treat the disorder, with triple-drug therapy consisting of clarithromycin, amoxicillin, and a proton-pump inhibitor given for 14–21 days often considered first-line treatment.
Peptic ulcers
Once H. pylori is detected in a person with a peptic ulcer, the normal procedure is to eradicate it and allow the ulcer to heal. The standard first-line therapy is a one-week "triple therapy" consisting of a proton-pump inhibitor such as omeprazole and the antibiotics clarithromycin and amoxicillin. (The actions of proton pump inhibitors against H. pylori may reflect a direct bacteriostatic effect due to inhibition of the bacterium's P-type ATPase and/or urease.) Variations of the triple therapy have been developed over the years, such as using a different proton pump inhibitor, as with pantoprazole or rabeprazole, or replacing amoxicillin with metronidazole for people who are allergic to penicillin. In areas with higher rates of clarithromycin resistance, other options are recommended. Such therapy has revolutionized the treatment of peptic ulcers and has made curing the disease possible. Previously, the only option was symptom control using antacids, H2-antagonists, or proton pump inhibitors alone.
Antibiotic-resistant disease
An increasing number of infected individuals are found to harbor antibiotic-resistant bacteria. This results in initial treatment failure and requires additional rounds of antibiotic therapy or alternative strategies, such as a quadruple therapy, which adds a bismuth colloid such as bismuth subsalicylate. In patients with any previous macrolide exposure or who are allergic to penicillin, a quadruple therapy consisting of a proton pump inhibitor, bismuth, tetracycline, and a nitroimidazole for 10–14 days is a recommended first-line treatment option. For the treatment of clarithromycin-resistant strains of H. pylori, the use of levofloxacin as part of the therapy has been suggested.

Ingesting lactic acid bacteria exerts a suppressive effect on H. pylori infection in both animals and humans, and supplementing with Lactobacillus- and Bifidobacterium-containing yogurt improved the rates of eradication of H. pylori in humans. Symbiotic butyrate-producing bacteria that are normally present in the intestine are sometimes used as probiotics to help suppress H. pylori infections as an adjunct to antibiotic therapy. Butyrate itself is an antimicrobial that destroys the cell envelope of H. pylori; acting as a histone deacetylase inhibitor, it also induces regulatory T cell expression (specifically, FOXP3) and synthesis of an antimicrobial peptide called LL-37. The substance sulforaphane, which occurs in broccoli and cauliflower, has been proposed as a treatment. Periodontal therapy or scaling and root planing has also been suggested as an additional treatment.
Cancers
Extranodal marginal zone B-cell lymphomas
Extranodal marginal zone B-cell lymphomas (also termed MALT lymphomas) are generally indolent malignancies. Recommended treatment of H. pylori-positive extranodal marginal zone B-cell lymphoma of the stomach, when localized (i.e. Ann Arbor stage I and II), employs one of the antibiotic-proton pump inhibitor regimens listed in the H. pylori eradication protocols. If the initial regimen fails to eradicate the pathogen, patients are treated with an alternate protocol. Eradication of the pathogen is successful in 70–95% of cases. Some 50–80% of patients who experience eradication of the pathogen develop a remission and long-term clinical control of their lymphoma within 3–28 months. Radiation therapy to the stomach and surrounding (i.e. peri-gastric) lymph nodes has also been used to successfully treat these localized cases. Patients with non-localized (i.e. systemic Ann Arbor stage III and IV) disease who are free of symptoms have been treated with watchful waiting or, if symptomatic, with the immunotherapy drug rituximab (given for 4 weeks) combined with the chemotherapy drug chlorambucil for 6–12 months; these patients attain a 58% progression-free survival rate at 5 years. Frail stage III/IV patients have been successfully treated with rituximab or the chemotherapy drug cyclophosphamide alone. Only rare cases of H. pylori-positive extranodal marginal zone B-cell lymphoma of the colon have been successfully treated with an antibiotic-proton pump inhibitor regimen; the currently recommended treatments for this disease are surgical resection, endoscopic resection, radiation, chemotherapy, or, more recently, rituximab. In the few reported cases of H. pylori-positive extranodal marginal zone B-cell lymphoma of the esophagus, localized disease has been successfully treated with antibiotic-proton pump inhibitor regimens; however, advanced disease appears less responsive or unresponsive to these regimens but partially responsive to rituximab.
Antibiotic-proton pump inhibitor eradication therapy and localized radiation therapy have been used successfully to treat H. pylori-positive extranodal marginal zone B-cell lymphomas of the rectum; however, radiation therapy has given slightly better results and has therefore been suggested as the disease's preferred treatment. Treatment of localized H. pylori-positive extranodal marginal zone B-cell lymphoma of the ocular adnexa with antibiotic/proton pump inhibitor regimens has achieved 2-year and 5-year failure-free survival rates of 67% and 55%, respectively, and a 5-year progression-free rate of 61%. However, the generally recognized treatment of choice for patients with systemic involvement uses various chemotherapy drugs, often combined with rituximab.
Diffuse large B-cell lymphoma
Diffuse large B-cell lymphoma is a far more aggressive cancer than extranodal marginal zone B-cell lymphoma. Cases of this malignancy that are H. pylori-positive may be derived from the latter lymphoma and are less aggressive as well as more susceptible to treatment than H. pylori-negative cases. Several recent studies strongly suggest that localized, early-stage H. pylori-positive diffuse large B-cell lymphoma, when limited to the stomach, can be successfully treated with antibiotic-proton pump inhibitor regimens. However, these studies also agree that, given the aggressiveness of diffuse large B-cell lymphoma, patients treated with one of these H. pylori eradication regimens need to be carefully followed. If found unresponsive to or clinically worsening on these regimens, these patients should be switched to more conventional therapy such as chemotherapy (e.g. CHOP or a CHOP-like regimen), immunotherapy (e.g. rituximab), surgery, and/or local radiotherapy. H. pylori-positive diffuse large B-cell lymphoma has been successfully treated with one or a combination of these methods.
Stomach adenocarcinoma
Helicobacter pylori is linked to the majority of gastric adenocarcinoma cases, particularly those located outside of the stomach's cardia (i.e. the esophagus-stomach junction). The treatment for this cancer is highly aggressive, with even localized disease treated sequentially with chemotherapy and radiotherapy before surgical resection. Since this cancer, once developed, is independent of H. pylori infection, antibiotic-proton pump inhibitor regimens are not used in its treatment.
Prognosis
Helicobacter pylori colonizes the stomach and induces chronic gastritis, a long-lasting inflammation of the stomach. The bacterium persists in the stomach for decades in most people. Most individuals infected by H. pylori never experience clinical symptoms, despite having chronic gastritis. About 10–20% of those colonized by H. pylori ultimately develop gastric and duodenal ulcers. H. pylori infection is also associated with a 1–2% lifetime risk of stomach cancer and a less than 1% risk of gastric MALT lymphoma.

In the absence of treatment, H. pylori infection – once established in its gastric niche – is widely believed to persist for life. In the elderly, however, the infection can likely disappear as the stomach's mucosa becomes increasingly atrophic and inhospitable to colonization. The proportion of acute infections that persist is not known, but several studies that followed the natural history in populations have reported apparent spontaneous elimination.

It is possible for H. pylori to re-establish in a person after eradication. This recurrence can be caused by the original strain (recrudescence) or by a different strain (reinfection). According to a 2017 meta-analysis by Hu et al., the global per-person annual rates of recurrence, reinfection, and recrudescence are 4.3%, 3.1%, and 2.2%, respectively. It is unclear what the main risk factors are.

Mounting evidence suggests H. pylori has an important role in protection from some diseases. The incidence of acid reflux disease, Barrett's esophagus, and esophageal cancer has been rising dramatically at the same time as H. pylori's presence decreases. In 1996, Martin J. Blaser advanced the hypothesis that H. pylori has a beneficial effect by regulating the acidity of the stomach contents. The hypothesis is not universally accepted, as several randomized controlled trials failed to demonstrate worsening of acid reflux disease symptoms following eradication of H. pylori.
Nevertheless, Blaser has reasserted his view that H. pylori is a member of the normal flora of the stomach. He postulates that the changes in gastric physiology caused by the loss of H. pylori account for the recent increase in incidence of several diseases, including type 2 diabetes, obesity, and asthma. His group has recently shown that H. pylori colonization is associated with a lower incidence of childhood asthma.
Epidemiology
At least half the world's population is infected by the bacterium, making it the most widespread infection in the world. Actual infection rates vary from nation to nation; the developing world has much higher infection rates than the developed one (notably Western Europe, North America, Australasia), where rates are estimated to be around 25%.

The age at which someone acquires this bacterium seems to influence the pathologic outcome of the infection. People infected at an early age are likely to develop more intense inflammation that may be followed by atrophic gastritis with a higher subsequent risk of gastric ulcer, gastric cancer, or both. Acquisition at an older age brings different gastric changes more likely to lead to duodenal ulcer. Infections are usually acquired in early childhood in all countries. However, the infection rate of children in developing nations is higher than in industrialized nations, probably due to poor sanitary conditions, perhaps combined with lower antibiotic usage for unrelated pathologies. In developed nations, it is currently uncommon to find infected children, but the percentage of infected people increases with age, with about 50% infected for those over the age of 60, compared with around 10% between 18 and 30 years. The higher prevalence among the elderly reflects higher infection rates in the past, when the individuals were children, rather than more recent infection at a later age. In the United States, prevalence appears higher in African-American and Hispanic populations, most likely due to socioeconomic factors. The lower rate of infection in the West is largely attributed to higher hygiene standards and widespread use of antibiotics. Despite high rates of infection in certain areas of the world, the overall frequency of H. pylori infection is declining. However, antibiotic resistance is appearing in H. pylori; many metronidazole- and clarithromycin-resistant strains are found in most parts of the world.
History
Helicobacter pylori migrated out of Africa along with its human host circa 60,000 years ago. Recent research states that genetic diversity in H. pylori, like that of its host, decreases with geographic distance from East Africa. Using the genetic diversity data, researchers have created simulations that indicate the bacteria seem to have spread from East Africa around 58,000 years ago. Their results indicate modern humans were already infected by H. pylori before their migrations out of Africa, and it has remained associated with human hosts since that time.

H. pylori was first discovered in the stomachs of patients with gastritis and ulcers in 1982 by Drs. Barry Marshall and Robin Warren of Perth, Western Australia. At the time, the conventional thinking was that no bacterium could live in the acid environment of the human stomach. In recognition of their discovery, Marshall and Warren were awarded the 2005 Nobel Prize in Physiology or Medicine.

Before the research of Marshall and Warren, German scientists found spiral-shaped bacteria in the lining of the human stomach in 1875, but they were unable to culture them, and the results were eventually forgotten. The Italian researcher Giulio Bizzozero described similarly shaped bacteria living in the acidic environment of the stomach of dogs in 1893. Professor Walery Jaworski of the Jagiellonian University in Kraków investigated sediments of gastric washings obtained by lavage from humans in 1899. Among some rod-like bacteria, he also found bacteria with a characteristic spiral shape, which he called Vibrio rugula. He was the first to suggest a possible role of this organism in the pathogenesis of gastric diseases. His work was included in the Handbook of Gastric Diseases, but it had little impact, as it was written in Polish. Several small studies conducted in the early 20th century demonstrated the presence of curved rods in the stomachs of many people with peptic ulcers and stomach cancers.
Interest in the bacteria waned, however, when an American study published in 1954 failed to observe the bacteria in 1180 stomach biopsies. Interest in understanding the role of bacteria in stomach diseases was rekindled in the 1970s, with the visualization of bacteria in the stomachs of people with gastric ulcers. The bacteria had also been observed in 1979 by Robin Warren, who researched them further with Barry Marshall from 1981. After unsuccessful attempts at culturing the bacteria from the stomach, they finally succeeded in visualizing colonies in 1982, when they unintentionally left their Petri dishes incubating for five days over the Easter weekend. In their original paper, Warren and Marshall contended that most stomach ulcers and gastritis were caused by bacterial infection and not by stress or spicy food, as had been assumed before.

Some skepticism was expressed initially, but within a few years multiple research groups had verified the association of H. pylori with gastritis and, to a lesser extent, ulcers. To demonstrate that H. pylori caused gastritis and was not merely a bystander, Marshall drank a beaker of H. pylori culture. He became ill with nausea and vomiting several days later. An endoscopy 10 days after inoculation revealed signs of gastritis and the presence of H. pylori. These results suggested H. pylori was the causative agent. Marshall and Warren went on to demonstrate that antibiotics are effective in the treatment of many cases of gastritis. In 1994, the National Institutes of Health stated that most recurrent duodenal and gastric ulcers were caused by H. pylori, and recommended that antibiotics be included in the treatment regimen.

The bacterium was initially named Campylobacter pyloridis, then renamed C. pylori in 1987 (pylori being the genitive of pylorus, the circular opening leading from the stomach into the duodenum, from the Ancient Greek word πυλωρός, meaning "gatekeeper").
When 16S ribosomal RNA gene sequencing and other research showed in 1989 that the bacterium did not belong in the genus Campylobacter, it was placed in its own genus, Helicobacter, from the Ancient Greek έλιξ (hělix), "spiral" or "coil".

In October 1987, a group of experts met in Copenhagen to found the European Helicobacter Study Group (EHSG), an international multidisciplinary research group and the only institution focused on H. pylori. The Group is involved with the Annual International Workshop on Helicobacter and Related Bacteria, the Maastricht Consensus Reports (European Consensus on the management of H. pylori), and other educational and research projects, including two international long-term projects:
European Registry on H. pylori Management (Hp-EuReg) – a database systematically registering the routine clinical practice of European gastroenterologists.
Optimal H. pylori management in primary care (OptiCare) – a long-term educational project aiming to disseminate the evidence based recommendations of the Maastricht IV Consensus to primary care physicians in Europe, funded by an educational grant from United European Gastroenterology.
Research
Results from in vitro studies suggest that fatty acids, mainly polyunsaturated fatty acids, have a bactericidal effect against H. pylori, but their in vivo effects have not been proven.
See also
List of oncogenic bacteria
Infectious causes of cancer
Explanatory footnotes
References
External links
"Information on tests for H. pylori". National Institutes of Health. U.S. Department of Health and Human Services. Archived from the original on 13 June 2013.
"European Helicobacter Study Group (EHSG)".
"Type strain of Helicobacter pylori at BacDive". Bacterial Diversity Metadatabase.
"Helicobacter pylori". Genome. KEGG. Japan. 26695. |
Hemorrhoid | Hemorrhoids (or haemorrhoids), also known as piles, are vascular structures in the anal canal. In their normal state, they are cushions that help with stool control. They become a disease when swollen or inflamed; the unqualified term "hemorrhoid" is often used to refer to the disease. The signs and symptoms of hemorrhoids depend on the type present. Internal hemorrhoids often result in painless, bright red rectal bleeding when defecating. External hemorrhoids often result in pain and swelling in the area of the anus. If bleeding occurs, it is usually darker. Symptoms frequently get better after a few days. A skin tag may remain after the healing of an external hemorrhoid.

While the exact cause of hemorrhoids remains unknown, a number of factors that increase pressure in the abdomen are believed to be involved. These may include constipation, diarrhea, and sitting on the toilet for long periods. Hemorrhoids are also more common during pregnancy. Diagnosis is made by looking at the area. Many people incorrectly refer to any symptom occurring around the anal area as "hemorrhoids", and serious causes of the symptoms should be ruled out. Colonoscopy or sigmoidoscopy is reasonable to confirm the diagnosis and rule out more serious causes.

Often, no specific treatment is needed. Initial measures consist of increasing fiber intake, drinking fluids to maintain hydration, NSAIDs to help with pain, and rest. Medicated creams may be applied to the area, but their effectiveness is poorly supported by evidence. A number of minor procedures may be performed if symptoms are severe or do not improve with conservative management. Surgery is reserved for those who fail to improve following these measures.

Approximately 50% to 66% of people have problems with hemorrhoids at some point in their lives. Males and females are both affected with about equal frequency.
Hemorrhoids affect people most often between 45 and 65 years of age, and they are more common among the wealthy. Outcomes are usually good. The first known mention of the disease is from a 1700 BC Egyptian papyrus.
Signs and symptoms
In about 40% of people with pathological hemorrhoids, there are no significant symptoms. Internal and external hemorrhoids may present differently; however, many people may have a combination of the two. Bleeding enough to cause anemia is rare, and life-threatening bleeding is even more uncommon. Many people feel embarrassed when facing the problem and often seek medical care only when the case is advanced.
External
If not thrombosed, external hemorrhoids may cause few problems. However, when thrombosed, hemorrhoids may be very painful. Nevertheless, this pain typically resolves in two to three days. The swelling may, however, take a few weeks to disappear. A skin tag may remain after healing. If hemorrhoids are large and cause issues with hygiene, they may produce irritation of the surrounding skin, and thus itchiness around the anus.

Lidocaine is a local anesthetic that blocks sodium channels, preventing the transmission of nerve signals before they reach the central nervous system. As a result, the patient does not feel pain. The drug also has anti-inflammatory properties and is effective in treating hemorrhoids. Lidocaine is not recommended during pregnancy or in people with a local allergy to it.
Internal
Internal hemorrhoids usually present with painless, bright red rectal bleeding during or following a bowel movement. The blood typically covers the stool (a condition known as hematochezia), is on the toilet paper, or drips into the toilet bowl. The stool itself is usually normally coloured. Other symptoms may include mucous discharge, a perianal mass if they prolapse through the anus, itchiness, and fecal incontinence. Internal hemorrhoids are usually painful only if they become thrombosed or necrotic.
Causes
The exact cause of symptomatic hemorrhoids is unknown. A number of factors are believed to play a role, including irregular bowel habits (constipation or diarrhea), lack of exercise, nutritional factors (low-fiber diets), increased intra-abdominal pressure (prolonged straining, ascites, an intra-abdominal mass, or pregnancy), genetics, an absence of valves within the hemorrhoidal veins, and aging. Other factors believed to increase risk include obesity, prolonged sitting, a chronic cough, and pelvic floor dysfunction. Squatting while defecating may also increase the risk of severe hemorrhoids. Evidence for these associations, however, is poor.

During pregnancy, pressure from the fetus on the abdomen and hormonal changes cause the hemorrhoidal vessels to enlarge. The birth of the baby also leads to increased intra-abdominal pressures. Pregnant women rarely need surgical treatment, as symptoms usually resolve after delivery.
Pathophysiology
Hemorrhoid cushions are a part of normal human anatomy and become a pathological disease only when they experience abnormal changes. There are three main cushions present in the normal anal canal, located classically at the left lateral, right anterior, and right posterior positions. They are composed of neither arteries nor veins, but of blood vessels called sinusoids, along with connective tissue and smooth muscle. Sinusoids do not have muscle tissue in their walls, as veins do. This set of blood vessels is known as the hemorrhoidal plexus.

Hemorrhoid cushions are important for continence. They contribute to 15–20% of anal closure pressure at rest and protect the internal and external anal sphincter muscles during the passage of stool. When a person bears down, the intra-abdominal pressure grows, and hemorrhoid cushions increase in size, helping maintain anal closure. Hemorrhoid symptoms are believed to result when these vascular structures slide downwards or when venous pressure is excessively increased. Increased internal and external anal sphincter pressure may also be involved in hemorrhoid symptoms. Two types of hemorrhoids occur: internal hemorrhoids, arising from the superior hemorrhoidal plexus, and external hemorrhoids, arising from the inferior hemorrhoidal plexus. The pectinate line divides the two regions.
Diagnosis
Hemorrhoids are typically diagnosed by physical examination. A visual examination of the anus and surrounding area may diagnose external or prolapsed hemorrhoids. A rectal exam may be performed to detect possible rectal tumors, polyps, an enlarged prostate, or abscesses. This examination may not be possible without appropriate sedation because of pain, although most internal hemorrhoids are not associated with pain. Visual confirmation of internal hemorrhoids may require anoscopy, insertion of a hollow tube device with a light attached at one end. The two types of hemorrhoids are external and internal. These are differentiated by their position with respect to the pectinate line. Some persons may concurrently have symptomatic versions of both. If pain is present, the condition is more likely to be an anal fissure or external hemorrhoid rather than internal hemorrhoid.
Internal
Internal hemorrhoids originate above the pectinate line. They are covered by columnar epithelium, which lacks pain receptors. They were classified in 1985 into four grades based on the degree of prolapse:
Grade I: No prolapse, just prominent blood vessels
Grade II: Prolapse upon bearing down, but spontaneous reduction
Grade III: Prolapse upon bearing down requiring manual reduction
Grade IV: Prolapse with inability to be manually reduced
External
External hemorrhoids occur below the dentate (or pectinate) line. They are covered proximally by anoderm and distally by skin, both of which are sensitive to pain and temperature.
Differential
Many anorectal problems, including fissures, fistulae, abscesses, colorectal cancer, rectal varices, and itching, have similar symptoms and may be incorrectly referred to as hemorrhoids. Rectal bleeding may also occur owing to colorectal cancer, colitis (including inflammatory bowel disease), diverticular disease, and angiodysplasia. If anemia is present, other potential causes should be considered.

Other conditions that produce an anal mass include skin tags, anal warts, rectal prolapse, polyps, and enlarged anal papillae. Anorectal varices due to portal hypertension (elevated blood pressure in the portal venous system) may present similarly to hemorrhoids but are a different condition. Portal hypertension does not increase the risk of hemorrhoids.
Prevention
A number of preventative measures are recommended, including avoiding straining while attempting to defecate, avoiding constipation and diarrhea (either by eating a high-fiber diet and drinking plenty of fluid or by taking fiber supplements), and getting sufficient exercise. Spending less time attempting to defecate, avoiding reading while on the toilet, losing weight for overweight persons, and avoiding heavy lifting are also recommended.
Management
Conservative
Conservative treatment typically consists of foods rich in dietary fiber, intake of oral fluids to maintain hydration, nonsteroidal anti-inflammatory drugs, sitz baths, and rest. Increased fiber intake has been shown to improve outcomes and may be achieved by dietary alterations or the consumption of fiber supplements. Evidence for benefits from sitz baths during any point in treatment, however, is lacking; if they are used, they should be limited to 15 minutes at a time. Decreasing time spent on the toilet and not straining are also recommended.

While many topical agents and suppositories are available for the treatment of hemorrhoids, little evidence supports their use. As such, they are not recommended by the American Society of Colon and Rectal Surgeons. Steroid-containing agents should not be used for more than 14 days, as they may cause thinning of the skin. Most agents include a combination of active ingredients. These may include a barrier cream such as petroleum jelly or zinc oxide, an analgesic agent such as lidocaine, and a vasoconstrictor such as epinephrine. Some contain balsam of Peru, to which certain people may be allergic.

Flavonoids are of questionable benefit, with potential side effects. Symptoms usually resolve following pregnancy; thus, active treatment is often delayed until after delivery. Evidence does not support the use of traditional Chinese herbal treatment. Several professional organizations weakly recommend the use of phlebotonics for the treatment of symptoms of grade I to II hemorrhoids, although these drugs are not approved in the United States (as of 2013) or in Germany, and are restricted in Spain to the treatment of chronic venous diseases.
Procedures
A number of office-based procedures may be performed. While generally safe, rare serious side effects such as perianal sepsis may occur.
Rubber band ligation is typically recommended as the first-line treatment in those with grade I to III disease. In this procedure, elastic bands are applied to an internal hemorrhoid at least 1 cm above the pectinate line to cut off its blood supply. Within 5–7 days, the withered hemorrhoid falls off. If the band is placed too close to the pectinate line, intense pain results immediately afterwards. The cure rate has been found to be about 87%, with a complication rate of up to 3%.
Sclerotherapy involves the injection of a sclerosing agent, such as phenol, into the hemorrhoid. This causes the vein walls to collapse and the hemorrhoids to shrivel up. The success rate four years after treatment is about 70%.
A number of cauterization methods have been shown to be effective for hemorrhoids, but are usually used only when other methods fail. This procedure can be done using electrocautery, infrared radiation, laser surgery, or cryosurgery. Infrared cauterization may be an option for grade I or II disease. In those with grade III or IV disease, recurrence rates are high.
Surgery
A number of surgical techniques may be used if conservative management and simple procedures fail. All surgical treatments are associated with some degree of complications, including bleeding, infection, anal strictures, and urinary retention, due to the close proximity of the rectum to the nerves that supply the bladder. There is also a small risk of fecal incontinence, particularly of liquid stool, with rates reported between 0% and 28%. Mucosal ectropion is another condition which may occur after hemorrhoidectomy (often together with anal stenosis). This is where the anal mucosa becomes everted from the anus, similar to a very mild form of rectal prolapse.
Excisional hemorrhoidectomy is a surgical excision of the hemorrhoid used primarily only in severe cases. It is associated with significant postoperative pain and usually requires two to four weeks for recovery. However, the long-term benefit is greater in those with grade III hemorrhoids as compared to rubber band ligation. It is the recommended treatment in those with a thrombosed external hemorrhoid if carried out within 24–72 hours. Evidence to support this is weak, however. Glyceryl trinitrate ointment after the procedure helps both with pain and with healing.
Doppler-guided transanal hemorrhoidal dearterialization is a minimally invasive treatment using an ultrasound Doppler to accurately locate the arterial blood inflow. These arteries are then "tied off" and the prolapsed tissue is sutured back to its normal position. It has a slightly higher recurrence rate but fewer complications compared to a hemorrhoidectomy.
Stapled hemorrhoidectomy, also known as stapled hemorrhoidopexy, involves the removal of much of the abnormally enlarged hemorrhoidal tissue, followed by a repositioning of the remaining hemorrhoidal tissue back to its normal anatomical position. It is generally less painful and is associated with faster healing compared to complete removal of hemorrhoids. However, the chance of symptomatic hemorrhoids returning is greater than for conventional hemorrhoidectomy, so it is typically recommended only for grade II or III disease.
Epidemiology
It is difficult to determine how common hemorrhoids are, as many people with the condition do not see a healthcare provider. However, symptomatic hemorrhoids are thought to affect at least 50% of the US population at some time during their lives, and around 5% of the population is affected at any given time. Both sexes experience about the same incidence of the condition, with rates peaking between 45 and 65 years of age. They are more common in Caucasians and those of higher socioeconomic status. Long-term outcomes are generally good, though some people may have recurrent symptomatic episodes. Only a small proportion of persons end up needing surgery.
History
The first known mention of this disease is from a 1700 BCE Egyptian papyrus, which advises: "... Thou shouldest give a recipe, an ointment of great protection; acacia leaves, ground, triturated and cooked together. Smear a strip of fine linen there-with and place in the anus, that he recovers immediately." In 460 BCE, the Hippocratic corpus discusses a treatment similar to modern rubber band ligation: "And hemorrhoids in like manner you may treat by transfixing them with a needle and tying them with very thick and woolen thread, for application, and do not foment until they drop off, and always leave one behind; and when the patient recovers, let him be put on a course of Hellebore." Hemorrhoids may have been described in the Bible, with earlier English translations using the now-obsolete spelling "emerods". Celsus (25 BCE – 14 CE) described ligation and excision procedures and discussed the possible complications. Galen advocated severing the connection of the arteries to veins, claiming it reduced both pain and the spread of gangrene. The Susruta Samhita (4th–5th century BCE) echoes the words of Hippocrates, but emphasizes wound cleanliness. In the 13th century, European surgeons such as Lanfranc of Milan, Guy de Chauliac, Henri de Mondeville, and John of Ardene made great progress in the development of surgical techniques. In medieval times, hemorrhoids were also known as Saint Fiacre's curse, after a sixth-century saint who developed them after tilling the soil. The first use of the word "hemorrhoid" in English occurs in 1398, derived from the Old French "emorroides", from Latin hæmorrhoida, in turn from the Greek αἱμορροΐς (haimorrhois), "liable to discharge blood", from αἷμα (haima), "blood", and ῥόος (rhoos), "stream, flow, current", itself from ῥέω (rheo), "to flow, to stream".
Notable cases
Hall of Fame baseball player George Brett was removed from a game in the 1980 World Series due to hemorrhoid pain. After undergoing minor surgery, Brett returned to play in the next game, quipping, "My problems are all behind me". Brett underwent further hemorrhoid surgery the following spring. Conservative political commentator Glenn Beck underwent surgery for hemorrhoids, subsequently describing his unpleasant experience in a widely viewed 2008 YouTube video. Former U.S. President Jimmy Carter had surgery for hemorrhoids in 1984. Cricketers Matthew Hayden and Viv Richards have also had the condition.
References
External links
Hemorrhoid at Curlie
Davis, BR; Lee-Kong, SA; Migaly, J; Feingold, DL; Steele, SR (March 2018). "The American Society of Colon and Rectal Surgeons Clinical Practice Guidelines for the Management of Hemorrhoids". Diseases of the Colon and Rectum. 61 (3): 284–292. doi:10.1097/DCR.0000000000001030. PMID 29420423. S2CID 4198610. |
Heparin-induced thrombocytopenia | Heparin-induced thrombocytopenia (HIT) is the development of thrombocytopenia (a low platelet count), due to the administration of various forms of heparin, an anticoagulant. HIT predisposes to thrombosis (the abnormal formation of blood clots inside a blood vessel) because platelets release microparticles that activate thrombin, thereby leading to thrombosis. When thrombosis is identified, the condition is called heparin-induced thrombocytopenia and thrombosis (HITT). HIT is caused by the formation of abnormal antibodies that activate platelets. If someone receiving heparin develops new or worsening thrombosis, or if the platelet count falls, HIT can be confirmed with specific blood tests. The treatment of HIT requires stopping heparin treatment, and both protection from thrombosis and choice of an agent that will not reduce the platelet count any further. Several alternatives are available for this purpose; those mainly used are danaparoid, fondaparinux, argatroban, and bivalirudin. While heparin was discovered in the 1930s, HIT was not reported until the 1960s.
Signs and symptoms
Heparin may be used for both the prevention and the treatment of thrombosis. It exists in two main forms: an "unfractionated" form that can be injected under the skin (subcutaneously) or through an intravenous infusion, and a "low molecular weight" form that is generally given subcutaneously. Commonly used low molecular weight heparins are enoxaparin, dalteparin, nadroparin, and tinzaparin. In HIT, the platelet count in the blood falls below the normal range, a condition called thrombocytopenia. However, it is generally not low enough to lead to an increased risk of bleeding. Most people with HIT, therefore, do not experience any symptoms. Typically, the platelet count falls 5–14 days after heparin is first given; if someone has received heparin in the previous three months, the fall in platelet count may occur sooner, sometimes within a day. The most common symptom of HIT is enlargement or extension of a previously diagnosed blood clot, or the development of a new blood clot elsewhere in the body. This may take the form of clots either in arteries or veins, causing arterial or venous thrombosis, respectively. Examples of arterial thrombosis are stroke, myocardial infarction ("heart attack"), and acute leg ischemia. Venous thrombosis may occur in the leg or arm in the form of deep vein thrombosis (DVT) and in the lung in the form of a pulmonary embolism (PE); the latter usually originates in the leg but migrates to the lung. In those receiving heparin through an intravenous infusion, a complex of symptoms ("systemic reaction") may occur when the infusion is started. These include fever, chills, high blood pressure, a fast heart rate, shortness of breath, and chest pain. This happens in about a quarter of people with HIT. Others may develop a skin rash consisting of red spots.
Mechanism
The administration of heparin can cause the development of HIT antibodies, suggesting heparin may act as a hapten and thus be targeted by the immune system. In HIT, the immune system forms antibodies against heparin when it is bound to a protein called platelet factor 4 (PF4). These antibodies are usually of the IgG class, and their development usually takes about 5 days. However, those who have been exposed to heparin in the last few months may still have circulating IgG, as IgG-type antibodies generally continue to be produced even when their precipitant has been removed. This is similar to immunity against certain microorganisms, with the difference that the HIT antibody does not persist more than three months. HIT antibodies have been found in individuals with thrombocytopenia and thrombosis who had no prior exposure to heparin, but the majority are found in people who are receiving heparin. The IgG antibodies form a complex with heparin and PF4 in the bloodstream. The tail of the antibody then binds to the FcγIIa receptor, a protein on the surface of the platelet. This results in platelet activation and the formation of platelet microparticles, which initiate the formation of blood clots; the platelet count falls as a result, leading to thrombocytopenia. In addition, the reticuloendothelial system (mostly the spleen) removes the antibody-coated platelets, further contributing to the thrombocytopenia.
Formation of PF4-heparin antibodies is common in people receiving heparin, but only a proportion of these develop thrombocytopenia or thrombosis. This has been referred to as an "iceberg phenomenon".
Diagnosis
HIT may be suspected if blood tests show a falling platelet count in someone receiving heparin, even if the heparin has already been discontinued. Professional guidelines recommend that people receiving heparin have a complete blood count (which includes a platelet count) on a regular basis while receiving heparin. However, not all people with a falling platelet count while receiving heparin turn out to have HIT. The timing and severity of the thrombocytopenia, the occurrence of new thrombosis, and the presence of alternative explanations all determine the likelihood that HIT is present. A commonly used score to predict the likelihood of HIT is the "4 Ts" score, introduced in 2003. A score of 0–8 points is generated; if the score is 0–3, HIT is unlikely. A score of 4–5 indicates intermediate probability, while a score of 6–8 makes it highly likely. Those with a high score may need to be treated with an alternative drug while more sensitive and specific tests for HIT are performed, whereas those with a low score can safely continue receiving heparin, as the likelihood that they have HIT is extremely low. In an analysis of the reliability of the 4 Ts score, a low score had a negative predictive value of 0.998, while an intermediate score had a positive predictive value of 0.14 and a high score a positive predictive value of 0.64; intermediate and high scores, therefore, warrant further investigation.
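The mapping from the 4 Ts total to a pretest probability category can be sketched as a small function. This is illustrative only (the function name is invented here): the four component criteria themselves (thrombocytopenia, timing, thrombosis, other causes) must each be scored clinically before summing.

```python
def four_ts_probability(score: int) -> str:
    """Map a 4 Ts total score (0-8) to a pretest probability of HIT.

    Thresholds follow the categories described in the text:
    0-3 low (HIT unlikely), 4-5 intermediate, 6-8 high.
    """
    if not 0 <= score <= 8:
        raise ValueError("4 Ts score must be between 0 and 8")
    if score <= 3:
        return "low"            # heparin may safely be continued
    if score <= 5:
        return "intermediate"   # warrants laboratory testing
    return "high"               # consider alternative anticoagulation while testing
```

For example, a patient scoring 2 points falls in the low-probability group, while one scoring 6 falls in the high-probability group and would warrant further investigation.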
The first screening test in someone suspected of having HIT is aimed at detecting antibodies against heparin-PF4 complexes. This may be with a laboratory test of the enzyme-linked immunosorbent assay (ELISA) type. The ELISA test, however, detects all circulating antibodies that bind heparin-PF4 complexes, and may also falsely identify antibodies that do not cause HIT. Therefore, those with a positive ELISA are tested further with a functional assay. This test uses platelets and serum from the patient; the platelets are washed and mixed with serum and heparin. The sample is then tested for the release of serotonin, a marker of platelet activation. If this serotonin release assay (SRA) shows high serotonin release, the diagnosis of HIT is confirmed. The SRA test is difficult to perform and is usually only done in regional laboratories. If someone has been diagnosed with HIT, some recommend routine Doppler sonography of the leg veins to identify deep vein thromboses, as these are very common in HIT.
Treatment
Given the fact that HIT predisposes strongly to new episodes of thrombosis, simply discontinuing the heparin administration is insufficient. Generally, an alternative anticoagulant is needed to suppress the thrombotic tendency while the generation of antibodies stops and the platelet count recovers. To make matters more complicated, the other most commonly used anticoagulant, warfarin, should not be used in HIT until the platelet count is at least 150 × 10⁹/L, because a very high risk of warfarin necrosis exists in people with HIT who have low platelet counts. Warfarin necrosis is the development of skin gangrene in those receiving warfarin or a similar vitamin K inhibitor. If the patient was receiving warfarin at the time when HIT is diagnosed, the activity of warfarin is reversed with vitamin K. Transfusing platelets is discouraged, as a theoretical risk indicates that this may worsen the risk of thrombosis; the platelet count is rarely low enough to be the principal cause of significant hemorrhage. Various nonheparin agents are used as alternatives to heparin therapy to provide anticoagulation in those with strongly suspected or proven HIT: danaparoid, fondaparinux, bivalirudin, and argatroban. Not all agents are available in all countries, and not all are approved for this specific use. For instance, argatroban was only recently licensed in the United Kingdom, and danaparoid is not available in the United States. Fondaparinux, a factor Xa inhibitor, is commonly used off-label for HIT treatment in the United States. According to a systematic review, people with HIT treated with lepirudin showed relative risk reductions of clinical outcomes (death, amputation, etc.) of 0.52 and 0.42 when compared with patient controls. In addition, people treated with argatroban for HIT showed relative risk reductions of the above clinical outcomes of 0.20 and 0.18. Lepirudin production stopped on May 31, 2012.
Epidemiology
Up to 8% of patients receiving heparin are at risk of developing HIT antibodies, but only 1–5% of those on heparin will progress to develop HIT with thrombocytopenia, and subsequently one-third of them may develop arterial or venous thrombosis. After vascular surgery, 34% of patients receiving heparin developed HIT antibodies without clinical symptoms. The exact number of cases of HIT in the general population is unknown. What is known is that women receiving heparin after a recent surgical procedure, particularly cardiothoracic surgery, have a higher risk, while the risk is very low in women just before and after giving birth. Some studies have shown that HIT is less common in those receiving low molecular weight heparin.
History
While heparin was introduced for clinical use in the late 1930s, new thrombosis in people treated with heparin was not described until 1957, when vascular surgeons reported the association. The fact that this phenomenon occurred together with thrombocytopenia was reported in 1969; prior to this time, platelet counts were not routinely performed. A 1973 report established HIT as a diagnosis, as well as suggesting that its features were the result of an immune process. Initially, various theories existed about the exact cause of the low platelets in HIT. Gradually, evidence accumulated on the exact underlying mechanism. In 1984–1986, John G. Kelton and colleagues at McMaster University Medical School developed the laboratory tests that could be used to confirm or exclude heparin-induced thrombocytopenia. Treatment was initially limited to aspirin and warfarin, but the 1990s saw the introduction of a number of agents that could provide anticoagulation without a risk of recurrent HIT. Older terminology distinguishes between two forms of heparin-induced thrombocytopenia: type 1 (a mild, non-immune-mediated and self-limiting fall in platelet count) and type 2, the form described above. Currently, the term HIT is used without a modifier to describe the immune-mediated severe form. In 2021, a condition resembling HIT but without heparin exposure was described to explain unusual post-vaccination embolic and thrombotic events after the Oxford–AstraZeneca COVID-19 vaccine. It is a rare adverse event (1:1 million to 1:100,000) resulting from COVID-19 vaccines (particularly adenoviral vector vaccines). This is also known as thrombosis with thrombocytopenia syndrome (TTS).
References
== External links == |
Hepatic porphyria | Hepatic porphyria is a form of porphyria in which toxic porphyrin molecules build up in the liver. Hepatic porphyrias can result from a number of different enzyme deficiencies. Examples include (in order of synthesis pathway):
Acute intermittent porphyria
Porphyria cutanea tarda and Hepatoerythropoietic porphyria
Hereditary coproporphyria
Variegate porphyria
See also
Erythropoietic porphyria
Givosiran
References
External links
Porphyrias,+Hepatic at the US National Library of Medicine Medical Subject Headings (MeSH)
www.drugs-porphyria.com
www.porphyria-europe.com |
Hereditary hemorrhagic telangiectasia | Hereditary hemorrhagic telangiectasia (HHT), also known as Osler–Weber–Rendu disease and Osler–Weber–Rendu syndrome, is a rare autosomal dominant genetic disorder that leads to abnormal blood vessel formation in the skin, mucous membranes, and often in organs such as the lungs, liver, and brain. It may lead to nosebleeds, acute and chronic digestive tract bleeding, and various problems due to the involvement of other organs. Treatment focuses on reducing bleeding from blood vessel lesions, and sometimes surgery or other targeted interventions to remove arteriovenous malformations in organs. Chronic bleeding often requires iron supplements and sometimes blood transfusions. HHT is transmitted in an autosomal dominant fashion, and occurs in one in 5,000–8,000 people in North America. The disease carries the names of Sir William Osler, Henri Jules Louis Marie Rendu, and Frederick Parkes Weber, who described it in the late 19th and early 20th centuries.
Signs and symptoms
Telangiectasias
Telangiectasia (small vascular malformations) may occur in the skin and mucosal linings of the nose and gastrointestinal tract. The most common problem is nosebleeds (epistaxis), which happen frequently from childhood and affect about 90–95% of people with HHT. Lesions on the skin and in the mouth bleed less often but may be considered cosmetically displeasing; they affect about 80%. The skin lesions characteristically occur on the lips, the nose and the fingers, and on the skin of the face in sun-exposed areas. They appear suddenly, with the number increasing over time. About 20% are affected by symptomatic digestive tract lesions, although a higher percentage have lesions that do not cause symptoms. These lesions may bleed intermittently, which is rarely significant enough to be noticed (in the form of bloody vomiting or black stool), but can eventually lead to depletion of iron in the body, resulting in iron-deficiency anemia.
Arteriovenous malformation
Arteriovenous malformations (AVMs, larger vascular malformations) occur in larger organs, predominantly the lungs (pulmonary AVMs) (50%), liver (30–70%) and brain (cerebral AVMs, 10%), with a very small proportion (<1%) of AVMs in the spinal cord. Vascular malformations in the lungs may cause a number of problems. The lungs normally "filter out" bacteria and blood clots from the bloodstream; AVMs bypass the capillary network of the lungs and allow these to migrate to the brain, where bacteria may cause a brain abscess and blood clots may lead to stroke. HHT is the most common cause of lung AVMs: of all people found to have lung AVMs, 70–80% are due to HHT. Bleeding from lung AVMs is relatively unusual, but may cause hemoptysis (coughing up blood) or hemothorax (blood accumulating in the chest cavity). Large vascular malformations in the lung allow oxygen-depleted blood from the right ventricle to bypass the alveoli, meaning that this blood does not have an opportunity to absorb fresh oxygen. This may lead to breathlessness. Large AVMs may lead to platypnea, difficulty in breathing that is more marked when sitting up compared to lying down; this probably reflects changes in blood flow associated with positioning. Very large AVMs cause a marked inability to absorb oxygen, which may be noted by cyanosis (bluish discoloration of the lips and skin), clubbing of the fingernails (often encountered in chronically low oxygen levels), and a humming noise over the affected part of the lung detectable by stethoscope. The symptoms produced by AVMs in the liver depend on the type of abnormal connection that they form between blood vessels. If the connection is between arteries and veins, a large amount of blood bypasses the body's organs, for which the heart compensates by increasing the cardiac output. Eventually congestive cardiac failure develops ("high-output cardiac failure"), with breathlessness and leg swelling among other problems.
If the AVM creates a connection between the portal vein and the blood vessels of the liver, the result may be portal hypertension (increased portal vein pressure), in which collateral blood vessels form in the esophagus (esophageal varices), which may bleed violently; furthermore, the increased pressure may give rise to fluid accumulation in the abdominal cavity (ascites). If the flow in the AVM is in the other direction, portal venous blood flows directly into the veins rather than running through the liver; this may lead to hepatic encephalopathy (confusion due to portal waste products irritating the brain). Rarely, the bile ducts are deprived of blood, leading to severe cholangitis (inflammation of the bile ducts). Liver AVMs are detectable in over 70% of people with HHT, but only 10% experience problems as a result. In the brain, AVMs occasionally exert pressure, leading to headaches. They may also increase the risk of seizures, as would any abnormal tissue in the brain. Finally, hemorrhage from an AVM may lead to intracerebral hemorrhage (bleeding into the brain), which causes any of the symptoms of stroke, such as weakness in part of the body or difficulty speaking. If the bleeding occurs into the subarachnoid space (subarachnoid hemorrhage), there is usually a severe, sudden headache, a decreased level of consciousness, and often weakness in part of the body.
Other problems
A very small proportion (those affected by SMAD4 (MADH4) mutations, see below) have multiple benign polyps in the large intestine, which may bleed or transform into colorectal cancer. A similarly small proportion experiences pulmonary hypertension, a state in which the pressure in the lung arteries is increased, exerting pressure on the right side of the heart and causing peripheral edema (swelling of the legs), fainting and attacks of chest pain. It has been observed that the risk of thrombosis (particularly venous thrombosis, in the form of deep vein thrombosis or pulmonary embolism) may be increased. There is a suspicion that those with HHT may have a mild immunodeficiency and are therefore at a slightly increased risk from infections.
Genetics
HHT is a genetic disorder with an autosomal dominant inheritance pattern. Those with HHT symptoms who have no relatives with the disease may have a new mutation. Homozygosity appears to be fatal in utero. Five genetic types of HHT are recognized. Of these, three have been linked to particular genes, while the remaining two have currently only been associated with a particular locus. More than 80% of all cases of HHT are due to mutations in either ENG or ACVRL1. A total of over 600 different mutations are known. There is likely to be a predominance of either type in particular populations, but the data are conflicting. MADH4 mutations, which cause colonic polyposis in addition to HHT, comprise about 2% of disease-causing mutations. Apart from MADH4, it is not clear whether mutations in ENG and ACVRL1 lead to particular symptoms, although some reports suggest that ENG mutations are more likely to cause lung problems while ACVRL1 mutations may cause more liver problems, and pulmonary hypertension may be a particular problem in people with ACVRL1 mutations. People with exactly the same mutations may have a different nature and severity of symptoms, suggesting that additional genes or other risk factors may determine the rate at which lesions develop; these have not yet been identified.
Pathophysiology
Telangiectasias and arteriovenous malformations in HHT are thought to arise because of changes in angiogenesis, the development of blood vessels out of existing ones. The development of a new blood vessel requires the activation and migration of various types of cells, chiefly endothelium, smooth muscle and pericytes. The exact mechanism by which the HHT mutations influence this process is not yet clear, and it is likely that they disrupt a balance between pro- and antiangiogenic signals in blood vessels. The wall of telangiectasias is unusually friable, which explains the tendency of these lesions to bleed. All genes known so far to be linked to HHT code for proteins in the TGF-β signaling pathway. This is a group of proteins that participates in signal transduction of hormones of the transforming growth factor beta superfamily (the transforming growth factor beta, bone morphogenetic protein and growth differentiation factor classes), specifically BMP9/GDF2 and BMP10. The hormones do not enter the cell but link to receptors on the cell membrane; these then activate other proteins, eventually influencing cellular behavior in a number of ways such as cellular survival, proliferation (increasing in number) and differentiation (becoming more specialized). For the hormone signal to be adequately transduced, a combination of proteins is needed: two each of two types of serine/threonine-specific kinase type membrane receptors, and endoglin. When bound to the hormone, the type II receptor proteins phosphorylate (transfer phosphate) onto type I receptor proteins (of which Alk-1 is one), which in turn phosphorylate a complex of SMAD proteins (chiefly SMAD1, SMAD5 and SMAD8). These bind to SMAD4 and migrate to the cell nucleus, where they act as transcription factors and participate in the transcription of particular genes. In addition to the SMAD pathway, the membrane receptors also act on the MAPK pathway, which has additional actions on the behavior of cells.
Both Alk-1 and endoglin are expressed predominantly in endothelium, perhaps explaining why HHT-causing mutations in these proteins lead predominantly to blood vessel problems. Both ENG and ACVRL1 mutations lead predominantly to underproduction of the related proteins, rather than misfunctioning of the proteins.
Diagnosis
Diagnostic tests may be conducted for various reasons. Firstly, some tests are needed to confirm or refute the diagnosis. Secondly, some are needed to identify any potential complications.
Telangiectasias
The skin and oral cavity telangiectasias are visually identifiable on physical examination, and similarly the lesions in the nose may be seen on endoscopy of the nasopharynx or on laryngoscopy. The severity of nosebleeds may be quantified objectively using a grid-like questionnaire in which the number of nosebleed episodes and their duration are recorded. Digestive tract telangiectasias may be identified on esophagogastroduodenoscopy (endoscopy of the esophagus, stomach and first part of the small intestine). This procedure will typically only be undertaken if there is anemia that is more marked than expected from the severity of nosebleeds, or if there is evidence of severe bleeding (vomiting blood, black stools). If the number of lesions seen on endoscopy is unexpectedly low, the remainder of the small intestine may be examined with capsule endoscopy, in which the patient swallows a capsule-shaped device containing a miniature camera that transmits images of the digestive tract to a portable digital recorder.
Arteriovenous malformations
Identification of AVMs requires detailed medical imaging of the organs most commonly affected by these lesions. Not all AVMs cause symptoms or are at risk of doing so, and hence there is a degree of variation between specialists as to whether such investigations would be performed, and by which modality; often, decisions on this issue are reached together with the patient. Lung AVMs may be suspected because of the abnormal appearance of the lungs on a chest X-ray, or hypoxia (low oxygen levels) on pulse oximetry or arterial blood gas determination. Bubble contrast echocardiography (bubble echo) may be used as a screening tool to identify abnormal connections between the lung arteries and veins. This involves the injection of agitated saline into a vein, followed by ultrasound-based imaging of the heart. Normally, the lungs remove small air bubbles from the circulation, and they are therefore only seen in the right atrium and the right ventricle. If an AVM is present, bubbles appear in the left atrium and left ventricle, usually 3–10 cardiac cycles after the right side; this is slower than in heart defects, in which there are direct connections between the right and left side of the heart. A larger number of bubbles is more likely to indicate the presence of an AVM. Bubble echo is not a perfect screening tool, as it can miss smaller AVMs and does not identify the site of AVMs. Often contrast-enhanced computed tomography (CT angiography) is used to identify lung lesions; this modality has a sensitivity of over 90%. It may be possible to omit contrast administration on modern CT scanners. Echocardiography is also used if there is a suspicion of pulmonary hypertension or high-output cardiac failure due to large liver lesions, sometimes followed by cardiac catheterization to measure the pressures inside the various chambers of the heart.
Liver AVMs may be suspected because of abnormal liver function tests in the blood, because the symptoms of heart failure develop, or because of jaundice or other symptoms of liver dysfunction. The most reliable initial screening test is Doppler ultrasonography of the liver; this has a very high sensitivity for identifying vascular lesions in the liver. If necessary, contrast-enhanced CT may be used to further characterize AVMs. It is extremely common to find incidental nodules on liver scans, most commonly due to focal nodular hyperplasia (FNH), as these are a hundred times more common in HHT than in the general population. FNH is regarded as harmless. Generally, tumor markers and additional imaging modalities are used to differentiate between FNH and malignant tumors of the liver. Liver biopsy is discouraged in people with HHT, as the risk of hemorrhage from liver AVMs may be significant. Liver scans may be useful if someone is suspected of having HHT but does not meet the criteria (see below), unless liver lesions can be demonstrated. Brain AVMs may be detected on computed tomography angiography (CTA or CT angio) or magnetic resonance angiography (MRA); CTA is better at showing the vessels themselves, and MRA provides more detail about the relationship between an AVM and surrounding brain tissue. In general, MRI is recommended. Various types of vascular malformations may be encountered: AVMs, micro-AVMs, telangiectasias and arteriovenous fistulas. If surgery, embolization, or other treatment is contemplated (see below), cerebral angiography may be required to get sufficient detail of the vessels. This procedure carries a small risk of stroke (0.5%) and is therefore limited to specific circumstances. Recent professional guidelines recommend that all children with suspected or definite HHT undergo a brain MRI early in life to identify AVMs that can cause major complications.
Others suggest that screening for cerebral AVMs is probably unnecessary in those who are not experiencing any neurological symptoms, because most lesions discovered on screening scans would not require treatment, creating undesirable conundrums.
Genetic testing
Genetic tests are available for the ENG, ACVRL1 and MADH4 mutations. Testing is not always needed for diagnosis, because the symptoms are sufficient to distinguish the disease from other diagnoses. There are situations in which testing can be particularly useful. Firstly, children and young adults with a parent with definite HHT may have limited symptoms, yet be at risk from some of the complications mentioned above; if the mutation is known in the affected parent, absence of this mutation in the child would obviate the need for screening tests. Furthermore, genetic testing may confirm the diagnosis in those with limited symptoms who otherwise would have been labeled "possible HHT" (see below).
Genetic diagnosis in HHT is difficult, as mutations occur in numerous different locations in the linked genes, without particular mutations being highly frequent (as opposed to, for instance, the ΔF508 mutation in cystic fibrosis). Sequence analysis of the involved genes is therefore the most useful approach (sensitivity 75%), followed by additional testing to detect large deletions and duplications (an additional 10%). Not all mutations in these genes have been linked with disease.
Mutations in the MADH4 gene are usually associated with juvenile polyposis, and detection of such a mutation would indicate a need to screen the patient and affected relatives for polyps and tumors of the large intestine.
Criteria
The diagnosis is made based on the presence of four criteria, known as the "Curaçao criteria". If three or four are met, a patient has "definite HHT", while two give "possible HHT":
Spontaneous recurrent epistaxis
Multiple telangiectasias in typical locations (see above)
Proven visceral AVM (lung, liver, brain, spine)
First-degree family member with HHT
Despite the designation "possible", someone with a visceral AVM and a family history but no nosebleeds or telangiectasias is still extremely likely to have HHT, because these AVMs are very uncommon in the general population. At the same time, the same cannot be said of nosebleeds and sparse telangiectasias, both of which occur in people without HHT, in the absence of AVMs. Someone's diagnostic status may change in the course of life, as young children may not yet exhibit all the symptoms; at age 16, thirteen percent are still indeterminate, while at age 60 the vast majority (99%) have a definite diagnostic classification. The children of established HHT patients may therefore be labeled as "possible HHT", as 50% may turn out to have HHT in the course of their life.
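The classification rule above amounts to counting how many of the four criteria are met. A minimal sketch in Python (the "unlikely" label for fewer than two criteria is an assumption following common usage of the criteria, not stated in this article, and this is an illustration of the counting rule only, not clinical guidance):

```python
def curacao_classification(epistaxis, telangiectasias, visceral_avm, family_history):
    """Classify HHT likelihood under the Curaçao criteria.

    Each argument is a boolean for one criterion: spontaneous recurrent
    nosebleeds, multiple telangiectasias in typical locations, a proven
    visceral AVM, and an affected first-degree relative.
    """
    met = sum([epistaxis, telangiectasias, visceral_avm, family_history])
    if met >= 3:
        return "definite HHT"   # three or four criteria
    if met == 2:
        return "possible HHT"   # exactly two criteria
    return "unlikely HHT"       # assumed label for 0-1 criteria
```

For example, a patient with recurrent nosebleeds, typical telangiectasias and an affected parent (three criteria) classifies as "definite HHT".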
Treatment
Treatment of HHT is symptomatic (it deals with the symptoms rather than the disease itself), as there is no therapy that stops the development of telangiectasias and AVMs directly. Furthermore, some treatments are applied to prevent the development of common complications. Chronic nosebleeds and digestive tract bleeding can both lead to anemia; if the bleeding itself cannot be completely stopped, the anemia requires treatment with iron supplements. Those who cannot tolerate iron tablets or solutions may require administration of intravenous iron, and blood transfusion if the anemia is causing severe symptoms that warrant rapid improvement of the blood count.
Most treatments used in HHT have been described in adults, and the experience in treating children is more limited. Women with HHT who get pregnant are at an increased risk of complications, and are observed closely, although the absolute risk is still low (1%).
Nosebleeds
An acute nosebleed may be managed with a variety of measures, such as packing of the nasal cavity with absorbent swabs or gels. Removal of the packs after the bleeding may lead to reopening of the fragile vessels, and therefore lubricated or atraumatic packing is recommended. Some patients may wish to learn packing themselves to deal with nosebleeds without having to resort to medical help.
Frequent nosebleeds can be prevented in part by keeping the nostrils moist, and by applying saline solution, estrogen-containing creams or tranexamic acid; these have few side effects and may have a small degree of benefit. A number of additional modalities have been used to prevent recurrent bleeding if simple measures are unsuccessful. Medical therapies include oral tranexamic acid and estrogen; the evidence for these is relatively limited, and estrogen is poorly tolerated by men and possibly carries risks of cancer and heart disease in women past the menopause. Nasal coagulation and cauterization may reduce the bleeding from telangiectasias, and are recommended before surgery is considered. However, it is highly recommended to use the least heat and time possible, to prevent septal perforations and excessive trauma to the nasal mucosa, which is already susceptible to bleeding. Sclerotherapy is another option to manage the bleeding. This process involves injecting a small amount of an aerated irritant (a detergent such as sodium tetradecyl sulfate) directly into the telangiectasias. The detergent causes the vessel to collapse and harden, resulting in scar tissue residue. This is the same procedure used to treat varicose veins and similar disorders.
It may be possible to embolize vascular lesions through interventional radiology; this requires passing a catheter through a large artery and locating the maxillary artery under X-ray guidance, followed by the injection into the vessel of particles that occlude the blood vessels.
The benefit from the procedure tends to be short-lived, and it may be most appropriate in episodes of severe bleeding.
To more effectively minimize the recurrence and severity of epistaxis, other options may be used in conjunction with the therapies listed above. Intravenously administered anti-VEGF substances such as bevacizumab (brand name Avastin), pazopanib and thalidomide or its derivatives interfere with the production of new blood vessels that are weak and therefore prone to bleeding. Because of the severe birth defects that followed the historical prescription of thalidomide to pregnant women to alleviate nausea, thalidomide is a last-resort therapy. Additionally, thalidomide can cause neuropathy. Though this can be mitigated by adjusting dosages and prescribing its derivatives such as lenalidomide and pomalidomide, many doctors prefer alternative VEGF inhibitors. Bevacizumab has been shown to significantly reduce the severity of epistaxis without side effects.
If other interventions have failed, several operations have been reported to provide benefit. One is septal dermoplasty, or Saunders procedure, in which skin is transplanted into the nostrils; the other is Young's procedure, in which the nostrils are sealed off completely.
Skin and digestive tract
The skin lesions of HHT can be disfiguring, and may respond to treatment with long-pulsed Nd:YAG laser. Skin lesions in the fingertips may sometimes bleed and cause pain. Skin grafting is occasionally needed to treat this problem.
With regard to digestive tract lesions, mild bleeding and the mild resultant anemia are treated with iron supplementation, and no specific treatment is administered. There is limited data on hormone treatment and tranexamic acid to reduce bleeding and anemia. Severe anemia or episodes of severe bleeding are treated with endoscopic argon plasma coagulation (APC) or laser treatment of any lesions identified; this may reduce the need for supportive treatment. The expected benefits are not such that repeated attempts at treating lesions are advocated. Sudden, very severe bleeding is unusual—if encountered, alternative causes (such as a peptic ulcer) need to be considered—but embolization may be used in such instances.
Lung AVMs
Lung lesions, once identified, are usually treated to prevent episodes of bleeding and, more importantly, embolism to the brain. This is particularly done in lesions with a feeding blood vessel of 3 mm or larger, as these are the most likely to cause long-term complications unless treated. The most effective current therapy is embolization with detachable metal coils or plugs. The procedure involves puncture of a large vein (usually under a general anesthetic), followed by advancing of a catheter through the right ventricle and into the pulmonary artery, after which radiocontrast is injected to visualize the AVMs (pulmonary angiography). Once the lesion has been identified, coils are deployed that obstruct the blood flow and allow the lesion to regress. In experienced hands, the procedure tends to be very effective and with limited side effects, but lesions may recur and further attempts may be required. CTA scans are repeated to monitor for recurrence. Surgical excision has now essentially been abandoned due to the success of embolotherapy.
Those with either definite pulmonary AVMs or an abnormal contrast echocardiogram with no clearly visible lesions are deemed to be at risk from brain emboli. They are therefore counselled to avoid scuba diving, during which small air bubbles may form in the bloodstream and migrate to the brain, causing stroke. Similarly, antimicrobial prophylaxis is advised during procedures in which bacteria may enter the bloodstream, such as dental work, as is avoidance of air bubbles during intravenous therapy.
Liver AVMs
Given that liver AVMs generally cause high-output cardiac failure, the emphasis is on treating this with diuretics to reduce the circulating blood volume, restriction of salt and fluid intake, and antiarrhythmic agents in case of an irregular heartbeat. This may be sufficient to treat the symptoms of swelling and breathlessness. If this treatment is not effective or leads to side effects or complications, the only remaining option is liver transplantation. This is reserved for those with severe symptoms, as it carries a mortality of about 10%, but leads to good results if successful. The exact point at which liver transplantation is to be offered is not yet completely established. Embolization treatment has been attempted, but leads to severe complications in a proportion of patients and is discouraged.
Other liver-related complications (portal hypertension, esophageal varices, ascites, hepatic encephalopathy) are treated with the same modalities as used in cirrhosis, although the use of transjugular intrahepatic portosystemic shunt treatment is discouraged due to the lack of documented benefit.
Brain AVMs
The decision to treat brain arteriovenous malformations depends on the symptoms they cause (such as seizures or headaches). The bleeding risk is predicted by previous episodes of hemorrhage, and by whether the AVM appears deep-seated or to have deep venous drainage on the CTA or MRA scan. The size of the AVM and the presence of aneurysms appear to matter less. In HHT, some lesions (high-flow arteriovenous fistulae) tend to cause more problems, and treatment is warranted. Other AVMs may regress over time without intervention. Various modalities are available, depending on the location of the AVM and its size: surgery, radiation-based treatment and embolization. Sometimes, multiple modalities are used on the same lesion.
Surgery (by craniotomy, open brain surgery) may be offered based on the risks of treatment as determined by the Spetzler–Martin scale (grades I–V); this score is higher in larger lesions that are close to important brain structures and have deep venous drainage. High-grade lesions (IV and V) have an unacceptably high risk, and surgery is not typically offered in those cases. Radiosurgery (using targeted radiation therapy such as by a gamma knife) may be used if the lesion is small but close to vital structures. Finally, embolization may be used on small lesions that have only a single feeding vessel.
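The Spetzler–Martin score mentioned above is conventionally computed as a simple sum of three components: points for nidus size, eloquence of the adjacent brain, and deep venous drainage. A sketch follows; the exact point values come from the standard Spetzler–Martin grading system rather than from this article, and the function is illustrative only:

```python
def spetzler_martin_grade(size_cm, eloquent_location, deep_venous_drainage):
    """Spetzler-Martin grade (1-5) for a brain AVM.

    size_cm: largest nidus diameter in centimetres.
    eloquent_location: True if adjacent to functionally important brain.
    deep_venous_drainage: True if any drainage is via deep veins.
    (Point values per the standard grading system, assumed here.)
    """
    if size_cm < 3:
        size_points = 1       # small
    elif size_cm <= 6:
        size_points = 2       # medium
    else:
        size_points = 3       # large
    return size_points + int(eloquent_location) + int(deep_venous_drainage)
```

A large, eloquently located AVM with deep venous drainage scores 5 (grade V), the category in which the article notes surgery is not typically offered.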
Experimental treatments
Several anti-angiogenesis drugs approved for other conditions, such as cancer, have been investigated in small clinical trials. The anti-VEGF antibody bevacizumab, for instance, has been used off-label in several studies. In a large clinical trial, bevacizumab infusion was associated with a decrease in cardiac output and reduced duration and number of episodes of epistaxis in treated HHT patients. Thalidomide, another anti-angiogenesis drug, was also reported to have beneficial effects in HHT patients. Thalidomide treatment was found to induce vessel maturation in an experimental mouse model of HHT and to reduce the severity and frequency of nosebleeds in the majority of a small group of HHT patients. The blood hemoglobin levels of these treated patients rose as a result of reduced hemorrhage and enhanced blood vessel stabilization.
Epidemiology
Population studies from numerous areas in the world have shown that HHT occurs at roughly the same rate in almost all populations: somewhere around 1 in 5000. In some areas, it is much more common; for instance, in the French region of Haut Jura the rate is 1:2351, twice as common as in other populations. This has been attributed to a founder effect, in which a population descending from a small number of ancestors has a high rate of a particular genetic trait because one of these ancestors harbored this trait. In Haut Jura, this has been shown to be the result of a particular ACVRL1 mutation (named c.1112dupG or c.1112_1113insG). The highest rate of HHT is 1:1331, reported in Bonaire and Curaçao, two islands in the Caribbean belonging to the Netherlands Antilles.
Most people with HHT have a normal lifespan. The skin lesions and nosebleeds tend to develop during childhood. AVMs are probably present from birth, but don't necessarily cause any symptoms. Frequent nosebleeds are the most common symptom and can significantly affect quality of life.
History
Several 19th century English physicians, starting with Henry Gawen Sutton (1836–1891) and followed by Benjamin Guy Babington (1794–1866) and John Wickham Legg (1843–1921), described the most common features of HHT, particularly the recurrent nosebleeds and the hereditary nature of the disease. The French physician Henri Jules Louis Marie Rendu (1844–1902) observed the skin and mucosal lesions, and distinguished the condition from hemophilia. The Canadian-born Sir William Osler (1849–1919), then at Johns Hopkins Hospital and later at Oxford University, made further contributions with a 1901 report in which he described characteristic lesions in the digestive tract. The English physician Frederick Parkes Weber (1863–1962) reported further on the condition in 1907 with a series of cases. The term "hereditary hemorrhagic telangiectasia" was first used by the American physician Frederic M. Hanes (1883–1946) in a 1909 article on the condition.
The diagnosis of HHT remained a clinical one until the genetic defects that cause HHT were identified by a research group at Duke University Medical Center, in 1994 and 1996 respectively. In 2000, the international scientific advisory committee of cureHHT (formerly the HHT Foundation International) published the now widely used Curaçao criteria. In 2006, a group of international experts met in Canada and formulated an evidence-based guideline, sponsored by cureHHT; this guideline was updated in 2020.
== References == |
Orotic aciduria | Orotic aciduria (also known as hereditary orotic aciduria) is a disease caused by an enzyme deficiency resulting in a decreased ability to synthesize pyrimidines. It was the first described enzyme deficiency of the de novo pyrimidine synthesis pathway.
Orotic aciduria is characterized by excessive excretion of orotic acid in urine because of the inability to convert orotic acid to UMP. It causes megaloblastic anemia and may be associated with mental and physical developmental delays.
Signs and symptoms
Patients typically present with excessive orotic acid in the urine, failure to thrive, developmental delay, and megaloblastic anemia which cannot be cured by administration of vitamin B12 or folic acid.
Cause and genetics
This autosomal recessive disorder is caused by a deficiency in the enzyme UMPS, a bifunctional protein that includes the enzyme activities of OPRT and ODC. In one study of three patients, UMPS activity ranged from 2–7% of normal levels.
Two types of orotic aciduria have been reported. Type I has a severe deficiency of both activities of UMP synthase. In Type II orotic aciduria, the ODC activity is deficient while OPRT activity is elevated. As of 1988, only one case of type II orotic aciduria had ever been reported.
Orotic aciduria is associated with megaloblastic anemia due to decreased pyrimidine synthesis, which leads to decreased nucleotide-lipid cofactors needed for erythrocyte membrane synthesis in the bone marrow.
Diagnosis
Elevated urinary orotic acid levels can also arise secondary to blockage of the urea cycle, particularly in ornithine transcarbamylase deficiency (OTC deficiency). This can be distinguished from hereditary orotic aciduria by assessing blood ammonia levels and blood urea nitrogen (BUN). In OTC deficiency, hyperammonemia and decreased BUN are seen because the urea cycle is not functioning properly, but megaloblastic anemia will not occur because pyrimidine synthesis is not affected. In orotic aciduria, the urea cycle is not affected.
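The differential described above can be summarized as a small decision rule. This sketch is purely illustrative (the function and argument names are invented, and a real diagnosis relies on quantitative laboratory values and genetic testing, not boolean flags):

```python
def orotic_acid_differential(orotic_acid_high, ammonia_high, megaloblastic_anemia):
    """Rough differential for elevated urinary orotic acid.

    OTC deficiency (urea cycle block): hyperammonemia, decreased BUN,
    no megaloblastic anemia (pyrimidine synthesis unaffected).
    Hereditary orotic aciduria (UMPS defect): normal ammonia,
    megaloblastic anemia unresponsive to B12 or folate.
    """
    if not orotic_acid_high:
        return "orotic acid not elevated"
    if ammonia_high and not megaloblastic_anemia:
        return "suggests OTC deficiency"
    if megaloblastic_anemia and not ammonia_high:
        return "suggests hereditary orotic aciduria"
    return "indeterminate; further testing needed"
```

For instance, elevated orotic acid with hyperammonemia but no anemia points to the urea cycle rather than pyrimidine synthesis.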
Orotic aciduria can be diagnosed through genetic sequencing of the UMPS gene.
Treatment
Treatment is administration of uridine monophosphate (UMP) or uridine triacetate (which is converted to UMP). These medications will bypass the missing enzyme and provide the body with a source of pyrimidines.
References
== External links == |
Herpes labialis | Herpes labialis, commonly known as cold sores or fever blisters, is a type of infection by the herpes simplex virus that affects primarily the lip. Symptoms typically include a burning pain followed by small blisters or sores. The first attack may also be accompanied by fever, sore throat, and enlarged lymph nodes. The rash usually heals within ten days, but the virus remains dormant in the trigeminal ganglion. The virus may periodically reactivate to create another outbreak of sores in the mouth or lip.
The cause is usually herpes simplex virus type 1 (HSV-1) and occasionally herpes simplex virus type 2 (HSV-2). The infection is typically spread between people by direct non-sexual contact. Attacks can be triggered by sunlight, fever, psychological stress, or a menstrual period. Direct contact with the genitals can result in genital herpes. Diagnosis is usually based on symptoms but can be confirmed with specific testing.
Prevention includes avoiding kissing or using the personal items of a person who is infected. A zinc oxide, anesthetic, or antiviral cream appears to decrease the duration of symptoms by a small amount. Antiviral medications may also decrease the frequency of outbreaks.
About 2.5 per 1000 people are affected with outbreaks in any given year. After one episode, about 33% of people develop subsequent episodes. Onset often occurs in those less than 20 years old, and 80% develop antibodies for the virus by this age. In those with recurrent outbreaks, these typically happen less than three times a year. The frequency of outbreaks generally decreases over time.
Signs and symptoms
Herpes infections usually show no symptoms; when symptoms do appear, they typically resolve within two weeks. The main symptom of oral infection is inflammation of the mucosa of the cheek and gums—known as acute herpetic gingivostomatitis—which occurs within 5–10 days of infection. Other symptoms may also develop, including headache, nausea, dizziness and painful ulcers—sometimes confused with canker sores—fever, and sore throat.
Primary HSV infection in adolescents frequently manifests as severe pharyngitis with lesions developing on the cheek and gums. Some individuals develop difficulty in swallowing (dysphagia) and swollen lymph nodes (lymphadenopathy). Primary HSV infections in adults often result in pharyngitis similar to that observed in glandular fever (infectious mononucleosis), but gingivostomatitis is less likely.
Recurrent oral infection is more common with HSV-1 infections than with HSV-2. Symptoms typically progress in a series of eight stages:
Latent (weeks to months incident-free): The remission period; after initial infection, the viruses move to sensory nerve ganglia (trigeminal ganglion), where they reside as lifelong, latent viruses. Asymptomatic shedding of contagious virus particles can occur during this stage.
Prodromal (day 0–1): Symptoms often precede a recurrence. Symptoms typically begin with tingling (itching) and reddening of the skin around the infected site. This stage can last from a few days to a few hours preceding the physical manifestation of an infection and is the best time to start treatment.
Inflammation (day 1): Virus begins reproducing and infecting cells at the end of the nerve. The healthy cells react to the invasion with swelling and redness displayed as symptoms of infection.
Pre-sore (day 2–3): This stage is defined by the appearance of tiny, hard, inflamed papules and vesicles that may itch and are painfully sensitive to touch. In time, these fluid-filled blisters form a cluster on the lip (labial) tissue, the area between the lip and skin (vermilion border), and can occur on the nose, chin, and cheeks.
Open lesion (day 4): This is the most painful and contagious of the stages. All the tiny vesicles break open and merge to create one big, open, weeping ulcer. Fluids are slowly discharged from blood vessels and inflamed tissue. This watery discharge is teeming with active viral particles and is highly contagious. Depending on the severity, one may develop a fever and swollen lymph glands under the jaw.
Crusting (day 5–8): A honey/golden crust starts to form from the syrupy exudate. This yellowish or brown crust or scab is not made of active virus but of blood serum containing useful proteins such as immunoglobulins. This appears as the healing process begins. The sore is still painful at this stage, but the constant cracking of the scab as one moves or stretches the lips, as in smiling or eating, is even more painful. Virus-filled fluid will still ooze out of the sore through any cracks.
Healing (day 9–14): New skin begins to form underneath the scab as the virus retreats into latency. A series of scabs will form over the sore (called Meier Complex), each one smaller than the last. During this phase irritation, itching, and some pain are common.
Post-scab (12–14 days): A reddish area may linger at the site of viral infection as the destroyed cells are regenerated. Virus shedding can still occur during this stage.
The recurrent infection is thus often called herpes simplex labialis. Rare reinfections occur inside the mouth (intraoral HSV stomatitis), affecting the gums, alveolar ridge, hard palate, and the back of the tongue, possibly accompanied by herpes labialis.
A lesion caused by herpes simplex can occur in the corner of the mouth and be mistaken for angular cheilitis of another cause; this is sometimes termed "angular herpes simplex". A cold sore at the corner of the mouth behaves similarly to one elsewhere on the lips. Rather than utilizing antifungal creams, angular herpes simplex is treated in the same way as a cold sore, with topical antiviral drugs.
Causes
Herpes labialis infection occurs when the herpes simplex virus comes into contact with oral mucosal tissue or abraded skin of the mouth. Infection by the type 1 strain of herpes simplex virus (HSV-1) is most common; however, cases of oral infection by the type 2 strain are increasing. Oral HSV-2 shedding is rare, and "usually noted in the context of first episode genital herpes." In general, both types can cause oral or genital herpes.
Cold sores are the result of the virus reactivating in the body. Once HSV-1 has entered the body, it never leaves. The virus moves from the mouth to remain latent in the central nervous system. In approximately one-third of people, the virus can "wake up" or reactivate to cause disease. When reactivation occurs, the virus travels down the nerves to the skin, where it may cause blisters (cold sores) around the lips or mouth area. In the case of herpes zoster, the nose can also be affected.
Cold sore outbreaks may be influenced by stress, menstruation, sunlight, sunburn, fever, dehydration, or local skin trauma. Surgical procedures such as dental or neural surgery, lip tattooing, or dermabrasion are also common triggers. HSV-1 can in rare cases be transmitted to newborn babies by family members or hospital staff who have cold sores; this can cause a severe disease called neonatal herpes simplex.
The colloquial term for this condition, "cold sore", comes from the fact that herpes labialis is often triggered by fever, for example, as may occur during an upper respiratory tract infection (i.e. a cold).
People can transfer the virus from their cold sores to other areas of the body, such as the eye, skin, or fingers; this is called autoinoculation. Eye infection, in the form of conjunctivitis or keratitis, can happen when the eyes are rubbed after touching the lesion. Finger infection (herpetic whitlow) can occur when a child with cold sores or primary HSV-1 infection sucks their fingers.
Blood tests for herpes may differentiate between type 1 and type 2. When a person is not experiencing any symptoms, a blood test alone does not reveal the site of infection. Genital herpes infections occurred with almost equal frequency as type 1 or 2 in younger adults when samples were taken from genital lesions. Herpes in the mouth is more likely to be caused by type 1, but (see above) can also be type 2. The only way to know for certain whether a positive blood test for herpes is due to infection of the mouth, genitals, or elsewhere is to sample from lesions. This is not possible if the affected individual is asymptomatic. The body's immune system typically fights the virus.
Prevention
Primary Infection
The likelihood of infection can be reduced by avoiding touching areas with active infection, avoiding contact sports during outbreaks, washing hands frequently, and using antiviral or antibacterial mouth-rinsing products. During active infection (outbreaks with oral lesions), avoid oral-to-oral kissing and oral-genital sex without protection. HSV-1 can be transmitted to uninfected partners through oral sex, resulting in genital lesions. Healthcare workers working with patients who have active lesions are advised to use gloves, eye protection, and mouth protection during physical, mucosal, and bronchoscopic procedures and examinations.
Recurrent Infection
In some cases, sun exposure can lead to HSV-1 reactivation; therefore, use of zinc-based sunscreen or topical and oral therapeutics such as acyclovir and valacyclovir may prove helpful. Other triggers for recurrent herpetic infection include fever, common cold, fatigue, emotional stress, trauma, sideropenia, oral cancer therapy, immunosuppression, chemotherapy, oral and facial surgery, menstruation, epidural morphine, and gastrointestinal upset. Surgical procedures like nerve root decompression, facial dermabrasion, and ablative laser resurfacing can increase risks of reactivation by 50–70%.
Treatment
There is no cure or vaccine for the virus, but the body's immune system and specific antibodies typically fight it. Treatment options include no treatment, topical creams (indifferent, antiviral, and anaesthetic), and oral antiviral medications. Indifferent topical creams include zinc oxide and glycerin cream, which can cause itching and burning as side effects, and docosanol. Docosanol, a saturated fatty alcohol, was approved by the United States Food and Drug Administration for herpes labialis in adults with properly functioning immune systems. It is comparable in effectiveness to prescription topical antiviral agents. Due to docosanol's mechanism of action, there is little risk of drug resistance. Antiviral creams include acyclovir and penciclovir, which can speed healing by as much as 10%. Oral antivirals include acyclovir, valaciclovir, and famciclovir. Famciclovir or valacyclovir, taken in pill form, can be effective using a single-day, high-dose application and is more cost-effective and convenient than the traditional treatment of lower doses for 5–7 days. Anaesthetic creams include lidocaine and prilocaine, which have shown a reduction in the duration of subjective symptoms and eruptions.
Treatment recommendations vary with the severity of the symptoms and chronicity of the infection. Treatment with oral antivirals such as acyclovir in children within 72 hours of illness onset has been shown to shorten the duration of fever, odynophagia, and lesions, and to reduce viral shedding. For patients with mild to moderate symptoms, a local anaesthetic such as lidocaine for pain, without an antiviral, may be sufficient. However, those with occasional severe recurrences of lesions may use oral antivirals.
Patients with severe cases, such as those with frequent recurrences of lesions, disfiguring lesions, or serious systemic complications, may need chronic suppressive therapy on top of the antiviral therapies.
Mouth rinses combining ethanol and essential oils are recommended as a therapeutic method against herpes by the German Society of Hospital Hygiene. Further research into the virucidal effects of essential oils exists.
Epidemiology
Herpes labialis is common throughout the world. A large survey of young adults on six continents reported that 33% of males and 28% of females had herpes labialis on two or more occasions during the year before the study. The lifetime prevalence in the United States of America is estimated at 20–45% of the adult population. Lifetime prevalence in France was reported by one study as 32% in males and 42% in females. In Germany, the prevalence was reported at 32% in people aged between 35 and 44 years, and 20% in those aged 65–74. In Jordan, another study reported a lifetime prevalence of 26%.
Research
Research has gone into vaccines and drugs for both prevention and treatment of herpes infections.
Terminology
The term labia means "lip". Herpes labialis does not refer to the labia of the genitals, though the origin of the word is the same. When the viral infection affects both face and mouth, the broader term orofacial herpes is used, whereas herpetic stomatitis describes infection of the mouth specifically; stomatitis is derived from the Greek word stoma, which means "mouth".
References
== External links == |
Heterotopic ossification | Heterotopic ossification (HO) is the process by which bone tissue forms outside of the skeleton in muscles and soft tissue.
Symptoms
In traumatic heterotopic ossification (traumatic myositis ossificans), the patient may complain of a warm, tender, firm swelling in a muscle and decreased range of motion in the joint served by the muscle involved. There is often a history of a blow or other trauma to the area a few weeks to a few months earlier. Patients with traumatic neurological injuries, severe neurologic disorders or severe burns who develop heterotopic ossification experience limitation of motion in the areas affected.
Causes
Heterotopic ossification of varying severity can be caused by surgery or trauma to the hips and legs. About every third patient who has total hip arthroplasty (joint replacement) or a severe fracture of the long bones of the lower leg will develop heterotopic ossification, but it is uncommonly symptomatic. Between 50% and 90% of patients who developed heterotopic ossification following a previous hip arthroplasty will develop additional heterotopic ossification.
Heterotopic ossification often develops in patients with traumatic brain or spinal cord injuries, other severe neurologic disorders or severe burns, most commonly around the hips. The mechanism is unknown. This may account for the clinical impression that traumatic brain injuries cause accelerated fracture healing.
There are also rare genetic disorders causing heterotopic ossification, such as fibrodysplasia ossificans progressiva (FOP), a condition that causes injured bodily tissues to be replaced by heterotopic bone. Characteristically exhibiting in the big toe at birth, it causes the formation of heterotopic bone throughout the body over the course of the sufferer's life, causing chronic pain and eventually leading to the immobilisation and fusion of most of the skeleton by abnormal growths of bone. Another rare genetic disorder causing heterotopic ossification is progressive osseous heteroplasia (POH), a condition characterized by cutaneous or subcutaneous ossification.
Diagnosis
During the early stage, an x-ray will not be helpful because there is no calcium in the matrix. (In an untreated acute episode, 3–4 weeks will pass after onset before the x-ray is positive.) Early laboratory tests are not very helpful. Alkaline phosphatase will be elevated at some point, but initially may be only slightly elevated, rising later to a high value for a short time. Unless weekly tests are done, this peak value may not be detected. The test is not useful in patients who have recently had fractures or spine fusion, as these also cause elevations.
The only definitive diagnostic test in the early acute stage is a bone scan, which will show heterotopic ossification 7–10 days earlier than an x-ray. The three-phase bone scan may be the most sensitive method of detecting early heterotopic bone formation. However, an abnormality detected in the early phase may not progress to the formation of heterotopic bone. Another finding, often misinterpreted as early heterotopic bone formation, is increased (early) uptake around the knees or the ankles in a patient with a very recent spinal cord injury. It is not clear exactly what this means, because these patients do not develop heterotopic bone formation there. It has been hypothesized that this may be related to the autonomic nervous system and its control over circulation.
When the initial presentation is swelling and increased temperature in a leg, the differential diagnosis includes thrombophlebitis. It may be necessary to do both a bone scan and a venogram to differentiate between heterotopic ossification and thrombophlebitis, and it is even possible that both could be present simultaneously. In heterotopic ossification, the swelling tends to be more proximal and localized, with little or no foot/ankle edema, whereas in thrombophlebitis the swelling is usually more uniform throughout the leg.
Treatment
There is no clear form of treatment. Bisphosphonates were originally expected to be of value after hip surgery, but despite prophylactic use there has been no convincing evidence of benefit. Depending on the growth's location, orientation, and severity, surgical removal may be possible.
Radiation therapy
Prophylactic radiation therapy for the prevention of heterotopic ossification has been employed since the 1970s. A variety of doses and techniques have been used. Generally, radiation therapy should be delivered as close as practical to the time of surgery. A dose of 7–8 Gy in a single fraction within 24–48 hours of surgery has been used successfully. Treatment volumes include the peri-articular region, and radiation can be used for the hip, knee, elbow, shoulder, or jaw, or in patients after spinal cord trauma.
Single-dose radiation therapy is well tolerated and cost-effective, without an increase in bleeding, infection, or wound healing disturbances.
Other possible treatments
Certain anti-inflammatory agents, such as indomethacin, ibuprofen, and aspirin, have shown some effect in preventing recurrence of heterotopic ossification after total hip replacement.
Conservative treatments such as passive range of motion exercises or other mobilization techniques provided by physical therapists or occupational therapists may also assist in preventing HO. A review article looked at 114 adult patients retrospectively and suggested that the lower incidence of HO in patients with a very severe TBI may have been due to early intensive physical and occupational therapy in conjunction with pharmacological treatment. Another review article also recommended physiotherapy as an adjunct to pharmacological and medical treatments because passive range of motion exercises may maintain range at the joint and prevent secondary soft tissue contractures, which are often associated with joint immobility.
See also
Intramembranous ossification
Myositis ossificans
Fibrodysplasia ossificans progressiva
Progressive osseous heteroplasia
References
External links
pmr/112 at eMedicine
radio/336 at eMedicine |
Hirsutism | Hirsutism is excessive body hair on parts of the body where hair is normally absent or minimal. The word dates from the early 17th century, from Latin hirsutus, meaning "hairy". It may refer to a "male" pattern of hair growth that may be a sign of a more serious medical condition, especially if it develops well after puberty. Cultural stigma against hirsutism can cause much psychological distress and social difficulty. Discrimination based on facial hirsutism often leads to the avoidance of social situations and to symptoms of anxiety and depression.
Hirsutism is usually the result of an underlying endocrine imbalance, which may be adrenal, ovarian, or central. It can be caused by increased levels of androgen hormones. The amount and location of the hair is measured by a Ferriman-Gallwey score. It is different from hypertrichosis, which is excessive hair growth anywhere on the body.
Treatments may include birth control pills that contain estrogen and progestin, antiandrogens, or insulin sensitizers.
Hirsutism affects between 5% and 15% of women across all ethnic backgrounds. Depending on the definition and the underlying data, approximately 40% of women have some degree of facial hair.
Causes
The causes of hirsutism can be divided into endocrine imbalances and non-endocrine etiologies. It is important to begin by first determining the distribution of body hair growth. If hair growth follows a male distribution, it could indicate the presence of increased androgens or hyperandrogenism. However, there are other hormones not related to androgens that can lead to hirsutism. A detailed history is taken by a provider in search of possible causes for hyperandrogenism or other non-endocrine-related causes. If the distribution of hair growth occurs throughout the body, this is referred to as hypertrichosis, not hirsutism.
Endocrine causes
Ovarian cysts such as in polycystic ovary syndrome (PCOS), the most common cause in women.
Adrenal gland tumors, adrenocortical adenomas, and adrenocortical carcinoma, as well as adrenal hyperplasia due to pituitary adenomas (as in Cushing's disease).
Inborn errors of steroid metabolism such as in congenital adrenal hyperplasia, most commonly caused by 21-hydroxylase deficiency.
Acromegaly and gigantism (growth hormone and IGF-1 excess), usually due to pituitary tumors.
Causes of hirsutism not related to hyperandrogenism include
Familial: Family history of hirsutism with normal androgen levels.
Drug-induced: medications were used before the onset of hirsutism. The recommendation is to stop the medication and replace it with another. Implicated drugs include:
Minoxidil
Testosterone, danazol, progestins, anabolic steroids, valproic acid, methyldopa
Pregnancy or post-menopause: moderate hirsutism due to prolactin secretion and hyperandrogenism due to decreased estrogen production, respectively.
Idiopathic: When no other cause can be attributed to an individual's hirsutism, the cause is considered idiopathic by exclusion. In these cases, menstrual cycles and androgen levels are normal.
Diagnosis
Hirsutism is a clinical diagnosis of excessive androgenic, terminal hair growth. A complete physical evaluation should be done prior to initiating more extensive studies; the examiner should differentiate between widespread body hair increase and male-pattern virilization. One method of evaluating hirsutism is the Ferriman-Gallwey score, which grades the amount and location of hair growth. The Ferriman-Gallwey score has various cutoffs due to variable expressivity of hair growth across ethnic backgrounds.
Diagnosis of patients with even mild hirsutism should include assessment of ovulation and ovarian ultrasound, due to the high prevalence of polycystic ovary syndrome (PCOS), as well as 17α-hydroxyprogesterone (because of the possibility of finding nonclassic 21-hydroxylase deficiency). People with hirsutism may present with an elevated serum dehydroepiandrosterone sulfate (DHEA-S) level; however, additional imaging is required to discriminate between malignant and benign etiologies of adrenal hyperandrogenism. Levels greater than 700 μg/dL are indicative of adrenal gland dysfunction, particularly congenital adrenal hyperplasia due to 21-hydroxylase deficiency. However, PCOS and idiopathic hirsutism make up 90% of cases.
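The Ferriman-Gallwey scoring described above can be sketched as a simple calculation: nine body areas are each graded from 0 (no terminal hair) to 4 (extensive terminal hair), and the grades are summed to a total of 0–36. The sketch below is illustrative only, not a diagnostic tool; the area names follow the commonly used modified score, and the cutoff of 8 is one frequently cited threshold, which, as the text notes, varies by ethnic background.

```python
# Illustrative sketch of the modified Ferriman-Gallwey score.
# Nine body areas, each graded 0 (no terminal hair) to 4 (extensive
# terminal hair); the total ranges from 0 to 36. The cutoff below is
# an assumption for illustration; clinical cutoffs vary by population.

FG_AREAS = [
    "upper lip", "chin", "chest", "upper back", "lower back",
    "upper abdomen", "lower abdomen", "upper arms", "thighs",
]

def ferriman_gallwey_score(grades: dict) -> int:
    """Sum per-area grades (each 0-4) into a total score (0-36)."""
    total = 0
    for area in FG_AREAS:
        grade = grades.get(area, 0)  # areas not examined default to 0
        if not 0 <= grade <= 4:
            raise ValueError(f"grade for {area!r} must be 0-4")
        total += grade
    return total

def suggests_hirsutism(total: int, cutoff: int = 8) -> bool:
    """Illustrative cutoff only; see text on ethnic variability."""
    return total >= cutoff

# Example: mild terminal hair on the upper lip, chin, and lower abdomen.
example = {"upper lip": 2, "chin": 1, "lower abdomen": 2}
score = ferriman_gallwey_score(example)
# score == 5, below the illustrative cutoff of 8
```

This only captures the arithmetic of the score; the grading of each area remains a clinical judgment made during the physical examination.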
Treatment
Treatment of hirsutism is indicated when hair growth causes patient distress. The two main approaches to treatment are pharmacologic therapies targeting androgen production or action, and direct hair removal methods, including electrolysis and photoepilation. These may be used independently or in combination.
Pharmacologic therapies
Common medications consist of antiandrogens, insulin sensitizers, and oral contraceptive pills. All three types of therapy have demonstrated efficacy on their own; however, insulin sensitizers are shown to be less effective than antiandrogens and oral contraceptive pills. The therapies may be combined, as directed by a physician, in line with the patient's medical goals. Antiandrogens are drugs that block the effects of androgens like testosterone and dihydrotestosterone (DHT) in the body. They are the most effective pharmacologic treatment for patient-important hirsutism; however, they have teratogenic potential and are therefore not recommended in people who are pregnant or desire pregnancy. Current data does not favor any one type of oral contraceptive over another.
List of medications:
Spironolactone: An antimineralocorticoid with additional antiandrogenic activity at high dosages
Cyproterone acetate: A dual antiandrogen and progestogen. In addition to a single-agent form, it is also available at a low dosage in some formulations of combined oral contraceptives (see below). It has a risk of liver damage.
Flutamide: A pure antiandrogen. It has been found to possess equivalent or greater effectiveness than spironolactone, cyproterone acetate, and finasteride in the treatment of hirsutism. However, it has a high risk of liver damage and hence is no longer recommended as a first- or second-line treatment.
Bicalutamide: A pure antiandrogen. It is effective similarly to flutamide but is much safer as well as better-tolerated.
Finasteride and dutasteride: 5α-Reductase inhibitors. They inhibit the production of the potent androgen DHT. A meta-analysis showed inconsistent results of finasteride in the treatment of hirsutism.
GnRH analogues: Suppress androgen production by the gonads and reduce androgen concentrations to castrate levels.
Birth control pills that consist of an estrogen, usually ethinylestradiol, and a progestin are supported by the evidence. They are functional antiandrogens. In addition, certain birth control pills contain a progestin that also has antiandrogenic activity. Examples include birth control pills containing cyproterone acetate, chlormadinone acetate, drospirenone, and dienogest.
Metformin: Insulin sensitizer. Antihyperglycemic drug used for diabetes mellitus and treatment of hirsutism associated with insulin resistance (e.g. polycystic ovary syndrome). Metformin appears ineffective in the treatment of hirsutism, although the evidence was of low quality.
Eflornithine: Blocks the production of putrescine, which is necessary for the growth of hair follicles
Other methods
Epilation
Waxing
Shaving
Laser hair removal
Electrology
Lifestyle change, including reducing excessive weight and addressing insulin resistance, may be beneficial. Insulin resistance can cause excessive testosterone levels in women, resulting in hirsutism.
See also
Ferriman-Gallwey score
Petrus Gonsalvus
Androgenic hair
Pubic hair
Hypertrichosis
Hair removal
Laser hair removal
Bearded lady
Trichophilia
Polycystic ovary syndrome (PCOS)
Social model of disability
References
External links
Why the Bearded Lady Was Never a Laughing Matter: Hirsutism
The Bearded Lady |
Hookworm infection | Hookworm infection is an infection by a type of intestinal parasite known as a hookworm. Initially, itching and a rash may occur at the site of infection. Those only affected by a few worms may show no symptoms. Those infected by many worms may experience abdominal pain, diarrhea, weight loss, and tiredness. The mental and physical development of children may be affected. Anemia may result.
Two common hookworm infections in humans are ancylostomiasis and necatoriasis, caused by the species Ancylostoma duodenale and Necator americanus respectively. Hookworm eggs are deposited in the stools of infected people. If these end up in the environment, they can hatch into larvae (immature worms), which can then penetrate the skin. One type can also be spread through contaminated food. Risk factors include walking barefoot in warm climates where sanitation is poor. Diagnosis is by examination of a stool sample with a microscope.
The disease can be prevented on an individual level by not walking barefoot in areas where the disease is common. At a population level, decreasing outdoor defecation, not using raw feces as fertilizer, and mass deworming are effective. Treatment is typically with the medications albendazole or mebendazole for one to three days. Iron supplements may be needed in those with anemia.
Hookworms infected about 428 million people in 2015. Heavy infections can occur in both children and adults, but are less common in adults. They are rarely fatal. Hookworm infection is a soil-transmitted helminthiasis and classified as a neglected tropical disease.
Signs and symptoms
No symptoms or signs are specific for hookworm infection, but it gives rise to a combination of intestinal inflammation and progressive iron-deficiency anemia and protein deficiency. Coughing, chest pain, wheezing, and fever sometimes result from severe infection. Epigastric pain, indigestion, nausea, vomiting, constipation, and diarrhea can occur early or in later stages as well, although gastrointestinal symptoms tend to improve with time. Signs of advanced severe infection are those of anemia and protein deficiency, including emaciation, cardiac failure, and abdominal distension with ascites.
Larval invasion of the skin (mostly in the Americas) can produce a skin disease called cutaneous larva migrans, also known as creeping eruption. The hosts of these worms are not human, and the larvae can only penetrate the upper five layers of the skin, where they give rise to intense, local itching, usually on the foot or lower leg, known as ground itch. This infection is due to larvae from the A. braziliense hookworm. The larvae migrate in tortuous tunnels between the stratum basale and stratum corneum of the skin, causing serpiginous vesicular lesions. With advancing movement of the larvae, the rear portions of the lesions become dry and crusty. The lesions are typically intensely itchy.
Incubation period
The incubation period can vary between a few weeks to many months, and is largely dependent on the number of hookworm parasites an individual is infected with.
Cause
Hookworm infections in humans include ancylostomiasis and necatoriasis. Ancylostomiasis is caused by Ancylostoma duodenale, which is the more common type found in the Middle East, North Africa, India, and (formerly) southern Europe. Necatoriasis is caused by Necator americanus, the more common type in the Americas, sub-Saharan Africa, Southeast Asia, China, and Indonesia.
Other animals such as birds, dogs, and cats may also be affected. A. tubaeforme infects cats, A. caninum infects dogs, and A. braziliense and Uncinaria stenocephala infect both cats and dogs. Some of these infections can be transmitted to humans.
Morphology
A. duodenale worms are grayish white or pinkish, with the head slightly bent in relation to the rest of the body. This bend forms a definitive hook shape at the anterior end, for which hookworms are named. They possess well-developed mouths with two pairs of teeth. While males measure approximately one centimeter by 0.5 millimeter, the females are often longer and stouter. Additionally, males can be distinguished from females based on the presence of a prominent posterior copulatory bursa.
N. americanus is very similar in morphology to A. duodenale. N. americanus is generally smaller than A. duodenale, with males usually 5 to 9 mm long and females about 1 cm long. Whereas A. duodenale possesses two pairs of teeth, N. americanus possesses a pair of cutting plates in the buccal capsule. Additionally, the hook shape is much more defined in Necator than in Ancylostoma.
Life cycle
The hookworm thrives in warm soil where temperatures are over 18 °C (64 °F). They exist primarily in sandy or loamy soil and cannot live in clay or muck. Rainfall averages must be more than 1,000 mm (39 in) a year for them to survive. Only if these conditions exist can the eggs hatch. Infective larvae of N. americanus can survive at higher temperatures, whereas those of A. duodenale are better adapted to cooler climates. Generally, they live for only a few weeks at most under natural conditions, and die almost immediately on exposure to direct sunlight or desiccation.
Infection of the host is by the larvae, not the eggs. While A. duodenale can be ingested, the usual method of infection is through the skin; this is commonly caused by walking barefoot through areas contaminated with fecal matter. The larvae are able to penetrate the skin of the foot, and once inside the body, they migrate through the vascular system to the lungs, and from there up the trachea, and are swallowed. They then pass down the esophagus and enter the digestive system, finishing their journey in the intestine, where the larvae mature into adult worms.
Once in the host gut, Necator tends to cause a prolonged infection, generally 1 to 5 years (many worms die within a year or two of infection), though some adult worms have been recorded to live for 15 years or more. Ancylostoma adults are short-lived, surviving on average for only about 6 months. However, the infection can be prolonged because dormant larvae can be "recruited" sequentially from tissue "stores" (see Pathophysiology, below) over many years, to replace expired adult worms. This can give rise to seasonal fluctuations in infection prevalence and intensity (apart from normal seasonal variations in transmission).
They mate inside the host, females laying up to 30,000 eggs per day and some 18 to 54 million eggs during their lifetimes, which pass out in feces. Because 5 to 7 weeks are needed for adult worms to mature, mate, and produce eggs, in the early stages of very heavy infection, acute symptoms might occur without any eggs being detected in the patient's feces. This can make diagnosis very difficult.
N. americanus and A. duodenale eggs can be found in warm, moist soil, where they eventually hatch into first-stage larvae, or L1. L1, the feeding noninfective rhabditiform stage, feeds on soil microbes and eventually molts into the second-stage larvae, L2, also a rhabditiform stage. L2 feeds for about 7 days and then molts into the third-stage larvae, or L3. This is the filariform stage of the parasite, that is, the nonfeeding infective form of the larvae. The L3 larvae are extremely motile and seek higher ground to increase their chances of penetrating the skin of a human host. The L3 larvae can survive up to 2 weeks without finding a host. While N. americanus larvae only infect through penetration of skin, A. duodenale can infect both through penetration and orally. After the L3 larvae have successfully entered the host, they travel through the subcutaneous venules and lymphatic vessels of the human host. Eventually, the L3 larvae enter the lungs through the pulmonary capillaries and break out into the alveoli. They then travel up the trachea to be coughed up and swallowed by the host. After being swallowed, the L3 larvae are found in the small intestine, where they molt into the L4, or adult worm, stage. The entire process from skin penetration to adult development takes about 5–9 weeks. The female adult worms release eggs (N. americanus about 9,000–10,000 eggs/day and A. duodenale 25,000–30,000 eggs/day), which are passed in the feces of the human host. These eggs hatch in the environment within several days, and the cycle starts anew.
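The egg-output figures above admit a quick consistency check: at the quoted maximum of 30,000 eggs per day, the quoted lifetime totals of 18 to 54 million eggs imply roughly 600 to 1,800 days of egg laying, about 1.6 to 5 years. A minimal sketch of that arithmetic (the daily rate and lifetime totals are from the text; the implied laying periods are derived, not sourced):

```python
# Back-of-the-envelope check on the egg-output figures quoted above.
# The daily rate and lifetime totals come from the text; the implied
# laying periods are derived for illustration only.

EGGS_PER_DAY_MAX = 30_000        # upper quoted daily output per female
LIFETIME_MIN = 18_000_000        # quoted lifetime egg totals
LIFETIME_MAX = 54_000_000

# Laying period implied by the lifetime totals at the maximum rate:
days_min = LIFETIME_MIN // EGGS_PER_DAY_MAX   # 600 days (~1.6 years)
days_max = LIFETIME_MAX // EGGS_PER_DAY_MAX   # 1,800 days (~5 years)
```

The implied 1.6–5 year span is of the same order as the 1 to 5 year Necator infections described above, though the maximum daily rate quoted is for A. duodenale, so this is only a rough plausibility check rather than a species-specific calculation.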
Pathophysiology
Hookworm infection is generally considered to be asymptomatic, but as Norman Stoll described in 1962, it is an extremely dangerous infection because its damage is "silent and insidious." An individual may experience general symptoms soon after infection. Ground-itch, which is an allergic reaction at the site of parasitic penetration and entry, is common in patients infected with N. americanus. Additionally, cough and pneumonitis may result as the larvae begin to break into the alveoli and travel up the trachea. Then once the larvae reach the small intestine of the host and begin to mature, the infected individual will experience diarrhea and other gastrointestinal discomfort. However, the "silent and insidious" symptoms referred to by Stoll are related to chronic, heavy-intensity hookworm infections. Major morbidity associated with hookworm infection is caused by intestinal blood loss, iron deficiency anemia, and protein malnutrition. They result mainly from adult hookworms in the small intestine ingesting blood, rupturing erythrocytes, and degrading hemoglobin in the host. This long-term blood loss can manifest itself physically through facial and peripheral edema; eosinophilia and pica/geophagy caused by iron deficiency anemia are also experienced by some hookworm-infected patients. Recently, more attention has been given to other important outcomes of hookworm infection that play a large role in public health. It is now widely accepted that children who have chronic hookworm infection can experience growth retardation as well as intellectual and cognitive impairments. 
Additionally, recent research has focused on the potential for adverse maternal-fetal outcomes when the mother is infected with hookworm during pregnancy.
The disease was linked to nematode worms (Ankylostoma duodenalis) from one-third to half an inch long in the intestine chiefly through the labours of Theodor Bilharz and Griesinger in Egypt (1854).
The symptoms can be linked to inflammation in the gut stimulated by feeding hookworms, such as nausea, abdominal pain, and intermittent diarrhea, and to progressive anemia in prolonged disease: capricious appetite, pica/geophagy (dirt-eating), obstinate constipation followed by diarrhea, palpitations, thready pulse, coldness of the skin, pallor of the mucous membranes, fatigue and weakness, shortness of breath and, in cases running a fatal course, dysentery, hemorrhages, and edema. The worms suck blood and damage the mucosa. However, the blood loss in the stools is not visibly apparent.
Blood tests in early infection often show a rise in numbers of eosinophils, a type of white blood cell that is preferentially stimulated by worm infections in tissues (large numbers of eosinophils are also present in the local inflammatory response). Falling blood hemoglobin levels will be seen in cases of prolonged infection with anemia.
In contrast to most intestinal helminthiases, where the heaviest parasitic loads tend to occur in children, hookworm prevalence and intensity can be higher among adult males. The explanation is that hookworm infection tends to be occupational, so that coworkers and other close groups maintain a high prevalence of infection among themselves by contaminating their work environment. However, in most endemic areas, adult women are the most severely affected by anemia, mainly because they have much higher physiological needs for iron (menstruation, repeated pregnancy).
An interesting consequence of this in the case of Ancylostoma duodenale infection is translactational transmission of infection: the skin-invasive larvae of this species do not all immediately pass through the lungs and on into the gut, but spread around the body via the circulation, to become dormant inside muscle fibers. In a pregnant woman, after childbirth some or all of these larvae are stimulated to re-enter the circulation (presumably by sudden hormonal changes), then to pass into the mammary glands, so that the newborn baby can receive a large dose of infective larvae through its mother's milk. This accounts for otherwise inexplicable cases of very heavy, even fatal, hookworm infections in children at a month or so of age, in places such as China, India, and northern Australia.
An identical phenomenon is much more commonly seen with Ancylostoma caninum infections in dogs, where the newborn pups can even die of hemorrhaging from their intestines caused by massive numbers of feeding hookworms. This also reflects the close evolutionary link between the human and canine parasites, which probably have a common ancestor dating back to when humans and dogs first started living closely together.
Filariform larvae is the infective stage of the parasite: infection occurs when larvae in soil penetrate the skin, or when they are ingested through contaminated food and water following skin penetration.
Diagnosis
Diagnosis depends on finding characteristic worm eggs on microscopic examination of the stools, although this is not possible in early infection. The eggs are oval or elliptical, measuring 60 by 40 µm, colorless, not bile-stained, and with a thin, transparent hyaline shell membrane. When released by the worm in the intestine, the egg contains an unsegmented ovum. During its passage down the intestine, the ovum develops, and thus the eggs passed in feces have a segmented ovum, usually with 4 to 8 blastomeres.
As the eggs of both Ancylostoma and Necator (and most other hookworm species) are indistinguishable, to identify the genus they must be cultured in the lab to allow larvae to hatch out. If the fecal sample is left for a day or more under tropical conditions, the larvae will have hatched out, so eggs might no longer be evident. In such a case, it is essential to distinguish hookworms from Strongyloides larvae, as infection with the latter has more serious implications and requires different management. The larvae of the two hookworm species can also be distinguished microscopically, although this would not be done routinely, but usually for research purposes. Adult worms are rarely seen (except via endoscopy, surgery, or autopsy), but if found, would allow definitive identification of the species. Classification can be performed based on the length of the buccal cavity, the space between the oral opening and the esophagus: hookworm rhabditiform larvae have long buccal cavities, whereas Strongyloides rhabditiform larvae have short buccal cavities.
Recent research has focused on the development of DNA-based tools for diagnosis of infection, specific identification of hookworm, and analysis of genetic variability within hookworm populations. Because hookworm eggs are often indistinguishable from other parasitic eggs, PCR assays could serve as a molecular approach for accurate diagnosis of hookworm in the feces.
Prevention
The infective larvae develop and survive in an environment of damp dirt, particularly sandy and loamy soil. They cannot survive in clay or muck. The main lines of precaution are those dictated by good hygiene behaviors:
Do not defecate in the open, but rather in toilets.
Do not use untreated human excreta or raw sewage as fertilizer in agriculture.
Do not walk barefoot in known infected areas.
Deworm pet dogs and cats. Canine and feline hookworms rarely develop to adulthood in humans. Ancylostoma caninum, the common dog hookworm, occasionally develops into an adult to cause eosinophilic enteritis in people, but its invasive larvae can cause an itchy rash called cutaneous larva migrans.
Moxidectin is available in the United States as an (imidacloprid + moxidectin) topical solution for dogs and cats. It utilizes moxidectin for control and prevention of roundworms, hookworms, heartworms, and whipworms.
Children
Most of these public health concerns have focused on children who are infected with hookworm. This focus on children is largely due to the large body of evidence that has demonstrated strong associations between hookworm infection and impaired learning, increased absences from school, and decreased future economic productivity. In 2001, the 54th World Health Assembly passed a resolution urging member states to attain a minimum target of regular deworming of at least 75% of all at-risk school children by the year 2010. A 2008 World Health Organization publication reported on these efforts to treat at-risk school children. Some of the notable statistics were as follows: 1) only 9 out of 130 endemic countries were able to reach the 75% target goal; and 2) fewer than 77 million school-aged children (of the total 878 million at risk) were reached, which means that only about 8.8% of at-risk children were being treated for hookworm infection.
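The coverage percentage follows directly from the two counts quoted. Taking the 77 million and 878 million figures at face value (the report says "less than 77 million", so the true coverage is slightly lower still):

```python
# Coverage arithmetic from the two counts quoted in the WHO report.
# Both counts are taken at face value for illustration; "less than
# 77 million" means the true percentage is a little lower.

treated = 77_000_000       # school-aged children reached (upper bound)
at_risk = 878_000_000      # total school-aged children at risk

coverage = treated / at_risk * 100
# coverage is about 8.8%, far below the 75% target set in 2001
```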
School-based mass deworming
School-based mass deworming programs have been the most popular strategy to address the issue of hookworm infection in children. School-based programs are extremely cost-effective, as schools already have an available, extensive, and sustained infrastructure with a skilled workforce that has a close relationship with the community. With little training from a local health system, teachers can easily administer the drugs, which often cost less than US$0.50 per child per year.
Recently, many people have begun to question whether school-based programs are necessarily the most effective approach. An important concern with school-based programs is that they often do not reach children who do not attend school, thus ignoring a large number of at-risk children. A 2008 study by Massa et al. continued the debate regarding school-based programs. They examined the effects of community-directed treatments versus school-based treatments in the Tanga Region of Tanzania. A major conclusion was that the mean infection intensity of hookworm was significantly lower in the villages employing the community-directed treatment approach than the school-based approach. The community-directed treatment model used in this specific study allowed villagers to take control of their children's treatment by having villagers select their own community drug distributors to administer the anthelmintic drugs. Additionally, villagers organized and implemented their own methods for distributing the drugs to all children. The positive results associated with this new model highlight the need for large-scale community involvement in deworming campaigns.
Public health education
Many mass deworming programs also combine their efforts with public health education. These health education programs often stress important preventive techniques, such as washing hands before eating and staying away from water or areas contaminated by human feces. These programs may also stress that shoes must be worn; however, shoe wearing comes with its own health risks and may not be effective on its own. Shoe-wearing patterns in towns and villages across the globe are determined by cultural beliefs and the levels of education within that society. The wearing of shoes will prevent the entry of hookworm infections from the surrounding soils into tender skin regions, such as the areas between the toes.
Sanitation
Historical examples, such as the hookworm campaigns in Mississippi and Florida from 1943 to 1947 have shown that the primary cause of hookworm infection is poor sanitation, which can be solved by building and maintaining toilets. But while these may seem like simple tasks, they raise important public health challenges. Most infected populations are from poverty-stricken areas with very poor sanitation. Thus, it is most likely that at-risk children do not have access to clean water to wash their hands and live in environments with no proper sanitation infrastructure. Health education, therefore, must address preventive measures in ways that are both feasible and sustainable in the context of resource-limited settings.
Integrated approaches
Evaluation of numerous public health interventions has generally shown that improvements in each individual component ordinarily attributed to poverty (for example, sanitation, health education, and underlying nutritional status) often have minimal impact on transmission. For example, one study found that the introduction of latrines into a resource-limited community reduced the prevalence of hookworm infection by only four percent. However, another study in Salvador, Brazil found that improved drainage and sewerage had a significant impact on the prevalence of hookworm infection but no impact at all on its intensity. This suggests that environmental control alone has a real but incomplete effect on the transmission of hookworms. It is imperative, therefore, that more research be performed to understand the efficacy and sustainability of integrated programs that combine numerous preventive methods, including education, sanitation, and treatment.
Treatment
Anthelmintic drugs
The most common treatments for hookworm are benzimidazoles (BZAs), specifically albendazole and mebendazole. BZAs kill adult worms by binding to the nematode's β-tubulin and subsequently inhibiting microtubule polymerization within the parasite. In certain circumstances, levamisole and pyrantel pamoate may be used. A 2008 review found that the efficacy of single-dose treatments for hookworm infection was 72% for albendazole, 15% for mebendazole, and 31% for pyrantel pamoate. This substantiates prior claims that albendazole is much more effective than mebendazole for hookworm infection. Also of note is that the World Health Organization does recommend anthelmintic treatment in pregnant women after the first trimester. If the patient also has anemia, it is recommended that ferrous sulfate (200 mg) be administered three times daily alongside anthelmintic treatment; this should be continued until hemoglobin values return to normal, which can take up to three months.

Hookworm infection can be treated with local cryotherapy while the hookworm is still in the skin. Albendazole is effective both in the intestinal stage and while the parasite is still migrating under the skin.

In cases of anemia, iron supplementation can relieve the symptoms of iron-deficiency anemia. However, as red blood cell levels are restored, shortages of other essentials such as folic acid or vitamin B12 may develop, so these might also be supplemented.
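The single-dose cure rates quoted above can be captured in a small lookup table; the percentages come directly from the 2008 review, while the dictionary and function names are illustrative, not part of any published tool:

```python
# Single-dose cure rates for hookworm infection, as reported in the
# 2008 review cited above. Names here are illustrative only.
CURE_RATE = {
    "albendazole": 0.72,
    "mebendazole": 0.15,
    "pyrantel pamoate": 0.31,
}

def most_effective(rates):
    """Return the drug with the highest reported single-dose cure rate."""
    return max(rates, key=rates.get)

print(most_effective(CURE_RATE))  # -> albendazole
```

As the numbers suggest, albendazole's roughly fivefold advantage over mebendazole in single-dose efficacy is why it is usually preferred in mass-treatment settings.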
During the 1910s, common treatments for hookworm included thymol, 2-naphthol, chloroform, gasoline, and eucalyptus oil. By the 1940s, the treatment of choice used tetrachloroethylene, given as 3 to 4 cc in the fasting state, followed by 30 to 45 g of sodium sulfate. Tetrachloroethylene was reported to have a cure rate of 80 percent for Necator infections, but 25 percent in Ancylostoma infections, and often produced mild intoxication in the patient.
Reinfection and drug resistance
Other important issues related to the treatment of hookworm are reinfection and drug resistance. It has been shown that reinfection after treatment can be extremely high; some studies show that 80% of pretreatment hookworm infection rates can be seen in treated communities within 30–36 months. While reinfection may occur, regular treatment is still recommended, as it minimizes the occurrence of chronic outcomes. There are also increasing concerns about drug resistance, which has already appeared in front-line anthelmintics used for livestock nematodes. Human nematodes are generally less likely to develop resistance owing to longer reproductive cycles, less frequent treatment, and more targeted treatment. Nonetheless, the global community must be careful to maintain the effectiveness of current anthelmintics, as no new anthelmintic drugs are in late-stage development.
Epidemiology
It is estimated that between 576 and 740 million individuals are infected with hookworm. Of these, about 80 million are severely affected. The major cause of hookworm infection is N. americanus, which is found in the Americas, sub-Saharan Africa, and Asia. A. duodenale is found in more scattered focal environments, namely Europe and the Mediterranean. Most infected individuals are concentrated in sub-Saharan Africa and East Asia/the Pacific Islands, with an estimated 198 million and 149 million infected individuals, respectively. Other affected regions include South Asia (59 million), Latin America and the Caribbean (50 million), and the Middle East/North Africa (10 million). A majority of these infected individuals live in poverty-stricken areas with poor sanitation; hookworm infection is most concentrated among the world's poorest, who live on less than $2 a day.

While hookworm infection may not directly lead to mortality, its effects on morbidity demand immediate attention. When considering disability-adjusted life years (DALYs), neglected tropical diseases, including hookworm infection, rank among diarrheal diseases, ischemic heart disease, malaria, and tuberculosis as one of the most important health problems of the developing world.
It has been estimated that as many as 22.1 million DALYs have been lost due to hookworm infection. Recently, there has been increasing interest in addressing the public health concerns associated with hookworm infection. For example, the Bill & Melinda Gates Foundation recently donated US$34 million to fight neglected tropical diseases, including hookworm infection, and former US President Clinton announced a major commitment at the 2008 Clinton Global Initiative (CGI) Annual Meeting to deworm 10 million children.

Many of the numbers regarding the prevalence of hookworm infection are estimates, as there is no international surveillance mechanism currently in place to determine prevalence and global distribution. Some prevalence rates have been measured through survey data in endemic regions around the world. The following are some of the most recent findings on prevalence rates in regions endemic with hookworm.
Darjeeling, Hooghly District, West Bengal, India (Pal et al. 2007)
43% infection rate of predominantly N. americanus, although with some A. duodenale infection
Both hookworm infection load and degree of anemia in the mild range

Xiulongkan Village, Hainan Province, China (Gandhi et al. 2001)
60% infection rate of predominantly N. americanus
Important trends noted were that prevalence increased with age (plateauing at about 41 years) and that women had higher prevalence rates than men

Hòa Bình, Northwest Vietnam (Verle et al. 2003)
52% of a total of 526 tested households infected
Could not identify species, but previous studies in North Vietnam reported N. americanus in more than 95% of hookworm larvae

Minas Gerais, Brazil (Fleming et al. 2006)
63% infection rate of predominantly N. americanus

KwaZulu-Natal, South Africa (Mabaso et al. 2004)
Inland areas had a prevalence rate of 9% of N. americanus
Coastal plain areas had a prevalence rate of 63% of N. americanus

Lowndes County, Alabama, United States
35% infection rate of predominantly N. americanus

There have also been technological developments that may facilitate more accurate mapping of hookworm prevalence. Some researchers have begun to use geographical information systems (GIS) and remote sensing (RS) to examine helminth ecology and epidemiology. Brooker et al. utilized this technology to create helminth distribution maps of sub-Saharan Africa. By relating satellite-derived environmental data with prevalence data from school-based surveys, they were able to create detailed prevalence maps. The study focused on a wide range of helminths, but interesting conclusions about hookworm specifically were found: compared to other helminths, hookworm is able to survive in much hotter conditions and was highly prevalent throughout the upper end of the thermal range.

Improved molecular diagnostic tools are another technological advancement that could help improve existing prevalence statistics. Recent research has focused on the development of a DNA-based tool that can be used for diagnosis of infection, specific identification of hookworm, and analysis of genetic variability in hookworm populations. Again, this can serve as a major tool for different public health measures against hookworm infection. Most research regarding diagnostic tools is now focused on the creation of a rapid and cost-effective assay for the specific diagnosis of hookworm infection. Many are hopeful that its development can be achieved within the next five years.
History
Discovery
The symptoms now attributed to hookworm appear in papyrus papers of ancient Egypt (c. 1500 BC), described as a derangement characterized by anemia. Avicenna, a Persian physician of the eleventh century, discovered the worm in several of his patients and related it to their disease. In later times, the condition was noticeably prevalent in the mining industry in England, France, Germany, Belgium, North Queensland, and elsewhere.

Italian physician Angelo Dubini was the modern-day discoverer of the worm in 1838, after an autopsy of a peasant woman. Dubini published details in 1843 and identified the species as A. duodenale. Working in the Egyptian medical system in 1852, German physician Theodor Bilharz, drawing upon the work of his colleague Wilhelm Griesinger, found these worms during autopsies and went a step further in linking them to local endemic occurrences of chlorosis, which would probably be called iron-deficiency anemia today.
A breakthrough came 25 years later following a diarrhea and anemia epidemic that took place among Italian workmen employed on the Gotthard Rail Tunnel. In an 1880 paper, physicians Camillo Bozzolo, Edoardo Perroncito, and Luigi Pagliani correctly hypothesized that hookworm was linked to the fact that workers had to defecate inside the 15 km tunnel, and that many wore worn-out shoes. The work environment often contained standing water, sometimes knee-deep, and the larvae were capable of surviving several weeks in the water, allowing them to infect many of the workers. In 1897, it was established that the skin was the principal avenue of infection and the biological life cycle of the hookworm was clarified.
Eradication programmes
In 1899, American zoologist Charles Wardell Stiles identified progressive pernicious anemia seen in the southern United States as being caused by the hookworm A. duodenale. Testing in the 1900s revealed very heavy infestations in school-age children. In Puerto Rico, Dr. Bailey K. Ashford, a US Army physician, organized and conducted a parasite treatment campaign, which cured approximately 300,000 people (one-third of the Puerto Rican population) and reduced the death rate from this anemia by 90 percent during the years 1903–04.
On October 26, 1909, the Rockefeller Sanitary Commission for the Eradication of Hookworm Disease was organized as a result of a gift of US$1 million from John D. Rockefeller, Sr. The five-year program was a remarkable success and a great contribution to the United States public health, instilling public education, medication, field work and modern government health departments in eleven southern states.
The hookworm exhibit was a prominent part of the 1910 Mississippi state fair.
The commission found that an average of 40% of school-aged children were infected with hookworm. Areas with higher levels of hookworm infection prior to the eradication program experienced greater increases in school enrollment, attendance, and literacy after the intervention. Econometric studies have shown that this effect cannot be explained by a variety of alternative factors, including differential trends across areas, changing crop prices, shifts in certain educational and health policies, and the effect of malaria eradication. No significant contemporaneous results were found for adults, who should have benefited less from the intervention owing to their substantially lower (prior) infection rates. The program nearly eradicated hookworm and would flourish afterward with new funding as the Rockefeller Foundation International Health Division.

The RF's hookworm campaign in Mexico showed how science and politics play a role in developing health policies. It brought together government officials, health officials, public health workers, Rockefeller officials, and the community. The campaign was launched to eradicate hookworm in Mexico. Although it did not focus on long-term treatment, it did set the terms of the relationship between Mexico and the Rockefeller Foundation. The scientific knowledge behind this campaign helped shape public health policies, improved public health, and built a strong relationship between the US and Mexico.

In the 1920s, hookworm eradication reached the Caribbean and Latin America, where great mortality had been reported among people in the West Indies towards the end of the 18th century, as well as through descriptions sent from Brazil and various other tropical and sub-tropical regions.
Treatments
Treatment in the early 20th century relied on the use of Epsom salt to reduce protective mucus, followed by thymol to kill the worms. By the 1940s, tetrachloroethylene was the leading method. It was not until later in the mid-20th century when new organic drug compounds were developed.
Research
Anemia in pregnancy
It is estimated that a third of all pregnant women in developing countries are infected with hookworm, that 56% of all pregnant women in developing countries experience anemia, and that 20% of all maternal deaths are either directly or indirectly related to anemia. Numbers like these have led to increased interest in hookworm-related anemia during pregnancy. With the understanding that chronic hookworm infection can often lead to anemia, many researchers are now asking whether treating hookworm could reduce severe anemia rates and thereby improve maternal and child health. Most evidence suggests that the contribution of hookworm to maternal anemia warrants periodic anthelmintic treatment for all women of child-bearing age living in endemic areas. The World Health Organization even recommends that infected pregnant women be treated after their first trimester. Despite these recommendations, only Madagascar, Nepal, and Sri Lanka have added deworming to their antenatal care programs.

This lack of deworming of pregnant women is explained by the fact that most individuals still fear that anthelmintic treatment will result in adverse birth outcomes. But a 2006 study by Gyorkos et al. found that when comparing a group of pregnant women treated with mebendazole to a placebo control group, the two groups had rather similar rates of adverse birth outcomes: 5.6% in the treated group versus 6.25% in the control group. Furthermore, Larocque et al. showed that treatment for hookworm infection actually led to positive health results in the infant: treatment with mebendazole plus iron supplements during antenatal care significantly reduced the proportion of very low birth weight infants when compared to a placebo control group. Studies so far have validated recommendations to treat infected pregnant women for hookworm infection during pregnancy.
A review found that a single dose of antihelminthics (anti-worm drugs) given in the second trimester of pregnancy "may reduce maternal anaemia and worm prevalence when used in settings with high prevalence of maternal helminthiasis".

The intensity of hookworm infection, as well as the species of hookworm involved, have yet to be studied as they relate to hookworm-related anemia during pregnancy. Additionally, more research must be done in different regions of the world to see whether the trends noted in completed studies persist.
Malaria co-infection
Co-infection with hookworm and Plasmodium falciparum is common in Africa. Although exact numbers are unknown, preliminary analyses estimate that as many as a quarter of African schoolchildren (17.8–32.1 million children aged 5–14 years) may be coincidentally at risk of both P. falciparum and hookworm infection. While original hypotheses stated that co-infection with multiple parasites would impair the host's immune response to a single parasite and increase susceptibility to clinical disease, studies have yielded contrasting results. For example, one study in Senegal showed that the risk of clinical malaria infection was increased in helminth-infected children in comparison to helminth-free children, while other studies have failed to reproduce such results, and even among laboratory mouse experiments the effect of helminths on malaria is variable.

Some hypotheses and studies suggest that helminth infections may protect against cerebral malaria due to possible modulation of pro-inflammatory and anti-inflammatory cytokine responses. Furthermore, the mechanisms underlying this supposed increased susceptibility to disease are unknown. For example, helminth infections cause a potent and highly polarized immune response characterized by increased T-helper cell type 2 (Th2) cytokine and immunoglobulin E (IgE) production, but the effect of such responses on the human immune response is unknown. Additionally, both malaria and helminth infection can cause anemia, but the effect of co-infection and possible enhancement of anemia is poorly understood.
Hygiene hypothesis and hookworm as therapy
The hygiene hypothesis states that infants and children who lack exposure to infectious agents are more susceptible to allergic diseases via modulation of immune system development. The theory was first proposed by David P. Strachan who noted that hay fever and eczema were less common in children who belonged to large families. Since then, studies have noted the effect of gastrointestinal worms on the development of allergies in the developing world. For example, a study in Gambia found that eradication of worms in some villages led to increased skin reactions to allergies among children.
Vaccines
While annual or semi-annual mass anthelmintic administration is a critical aspect of any public health intervention, many have begun to realize how unsustainable it is owing to factors such as poverty, high rates of reinfection, and diminished efficacy of drugs with repeated use. Current research, therefore, has focused on the development of a vaccine that could be integrated into existing control programs. The goal of vaccine development is not necessarily to create a vaccine with sterilizing immunity or complete protection against infection. A vaccine that reduces the likelihood of vaccinated individuals developing severe infections, and thus reduces blood and nutrient losses, could still have a significant impact on the high burden of disease throughout the world.
Current research focuses on targeting two stages in the development of the worm: the larval stage and the adult stage. Research on larval antigens has focused on proteins that are members of the pathogenesis-related protein superfamily, the Ancylostoma Secreted Proteins (ASPs). Although they were first described in Ancylostoma, these proteins have also been successfully isolated from the secreted product of N. americanus. N. americanus ASP-2 (Na-ASP-2) is currently the leading larval-stage hookworm vaccine candidate. A randomized, double-blind, placebo-controlled study has already been performed; 36 healthy adults without a history of hookworm infection were given three intramuscular injections of three different concentrations of Na-ASP-2 and observed for six months after the final vaccination. The vaccine induced significant anti-Na-ASP-2 IgG and cellular immune responses. In addition, it was safe and produced no debilitating side effects. The vaccine is now in a phase one trial; healthy adult volunteers with documented evidence of previous infection in Brazil are being given the same dose concentration on the same schedule used in the initial study. If this study is successful, the next step would be a phase two trial to assess the rate and intensity of hookworm infection among vaccinated persons. Because the Na-ASP-2 vaccine only targets the larval stage, it is critical that all subjects enrolled in the study be treated with anthelmintic drugs to eliminate adult worms prior to vaccination.
Adult hookworm antigens have also been identified as potential vaccine candidates. When adult worms attach to the intestinal mucosa of the human host, erythrocytes are ruptured in the worm's digestive tract, releasing free hemoglobin that is subsequently degraded by a proteolytic cascade. Several of the proteins responsible for this proteolytic cascade are also essential for the worm's nutrition and survival. Therefore, a vaccine that could induce antibodies against these antigens could interfere with the hookworm's digestive pathway and impair the worm's survival. Three such proteins have been identified: the aspartic protease-hemoglobinase APR-1, the cysteine protease-hemoglobinase CP-2, and a glutathione S-transferase. Vaccination with APR-1 and CP-2 led to reduced host blood loss and fecal egg counts in dogs; with APR-1, vaccination even led to reduced worm burden. Research is currently directed at developing at least one of these antigens as a recombinant protein for testing in clinical trials.
Terminology
The term "hookworm" is sometimes used to refer to hookworm infection. A hookworm is a type of parasitic worm (helminth).
See also
List of parasites (human)
References
External links
CDC Department of Parasitic Diseases images of the hookworm life cycle
Centers for Disease Control and Prevention
Dog hookworm (Ancylostoma caninum) at MetaPathogen: facts, life cycle, references
Human hookworms (Ancylostoma duodenale and Necator americanus) at MetaPathogen: facts, life cycle, references |
Molar pregnancy | A molar pregnancy, also known as a hydatidiform mole, is an abnormal form of pregnancy in which a non-viable fertilized egg implants in the uterus. A molar pregnancy is a type of gestational trophoblastic disease. It grows into a mass in the uterus that has swollen chorionic villi, which grow in clusters resembling grapes. A molar pregnancy can develop when a fertilized egg does not contain an original maternal nucleus. The products of conception may or may not contain fetal tissue. Molar pregnancies are categorized as partial moles or complete moles, with the word "mole" being used to denote simply a clump of growing tissue, or a growth.
A complete mole is caused by a single sperm (90% of the time) or two sperm (10% of the time) combining with an egg that has lost its DNA. In the first case, the sperm then reduplicates, forming a "complete" 46-chromosome set. The genotype is typically 46,XX (diploid) due to the subsequent mitosis of the fertilizing sperm, but can also be 46,XY (diploid); 46,YY (diploid) is not observed. In contrast, a partial mole occurs when a normal egg is fertilized by one or two sperm which then reduplicate, yielding the genotypes 69,XXY (triploid) or 92,XXXY (tetraploid).

Complete moles have a 2–4% risk of developing into choriocarcinoma in Western countries and a 10–15% risk in Eastern countries, as well as a 15% risk of becoming an invasive mole. Incomplete moles can become invasive (<5% risk) but are not associated with choriocarcinoma. Complete hydatidiform moles account for 50% of all cases of choriocarcinoma.
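The chromosome counts above follow from simple arithmetic on haploid sets (23 chromosomes per set); the sketch below, with hypothetical function and parameter names, illustrates that bookkeeping:

```python
def mole_karyotype(maternal_sets, paternal_sets):
    """Chromosome count and ploidy from haploid-set counts (23 per set).

    Purely illustrative arithmetic for the molar genotypes described
    above; names are hypothetical, not clinical terminology.
    """
    sets = maternal_sets + paternal_sets
    ploidy = {2: "diploid", 3: "triploid", 4: "tetraploid"}[sets]
    return 23 * sets, ploidy

# Complete mole: no maternal set, duplicated paternal set
print(mole_karyotype(0, 2))  # -> (46, 'diploid')
# Partial mole: one maternal set plus two paternal sets
print(mole_karyotype(1, 2))  # -> (69, 'triploid')
```

The same arithmetic gives 92 chromosomes (tetraploid) for the rarer case of two maternal-equivalent plus two paternal sets.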
Molar pregnancies are a relatively rare complication of pregnancy, making up 1 in 1,000 pregnancies in the US, with much higher rates in Asia (e.g. up to 1 in 100 pregnancies in Indonesia).
Signs and symptoms
Molar pregnancies usually present with painless vaginal bleeding in the fourth to fifth months of pregnancy. The uterus may be larger than expected, or the ovaries may be enlarged. There may also be more vomiting than would be expected (hyperemesis). Sometimes there is an increase in blood pressure along with protein in the urine. Blood tests will show very high levels of human chorionic gonadotropin (hCG).
Cause
The cause of this condition is not completely understood. Potential risk factors may include defects in the egg, abnormalities within the uterus, or nutritional deficiencies. Women under 20 or over 40 years of age have a higher risk. Other risk factors include diets low in protein, folic acid, and carotene. The diploid set of sperm-only DNA means that all chromosomes have sperm-patterned methylation suppression of genes. This leads to overgrowth of the syncytiotrophoblast whereas dual egg-patterned methylation leads to a devotion of resources to the embryo, with an underdeveloped syncytiotrophoblast. This is considered to be the result of evolutionary competition, with male genes driving for high investment into the fetus versus female genes driving for resource restriction to maximise the number of children.
Pathophysiology
A hydatidiform mole is a pregnancy/conceptus in which the placenta contains grapelike vesicles (small sacs) that are usually visible to the naked eye. The vesicles arise by distention of the chorionic villi by fluid. When inspected under the microscope, hyperplasia of the trophoblastic tissue is noted. If left untreated, a hydatidiform mole will almost always end as a spontaneous abortion (miscarriage).
Based on morphology, hydatidiform moles can be divided into two types: in complete moles, all the chorionic villi are vesicular, and no sign of embryonic or fetal development is present. In partial moles some villi are vesicular, whereas others appear more normal, and embryonic/fetal development may be seen but the fetus is always malformed and is never viable.
In rare cases a hydatidiform mole co-exists in the uterus with a normal, viable fetus. These cases are due to twinning. The uterus contains the products of two conceptions: one with an abnormal placenta and no viable fetus (the mole), and one with a normal placenta and a viable fetus. Under careful surveillance it is often possible for the woman to give birth to the normal child and to be cured of the mole.
Parental origin
In most complete moles, all nuclear genes are inherited from the father only (androgenesis). In approximately 80% of these androgenetic moles, the most probable mechanism is that an empty egg is fertilized by a single sperm, followed by a duplication of all chromosomes/genes (a process called endoreduplication). In approximately 20% of complete moles, the most probable mechanism is that an empty egg is fertilized by two sperm. In both cases, the moles are diploid (i.e. there are two copies of every chromosome). In all these cases, the mitochondrial genes are inherited from the mother, as usual.
Most partial moles are triploid (three chromosome sets). The nucleus contains one maternal set of genes and two paternal sets. The mechanism is usually the reduplication of the paternal haploid set from a single sperm, but may also be the consequence of dispermic (two-sperm) fertilization of the egg.

In rare cases, hydatidiform moles are tetraploid (four chromosome sets) or have other chromosome abnormalities.
A small percentage of hydatidiform moles have biparental diploid genomes, as in normal living persons; they have two sets of chromosomes, one inherited from each biological parent. Some of these moles occur in women who carry mutations in the gene NLRP7, predisposing them towards molar pregnancy. These rare variants of hydatidiform mole may be complete or partial.
Diagnosis
The diagnosis is strongly suggested by ultrasound (sonogram), but definitive diagnosis requires histopathological examination. On ultrasound, the mole resembles a bunch of grapes ("cluster of grapes", "honeycombed uterus", or "snow-storm" appearance). There is increased trophoblast proliferation and enlargement of the chorionic villi, and angiogenesis in the trophoblasts is impaired.

Sometimes symptoms of hyperthyroidism are seen, due to the extremely high levels of hCG, which can mimic the effects of thyroid-stimulating hormone.
Treatment
Hydatidiform moles should be treated by evacuating the uterus by uterine suction or by surgical curettage as soon as possible after diagnosis, in order to avoid the risks of choriocarcinoma. Patients are followed up until their serum human chorionic gonadotrophin (hCG) level has fallen to an undetectable level. Invasive or metastatic moles (cancer) may require chemotherapy and often respond well to methotrexate. As they contain paternal antigens, the response to treatment is nearly 100%. Patients are advised not to conceive for half a year after hCG levels have normalized. The chances of having another molar pregnancy are approximately 1%.
Management is more complicated when the mole occurs together with one or more normal fetuses.
In some women, the growth can develop into gestational trophoblastic neoplasia. For women who have complete hydatidiform mole and are at high risk of this progression, evidence suggests giving prophylactic chemotherapy (known as P-chem) may reduce the risk of this happening. However P-chem may also increase toxic side effects, so more research is needed to explore its effects.
Anesthesia
Uterine curettage is generally done under anesthesia, preferably spinal anesthesia in hemodynamically stable patients. The advantages of spinal anesthesia over general anesthesia include ease of technique, favorable effects on the pulmonary system, safety in patients with hyperthyroidism, and non-tocolytic pharmacological properties. Additionally, by maintaining the patient's consciousness, one can diagnose complications such as uterine perforation, cardiopulmonary distress, and thyroid storm at an earlier stage than when the patient is sedated or under general anesthesia.
Prognosis
More than 80% of hydatidiform moles are benign. The outcome after treatment is usually excellent. Close follow-up is essential to ensure that treatment has been successful. Highly effective means of contraception are recommended to avoid pregnancy for at least 6 to 12 months. Women who have had a prior partial or complete mole have a slightly increased risk of a second hydatidiform mole in a subsequent pregnancy, meaning a future pregnancy will require an earlier ultrasound scan.

In 10 to 15% of cases, hydatidiform moles may develop into invasive moles. This condition is named persistent trophoblastic disease (PTD). The moles may intrude so far into the uterine wall that hemorrhage or other complications develop. It is for this reason that a post-operative full abdominal and chest X-ray will often be requested.
In 2 to 3% of cases, hydatidiform moles may develop into choriocarcinoma, which is a malignant, rapidly growing, and metastatic (spreading) form of cancer. Despite these factors which normally indicate a poor prognosis, the rate of cure after treatment with chemotherapy is high.
Over 90% of women with malignant, non-spreading cancer are able to survive and retain their ability to conceive and bear children. In those with metastatic (spreading) cancer, remission remains at 75 to 85%, although their childbearing ability is usually lost.
Epidemiology
Hydatidiform moles are a rare complication of pregnancy, occurring once in every 1,000 pregnancies in the US, with much higher rates in Asia (e.g. up to one in 100 pregnancies in Indonesia).
Etymology
The term is derived from hydatis (Greek for "a drop of water"), referring to the watery contents of the cysts, and mole (from Latin mola, meaning millstone or false conception). The name also reflects the similar appearance of the cysts to the hydatid cysts seen in echinococcosis.
References
External links
Humpath #3186 (Pathology images)
Clinically reviewed molar pregnancy and choriocarcinoma information for patients from Cancer Research UK
MyMolarPregnancy.com: Resource for those diagnosed with molar pregnancy. Links, personal stories, and support groups. |
Hyperaldosteronism | Hyperaldosteronism is a medical condition wherein too much aldosterone is produced by the adrenal glands, which can lead to lowered levels of potassium in the blood (hypokalemia) and increased hydrogen ion excretion (alkalosis).
This cause of mineralocorticoid excess is primary hyperaldosteronism, reflecting excess production of aldosterone by the adrenal zona glomerulosa. Bilateral micronodular hyperplasia is more common than unilateral adrenal adenoma.
Signs and symptoms
It can be asymptomatic, but these symptoms may be present:
Fatigue
Headache
High blood pressure
Hypokalemia
Hypernatremia
Hypomagnesemia
Intermittent or temporary paralysis
Muscle spasms
Muscle weakness
Numbness
Polyuria
Polydipsia
Tingling
Metabolic alkalosis
Nocturia
Blurry vision
Dizziness/vertigo
Cause
The causes of primary hyperaldosteronism are adrenal hyperplasia and adrenal adenoma (Conn's syndrome).
These cause hyperplasia of aldosterone-producing cells of the adrenal cortex resulting in primary hyperaldosteronism.
The causes of secondary hyperaldosteronism are accessory renal veins, fibromuscular dysplasia, reninoma, renal tubular acidosis, nutcracker syndrome, ectopic tumors, massive ascites, left ventricular failure, and cor pulmonale.
These act either by decreasing circulating fluid volume or by decreasing cardiac output, with a resulting increase in renin release leading to secondary hyperaldosteronism. Secondary hyperaldosteronism can also be caused by proximal renal tubular acidosis.
Secondary hyperaldosteronism can also be a symptom of the genetic conditions Bartter syndrome and Gitelman syndrome.
Diagnosis
On blood testing, the aldosterone-to-renin ratio is abnormally increased in primary hyperaldosteronism, and decreased or normal, but with high renin, in secondary hyperaldosteronism.
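As a non-clinical illustration of the arithmetic behind this screening ratio, the sketch below computes the aldosterone-to-renin ratio (ARR) and applies the pattern described above. The unit conventions, the ARR cutoff, and the "high renin" threshold are all assumed placeholder values chosen for the example (they differ between laboratories), and the function names are hypothetical.

```python
def aldosterone_renin_ratio(aldosterone_ng_dl, renin_ng_ml_h):
    """Aldosterone-to-renin ratio (ARR).

    Assumes serum aldosterone in ng/dL and plasma renin activity in
    ng/mL/h; units and reference values vary between laboratories.
    """
    if renin_ng_ml_h <= 0:
        raise ValueError("renin activity must be positive")
    return aldosterone_ng_dl / renin_ng_ml_h


def screen(aldosterone_ng_dl, renin_ng_ml_h,
           arr_cutoff=30.0, high_renin=1.0):
    """Rough screening logic mirroring the text: a high ARR suggests
    primary hyperaldosteronism; a normal or low ARR together with
    high renin suggests a secondary cause. Both cutoffs here are
    illustrative placeholders, not clinical thresholds.
    """
    arr = aldosterone_renin_ratio(aldosterone_ng_dl, renin_ng_ml_h)
    if arr >= arr_cutoff:
        return "pattern consistent with primary hyperaldosteronism"
    if renin_ng_ml_h > high_renin:
        return "pattern consistent with secondary hyperaldosteronism"
    return "not suggestive of hyperaldosteronism"
```

Note how the same low renin value drives the ratio up in the primary pattern, while the secondary pattern is identified by the elevated renin itself rather than the ratio.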
Types
In endocrinology, the terms primary and secondary are used to describe the abnormality (e.g., elevated aldosterone) in relation to the defect, i.e., the tumor's location. The terms also distinguish causes that are genetic (primary) from those due to another condition or influence (secondary).
Primary
Primary aldosteronism (hyporeninemic hyperaldosteronism) was previously thought to be most commonly caused by an adrenal adenoma, termed Conn's syndrome. However, recent studies have shown that bilateral idiopathic adrenal hyperplasia is the cause in up to 70% of cases. Differentiating between the two is important, as this determines treatment. Also, see congenital adrenal hyperplasia.
Adrenal carcinoma is an extremely rare cause of primary hyperaldosteronism. Two familial forms have been identified: type I (dexamethasone suppressible), and type II, which has been linked to the 7p22 gene.

Features
Hypertension
Hypokalemia (e.g., may cause muscle weakness)
Alkalosis

Investigations
High serum aldosterone
Low serum renin
High-resolution CT abdomen

Management
Adrenal adenoma: surgery
Bilateral adrenocortical hyperplasia: aldosterone antagonist, e.g., spironolactone
Secondary
Secondary hyperaldosteronism (also hyperreninism, or hyperreninemic hyperaldosteronism) is due to overactivity of the renin–angiotensin–aldosterone system (RAAS).

Secondary refers to an abnormality that indirectly results in pathology through a predictable physiologic pathway; i.e., a renin-producing tumor leads to increased aldosterone, as the body's aldosterone production is normally regulated by renin levels. One cause is a juxtaglomerular cell tumor. Another is renal artery stenosis, in which the reduced blood supply across the juxtaglomerular apparatus stimulates the production of renin. Likewise, fibromuscular dysplasia may cause stenosis of the renal artery, and therefore secondary hyperaldosteronism. Other causes can come from the tubules: low reabsorption of sodium (as seen in Bartter and Gitelman syndromes) leads to hypovolemia/hypotension, which activates the RAAS.

Secondary hyperaldosteronism can also be caused by excessive ingestion of licorice or other members of the Glycyrrhiza genus of plants that contain the triterpenoid saponin glycoside known as glycyrrhizin. Licorice and closely related plants are perennial shrubs whose roots are used in medicine as well as in candies and other desserts because of their sweet taste. Through inhibition of 11-beta-hydroxysteroid dehydrogenase type 2 (11-beta-HSD2), glycyrrhizin allows cortisol to activate mineralocorticoid receptors in the kidney. This severely potentiates mineralocorticoid receptor-mediated renal sodium reabsorption, because circulating concentrations of cortisol are much higher than those of aldosterone. This, in turn, expands the extracellular volume, increases total peripheral resistance, and raises arterial blood pressure. The condition is termed pseudohyperaldosteronism.

Secondary hyperaldosteronism can also be caused by a genetic mutation in the kidneys which causes sodium and potassium wasting.
These conditions include Bartter syndrome and Gitelman syndrome.
Treatment
Treatment includes removing the causative agent (such as licorice); a high-potassium, low-sodium diet (for primary) or a high-sodium diet (for secondary); spironolactone and eplerenone, potassium-sparing diuretics that act as aldosterone antagonists; and surgery, depending on the cause. Secondary hyperaldosteronism may also be treated with COX-2 inhibitors, which cause water, sodium, and potassium retention as well as raising blood pressure. Bartter and Gitelman syndromes tend to cause low blood pressure in a significant proportion of patients, and treatment with blood pressure medications tends to lower the blood pressure even further.
Other animals
Cats can be affected by hyperaldosteronism. The most common signs in cats are muscle weakness and loss of eyesight, although only one of these signs may be present. Muscle weakness is due to low potassium concentrations in the blood, and signs of muscle weakness, such as being unable to jump, may be intermittent. High blood pressure causes either detachment of the retina, or blood inside the eye, which leads to loss of vision. Hyperaldosteronism caused by a tumor is treated by surgical removal of the affected adrenal gland.
See also
Hypoaldosteronism
Glucocorticoid remediable aldosteronism
References
External links
Primary Hyperaldosteronism Nursing Management |
Hyperkeratosis | Hyperkeratosis is thickening of the stratum corneum (the outermost layer of the epidermis, or skin), often associated with the presence of an abnormal quantity of keratin, and also usually accompanied by an increase in the granular layer. As the corneum layer normally varies greatly in thickness in different sites, some experience is needed to assess minor degrees of hyperkeratosis.
It can be caused by vitamin A deficiency or chronic exposure to arsenic.
Hyperkeratosis can also be caused by B-Raf inhibitor drugs such as vemurafenib and dabrafenib.

It can be treated with urea-containing creams, which dissolve the intercellular matrix of the cells of the stratum corneum, promoting desquamation of scaly skin and eventually resulting in softening of hyperkeratotic areas.
Types
Follicular
Follicular hyperkeratosis, also known as keratosis pilaris (KP), is a skin condition characterized by excessive development of keratin in hair follicles, resulting in rough, cone-shaped, elevated papules. The openings are often closed with a white plug of encrusted sebum. When called phrynoderma, the condition is associated with nutritional deficiency or malnourishment.
This condition has been shown in several small-scale studies to respond well to supplementation with vitamins and fats rich in essential fatty acids. Deficiencies of vitamin E, vitamin A, and B-complex vitamins have been implicated in causing the condition.
By other specific site
Plantar hyperkeratosis is hyperkeratosis of the sole of the foot. Surgical removal of the dead skin is recommended to provide symptomatic relief.
Hyperkeratosis of the nipple and areola is an uncommon benign, asymptomatic, acquired condition of unknown pathogenesis.
Hereditary
Epidermolytic hyperkeratosis (also known as "bullous congenital ichthyosiform erythroderma", "bullous ichthyosiform erythroderma", or "bullous congenital ichthyosiform erythroderma of Brocq") is a rare skin disease in the ichthyosis family affecting around 1 in 250,000 people. It involves the clumping of keratin filaments.
Multiple minute digitate hyperkeratosis, a rare cutaneous condition, with about half of cases being familial
Focal acral hyperkeratosis (also known as "acrokeratoelastoidosis lichenoides") is a late-onset keratoderma, inherited as an autosomal dominant condition, characterized by oval or polygonal crateriform papules developing along the border of the hands, feet, and wrists.
Keratosis pilaris appears similar to gooseflesh, is usually asymptomatic and may be treated by moisturizing the skin.
Other
Hyperkeratosis lenticularis perstans (also known as "Flegel's disease") is a cutaneous condition characterized by rough, yellow-brown keratotic, flat-topped papules.
In mucous membranes
The term hyperkeratosis is often used in connection with lesions of the mucous membranes, such as leukoplakia. Because of the differences between mucous membranes and the skin (e.g. keratinizing mucosa does not have a stratum lucidum, and non-keratinizing mucosa normally lacks a stratum corneum or stratum granulosum as well), specialized texts sometimes give slightly different definitions of hyperkeratosis in the context of mucosae. Examples are "an excessive formation of keratin (e.g., as seen in leukoplakia)" and "an increase in the thickness of the keratin layer of the epithelium, or the presence of such a layer in a site where none would normally be expected."
Etymology and pronunciation
The word hyperkeratosis is based on the Ancient Greek morphemes hyper- + kerato- + -osis, meaning the condition of too much keratin.
Hyperkeratosis in dogs
Nasodigital hyperkeratosis in dogs may be idiopathic, secondary to an underlying disease, or due to congenital abnormalities in the normal anatomy of the nose and paw pads.
In the case of congenital anatomical abnormalities, contact between the affected area and rubbing surfaces is impaired. The same applies to the paw pads: in animals with an anatomical abnormality, part of the pad is not in contact with rubbing surfaces, and excess keratin builds up there. The idiopathic form of nasodigital hyperkeratosis in dogs develops from unknown causes and is more common in older animals (senile form). Of all dog breeds, Labrador Retrievers, Golden Retrievers, Cocker Spaniels, Irish Terriers, and Dogues de Bordeaux are the most prone to hyperkeratosis.
Therapy
Since the deposition of excess keratin cannot be stopped, therapy is aimed at softening and removing it. For moderate to severe cases, the affected areas should be hydrated (moisturised) with warm water or compresses for 5-10 minutes. Softening preparations are then applied once a day until the excess keratin is removed.
In dogs with severe hyperkeratosis and a significant excess of keratin, it is removed with scissors or a blade. After proper instructions, pet owners are able to perform this procedure at home and it may be the only method of correction.
See also
Calluses
Keratin disease
List of skin diseases
Skin disease
Skin lesion
Epidermal hyperplasia
References
== External links == |
Hypoestrogenism | Hypoestrogenism, or estrogen deficiency, refers to a lower than normal level of estrogen. It is an umbrella term used to describe estrogen deficiency in various conditions. Estrogen deficiency is also associated with an increased risk of cardiovascular disease, and has been linked to diseases like urinary tract infections and osteoporosis.
In women, low levels of estrogen may cause symptoms such as hot flashes, sleeping disturbances, decreased bone health, and changes in the genitourinary system. Hypoestrogenism is most commonly found in women who are postmenopausal, have primary ovarian insufficiency (POI), or are presenting with amenorrhea (absence of menstrual periods). The effects of hypoestrogenism are primarily genitourinary, including thinning of the vaginal tissue layers and an increase in vaginal pH. With normal levels of estrogen, the environment of the vagina is protected against inflammation, infections, and sexually transmitted infections. Hypoestrogenism can also occur in men, for instance due to hypogonadism.
There are both hormonal and non-hormonal treatments to prevent the negative effects of low estrogen levels and improve quality of life.
Signs and symptoms
Vasomotor
Presentations of low estrogen levels include hot flashes, which are sudden, intense feelings of heat predominantly in the upper body, causing the skin to redden as if blushing. They are believed to occur due to narrowing of the thermoneutral zone in the hypothalamus, making the body more sensitive to changes in body temperature. Sleep disturbances are also common symptoms of hypoestrogenism: people may experience difficulty falling asleep, waking several times a night, and early awakening, with variability between racial and ethnic groups.
Genitourinary
Other classic symptoms include both physical and chemical changes of the vulva, vagina, and lower urinary tract. The genitals undergo atrophic changes such as loss of elasticity, loss of vaginal rugae, and an increase in vaginal pH, which can alter the vaginal flora and increase the risk of tissue fragility and fissure. Other genital signs include dryness or lack of lubrication, burning, irritation, discomfort or pain, as well as impaired function. Low levels of estrogen can limit genital arousal and cause dyspareunia (painful sexual intercourse) because of changes in the four layers of the vaginal wall. People with low estrogen may also experience increased urinary urgency and dysuria (painful urination). Hypoestrogenism is also considered one of the major risk factors for developing uncomplicated urinary tract infections in postmenopausal women who do not take hormone replacement therapy.
Bone health
Estrogen contributes to bone health in several ways; low estrogen levels increase bone resorption via osteoclasts and osteocytes, cells that help with bone remodeling, making bones more likely to deteriorate and increasing the risk of fracture. The decline in estrogen levels can ultimately lead to more serious illnesses, such as scoliosis or type I osteoporosis, a disease that thins and weakens bones, resulting in low bone density and fractures. Estrogen deficiency plays an important role in the development of osteoporosis in both sexes, and its effects are more pronounced in women, appearing five to ten years earlier (around menopause) than in men. Females are also at higher risk for osteopenia and osteoporosis.
Causes
A variety of conditions can lead to hypoestrogenism; menopause is the most common. Primary ovarian insufficiency (premature menopause), due to varying causes such as radiation therapy, chemotherapy, or spontaneous manifestation, can also lead to low estrogen and infertility.

Hypogonadism (a condition in which the gonads – testes in men and ovaries in women – have diminished activity) can decrease estrogen. In primary hypogonadism, elevated serum gonadotropins are detected on at least two occasions several weeks apart, indicating gonadal failure. In secondary hypogonadism (where the cause is hypothalamic or pituitary dysfunction), serum levels of gonadotropins may be low.

Other causes include certain medications, gonadotropin insensitivity, inborn errors of steroid metabolism (for example, aromatase deficiency, 17α-hydroxylase deficiency, 17,20-lyase deficiency, 3β-hydroxysteroid dehydrogenase deficiency, and cholesterol side-chain cleavage enzyme or steroidogenic acute regulatory protein deficiency), and functional amenorrhea.
Risks
Low endogenous estrogen levels can elevate the risk of cardiovascular disease in women who reach early menopause. Estrogen helps relax arteries via endothelium-derived nitric oxide, improving heart health by decreasing adverse atherogenic effects. Women with POI may have an increased risk of cardiovascular disease due to low estrogen production.
Pathophysiology
Estrogen deficiency has both vaginal and urologic effects; the female genitalia and lower urinary tract share common estrogen receptor function due to their embryological development. Estrogen is a vasoactive hormone (one that affects blood pressure) which stimulates blood flow and increases vaginal secretions and lubrication. Activated estrogen receptors also stimulate tissue proliferation in the vaginal walls, which contributes to the formation of rugae. These rugae aid in sexual stimulation by becoming lubricated, distended, and expanded.

Genitourinary effects of low estrogen include thinning of the vaginal epithelium, loss of vaginal barrier function, decreased vaginal folding, decreased tissue elasticity, and decreased secretory activity of the Bartholin glands, which lead to traumatization of the vaginal mucosa and painful sensations. This thinning of the vaginal epithelial layers can increase the risk of developing inflammation and infection, such as urinary tract infection.

The vagina is largely dominated by bacteria of the genus Lactobacillus, which typically comprise more than 70% of the vaginal bacteria in women. These lactobacilli process glycogen and its breakdown products, maintaining a low vaginal pH. Estrogen levels are closely linked to lactobacilli abundance and vaginal pH, as higher levels of estrogen promote thickening of the vaginal epithelium and intracellular production of glycogen. This large presence of lactobacilli and the resulting low pH are hypothesized to benefit women by protecting against sexually transmitted pathogens and opportunistic infections, thereby reducing disease risk.
Diagnosis
Hypoestrogenism is typically found in menopause and aids in diagnosis of other conditions such as POI and functional amenorrhea. Estrogen levels can be tested through several laboratory tests: vaginal maturation index, progestogen challenge test, and vaginal swabs for small parabasal cells.
Menopause
Menopause is usually diagnosed from symptoms of vaginal atrophy, pelvic exams, and a comprehensive medical history including the date of the last menstrual period. There is no definitive test for menopause, as the symptom complex is the primary indicator and the lower levels of estradiol are harder to detect accurately after menopause. However, laboratory tests can be done to differentiate between menopause and other diagnoses.
Functional hypothalamic amenorrhea
Functional hypothalamic amenorrhea (FHA) is diagnosed based on findings of amenorrhea lasting three months or more and low serum levels of gonadotropins and estradiol. Since common causes of FHA include exercising too much, eating too little, or being under too much stress, diagnosis of FHA includes assessing for any changes in exercise, weight, and stress. In addition, evaluation of amenorrhea includes a history and physical examination, biochemical testing, imaging, and measurement of the estrogen level. Examination of menstrual problems and clinical tests measuring hormones such as serum prolactin, thyroid-stimulating hormone, and follicle-stimulating hormone (FSH) can help rule out other potential causes of amenorrhea. These potential conditions include hyperprolactinemia, POI, and polycystic ovary syndrome.
Primary ovarian insufficiency
Primary ovarian insufficiency, also known as premature ovarian failure, can develop in women before the age of forty as a consequence of hypergonadotropic hypogonadism. POI can present as amenorrhea and has similar symptoms to menopause, but measuring FSH levels is used for diagnosis.
Treatment
Hormone replacement therapy (HRT) can be used to treat hypoestrogenism, menopause-related symptoms, and low estrogen levels in both premenopausal and postmenopausal women. Low-dose estrogen medications are approved by the U.S. Food and Drug Administration (FDA) for treatment of menopause-related symptoms. HRT can be used with or without a progestogen to improve symptoms such as hot flashes, sweating, trouble sleeping, and vaginal dryness and discomfort. The FDA recommends that HRT be avoided in women with a history or risk of breast cancer, undiagnosed genital bleeding, untreated high blood pressure, unexplained blood clots, or liver disease.

HRT for the vasomotor symptoms of hypoestrogenism includes different forms of estrogen, such as conjugated equine estrogens, 17β-estradiol, transdermal estradiol, ethinyl estradiol, and the estradiol ring. In addition, progestogens are commonly used to protect the endometrium, the inner layer of the uterus. These medications include medroxyprogesterone acetate, progesterone, norethisterone acetate, and drospirenone.

Non-pharmacological treatment of hot flashes includes using portable fans to lower the room temperature, wearing layered clothing, and avoiding tobacco, spicy food, alcohol, and caffeine. There is a lack of evidence to support other treatments such as acupuncture, yoga, and exercise to reduce symptoms.
In men
Estrogens are also important in male physiology. Hypoestrogenism can occur in men due to hypogonadism. Very rare causes include aromatase deficiency and estrogen insensitivity syndrome. Medications can also be a cause of hypoestrogenism in men. Hypoestrogenism in men can lead to osteoporosis, among other symptoms. Estrogens may also be positively involved in sexual desire in men.
See also
Estrogen insensitivity syndrome
Aromatase excess syndrome
References
== External links == |
Hypophosphatasia | Hypophosphatasia (also called deficiency of alkaline phosphatase, phosphoethanolaminuria, or Rathbun's syndrome; sometimes abbreviated HPP) is a rare, and sometimes fatal, inherited metabolic bone disease. Clinical symptoms are heterogeneous, ranging from the rapidly fatal perinatal variant, with profound skeletal hypomineralization, respiratory compromise, or vitamin B6-dependent seizures, to a milder, progressive osteomalacia later in life. Tissue non-specific alkaline phosphatase (TNSALP) deficiency in osteoblasts and chondrocytes impairs bone mineralization, leading to rickets or osteomalacia. The pathognomonic finding is subnormal serum activity of the TNSALP enzyme, caused by one of 388 genetic mutations identified to date in the gene encoding TNSALP. Genetic inheritance is autosomal recessive for the perinatal and infantile forms, but either autosomal recessive or autosomal dominant in the milder forms.
The prevalence of hypophosphatasia is not known; one study estimated the live birth incidence of severe forms to be 1:100,000, and some studies report a higher prevalence of milder disease.
Symptoms and signs
There is a remarkable variety of symptoms that depends, largely, on the age of the patient at initial presentation, ranging from death in utero to relatively mild bone problems, with or without dental symptoms, in adult life, although neurological and extra-skeletal symptoms are also reported. The stages of this disease are generally grouped into the following categories: perinatal, infantile, childhood, adult, benign prenatal, and odontohypophosphatasia. Although several clinical sub-types of the disease have been characterized, based on the age at which skeletal lesions are discovered, the disease is best understood as a single continuous spectrum of severity.

As the presentation of adult disease is highly variable, incorrect or missed diagnosis may occur. In one study, 19% of patients diagnosed with fibromyalgia had laboratory findings suggestive of possible hypophosphatasia. One case report details a 35-year-old female with low serum ALP and mild pains but no history of rickets, fractures, or dental problems. Subsequent evaluation showed osteopenia and renal microcalcifications and an elevation of PEA. The genetic mutations found in this case were previously reported in perinatal, infantile, and childhood hypophosphatasia, but not adult hypophosphatasia.
Perinatal hypophosphatasia
Perinatal hypophosphatasia is the most lethal form. Profound hypomineralization results in caput membranaceum (a soft calvarium), deformed or shortened limbs during gestation and at birth, and rapid death due to respiratory failure. Stillbirth is not uncommon, and long-term survival is rare. Neonates who manage to survive suffer increasing respiratory compromise due to softening of the bones (osteomalacia) and underdeveloped (hypoplastic) lungs. Ultimately, this leads to respiratory failure. Epilepsy (seizures) can occur and can prove lethal. Regions of developing, unmineralized bone (osteoid) may expand and encroach on the marrow space, resulting in myelophthisic anemia.

In radiographic examinations, perinatal hypophosphatasia can be distinguished from even the most severe forms of osteogenesis imperfecta and congenital dwarfism. Some stillborn skeletons show almost no mineralization; others have marked undermineralization and severe osteomalacia. Occasionally, there can be a complete absence of ossification in one or more vertebrae. In the skull, individual bones may calcify only at their centers. Another unusual radiographic feature is bony spurs that protrude laterally from the shafts of the ulnae and fibulae. Despite the considerable patient-to-patient variability and the diversity of radiographic findings, the X-ray can be considered diagnostic.
Infantile hypophosphatasia
Infantile hypophosphatasia presents in the first 6 months of life, with the onset of poor feeding and inadequate weight gain. Clinical manifestations of rickets often appear at this time. Although cranial sutures appear to be wide, this reflects hypomineralization of the skull, and there is often "functional" craniosynostosis. If the patient survives infancy, these sutures can permanently fuse. Defects in the chest, such as flail chest resulting from rib fractures, lead to respiratory compromise and pneumonia. Elevated calcium in the blood (hypercalcemia) and urine (hypercalciuria) are also common, and may explain the renal problems and recurrent vomiting seen in this disease.

Radiographic features in infants are generally less severe than those seen in perinatal hypophosphatasia. In the long bones, there is an abrupt change from a normal appearance in the shaft (diaphysis) to uncalcified regions near the ends (metaphysis), which suggests the occurrence of an abrupt metabolic change. In addition, serial radiography studies suggest that defects in skeletal mineralization (i.e. rickets) persist and become more generalized. Mortality is estimated to be 50% in the first year of life.
Childhood hypophosphatasia
Hypophosphatasia in childhood has variable clinical expression. As a result of defects in the development of the dental cementum, the deciduous teeth (baby teeth) are often lost before the age of 5. Frequently, the incisors are lost first; occasionally all of the teeth are lost prematurely. Dental radiographs can show the enlarged pulp chambers and root canals that are characteristic of rickets.

Patients may experience delayed walking, a characteristic waddling gait, stiffness and pain, and muscle weakness (especially in the thighs) consistent with nonprogressive myopathy. Typically, radiographs show defects in calcification and characteristic bony defects near the ends of major long bones. Growth retardation, frequent fractures, and low bone density (osteopenia) are common. In severely-affected infants and young children, cranial bones can fuse prematurely, despite the appearance of open fontanels on radiographic studies. The illusion of open fontanels results from hypomineralization of large areas of the calvarium. Premature bony fusion of the cranial sutures may elevate intracranial pressure.
Adult hypophosphatasia
Adult hypophosphatasia can be associated with rickets, premature loss of deciduous teeth, or early loss of adult dentition followed by relatively good health. Osteomalacia results in painful feet due to poor healing of metatarsal stress fractures. Discomfort in the thighs or hips due to femoral pseudofractures can be distinguished from other types of osteomalacia by its location in the lateral cortices of the femora. The symptoms of this disease usually begin during middle age and can include bone pain and hypomineralization.

Some patients suffer from calcium pyrophosphate dihydrate crystal deposition with occasional attacks of arthritis (pseudogout), which appears to be the result of elevated endogenous inorganic pyrophosphate (PPi) levels. These patients may also suffer articular cartilage degeneration and pyrophosphate arthropathy. Radiographs reveal pseudofractures in the lateral cortices of the proximal femora and stress fractures, and patients may experience osteopenia, chondrocalcinosis, features of pyrophosphate arthropathy, and calcific periarthritis.
Odontohypophosphatasia
Odontohypophosphatasia is present when dental disease is the only clinical abnormality, and radiographic and/or histologic studies reveal no evidence of rickets or osteomalacia. Although hereditary leukocyte abnormalities and other disorders usually account for this condition, odontohypophosphatasia may explain some “early-onset periodontitis” cases.
Causes
Hypophosphatasia is associated with a molecular defect in the gene encoding tissue non-specific alkaline phosphatase (TNSALP). TNSALP is an enzyme that is tethered to the outer surface of osteoblasts and chondrocytes. TNSALP hydrolyzes several substances, including mineralization-inhibiting inorganic pyrophosphate (PPi) and pyridoxal 5'-phosphate (PLP), a major form of vitamin B6. A relationship describing physiologic regulation of mineralization has been termed the Stenciling Principle, whereby enzyme-substrate pairs imprint mineralization patterns locally into the extracellular matrix (most notably described for bone) by degrading mineralization inhibitors (e.g. the TNAP/TNSALP/ALPL enzyme degrading the pyrophosphate inhibition of mineralization, and the PHEX enzyme degrading the osteopontin inhibition of mineralization). The Stenciling Principle is particularly relevant to the osteomalacia and odontomalacia observed in hypophosphatasia (HPP) and X-linked hypophosphatemia (XLH).
When TNSALP enzymatic activity is low, inorganic pyrophosphate (PPi) accumulates outside of cells in the extracellular matrix of bones and teeth, and inhibits formation of hydroxyapatite mineral, the main hardening component of bone, causing rickets in infants and children and osteomalacia (soft bones) and odontomalacia (soft teeth) in children and adults. PLP is the principal form of vitamin B6 and must be dephosphorylated by TNSALP before it can cross the cell membrane. Vitamin B6 deficiency in the brain impairs synthesis of neurotransmitters, which can cause seizures. In some cases, a build-up of calcium pyrophosphate dihydrate (CPPD) crystals in the joint can cause pseudogout.
Genetics
Perinatal and infantile hypophosphatasia are inherited as autosomal recessive traits with homozygosity or compound heterozygosity for two defective TNSALP alleles. The mode of inheritance for the childhood, adult, and odonto forms of hypophosphatasia can be either autosomal dominant or recessive. Autosomal transmission accounts for the fact that the disease affects males and females with equal frequency. Genetic counseling is complicated by the disease's variable inheritance pattern, and by incomplete penetrance of the trait.

Hypophosphatasia is a rare disease that has been reported worldwide and appears to affect individuals of all ethnicities. The prevalence of severe hypophosphatasia is estimated to be 1:100,000 in a population of largely Anglo-Saxon origin. The frequency of mild hypophosphatasia is more challenging to assess because the symptoms may escape notice or be misdiagnosed. The highest incidence of hypophosphatasia has been reported in the Mennonite population of Manitoba, Canada, where one in every 25 individuals is considered a carrier and one in every 2,500 newborns exhibits severe disease. Hypophosphatasia is considered particularly rare in people of African ancestry in the U.S.
Diagnosis
Dental findings
Hypophosphatasia is often discovered because of an early loss of deciduous (baby or primary) teeth with the root intact. Researchers have recently documented a positive correlation between dental abnormalities and clinical phenotype. Poor dentition is also noted in adults.
Laboratory testing
The symptom that best characterizes hypophosphatasia is low serum activity of the alkaline phosphatase enzyme (ALP). In general, lower levels of enzyme activity correlate with more severe symptoms. The decrease in ALP activity leads to an increase of pyridoxal 5’-phosphate (PLP), the major form of vitamin B6, in the blood, which correlates with disease severity, although tissue levels of vitamin B6 may be unremarkable. Urinary inorganic pyrophosphate (PPi) levels are elevated in most hypophosphatasia patients and, although it remains only a research technique, this increase has been reported to accurately detect carriers of the disease. In addition, most patients have an increased level of urinary phosphoethanolamine (PEA), although some may not. PLP screening is preferred over PEA owing to cost and sensitivity.
Tests for serum tissue-non-specific ALP (sometimes referred to as TNSALP) levels are part of the standard comprehensive metabolic panel (CMP) used in routine exams, although bone-specific ALP testing may be indicative of disease severity.
Radiography
Despite patient-to-patient variability and the diversity of radiographic findings, the X-ray is diagnostic in infantile hypophosphatasia. Skeletal defects are found in nearly all patients and include hypomineralization, rachitic changes, incomplete vertebral ossification and, occasionally, lateral bony spurs on the ulnae and fibulae.
In newborns, X-rays readily distinguish hypophosphatasia from osteogenesis imperfecta and congenital dwarfism. Some stillborn skeletons show almost no mineralization; others have marked undermineralization and severe rachitic changes. Occasionally there can be peculiar complete or partial absence of ossification in one or more vertebrae. In the skull, individual membranous bones may calcify only at their centers, making it appear that areas of the unossified calvarium have cranial sutures that are widely separated when, in fact, they are functionally closed. Small protrusions (or "tongues") of radiolucency often extend from the metaphyses into the bone shaft.
In infants, radiographic features of hypophosphatasia are striking, though generally less severe than those found in perinatal hypophosphatasia. In some newly diagnosed patients, there is an abrupt transition from relatively normal-appearing diaphyses to uncalcified metaphyses, suggesting an abrupt metabolic change has occurred. Serial radiography studies can reveal the persistence of impaired skeletal mineralization (i.e. rickets), instances of sclerosis, and gradual generalized demineralization.
In adults, X-rays may reveal bilateral femoral pseudofractures in the lateral subtrochanteric diaphysis. These pseudofractures may remain for years, but they may not heal until they break completely or the patient receives intramedullary fixation. These patients may also experience recurrent metatarsal fractures. DXA may show abnormal bone mineral density which may correlate with disease severity, although bone mineral density in HPP patients may not be systemically reduced.
Genetic analysis
All clinical sub-types of hypophosphatasia have been traced to mutations in the gene encoding TNSALP, which is localized on chromosome 1p36.1-34 in humans (ALPL; OMIM #171760). Approximately 388 distinct mutations have been described in the TNSALP gene. About 80% of the mutations are missense mutations. The number and diversity of mutations result in highly variable phenotypic expression, and there appears to be a correlation between genotype and phenotype in hypophosphatasia. Mutation analysis is possible and available in 3 laboratories.
Treatment
As of October 2015, asfotase alfa (Strensiq) has been approved by the FDA for the treatment of hypophosphatasia.
Some evidence exists to support the use of teriparatide in adult HPP.
Current management consists of palliating symptoms, maintaining calcium balance and applying physical, occupational, dental and orthopedic interventions, as necessary.
Hypercalcemia in infants may require restriction of dietary calcium or administration of calciuretics. This should be done carefully so as not to increase the skeletal demineralization that results from the disease itself. Vitamin D sterols and mineral supplements, traditionally used for rickets or osteomalacia, should not be used unless there is a deficiency, as blood levels of calcium ions (Ca2+), inorganic phosphate (Pi) and vitamin D metabolites usually are not reduced.
Craniosynostosis, the premature closure of skull sutures, may cause intracranial hypertension and may require neurosurgical intervention to avoid brain damage in infants.
Bony deformities and fractures are complicated by the lack of mineralization and impaired skeletal growth in these patients. Fractures and corrective osteotomies (bone cutting) can heal, but healing may be delayed and require prolonged casting or stabilization with orthopedic hardware. A load-sharing intramedullary nail or rod is the best surgical treatment for complete fractures, symptomatic pseudofractures, and progressive asymptomatic pseudofractures in adult hypophosphatasia patients.
Dental problems: Children particularly benefit from skilled dental care, as early tooth loss can cause malnutrition and inhibit speech development. Dentures may ultimately be needed. Dentists should carefully monitor patients’ dental hygiene and use prophylactic programs to avoid deteriorating health and periodontal disease.
Physical impairments and pain: Rickets and bone weakness associated with hypophosphatasia can restrict or eliminate ambulation, impair functional endurance, and diminish the ability to perform activities of daily living. Nonsteroidal anti-inflammatory drugs may improve pain-associated physical impairment and can help improve walking distance.
Bisphosphonate (a pyrophosphate synthetic analog) in one infant had no discernible effect on the skeleton, and the infant’s disease progressed until death at 14 months of age.
Bone marrow cell transplantation in two severely affected infants produced radiographic and clinical improvement, although the mechanism of efficacy is not fully understood and significant morbidity persisted.
Enzyme replacement therapy with normal, or ALP-rich serum from patients with Paget’s bone disease, was not beneficial.
Phase 2 clinical trials of bone targeted enzyme-replacement therapy for the treatment of hypophosphatasia in infants and juveniles have been completed, and a phase 2 study in adults is ongoing.
Pyridoxine (vitamin B6) may be used as adjunctive therapy in some cases; such cases may be referred to as pyridoxine-responsive seizures.
History
It was discovered initially in 1936 but was fully named and documented by a Canadian pediatrician, John Campbell Rathbun (1915–1972), while examining and treating a baby boy with very low levels of alkaline phosphatase in 1948. The genetic basis of the disease was mapped out only some 40 years later. Hypophosphatasia is sometimes called Rathbun's syndrome after its principal documenter.
See also
Alkaline phosphatase
Choline
References
Further reading
External links
Online Mendelian Inheritance in Man (OMIM): Adult Hypophosphatasia - 146300 |
Hypoprothrombinemia | Hypoprothrombinemia is a rare blood disorder in which a deficiency in immunoreactive prothrombin (factor II), produced in the liver, results in an impaired blood clotting reaction, leading to an increased physiological risk for spontaneous bleeding. This condition can be observed in the gastrointestinal system, cranial vault, and superficial integumentary system, affecting both the male and female population. Prothrombin is a critical protein involved in the process of hemostasis and exhibits procoagulant activity. The condition is characterized as an autosomal recessive congenital coagulation disorder affecting 1 per 2,000,000 of the population worldwide, but it can also be acquired.
Signs and symptoms
Various symptoms may present and are typically associated with the specific site at which they appear. Hypoprothrombinemia is characterized by poor blood clotting function of prothrombin. Some symptoms are severe, while others are mild, meaning that blood clotting is slower than normal. Muscles, joints, and the brain can also be affected, although these sites are more uncommon.
The most common symptoms include:
Easy bruising
Oral mucosal bleeding - Bleeding of the membrane mucus lining inside of the mouth.
Soft tissue bleeding.
Hemarthrosis - Bleeding in joint spaces.
Epistaxis - Acute hemorrhages from areas of the nasal cavity, nostrils, or nasopharynx.
Women with this deficiency experience menorrhagia: prolonged, abnormally heavy menstrual bleeding. This is typically a symptom of the disorder when severe blood loss occurs.
Other reported symptoms that are related to the condition:
Prolonged periods of bleeding due to surgery, injury, or post birth.
Melena - Associated with acute gastrointestinal bleeding, dark black, tarry feces.
Hematochezia - Lower gastrointestinal bleeding; passage of fresh, bright red blood through the anus, secreted in or with stools. If associated with upper gastrointestinal bleeding, it is suggestive of a more life-threatening issue.
Type I: Severe hemorrhages are indicators of a more severe prothrombin deficiency and account for muscle hematomas, intracranial bleeding, postoperative bleeding, and umbilical cord hemorrhage, which may also occur depending on the severity.
Type II: Symptoms are usually more capricious, but can include a variety of the symptoms described previously. Less severe cases of the disorder typically do not involve spontaneous bleeding.
Causes
Hypoprothrombinemia can be the result of a genetic defect, may be acquired as the result of another disease process, or may be an adverse effect of medication. For example, 5-10% of patients with systemic lupus erythematosus exhibit acquired hypoprothrombinemia due to the presence of autoantibodies which bind to prothrombin and remove it from the bloodstream (lupus anticoagulant-hypoprothrombinemia syndrome). The most common viral pathogen that is involved is Adenovirus, with a prevalence of 50% in postviral cases.
Inheritance
Hypoprothrombinemia is an autosomal recessive condition, in which both parents must carry the recessive gene in order to pass the disease on to offspring. If both parents are affected, all offspring will inherit the condition; if both parents are carriers, each child has a 25% chance of being affected. An individual who inherits one mutant copy of the gene is considered a carrier and will not show any symptoms. The disease affects men and women equally and, overall, is a very uncommon inherited or acquired disorder.
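The carrier-parent case can be checked by enumerating the four equally likely allele combinations (a Punnett-square sketch; the code is illustrative, not from the source):

```python
from itertools import product

# Each carrier parent has genotype "Aa": one normal allele A, one mutant a.
parent1 = parent2 = ["A", "a"]

# The four equally likely allele combinations a child can inherit.
offspring = [a1 + a2 for a1, a2 in product(parent1, parent2)]

affected = sum(1 for g in offspring if g == "aa")              # both alleles mutant
carriers = sum(1 for g in offspring if set(g) == {"A", "a"})   # one mutant allele

print(affected / len(offspring))  # 0.25 -> 25% of children affected
print(carriers / len(offspring))  # 0.5  -> 50% of children are carriers
```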
Non-inheritance and other factors
There are two types of prothrombin deficiency, depending on the mutation:
Type I (true deficiency) includes a missense or nonsense mutation, essentially decreasing prothrombin production. It is associated with bleeding from birth. Here, plasma levels of prothrombin are typically less than 10% of normal levels.
Type II, known as dysprothrombinemia, includes a missense mutation at specific factor Xa cleavage sites and serine protease prothrombin regions. Type II deficiency creates a dysfunctional protein with decreased activity and usually normal or low-normal antigen levels. Deficiency of a vitamin K-dependent clotting factor is seldom a contributor to inherited prothrombin deficiencies, but lack of vitamin K decreases the synthesis of prothrombin in liver cells.
Acquired underlying causes of this condition include severe liver disease, warfarin overdose, platelet disorders, and disseminated intravascular coagulation (DIC).
It may also be a rare adverse effect to ceftriaxone.
Mechanism
Hypoprothrombinemia, a decrease in the synthesis of prothrombin, may be either inherited or acquired. The inherited form is an autosomal recessive disorder, meaning that both parents must be carriers of the defective gene in order for the disorder to be present in a child. Prothrombin is a glycoprotein that occurs in blood plasma and functions as a precursor to the enzyme thrombin, which acts to convert fibrinogen into fibrin, thereby fortifying clots. This clotting process is known as coagulation.
The mechanism specific to prothrombin (factor II) involves proteolytic cleavage (the breakdown of proteins into smaller polypeptides or amino acids) of this coagulation factor to form thrombin at the beginning of the cascade, leading to stemming of blood loss. A mutation in factor II, which is carried on chromosome 11, essentially leads to hypoprothrombinemia. The disease has been shown to present in the liver, since the glycoprotein is stored there.
Acquired cases result from an isolated factor II deficiency. Specific cases include:
Vitamin K deficiency: In the liver, vitamin K plays an important role in the synthesis of coagulation factor II. The body's capacity to store vitamin K is typically very low, and vitamin K-dependent coagulation factors have a very short half-life, sometimes leading to a deficiency when vitamin K is depleted. The liver synthesizes inactive precursor proteins in the absence of vitamin K (liver disease). Vitamin K deficiency leads to impaired clotting of the blood and, in some cases, causes internal bleeding without an associated injury.
Disseminated intravascular coagulation (DIC): Abnormal, excessive generation of thrombin and fibrin within the blood. It relates to hypoprothrombinemia through the increased platelet aggregation and coagulation factor consumption involved in the process.
Anticoagulants (warfarin overdose): Warfarin is used to prevent blood clots; however, like most drugs, it has side effects, and it has been shown to increase the risk of excessive bleeding by disrupting the hepatic synthesis of coagulation factors II, VII, IX, and X. Vitamin K is an antagonist of warfarin, reversing its activity and making it less effective as an anticoagulant. Warfarin intake has been shown to interfere with vitamin K metabolism.
Diagnosis
Diagnosis of inherited hypoprothrombinemia relies heavily on the patient's medical history, family history of bleeding issues, and lab exams performed by a hematologist. A physical examination by a general physician should also be performed to determine whether the condition is congenital or acquired, as well as to rule out other possible conditions with similar symptoms. For acquired forms, information must be taken regarding current diseases and medications taken by the patient, if applicable.
Lab tests that are performed to determine a diagnosis:
Factor assays: To observe the performance of specific factors (II) to identify missing/poorly performing factors. These lab tests are typically performed first in order to determine the status of the factor.
Prothrombin blood test: Determines if patient has deficient or low levels of Factor II.
Vitamin K1 test: Performed to evaluate bleeding of unknown cause, nosebleeds, and identified bruising. To accomplish this, a band is wrapped around the patient's arm, 4 inches above the superficial vein site in the elbow pit. The vein is penetrated with the needle and the amount of blood required for testing is obtained. Decreased vitamin K levels are suggestive of hypoprothrombinemia. However, this exam is rarely used, as a prothrombin blood test is performed beforehand.
Treatment
Treatment is almost always aimed to control hemorrhages, treating underlying causes, and taking preventative steps before performing invasive surgeries.
Hypoprothrombinemia can be treated with periodic infusions of purified prothrombin complexes. These are typically used as treatment methods for severe bleeding cases in order to boost clotting ability and increase levels of vitamin K-dependent coagulation factors.
A known treatment for hypoprothrombinemia is menadoxime.
Menatetrenone was also listed as an antihemorrhagic vitamin.
4-Amino-2-methyl-1-naphthol (Vitamin K5) is another treatment for hypoprothrombinemia.
Vitamin K forms are administered orally or intravenously.
Other concentrates include Proplex T, Konyne 80, and Bebulin VH.
Fresh frozen plasma (FFP) infusion is a method used for continuous bleeding episodes, given every 3–5 weeks:
Used to treat various conditions related to low blood clotting factors.
Administered by intravenous injection, typically at 15–20 ml/kg per dose.
Can be used to treat acute bleeding.
Sometimes, underlying causes cannot be controlled or determined, so management of symptoms and bleeding conditions should be the priority in treatment. Invasive options, such as surgery or clotting factor infusions, are required if previous methods do not suffice. Surgery is to be avoided, as it causes significant bleeding in patients with hypoprothrombinemia.
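The weight-based FFP dosing mentioned above is simple arithmetic; a minimal sketch (the helper function is hypothetical, for illustration only, not a clinical tool):

```python
def ffp_dose_ml(weight_kg, ml_per_kg):
    """Total FFP volume for one dose, using the text's 15-20 ml/kg range."""
    return weight_kg * ml_per_kg

# Illustrative example: the dose range for a hypothetical 70 kg patient.
low = ffp_dose_ml(70, 15)
high = ffp_dose_ml(70, 20)
print(low, high)  # 1050 1400 (ml per dose)
```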
Prognosis
Prognosis varies and depends on the severity of the condition and how early treatment is managed.
With proper treatment and care, most people go on to live a normal and healthy life.
With more severe cases, a hematologist will need to be seen throughout the patient's life in order to deal with bleeding and continued risks.
References
== External links == |
Hair loss | Hair loss, also known as alopecia or baldness, refers to a loss of hair from part of the head or body. Typically at least the head is involved. The severity of hair loss can vary from a small area to the entire body. Inflammation or scarring is not usually present. Hair loss in some people causes psychological distress.
Common types include male- or female-pattern hair loss, alopecia areata, and a thinning of hair known as telogen effluvium. The cause of male-pattern hair loss is a combination of genetics and male hormones; the cause of female-pattern hair loss is unclear; the cause of alopecia areata is autoimmune; and the cause of telogen effluvium is typically a physically or psychologically stressful event. Telogen effluvium is very common following pregnancy.
Less common causes of hair loss without inflammation or scarring include the pulling out of hair, certain medications including chemotherapy, HIV/AIDS, hypothyroidism, and malnutrition including iron deficiency. Causes of hair loss that occurs with scarring or inflammation include fungal infection, lupus erythematosus, radiation therapy, and sarcoidosis. Diagnosis of hair loss is partly based on the areas affected.
Treatment of pattern hair loss may simply involve accepting the condition, which can also include shaving one's head. Interventions that can be tried include the medications minoxidil (or finasteride) and hair transplant surgery. Alopecia areata may be treated by steroid injections in the affected area, but these need to be frequently repeated to be effective. Hair loss is a common problem. Pattern hair loss by age 50 affects about half of men and a quarter of women. About 2% of people develop alopecia areata at some point in time.
Terminology
Baldness is the partial or complete lack of hair growth, and part of the wider topic of "hair thinning". The degree and pattern of baldness varies, but its most common cause is androgenic hair loss, alopecia androgenetica, or alopecia seborrheica, with the last term primarily used in Europe.
Hypotrichosis
Hypotrichosis is a condition of abnormal hair patterns, predominantly loss or reduction. It occurs, most frequently, by the growth of vellus hair in areas of the body that normally produce terminal hair. Typically, the individual's hair growth is normal after birth, but shortly thereafter the hair is shed and replaced with sparse, abnormal hair growth. The new hair is typically fine, short and brittle, and may lack pigmentation. Baldness may be present by the time the subject is 25 years old.
Signs and symptoms
Symptoms of hair loss include hair loss in patches, usually in circular patterns, dandruff, skin lesions, and scarring. Alopecia areata (mild to medium level) usually shows in unusual hair loss areas, e.g., eyebrows, the backside of the head or above the ears, areas that male pattern baldness usually does not affect. In male-pattern hair loss, loss and thinning begin at the temples and the crown, and hair either thins out or falls out. Female-pattern hair loss occurs at the frontal and parietal regions.
People have between 100,000 and 150,000 hairs on their head. The number of strands normally lost in a day varies but on average is 100. In order to maintain a normal volume, hair must be replaced at the same rate at which it is lost. The first signs of hair thinning that people will often notice are more hairs than usual left in the hairbrush after brushing or in the basin after shampooing. Styling can also reveal areas of thinning, such as a wider parting or a thinning crown.
Skin conditions
A substantially blemished face, back and limbs could point to cystic acne. Cystic acne, the most severe form of the condition, arises from the same hormonal imbalances that cause hair loss and is associated with dihydrotestosterone production.
Psychological
The psychology of hair thinning is a complex issue. Hair is considered an essential part of overall identity, especially for women, for whom it often represents femininity and attractiveness. Men typically associate a full head of hair with youth and vigor. People experiencing hair thinning often find themselves in a situation where their physical appearance is at odds with their own self-image and commonly worry that they appear older than they are or less attractive to others. Psychological problems due to baldness, if present, are typically most severe at the onset of symptoms.
Hair loss induced by cancer chemotherapy has been reported to cause changes in self-concept and body image. Body image does not return to the previous state after regrowth of hair for a majority of patients. In such cases, patients have difficulty expressing their feelings (alexithymia) and may be more prone to avoiding family conflicts. Family therapy can help families cope with these psychological problems if they arise.
Causes
Although not completely understood, hair loss can have many causes:
Pattern hair loss
Male pattern hair loss is believed to be due to a combination of genetics and the male hormone dihydrotestosterone. The cause in female pattern hair loss remains unclear.
Infection
Dissecting cellulitis of the scalp
Fungal infections (such as tinea capitis)
Folliculitis from various causes
Demodex folliculitis, caused by Demodex folliculorum, a microscopic mite that feeds on the sebum produced by the sebaceous glands, denies hair essential nutrients and can cause thinning. Demodex folliculorum is not present on every scalp and is more likely to live in an excessively oily scalp environment.
Secondary syphilis
Drugs
Temporary or permanent hair loss can be caused by several medications, including those for blood pressure problems, diabetes, heart disease and cholesterol. Any medications that affect the body's hormone balance can have a pronounced effect: these include the contraceptive pill, hormone replacement therapy, steroids and acne medications.
Some treatments used to cure mycotic infections can cause massive hair loss.
Medications (side effects from drugs, including chemotherapy, anabolic steroids, and birth control pills)
Trauma
Traction alopecia is most commonly found in people with ponytails or cornrows who pull on their hair with excessive force. In addition, rigorous brushing, heat styling, and rough scalp massage can damage the cuticle, the hard outer casing of the hair. This causes individual strands to become weak and break off, reducing overall hair volume.
Frictional alopecia is hair loss caused by rubbing of the hair or follicles, most infamously around the ankles of men from socks, where even if socks are no longer worn, the hair often will not grow back.
Trichotillomania is the loss of hair caused by compulsive pulling and bending of the hairs. Onset of this disorder tends to begin around the onset of puberty and usually continues through adulthood. Due to the constant extraction of the hair roots, permanent hair loss can occur.
Traumas such as childbirth, major surgery, poisoning, and severe stress may cause a hair loss condition known as telogen effluvium, in which a large number of hairs enter the resting phase at the same time, causing shedding and subsequent thinning. The condition also presents as a side effect of chemotherapy – while targeting dividing cancer cells, this treatment also affects the hairs' growth phase, with the result that almost 90% of hairs fall out soon after chemotherapy starts.
Radiation to the scalp, as when radiotherapy is applied to the head for the treatment of certain cancers there, can cause baldness of the irradiated areas.
Pregnancy
Hair loss often follows childbirth in the postpartum period without causing baldness. In this situation, the hair is actually thicker during pregnancy owing to increased circulating oestrogens. Approximately three months after giving birth (typically between 2 and 5 months), oestrogen levels drop and hair loss occurs, often particularly noticeably around the hairline and temple area. Hair typically grows back normally and treatment is not indicated. A similar situation occurs in women taking the fertility-stimulating drug clomiphene.
Other causes
Autoimmune disease. Alopecia areata is an autoimmune disorder also known as "spot baldness" that can result in hair loss ranging from just one location (Alopecia areata monolocularis) to every hair on the entire body (Alopecia areata universalis). Although thought to be caused by hair follicles becoming dormant, what triggers alopecia areata is not known. In most cases the condition corrects itself, but it can also spread to the entire scalp (alopecia totalis) or to the entire body (alopecia universalis).
Skin diseases and cancer. Localized or diffuse hair loss may also occur in cicatricial alopecia (lupus erythematosus, lichen plano pilaris, folliculitis decalvans, central centrifugal cicatricial alopecia, postmenopausal frontal fibrosing alopecia, etc.). Tumours and skin outgrowths also induce localized baldness (sebaceous nevus, basal cell carcinoma, squamous cell carcinoma).
Hypothyroidism (an under-active thyroid) and the side effects of its related medications can cause hair loss, typically frontal, which is particularly associated with thinning of the outer third of the eyebrows (also seen with syphilis). Hyperthyroidism (an over-active thyroid) can also cause hair loss, which is parietal rather than frontal.
Sebaceous cysts. Temporary loss of hair can occur in areas where sebaceous cysts are present for considerable duration (normally one to several weeks).
Congenital triangular alopecia – A triangular (or in some cases oval) patch of hair loss in the temple area of the scalp that occurs mostly in young children. The affected area mainly contains vellus hair follicles or no hair follicles at all, but it does not expand. Its causes are unknown, and although it is a permanent condition, it does not have any other effect on the affected individuals.
Hair growth conditions. Gradual thinning of hair with age is a natural condition known as involutional alopecia. This is caused by an increasing number of hair follicles switching from the growth, or anagen, phase into a resting phase, or telogen phase, so that remaining hairs become shorter and fewer in number. An unhealthy scalp environment can play a significant role in hair thinning by contributing to miniaturization or causing damage.
Obesity. Obesity-induced stress, such as that induced by a high-fat diet (HFD), targets hair follicle stem cells (HFSCs) to accelerate hair thinning in mice. It is likely that similar molecular mechanisms play a role in human hair loss.
Other causes of hair loss include:
Alopecia mucinosa
Biotinidase deficiency
Chronic inflammation
Diabetes
Pseudopelade of Brocq
Telogen effluvium
Tufted folliculitis
Genetics
Genetic forms of localized autosomal recessive hypotrichosis include:
Pathophysiology
Hair follicle growth occurs in cycles. Each cycle consists of a long growing phase (anagen), a short transitional phase (catagen) and a short resting phase (telogen). At the end of the resting phase, the hair falls out (exogen) and a new hair starts growing in the follicle, beginning the cycle again.
Normally, about 40 (0–78 in men) hairs reach the end of their resting phase each day and fall out. When more than 100 hairs fall out per day, clinical hair loss (telogen effluvium) may occur. A disruption of the growing phase causes abnormal loss of anagen hairs (anagen effluvium).
Diagnosis
Because they are not usually associated with an increased loss rate, male-pattern and female-pattern hair loss do not generally require testing. If hair loss occurs in a young man with no family history, drug use could be the cause.
The pull test helps to evaluate diffuse scalp hair loss. Gentle traction is exerted on a group of hairs (about 40–60) on three different areas of the scalp. The number of extracted hairs is counted and examined under a microscope. Normally, fewer than three hairs per area should come out with each pull. If more than ten hairs are obtained, the pull test is considered positive.
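The pull-test thresholds above amount to a simple decision rule; a minimal sketch (the function name and the "equivocal" middle band for counts between 3 and 10 are assumptions for illustration, not from the source):

```python
def pull_test(hairs_extracted):
    """Interpret one pull-test area using the text's thresholds:
    normally fewer than 3 hairs come out per pull; more than 10 is positive."""
    if hairs_extracted > 10:
        return "positive"
    if hairs_extracted < 3:
        return "normal"
    return "equivocal"  # assumption: 3-10 treated as indeterminate

print(pull_test(2))   # normal
print(pull_test(12))  # positive
```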
The pluck test is conducted by pulling hair out "by the roots". The root of the plucked hair is examined under a microscope to determine the phase of growth, and is used to diagnose a defect of telogen, anagen, or systemic disease. Telogen hairs have tiny bulbs without sheaths at their roots. Telogen effluvium shows an increased percentage of telogen-phase hairs upon examination. Anagen hairs have sheaths attached to their roots. Anagen effluvium shows a decrease in telogen-phase hairs and an increased number of broken hairs.
Scalp biopsy is used when the diagnosis is unsure; a biopsy allows for differing between scarring and nonscarring forms. Hair samples are taken from areas of inflammation, usually around the border of the bald patch.
Daily hair counts are normally done when the pull test is negative. The number of hairs lost is counted: hair from the first morning combing or from washing should be collected in a clear plastic bag for 14 days, and the strands recorded. A hair count of more than 100/day is considered abnormal, except after shampooing, when counts of up to 250 can still be normal.
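The daily-count rule can be sketched as a small classifier (the function name is a hypothetical helper, using only the thresholds stated above):

```python
def classify_daily_hair_count(count, after_shampoo=False):
    """Apply the text's rule: >100 hairs/day is abnormal, except after
    shampooing, when counts up to 250 can still be normal."""
    limit = 250 if after_shampoo else 100
    return "abnormal" if count > limit else "normal"

print(classify_daily_hair_count(120))                      # abnormal
print(classify_daily_hair_count(200, after_shampoo=True))  # normal
```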
Trichoscopy is a noninvasive method of examining hair and scalp. The test may be performed with a handheld dermoscope or a video dermoscope. It allows differential diagnosis of hair loss in most cases.
There are two types of identification tests for female pattern baldness: the Ludwig Scale and the Savin Scale. Both track the progress of diffuse thinning, which typically begins on the crown of the head behind the hairline and becomes gradually more pronounced. For male pattern baldness, the Hamilton–Norwood scale tracks the progress of a receding hairline and/or a thinning crown, through to a horseshoe-shaped ring of hair around the head and on to total baldness.
In almost all cases of thinning, and especially in cases of severe hair loss, it is recommended to seek advice from a doctor or dermatologist. Many types of thinning have an underlying genetic or health-related cause, which a qualified professional will be able to diagnose.
Management
Hiding hair loss
Head
One method of hiding hair loss is the comb over, which involves restyling the remaining hair to cover the balding area. It is usually a temporary solution, useful only while the area of hair loss is small. As the hair loss increases, a comb over becomes less effective.
Another method is to wear a hat or a hairpiece such as a wig or toupee. The wig is a layer of artificial or natural hair made to resemble a typical hair style. In most cases the hair is artificial. Wigs vary widely in quality and cost. In the United States, the best wigs – those that look like real hair – cost up to tens of thousands of dollars. Organizations also collect donations of natural hair to be made into wigs for young cancer patients who have lost their hair due to chemotherapy or other cancer treatment, as well as for people with other types of hair loss.
Eyebrows
Though not as common as the loss of hair on the head, chemotherapy, hormone imbalance, other forms of hair loss, and additional factors can also cause loss of hair in the eyebrows. Loss of growth in the outer one third of the eyebrow is often associated with hypothyroidism. Artificial eyebrows are available to replace missing eyebrows or to cover patchy eyebrows. Eyebrow embroidery is another option, which involves the use of a blade to add pigment to the eyebrows. This gives a natural 3D look for those who are worried about an artificial look, and it lasts for two years. Micropigmentation (permanent makeup tattooing) is also available for those who want the look to be permanent.
Medications
Treatments for the various forms of hair loss have limited success. Three medications have evidence to support their use in male pattern hair loss: minoxidil, finasteride, and dutasteride. They typically work better to prevent further hair loss than to regrow lost hair. On June 13, 2022, the U.S. Food and Drug Administration (FDA) approved Olumiant (baricitinib) for adults with severe alopecia areata. It is the first FDA-approved drug for systemic treatment, i.e., treatment for any area of the body.
Minoxidil (Rogaine) is a nonprescription medication approved for male pattern baldness and alopecia areata. In a liquid or foam, it is rubbed into the scalp twice a day. Some people have an allergic reaction to the propylene glycol in the minoxidil solution, so a minoxidil foam was developed without propylene glycol. Not all users will regrow hair. Minoxidil is also prescribed as an oral tablet to encourage hair regrowth, although it is not FDA-approved to treat hair loss. The longer the hair has stopped growing, the less likely minoxidil will regrow it. Minoxidil is not effective for other causes of hair loss. Hair regrowth can take 1 to 6 months to begin. Treatment must be continued indefinitely; if it is stopped, hair loss resumes, and any hair regrown or retained while minoxidil was used will be lost. The most frequent side effects are mild scalp irritation, allergic contact dermatitis, and unwanted hair in other parts of the body.
Finasteride (Propecia) is used in male-pattern hair loss in a pill form, taken 1 milligram per day. It is not indicated for women and is not recommended in pregnant women (as it is known to cause birth defects in fetuses). It becomes effective within 6 weeks of starting treatment. Finasteride causes an increase in hair retention, the weight of hair, and some increase in regrowth. Side effects in about 2% of males include decreased sex drive, erectile dysfunction, and ejaculatory dysfunction. Treatment should be continued as long as positive results occur. Once treatment is stopped, hair loss resumes.
Corticosteroid injections into the scalp can be used to treat alopecia areata. This type of treatment is repeated on a monthly basis. Oral corticosteroids may be used for extensive alopecia areata. Results may take up to a month to be seen.
Immunosuppressants applied to the scalp have been shown to temporarily reverse alopecia areata, though the side effects of some of these drugs make such therapy questionable.
There is some tentative evidence that anthralin may be useful for treating alopecia areata.
Hormonal modulators (oral contraceptives or antiandrogens such as spironolactone and flutamide) can be used for female-pattern hair loss associated with hyperandrogenemia.
Surgery
Hair transplantation is usually carried out under local anaesthetic. A surgeon moves healthy hair from the back and sides of the head to areas of thinning, taking tiny plugs of skin, each containing a few hairs, and implanting the plugs into bald sections. The procedure can take between four and eight hours, and additional sessions can be carried out to make hair even thicker. Transplanted hair falls out within a few weeks, but regrows permanently within months. Several transplant sessions may be necessary.
Surgical options, such as follicle transplants, scalp flaps, and hair loss reduction, are available. These procedures are generally chosen by those who are self-conscious about their hair loss, but they are expensive and painful, with a risk of infection and scarring. Once surgery has occurred, six to eight months are needed before the quality of new hair can be assessed.
Scalp reduction is the process of decreasing the area of bald skin on the head. In time, the skin on the head becomes flexible and stretched enough that some of it can be surgically removed. After the hairless scalp is removed, the space is closed with hair-covered scalp. Scalp reduction is generally done in combination with hair transplantation to provide a natural-looking hairline, especially in those with extensive hair loss.
Hairline lowering can sometimes be used to lower a high hairline secondary to hair loss, although there may be a visible scar after further hair loss.
Wigs are an alternative to medical and surgical treatment; some patients wear a wig or hairpiece. They can be used permanently or temporarily to cover the hair loss. High-quality, natural-looking wigs and hairpieces are available.
Chemotherapy
Hypothermia caps may be used to prevent hair loss during some kinds of chemotherapy, specifically, when taxanes or anthracyclines are administered. It is not recommended to be used when cancer is present in the skin of the scalp or for lymphoma or leukemia. There are generally only minor side effects from scalp cooling given during chemotherapy.
Embracing baldness
Instead of attempting to conceal their hair loss, some people embrace it by either doing nothing about it or sporting a shaved head. The general public became more accepting of men with shaved heads in the early 1950s, when Russian-American actor Yul Brynner began sporting the look; the resulting phenomenon inspired many of his male fans to shave their heads. Male celebrities then continued to bring mainstream popularity to shaved heads, including athletes such as Michael Jordan and Zinedine Zidane and actors such as Dwayne Johnson, Ben Kingsley, and Jason Statham. Baldness in females, however, is still viewed as less "normal" in various parts of the world.
Alternative medicine
Dietary supplements are not typically recommended. There is only one small trial of saw palmetto which shows tentative benefit in those with mild to moderate androgenetic alopecia. There is no evidence for biotin. Evidence for most other alternative medicine remedies is also insufficient. There was no good evidence for ginkgo, aloe vera, ginseng, bergamot, hibiscus, or sophora as of 2011.

Many people use unproven treatments to treat hair loss. Egg oil, in Indian, Japanese, Unani (Roghan Baiza Murgh) and Chinese traditional medicine, was traditionally used as a treatment for hair loss.
Research
Research is looking into connections between hair loss and other health issues. While there has been speculation about a connection between early-onset male pattern hair loss and heart disease, a review of articles from 1954 to 1999 found no conclusive connection between baldness and coronary artery disease. The dermatologists who conducted the review suggested further study was needed.

Environmental factors are under review. A 2007 study indicated that smoking may be a factor associated with age-related hair loss among Asian men. The study controlled for age and family history, and found statistically significant positive associations between moderate or severe male pattern hair loss and smoking status.

Vertex baldness is associated with an increased risk of coronary heart disease (CHD), and the relationship depends upon the severity of baldness, while frontal baldness is not. Thus, vertex baldness might be a marker of CHD and is more closely associated with atherosclerosis than frontal baldness.
Hair follicle aging
A key aspect of hair loss with age is the aging of the hair follicle. Ordinarily, hair follicle renewal is maintained by the stem cells associated with each follicle. Aging of the hair follicle appears to be primed by a sustained cellular response to the DNA damage that accumulates in renewing stem cells during aging. This damage response involves the proteolysis of type XVII collagen by neutrophil elastase in response to DNA damage in hair follicle stem cells. Proteolysis of collagen leads to elimination of the damaged cells and, consequently, to terminal hair follicle miniaturization.
Hedgehog signaling
In June 2022, the University of California, Irvine announced that researchers had discovered that hedgehog signaling in murine fibroblasts induces new hair growth and hair multiplication, while hedgehog activation increases fibroblast heterogeneity and drives new cell states. A newly identified signaling molecule, SCUBE3, potently stimulates hair growth and may offer a therapeutic treatment for androgenetic alopecia.
Etymology
The term alopecia is from the Classical Greek ἀλώπηξ, alōpēx, meaning "fox". The usage may derive from the fact that this animal sheds its coat twice a year, or from the observation that in ancient Greece foxes often lost hair because of mange.
See also
Alopecia in animals
Lichen planopilaris
List of conditions caused by problems with junctional proteins
Locks of Love – charity that provides hair prosthetics to alopecia patients
Psychogenic alopecia
References
External links
Hair loss at Curlie |
Ichthyosis vulgaris | Ichthyosis vulgaris (also known as "autosomal dominant ichthyosis" and "ichthyosis simplex") is a skin disorder causing dry, scaly skin. It is the most common form of ichthyosis, affecting around 1 in 250 people. For this reason it is known as common ichthyosis. It is usually an autosomal dominant inherited disease (often associated with filaggrin), although a rare non-heritable version called acquired ichthyosis exists.
Presentation
The symptoms of the inherited form of ichthyosis vulgaris are not usually present at birth but generally develop between three months and five years of age. The symptoms will often improve with age, although they may grow more severe again in old age.

The condition is not life-threatening; the impact on the patient, if it is a mild case, is generally restricted to mild itching and the social impact of having skin with an unusual appearance. People with mild cases have symptoms that include scaly patches on the shins, fine white scales on the forearms and upper arms, and rough palms. People with the mildest cases have no symptoms other than faint, tell-tale "mosaic lines" between the Achilles tendons and the calf muscles.
Severe cases, although rare, do exist. Severe cases entail the buildup of scales everywhere, with areas of the body that have a concentration of sweat glands being least affected. Areas where the skin rubs together, such as the armpits, the groin, and the "folded" areas of the elbows and knees, are less affected. Various topical treatments are available to "exfoliate" the scales. These include lotions that contain alpha-hydroxy acids.
Associated conditions
Many people with severe ichthyosis have problems sweating due to the buildup of scales on the skin. This may lead to problems such as "prickly itch", which results from the afflicted skin being unable to sweat due to the buildup of scales, or problems associated with overheating. The majority of people with ichthyosis vulgaris can sweat at least a little. Paradoxically, this means most would be more comfortable living in a hot and humid climate. Sweating helps to shed scales, which improves the appearance of the skin and prevents "prickly itch".

The dry skin will crack on digits or extremities and create bloody cuts. Skin is painful when inflamed and/or tight. For children and adolescents, psychological concerns may include inconsistent self-image, mood fluctuations due to cyclical outbreaks, a tendency to addiction, the possibility of social withdrawal when the skin is noticeably affected, and preoccupation with appearance.

Strong air conditioning and excessive consumption of alcohol can also increase the buildup of scales.
Over 50% of people with ichthyosis vulgaris have some type of atopic disease such as allergies, eczema, or asthma. Another common condition associated with ichthyosis vulgaris is keratosis pilaris (small bumps mainly appearing on the back of the upper arms).
Genetics
Ichthyosis vulgaris is one of the most common genetic disorders caused by a single gene. The disorder is believed to be caused by mutations to the gene encoding profilaggrin (a protein which is converted to filaggrin, which plays a vital role in the structure of the skin). Around 10% of the population have some detrimental mutations to the profilaggrin gene that is also linked to atopic dermatitis (another skin disorder that is often present with ichthyosis vulgaris). The exact mutation is only known for some cases of ichthyosis vulgaris.

It is generally considered to be an autosomal dominant condition, i.e., a single genetic mutation causes the disease and an affected person has a 50% chance of passing the condition on to their child. There is some research indicating it may be semi-dominant. This means that a single mutation would cause a mild case of ichthyosis vulgaris and mutations to both copies of the gene would produce a more severe case.
Diagnosis
See also
Harlequin-type ichthyosis
List of cutaneous conditions
List of cutaneous conditions caused by mutations in keratins
References
External links
DermAtlas 28
Photographs from Ichthyosis Information |
Immune thrombocytopenic purpura | Immune thrombocytopenic purpura (ITP), also known as idiopathic thrombocytopenic purpura or immune thrombocytopenia, is a type of thrombocytopenic purpura defined as an isolated low platelet count with a normal bone marrow in the absence of other causes of low platelets. It causes a characteristic red or purple bruise-like rash and an increased tendency to bleed. Two distinct clinical syndromes manifest as an acute condition in children and a chronic condition in adults. The acute form often follows an infection and spontaneously resolves within two months. Chronic immune thrombocytopenia persists longer than six months with a specific cause being unknown.
ITP is an autoimmune disease with antibodies detectable against several platelet surface structures.
ITP is diagnosed by identifying a low platelet count on a complete blood count (a common blood test). However, since the diagnosis depends on the exclusion of other causes of a low platelet count, additional investigations (such as a bone marrow biopsy) may be necessary in some cases.
In mild cases, only careful observation may be required but very low counts or significant bleeding may prompt treatment with corticosteroids, intravenous immunoglobulin, anti-D immunoglobulin, or immunosuppressive medications. Refractory ITP (not responsive to conventional treatment or constant relapsing after splenectomy) requires treatment to reduce the risk of clinically significant bleeding. Platelet transfusions may be used in severe cases with very low platelet counts in people who are bleeding. Sometimes the body may compensate by making abnormally large platelets.
Signs and symptoms
Signs include the spontaneous formation of bruises (purpura) and petechiae (tiny bruises), especially on the extremities, bleeding from the nostrils and/or gums, and menorrhagia (excessive menstrual bleeding), any of which may occur if the platelet count is below 20,000 per μl. A very low count (<10,000 per μl) may result in the spontaneous formation of hematomas (blood masses) in the mouth or on other mucous membranes. Bleeding time from minor lacerations or abrasions is usually prolonged.

Serious and possibly fatal complications due to extremely low counts (<5,000 per μl) include subarachnoid or intracerebral hemorrhage (bleeding inside the skull or brain), lower gastrointestinal bleeding or other internal bleeding. An ITP patient with an extremely low count is vulnerable to internal bleeding caused by blunt abdominal trauma, as might be experienced in a motor vehicle crash. These complications are not likely when the platelet count is above 20,000 per μl.
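The count thresholds described above can be summarized in a small sketch (illustrative only, not clinical guidance; the function name and band labels are hypothetical):

```python
# Illustrative mapping of the platelet-count bands described in the text
# (counts in platelets per microlitre). Not clinical guidance.

def itp_risk_band(platelets_per_ul: int) -> str:
    """Map a platelet count to the symptom band mentioned in the text."""
    if platelets_per_ul < 5_000:
        return "risk of serious internal bleeding"
    if platelets_per_ul < 10_000:
        return "spontaneous hematomas possible"
    if platelets_per_ul < 20_000:
        return "bruising and bleeding symptoms may occur"
    return "complications unlikely"
```

The bands are checked from lowest to highest, so each count falls into exactly one category.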
Pathogenesis
In approximately 60 percent of cases, antibodies against platelets can be detected. Most often these antibodies are against platelet membrane glycoproteins IIb-IIIa or Ib-IX, and are of the immunoglobulin G (IgG) type. The Harrington–Hollingsworth experiment established the immune pathogenesis of ITP.

The coating of platelets with IgG renders them susceptible to opsonization and phagocytosis by splenic macrophages, as well as by Kupffer cells in the liver. The IgG autoantibodies are also thought to damage megakaryocytes, the precursor cells to platelets, although this is believed to contribute only slightly to the decrease in platelet numbers. Recent research now indicates that impaired production of the glycoprotein hormone thrombopoietin, which is the stimulant for platelet production, may be a contributing factor to the reduction in circulating platelets. This observation has led to the development of a class of ITP-targeted medications referred to as thrombopoietin receptor agonists.

The stimulus for auto-antibody production in ITP is probably abnormal T cell activity. Preliminary findings suggest that these T cells can be influenced by medications that target B cells, such as rituximab.
Diagnosis
The diagnosis of ITP is a process of exclusion. First, it has to be determined that there are no blood abnormalities other than a low platelet count, and no physical signs other than bleeding. Then, secondary causes (5–10 percent of suspected ITP cases) should be excluded. Such secondary causes include leukemia, medications (e.g., quinine, heparin), lupus erythematosus, cirrhosis, HIV, hepatitis C, congenital causes, antiphospholipid syndrome, von Willebrand factor deficiency, onyalai and others. All patients with presumed ITP should be tested for HIV and hepatitis C virus, as platelet counts may be corrected by treating the underlying disease. In approximately 2.7 to 5 percent of cases, autoimmune hemolytic anemia and ITP coexist, a condition referred to as Evans syndrome.

Despite the destruction of platelets by splenic macrophages, the spleen is normally not enlarged. In fact, an enlarged spleen should lead to a search for other possible causes for the thrombocytopenia. Bleeding time is usually prolonged in ITP patients. However, the use of bleeding time in diagnosis is discouraged by the American Society of Hematology practice guidelines, and a normal bleeding time does not exclude a platelet disorder.

Bone marrow examination may be performed on patients over the age of 60 and those who do not respond to treatment, or when the diagnosis is in doubt. On examination of the marrow, an increase in the production of megakaryocytes may be observed and may help in establishing a diagnosis of ITP. An analysis for anti-platelet antibodies is a matter of clinician's preference, as there is disagreement on whether the 80 percent specificity of this test is sufficient to be clinically useful.
Treatment
With rare exceptions, there is usually no need to treat based on platelet counts. Many older recommendations suggested a certain platelet count threshold (usually somewhere below 20,000/µl) as an indication for hospitalization or treatment. Current guidelines recommend treatment only in cases of significant bleeding.
Treatment recommendations sometimes differ for adult and pediatric ITP.
Steroids
Initial treatment usually consists of the administration of corticosteroids, a group of medications that suppress the immune system. The dose and mode of administration is determined by platelet count and whether there is active bleeding: in urgent situations, infusions of dexamethasone or methylprednisolone may be used, while oral prednisone or prednisolone may suffice in less severe cases. Once the platelet count has improved, the dose of steroid is gradually reduced while the possibility of relapse is monitored. 60–90 percent will experience a relapse during dose reduction or cessation. Long-term steroids are avoided if possible because of potential side-effects that include osteoporosis, diabetes and cataracts.
Anti-D
Another option, suitable for Rh-positive patients with functional spleens is intravenous administration of Rho(D) immune globulin [Human; Anti-D]. The mechanism of action of anti-D is not fully understood. However, following administration, anti-D-coated red blood cell complexes saturate Fcγ receptor sites on macrophages, resulting in preferential destruction of red blood cells (RBCs), therefore sparing antibody-coated platelets. There are two anti-D products indicated for use in patients with ITP: WinRho SDF and Rhophylac. The most common adverse reactions are headache (15%), nausea/vomiting (12%) chills (<2%) and fever (1%).
Steroid-sparing agents
There is increasing use of immunosuppressants such as mycophenolate mofetil and azathioprine because of their effectiveness. In chronic refractory cases, where immune pathogenesis has been confirmed, the off-label use of the vinca alkaloid and chemotherapy agent vincristine may be attempted. However, vincristine has significant side effects and its use in treating ITP must be approached with caution, especially in children.
Intravenous immunoglobulin
Intravenous immunoglobulin (IVIg) may be infused in some cases in order to decrease the rate at which macrophages consume antibody-tagged platelets. However, while sometimes effective, it is costly and produces improvement that generally lasts less than a month. Nevertheless, in the case of an ITP patient already scheduled for surgery who has a dangerously low platelet count and has experienced a poor response to other treatments, IVIg can rapidly increase platelet counts, and can also help reduce the risk of major bleeding by transiently increasing platelet counts.
Thrombopoietin receptor agonists
Thrombopoietin receptor agonists are pharmaceutical agents that stimulate platelet production in the bone marrow. In this, they differ from the previously discussed agents that act by attempting to curtail platelet destruction. Two such products are currently available:
Romiplostim (trade name Nplate) is a thrombopoiesis stimulating Fc-peptide fusion protein (peptibody) that is administered by subcutaneous injection. Designated an orphan drug in 2003 under United States law, clinical trials demonstrated romiplostim to be effective in treating chronic ITP, especially in relapsed post-splenectomy patients. Romiplostim was approved by the United States Food and Drug Administration (FDA) for long-term treatment of adult chronic ITP on August 22, 2008.
Eltrombopag (trade name Promacta in the US, Revolade in the EU) is an orally-administered agent with an effect similar to that of romiplostim. It too has been demonstrated to increase platelet counts and decrease bleeding in a dose-dependent manner. Developed by GlaxoSmithKline and also designated an orphan drug by the FDA, Promacta was approved by the FDA on November 20, 2008.

Thrombopoietin receptor agonists exhibited the greatest success so far in treating patients with refractory ITP. Side effects of thrombopoietin receptor agonists include headache, joint or muscle pain, dizziness, nausea or vomiting, and an increased risk of blood clots.
Surgery
Splenectomy (removal of the spleen) may be considered in patients who are either unresponsive to steroid treatment, have frequent relapses, or cannot be tapered off steroids after a few months. Platelets which have been bound by antibodies are taken up by macrophages in the spleen (which have Fc receptors), and so removal of the spleen reduces platelet destruction. The procedure is potentially risky in ITP cases due to the increased possibility of significant bleeding during surgery. Durable remission following splenectomy is achieved in 60–80 percent of ITP cases. Even though there is a consensus regarding the short-term efficacy of splenectomy, findings on its long-term efficacy and side-effects are controversial. After splenectomy, 11.6–75 percent of ITP cases relapsed, and 8.7–40 percent of ITP cases had no response to splenectomy. The use of splenectomy to treat ITP has diminished since the development of steroid therapy and other pharmaceutical remedies.
Platelet transfusion
Platelet transfusion alone is normally not recommended except in an emergency and is usually unsuccessful in producing a long-term platelet count increase. This is because the underlying autoimmune mechanism that is destroying the patient's platelets will also destroy donor platelets, and so platelet transfusions are not considered a long-term treatment option.
H. pylori eradication
In adults, particularly those living in areas with a high prevalence of Helicobacter pylori (which normally inhabits the stomach wall and has been associated with peptic ulcers), identification and treatment of this infection has been shown to improve platelet counts in a third of patients. In a fifth, the platelet count normalized completely; this response rate is similar to that found in treatment with rituximab, which is more expensive and less safe. In children, this approach is not supported by evidence, except in high prevalence areas. Urea breath testing and stool antigen testing perform better than serology-based tests; moreover, serology may be false-positive after treatment with IVIG.
Other agents
Dapsone (also called diphenylsulfone, DDS, or avlosulfon) is an anti-infective sulfone medication. Dapsone may also be helpful in treating lupus, rheumatoid arthritis, and as a second-line treatment for ITP. The mechanism by which dapsone assists in ITP is unclear but an increased platelet count is seen in 40–60 percent of recipients.
The off-label use of rituximab, a chimeric monoclonal antibody against the B cell surface antigen CD20, may sometimes be an effective alternative to splenectomy. However, significant side-effects can occur, and randomized controlled trials are inconclusive.
Prognosis
In general, patients with acute ITP will only rarely have life-threatening bleeding. Most patients ultimately have stable, though lower, platelet counts that are still hemostatic. Unlike pediatric patients, who can be cured, most adults will run a chronic course even after splenectomy.
Epidemiology
A normal platelet count is considered to be in the range of 150,000–450,000 per microlitre (μl) of blood for most healthy individuals. Hence one may be considered thrombocytopenic below that range, although the threshold for a diagnosis of ITP is not tied to any specific number.

The incidence of ITP is estimated at 50–100 new cases per million per year, with children accounting for half of that number. At least 70 percent of childhood cases will end up in remission within six months, even without treatment. Moreover, a third of the remaining chronic cases will usually remit during follow-up observation, and another third will end up with only mild thrombocytopenia (defined as a platelet count above 50,000). A number of immune-related genes and polymorphisms have been identified as influencing predisposition to ITP, with the FCGR3a-V158 allele and KIRDS2/DL2 increasing susceptibility and KIR2DS5 shown to be protective.

ITP is usually chronic in adults and the probability of durable remission is 20–40 percent. The male to female ratio in the adult group varies from 1:1.2 to 1.7 in most age ranges (childhood cases are roughly equal for both sexes) and the median age of adults at the diagnosis is 56–60. The ratio between male and female adult cases tends to widen with age. In the United States, the adult chronic population is thought to be approximately 60,000, with women outnumbering men approximately 2 to 1, which has resulted in ITP being designated an orphan disease.

The mortality rate due to chronic ITP varies but tends to be higher relative to the general population for any age range. In a study conducted in Great Britain, it was noted that ITP causes an approximately 60 percent higher rate of mortality compared to sex- and age-matched subjects without ITP. This increased risk of death with ITP is largely concentrated in the middle-aged and elderly. Ninety-six percent of reported ITP-related deaths were individuals 45 years or older.
No significant difference was noted in the rate of survival between males and females.
Pregnancy
Anti-platelet autoantibodies in a pregnant woman with ITP will attack the patient's own platelets and will also cross the placenta and react against fetal platelets. Therefore, ITP is a significant cause of fetal and neonatal immune thrombocytopenia. Approximately 10% of newborns affected by ITP will have platelet counts <50,000/uL and 1% to 2% will have a risk of intracerebral hemorrhage comparable to infants with neonatal alloimmune thrombocytopenia (NAIT).

No lab test can reliably predict if neonatal thrombocytopenia will occur. The risk of neonatal thrombocytopenia is increased with:
Mothers with a history of splenectomy for ITP
Mothers who had a previous infant affected with ITP
Gestational (maternal) platelet count less than 100,000/uL

It is recommended that pregnant women with thrombocytopenia or a previous diagnosis of ITP should be tested for serum antiplatelet antibodies. A woman with symptomatic thrombocytopenia and an identifiable antiplatelet antibody should be started on therapy for their ITP, which may include steroids or IVIG. Fetal blood analysis to determine the platelet count is not generally performed, as ITP-induced thrombocytopenia in the fetus is generally less severe than NAIT. Platelet transfusions may be performed in newborns, depending on the degree of thrombocytopenia. It is recommended that neonates be followed with serial platelet counts for the first few days after birth.
History
After initial reports by the Portuguese physician Amato Lusitano in 1556 and Lazarus de la Rivière (physician to the King of France) in 1658, it was the German physician and poet Paul Gottlieb Werlhof who in 1735 wrote the most complete initial report of the purpura of ITP. Platelets were unknown at the time. The name "Werlhof's disease" was used more widely before the current descriptive name became more popular. Platelets were described in the early 19th century, and in the 1880s several investigators linked the purpura with abnormalities in the platelet count. The first report of a successful therapy for ITP was in 1916, when a young Polish medical student, Paul Kaznelson, described a female patient's response to a splenectomy. Splenectomy remained a first-line remedy until the introduction of steroid therapy in the 1950s.
References
== External links == |
Impetigo | Impetigo is a bacterial infection that involves the superficial skin. The most common presentation is yellowish crusts on the face, arms, or legs. Less commonly there may be large blisters which affect the groin or armpits. The lesions may be painful or itchy. Fever is uncommon.

It is typically due to either Staphylococcus aureus or Streptococcus pyogenes. Risk factors include attending day care, crowding, poor nutrition, diabetes mellitus, contact sports, and breaks in the skin such as from mosquito bites, eczema, scabies, or herpes. It can spread between people through contact. Diagnosis is typically based on the symptoms and appearance.

Prevention is by hand washing, avoiding people who are infected, and cleaning injuries. Treatment is typically with antibiotic creams such as mupirocin or fusidic acid. Antibiotics by mouth, such as cefalexin, may be used if large areas are affected. Antibiotic-resistant forms have been found.

Impetigo affected about 140 million people (2% of the world population) in 2010. It can occur at any age, but is most common in young children. In some places the condition is also known as "school sores". Without treatment people typically get better within three weeks. Recurring infections can occur due to colonization of the nose by the bacteria. Complications may include cellulitis or poststreptococcal glomerulonephritis. The name is from the Latin impetere, meaning "attack".
Signs and symptoms
Contagious impetigo
This most common form of impetigo, also called nonbullous impetigo, most often begins as a red sore near the nose or mouth which soon breaks, leaking pus or fluid, and forms a honey-colored scab, followed by a red mark which often heals without leaving a scar. Sores are not painful, but they may be itchy. Lymph nodes in the affected area may be swollen, but fever is rare. Touching or scratching the sores may easily spread the infection to other parts of the body. Skin ulcers with redness and scarring also may result from scratching or abrading the skin.
Bullous impetigo
Bullous impetigo, mainly seen in children younger than 2 years, involves painless, fluid-filled blisters, mostly on the arms, legs, and trunk, surrounded by red and itchy (but not sore) skin. The blisters may be large or small. After they break, they form yellow scabs.
Ecthyma
Ecthyma, an ulcerative form of impetigo, produces painful fluid- or pus-filled sores with redness of the skin, usually on the arms and legs, which become ulcers that penetrate deeper into the dermis. After they break open, they form hard, thick, gray-yellow scabs, which sometimes leave scars. Ecthyma may be accompanied by swollen lymph nodes in the affected area.
Causes
Impetigo is primarily caused by Staphylococcus aureus, and sometimes by Streptococcus pyogenes. Both bullous and nonbullous are primarily caused by S. aureus, with Streptococcus also commonly being involved in the nonbullous form.
Predisposing factors
Impetigo is more likely to infect children ages 2–5, especially those who attend school or day care. 70% of cases are the nonbullous form and 30% are the bullous form. Other factors, such as diabetes mellitus, dermatitis, immunodeficiency disorders, and other irritable skin conditions, can increase the risk of contracting impetigo. Impetigo occurs more frequently among people who live in warm climates.
Transmission
The infection is spread by direct contact with lesions or with nasal carriers. The incubation period is 1–3 days after exposure to Streptococcus and 4–10 days for Staphylococcus. Dried streptococci in the air are not infectious to intact skin. Scratching may spread the lesions.
Diagnosis
Impetigo is usually diagnosed based on its appearance. It generally appears as honey-colored scabs formed from dried serum and is often found on the arms, legs, or face. If a visual diagnosis is unclear, a culture may be done to test for resistant bacteria.
Differential diagnosis
Other conditions that can result in symptoms similar to the common form include contact dermatitis, herpes simplex virus, discoid lupus, and scabies. Other conditions that can result in symptoms similar to the blistering form include other bullous skin diseases, burns, and necrotizing fasciitis.
Prevention
To prevent the spread of impetigo the skin and any open wounds should be kept clean and covered. Care should be taken to keep fluids from an infected person away from the skin of a non-infected person. Washing hands, linens, and affected areas will lower the likelihood of contact with infected fluids. Scratching can spread the sores; keeping nails short will reduce the chances of spreading. Infected people should avoid contact with others and eliminate sharing of clothing or linens. Children with impetigo can return to school 24 hours after starting antibiotic therapy as long as their draining lesions are covered.
Treatment
Antibiotics, either as a cream or by mouth, are usually prescribed. Mild cases may be treated with mupirocin ointments. In 95% of cases, a single 7-day antibiotic course results in resolution in children. Topical antiseptics have been argued to be inferior to topical antibiotics and therefore should not be used as a replacement. However, the National Institute for Health and Care Excellence (NICE) as of February 2020 recommends a hydrogen peroxide 1% cream antiseptic rather than topical antibiotics for localised non-bullous impetigo in otherwise well individuals. This recommendation is part of an effort to reduce the overuse of antimicrobials that may contribute to the development of resistant organisms such as MRSA.
More severe cases require oral antibiotics, such as dicloxacillin, flucloxacillin, or erythromycin. Alternatively, amoxicillin combined with clavulanate potassium, cephalosporins (first-generation) and many others may also be used as an antibiotic treatment. Alternatives for people who are seriously allergic to penicillin or infections with methicillin-resistant Staphylococcus aureus include doxycycline, clindamycin, and trimethoprim-sulphamethoxazole, although doxycycline should not be used in children under eight years of age due to the risk of drug-induced tooth discolouration. When streptococci alone are the cause, penicillin is the drug of choice. When the condition presents with ulcers, valacyclovir, an antiviral, may be given in case a viral infection is causing the ulcer.
Alternative medicine
There is not enough evidence to recommend alternative medicine such as tea tree oil or honey.
Prognosis
Without treatment, individuals with impetigo typically get better within three weeks. Complications may include cellulitis or poststreptococcal glomerulonephritis. Rheumatic fever does not appear to be related.
Epidemiology
Globally, impetigo affects more than 162 million children in low- to middle-income countries. Rates are highest in countries with limited resources, and the disease is especially prevalent in Oceania. The tropical climate and high population in lower socioeconomic regions contribute to these high rates. In the United Kingdom, the annual incidence is about 2.8% in children under 4 years old, decreasing to 1.6% in children up to 15 years old. As age increases, the rate of impetigo declines, but all ages are still susceptible.
History
Impetigo was originally described and differentiated by William Tilbury Fox around 1864. The word impetigo is the generic Latin word for skin eruption, and it stems from the verb impetere, "to attack" (as in impetus). Before the discovery of antibiotics, the disease was treated with an application of the antiseptic gentian violet, which was an effective treatment.
References
External links
Impetigo at Curlie
Impetigo and Ecthyma at Merck Manual of Diagnosis and Therapy Professional Edition |
Botulism | Botulism is a rare and potentially fatal illness caused by a toxin produced by the bacterium Clostridium botulinum. The disease begins with weakness, blurred vision, feeling tired, and trouble speaking. This may then be followed by weakness of the arms, chest muscles, and legs. Vomiting, swelling of the abdomen, and diarrhea may also occur. The disease does not usually affect consciousness or cause a fever.
Botulism can be spread in several ways. The bacterial spores which cause it are common in both soil and water. They produce the botulinum toxin when exposed to low oxygen levels and certain temperatures. Foodborne botulism happens when food containing the toxin is eaten. Infant botulism happens when the bacteria develop in the intestines and release the toxin. This typically only occurs in children less than six months old, as protective mechanisms develop after that time. Wound botulism is found most often among those who inject street drugs. In this situation, spores enter a wound, and in the absence of oxygen, release the toxin. It is not passed directly between people. The diagnosis is confirmed by finding the toxin or bacteria in the person in question.
Prevention is primarily by proper food preparation. The toxin, though not the spores, is destroyed by heating to more than 85 °C (185 °F) for longer than 5 minutes. Honey can contain the organism, and for this reason, honey should not be fed to children under 12 months. Treatment is with an antitoxin. In those who lose their ability to breathe on their own, mechanical ventilation may be necessary for months. Antibiotics may be used for wound botulism. Death occurs in 5 to 10% of people. Botulism also affects many other animals. The word is from Latin botulus, meaning sausage.
Signs and symptoms
The muscle weakness of botulism characteristically starts in the muscles supplied by the cranial nerves—a group of twelve nerves that control eye movements, the facial muscles and the muscles controlling chewing and swallowing. Double vision, drooping of both eyelids, loss of facial expression and swallowing problems may therefore occur. In addition to affecting the voluntary muscles, it can also cause disruptions in the autonomic nervous system. This is experienced as a dry mouth and throat (due to decreased production of saliva), postural hypotension (decreased blood pressure on standing, with resultant lightheadedness and risk of blackouts), and eventually constipation (due to decreased forward movement of intestinal contents). Some of the toxins (B and E) also precipitate nausea, vomiting, and difficulty with talking. The weakness then spreads to the arms (starting in the shoulders and proceeding to the forearms) and legs (again from the thighs down to the feet). Severe botulism leads to reduced movement of the muscles of respiration, and hence problems with gas exchange. This may be experienced as dyspnea (difficulty breathing), but when severe can lead to respiratory failure, due to the buildup of unexhaled carbon dioxide and its resultant depressant effect on the brain. This may lead to respiratory compromise and death if untreated. Clinicians frequently think of the symptoms of botulism in terms of a classic triad: bulbar palsy and descending paralysis, lack of fever, and clear senses and mental status ("clear sensorium").
Infant botulism
Infant botulism (also referred to as floppy baby syndrome) was first recognized in 1976, and is the most common form of botulism in the United States. Infants are susceptible to infant botulism in the first year of life, with more than 90% of cases occurring in infants younger than six months. Infant botulism results from the ingestion of the C. botulinum spores, and subsequent colonization of the small intestine. The infant gut may be colonized when the composition of the intestinal microflora (normal flora) is insufficient to competitively inhibit the growth of C. botulinum and levels of bile acids (which normally inhibit clostridial growth) are lower than later in life. The growth of the spores releases botulinum toxin, which is then absorbed into the bloodstream and taken throughout the body, causing paralysis by blocking the release of acetylcholine at the neuromuscular junction. Typical symptoms of infant botulism include constipation, lethargy, weakness, difficulty feeding, and an altered cry, often progressing to a complete descending flaccid paralysis. Although constipation is usually the first symptom of infant botulism, it is commonly overlooked. Honey is a known dietary reservoir of C. botulinum spores and has been linked to infant botulism. For this reason, honey is not recommended for infants less than one year of age. Most cases of infant botulism, however, are thought to be caused by acquiring the spores from the natural environment. Clostridium botulinum is a ubiquitous soil-dwelling bacterium. Many infant botulism patients have been demonstrated to live near a construction site or an area of soil disturbance. Infant botulism has been reported in 49 of 50 US states (all save for Rhode Island), and cases have been recognized in 26 countries on five continents.
Complications
Infant botulism has no long-term side effects.
Botulism can result in death due to respiratory failure. However, in the past 50 years, the proportion of patients with botulism who die has fallen from about 50% to 7% due to improved supportive care. A patient with severe botulism may require mechanical ventilation (breathing support through a ventilator) as well as intensive medical and nursing care, sometimes for several months. The person may require rehabilitation therapy after leaving the hospital.
Cause
Clostridium botulinum is an anaerobic, Gram-positive, spore-forming rod. Botulinum toxin is one of the most powerful known toxins: about one microgram is lethal to humans when inhaled. It acts by blocking nerve function (neuromuscular blockade) through inhibition of the release of the excitatory neurotransmitter acetylcholine from the presynaptic membrane of neuromuscular junctions in the somatic nervous system. This causes paralysis. Advanced botulism can cause respiratory failure by paralysing the muscles of the chest; this can progress to respiratory arrest. Furthermore, acetylcholine release from the presynaptic membranes of muscarinic nerve synapses is blocked. This can lead to a variety of autonomic signs and symptoms described above.
In all cases, illness is caused by the botulinum toxin produced by the bacterium C. botulinum in anaerobic conditions and not by the bacterium itself. The pattern of damage occurs because the toxin affects nerves that fire (depolarize) at a higher frequency first. Mechanisms of entry into the human body for botulinum toxin are described below.
Colonization of the gut
The most common form in Western countries is infant botulism. This occurs in infants who are colonized with the bacterium in the small intestine during the early stages of their lives. The bacterium then produces the toxin, which is absorbed into the bloodstream. The consumption of honey during the first year of life has been identified as a risk factor for infant botulism; it is a factor in a fifth of all cases. The adult form of infant botulism is termed adult intestinal toxemia, and is exceedingly rare.
Food
Toxin that is produced by the bacterium in containers of food that have been improperly preserved is the most common cause of food-borne botulism. Fish that has been pickled without the salinity or acidity of brine that contains acetic acid and high sodium levels, as well as smoked fish stored at too high a temperature, presents a risk, as does improperly canned food.
Food-borne botulism results from contaminated food in which C. botulinum spores have been allowed to germinate in low-oxygen conditions. This typically occurs in improperly prepared home-canned food substances and fermented dishes without adequate salt or acidity. Given that multiple people often consume food from the same source, it is common for more than a single person to be affected simultaneously. Symptoms usually appear 12–36 hours after eating, but can also appear within 6 hours to 10 days.
Wound
Wound botulism results from the contamination of a wound with the bacteria, which then secrete the toxin into the bloodstream. This has become more common in intravenous drug users since the 1990s, especially people using black tar heroin and those injecting heroin into the skin rather than the veins. Wound botulism accounts for 29% of cases.
Inhalation
Isolated cases of botulism have been described after inhalation by laboratory workers.
Injection
Symptoms of botulism may occur away from the injection site of botulinum toxin. This may include loss of strength, blurred vision, change of voice, or trouble breathing which can result in death. Onset can be hours to weeks after an injection. This generally only occurs with inappropriate strengths of botulinum toxin for cosmetic use or due to the larger doses used to treat movement disorders. Following a 2008 review the FDA added these concerns as a boxed warning.
Mechanism
The toxin is the protein botulinum toxin produced under anaerobic conditions (where there is no oxygen) by the bacterium Clostridium botulinum.
Clostridium botulinum is a large anaerobic Gram-positive bacillus that forms subterminal endospores. There are eight serological varieties of the bacterium denoted by the letters A to H. The toxin from all of these acts in the same way and produces similar symptoms: the motor nerve endings are prevented from releasing acetylcholine, causing flaccid paralysis and symptoms of blurred vision, ptosis, nausea, vomiting, diarrhea or constipation, cramps, and respiratory difficulty.
Botulinum toxin is broken into eight neurotoxins (labeled as types A, B, C [C1, C2], D, E, F, and G), which are antigenically and serologically distinct but structurally similar. Human botulism is caused mainly by types A, B, E, and (rarely) F. Types C and D cause toxicity only in other animals. In October 2013, scientists released news of the discovery of type H, the first new botulism neurotoxin found in forty years. However, further studies showed type H to be a chimeric toxin composed of parts of types F and A (FA). Some types produce a characteristic putrefactive smell and digest meat (types A and some of B and F); these are said to be proteolytic; type E and some types of B, C, D and F are nonproteolytic and can go undetected because there is no strong odor associated with them. When the bacteria are under stress, they develop spores, which are inert. Their natural habitats are in the soil, in the silt that comprises the bottom sediment of streams, lakes, and coastal waters and ocean, while some types are natural inhabitants of the intestinal tracts of mammals (e.g., horses, cattle, humans), and are present in their excreta. The spores can survive in their inert form for many years. Toxin is produced by the bacteria when environmental conditions are favourable for the spores to replicate and grow, but the gene that encodes for the toxin protein is actually carried by a virus or phage that infects the bacteria. Little is known about the natural factors that control phage infection and replication within the bacteria. The spores require warm temperatures, a protein source, an anaerobic environment, and moisture in order to become active and produce toxin. In the wild, decomposing vegetation and invertebrates combined with warm temperatures can provide ideal conditions for the botulism bacteria to activate and produce toxin that may affect feeding birds and other animals.
Spores are not killed by boiling, but botulism is uncommon because special, rarely obtained conditions are necessary for botulinum toxin production from C. botulinum spores, including an anaerobic, low-salt, low-acid, low-sugar environment at ambient temperatures. Botulinum toxin inhibits the release within the nervous system of acetylcholine, a neurotransmitter responsible for communication between motor neurons and muscle cells. All forms of botulism lead to paralysis that typically starts with the muscles of the face and then spreads towards the limbs. In severe forms, botulism leads to paralysis of the breathing muscles and causes respiratory failure. In light of this life-threatening complication, all suspected cases of botulism are treated as medical emergencies, and public health officials are usually involved to identify the source and take steps to prevent further cases from occurring. Botulinum toxin types A, C, and E cleave SNAP-25, ultimately leading to paralysis.
Diagnosis
For botulism in babies, diagnosis should be made on signs and symptoms. Confirmation of the diagnosis is made by testing of a stool or enema specimen with the mouse bioassay.
In people whose history and physical examination suggest botulism, these clues are often not enough to allow a diagnosis. Other diseases such as Guillain–Barré syndrome, stroke, and myasthenia gravis can appear similar to botulism, and special tests may be needed to exclude these other conditions. These tests may include a brain scan, cerebrospinal fluid examination, nerve conduction test (electromyography, or EMG), and an edrophonium chloride (Tensilon) test for myasthenia gravis. A definite diagnosis can be made if botulinum toxin is identified in the food, stomach or intestinal contents, vomit or feces. The toxin is occasionally found in the blood in peracute cases. Botulinum toxin can be detected by a variety of techniques, including enzyme-linked immunosorbent assays (ELISAs), electrochemiluminescent (ECL) tests and mouse inoculation or feeding trials. The toxins can be typed with neutralization tests in mice. In toxicoinfectious botulism, the organism can be cultured from tissues. On egg yolk medium, toxin-producing colonies usually display surface iridescence that extends beyond the colony.
Prevention
Although the vegetative form of the bacteria is destroyed by boiling, the spore itself is not killed by the temperatures reached with normal sea-level-pressure boiling, leaving it free to grow and again produce the toxin when conditions are right. A recommended prevention measure for infant botulism is to avoid giving honey to infants less than 12 months of age, as botulinum spores are often present. In older children and adults the normal intestinal bacteria suppress development of C. botulinum. While commercially canned goods are required to undergo a "botulinum cook" in a pressure cooker at 121 °C (250 °F) for 3 minutes, and thus rarely cause botulism, there have been notable exceptions. Two were the 1978 Alaskan salmon outbreak and the 2007 Castleberry's Food Company outbreak. Foodborne botulism is the rarest form, though, accounting for only around 15% of cases (US), and has more frequently been from home-canned foods with low acid content, such as carrot juice, asparagus, green beans, beets, and corn. However, outbreaks of botulism have resulted from more unusual sources. In July 2002, fourteen Alaskans ate muktuk (whale meat) from a beached whale, and eight of them developed symptoms of botulism, two of them requiring mechanical ventilation. Other, much rarer sources of infection (about every decade in the US) include garlic or herbs stored covered in oil without acidification, chili peppers, improperly handled baked potatoes wrapped in aluminum foil, tomatoes, and home-canned or fermented fish.
When canning or preserving food at home, attention should be paid to hygiene, pressure, temperature, refrigeration and storage. When making home preserves, only acidic fruit such as apples, pears, stone fruits and berries should be used. Tropical fruit and tomatoes are low in acidity and must have some acidity added before they are canned. Low-acid foods have pH values higher than 4.6. They include red meats, seafood, poultry, milk, and all fresh vegetables except for most tomatoes. Most mixtures of low-acid and acid foods also have pH values above 4.6 unless their recipes include enough lemon juice, citric acid, or vinegar to make them acidic. Acid foods have a pH of 4.6 or lower. They include fruits, pickles, sauerkraut, jams, jellies, marmalades, and fruit butters. Although tomatoes usually are considered an acid food, some are now known to have pH values slightly above 4.6. Figs also have pH values slightly above 4.6. Therefore, if they are to be canned as acid foods, these products must be acidified to a pH of 4.6 or lower with lemon juice or citric acid. Properly acidified tomatoes and figs are acid foods and can be safely processed in a boiling-water canner. Oils infused with fresh garlic or herbs should be acidified and refrigerated. Potatoes which have been baked while wrapped in aluminum foil should be kept hot until served or refrigerated. Because the botulism toxin is destroyed by high temperatures, home-canned foods are best boiled for 10 minutes before eating. Metal cans containing food in which bacteria are growing may bulge outwards due to gas production from bacterial growth or the food inside may be foamy or have a bad odor; such cans with any of these signs should be discarded. Any container of food which has been heat-treated and then assumed to be airtight which shows signs of not being so, e.g., metal cans with pinprick holes from rust or mechanical damage, should be discarded. Contamination of a canned food solely with C. botulinum may not cause any visual defects to the container, such as bulging. Only assurance of sufficient thermal processing during production, and absence of a route for subsequent contamination, should be used as indicators of food safety.
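The pH rules above reduce to a single threshold, which can be sketched as a small check. This is only an illustration of the classification described in the text, assuming the 4.6 cutoff stated there; the function names and return strings are hypothetical, not part of any canning standard.

```python
# Classify foods for home canning by pH, per the 4.6 threshold described above.
# Helper names and return strings are illustrative, not authoritative guidance.

ACID_THRESHOLD_PH = 4.6  # at or below: acid food, safe for boiling-water canning

def canning_method(ph: float) -> str:
    """Return the processing method suggested by the text for a food of the given pH."""
    if ph <= ACID_THRESHOLD_PH:
        return "boiling-water canner"          # acid food (fruits, pickles, jams...)
    return "pressure canner (botulinum cook)"  # low-acid food (meats, vegetables...)

def needs_acidification(ph: float) -> bool:
    """Borderline foods (e.g. some tomatoes, figs, slightly above pH 4.6) must be
    acidified with lemon juice or citric acid before water-bath canning."""
    return ph > ACID_THRESHOLD_PH
```

For example, a tomato batch measuring pH 4.7 would need acidification down to 4.6 or lower before it could be treated as an acid food.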
The addition of nitrites and nitrates to processed meats such as ham, bacon, and sausages reduces growth and toxin production of C. botulinum.
Vaccine
Vaccines are under development, but they have disadvantages, and in some cases there are concerns that they may revert to dangerous native activity. As of 2017 work to develop a better vaccine was being carried out, but the US FDA had not approved any vaccine against botulism.
Treatment
Botulism is generally treated with botulism antitoxin and supportive care. Supportive care for botulism includes monitoring of respiratory function. Respiratory failure due to paralysis may require mechanical ventilation for 2 to 8 weeks, plus intensive medical and nursing care. After this time, paralysis generally improves as new neuromuscular connections are formed. In some abdominal cases, physicians may try to remove contaminated food still in the digestive tract by inducing vomiting or using enemas. Wounds should be treated, usually surgically, to remove the source of the toxin-producing bacteria.
Antitoxin
Botulinum antitoxin consists of antibodies that neutralize botulinum toxin in the circulatory system by passive immunization. This prevents additional toxin from binding to the neuromuscular junction, but does not reverse any already inflicted paralysis. In adults, a trivalent antitoxin containing antibodies raised against botulinum toxin types A, B, and E is used most commonly; however, a heptavalent botulism antitoxin has also been developed and was approved by the U.S. FDA in 2013. In infants, horse-derived antitoxin is sometimes avoided for fear of infants developing serum sickness or lasting hypersensitivity to horse-derived proteins. To avoid this, a human-derived antitoxin has been developed and approved by the U.S. FDA in 2003 for the treatment of infant botulism. This human-derived antitoxin has been shown to be both safe and effective for the treatment of infant botulism. However, the danger of equine-derived antitoxin to infants has not been clearly established, and one study showed the equine-derived antitoxin to be both safe and effective for the treatment of infant botulism. Trivalent (A,B,E) botulinum antitoxin is derived from equine sources utilizing whole antibodies (Fab and Fc portions). In the United States, this antitoxin is available from the local health department via the CDC. The second antitoxin, heptavalent (A,B,C,D,E,F,G) botulinum antitoxin, is derived from "despeciated" equine IgG antibodies which have had the Fc portion cleaved off, leaving the F(ab')2 portions. This less immunogenic antitoxin is effective against all known strains of botulism where not contraindicated.
Prognosis
The paralysis caused by botulism can persist for 2 to 8 weeks, during which supportive care and ventilation may be necessary to keep the person alive. Botulism can be fatal in 5% to 10% of people who are affected. However, if left untreated, botulism is fatal in 40% to 50% of cases. Infant botulism typically has no long-term side effects but can be complicated by treatment-associated adverse events. The case fatality rate is less than 2% for hospitalized babies.
Epidemiology
Globally, botulism is fairly rare, with approximately 1,000 identified cases yearly.
United States
In the United States an average of 145 cases are reported each year. Of these, roughly 65% are infant botulism, 20% are wound botulism, and 15% are foodborne. Infant botulism is predominantly sporadic and not associated with epidemics, but great geographic variability exists. From 1974 to 1996, for example, 47% of all infant botulism cases reported in the U.S. occurred in California. Between 1990 and 2000, the Centers for Disease Control and Prevention reported 263 individual foodborne cases from 160 botulism events in the United States with a case-fatality rate of 4%. Thirty-nine percent (103 cases and 58 events) occurred in Alaska, all of which were attributable to traditional Alaska aboriginal foods. In the lower 49 states, home-canned food was implicated in 70 events (~69%), with canned asparagus being the most frequent cause. Two restaurant-associated outbreaks affected 25 people. The median number of cases per year was 23 (range 17–43), and the median number of events per year was 14 (range 9–24). The highest incidence rates occurred in Alaska, Idaho, Washington, and Oregon. All other states had an incidence rate of 1 case per ten million people or less. The number of cases of foodborne and infant botulism has changed little in recent years, but wound botulism has increased because of the use of black tar heroin, especially in California. All data regarding botulism antitoxin releases and laboratory confirmation of cases in the US are recorded annually by the Centers for Disease Control and Prevention and published on their website.
On 2 July 1971, the U.S. Food and Drug Administration (FDA) released a public warning after learning that a New York man had died and his wife had become seriously ill due to botulism after eating a can of Bon Vivant vichyssoise soup.
Between 31 March and 6 April 1977, 59 individuals developed type B botulism. All who fell ill had eaten at the same Mexican restaurant in Pontiac, Michigan, and had consumed a hot sauce made with improperly home-canned jalapeño peppers, either by adding it to their food, or by eating nachos that had been prepared with the hot sauce. The full clinical spectrum (mild symptomatology with neurologic findings through life-threatening ventilatory paralysis) of type B botulism was documented.
In April 1994, the largest outbreak of botulism in the United States since 1978 occurred in El Paso, Texas. Thirty people were affected; 4 required mechanical ventilation. All ate food from a Greek restaurant. The attack rate among people who ate a potato-based dip was 86% (19/22) compared with 6% (11/176) among people who did not eat the dip (relative risk [RR] = 13.8; 95% confidence interval [CI], 7.6–25.1). The attack rate among people who ate an eggplant-based dip was 67% (6/9) compared with 13% (24/189) among people who did not (RR = 5.2; 95% CI, 2.9–9.5). Botulism toxin type A was detected in patients and in both dips. Toxin formation resulted from holding aluminum foil-wrapped baked potatoes at room temperature, apparently for several days, before they were used in the dips. Food handlers should be informed of the potential hazards caused by holding foil-wrapped potatoes at ambient temperatures after cooking.
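The attack rates and relative risks reported for the El Paso outbreak follow directly from the case counts given. As a sketch, the figures can be reproduced with the standard relative-risk formula; the Katz log-based confidence interval used here is a common epidemiological method, assumed (not stated in the text) to be the one the investigators applied:

```python
import math

def relative_risk(a, n_exposed, c, n_unexposed, z=1.96):
    """Relative risk of illness given exposure, with a Katz log-based 95% CI.

    a / n_exposed   -- cases among those who ate the food, and that group's size
    c / n_unexposed -- cases among those who did not, and that group's size
    """
    rr = (a / n_exposed) / (c / n_unexposed)
    # Standard error of ln(RR), Katz method
    se = math.sqrt(1 / a - 1 / n_exposed + 1 / c - 1 / n_unexposed)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Potato-based dip: 19 of 22 exposed became ill vs 11 of 176 unexposed
rr, lo, hi = relative_risk(19, 22, 11, 176)
print(f"RR = {rr:.1f}; 95% CI, {lo:.1f}-{hi:.1f}")  # reproduces the reported 13.8 (7.6-25.1)
```

The same call with the eggplant-dip counts (6 of 9 vs 24 of 189) reproduces the reported RR of about 5.2.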
In 2002, fourteen Alaskans ate muktuk (whale blubber) from a beached whale, resulting in eight of them developing botulism, with two of the affected requiring mechanical ventilation.
Beginning in late June 2007, 8 people contracted botulism poisoning by eating canned food products produced by Castleberry's Food Company in its Augusta, Georgia plant. It was later identified that the Castleberry's plant had serious production problems on a specific line of retorts that had under-processed the cans of food. These issues included broken cooking alarms, leaking water valves and inaccurate temperature devices, all the result of poor management of the company. All of the victims were hospitalized and placed on mechanical ventilation. The Castleberry's Food Company outbreak was the first instance of botulism in commercial canned foods in the United States in over 30 years.
One person died, 21 cases were confirmed, and 10 more were suspected in Lancaster, Ohio when a botulism outbreak occurred after a church potluck in April 2015. The suspected source was a salad made from home-canned potatoes.
A botulism outbreak occurred in Northern California in May 2017 after 10 people consumed nacho cheese dip served at a gas station in Sacramento County. One man died as a result of the outbreak.
United Kingdom
The largest recorded outbreak of foodborne botulism in the United Kingdom occurred in June 1989. A total of 27 patients were affected; one patient died. Twenty-five of the patients had eaten one brand of hazelnut yogurt in the week before the onset of symptoms. Control measures included the cessation of all yogurt production by the implicated producer, the withdrawal of the firm's yogurts from sale, the recall of cans of the hazelnut conserve, and advice to the general public to avoid the consumption of all hazelnut yogurts.
China
From 1958 to 1983 there were 986 outbreaks of botulism in China involving 4,377 people with 548 deaths.
Qapqal disease
After the Chinese Communist Revolution in 1949, a mysterious plague (named Qapqal disease) was noticed to be affecting several Sibe villages in Qapqal Xibe Autonomous County. It was endemic with distinctive epidemic patterns, yet the underlying cause remained unknown for a long period of time. It caused a number of deaths and forced some people to leave the area. In 1958, a team of experts was sent to the area by the Ministry of Health to investigate the cases. The epidemic survey conducted proved that the disease was primarily type A botulism, with several cases of type B. The team also discovered that the source of the botulinum toxin was local fermented grain and beans, as well as a raw meat food called mi song hu hu. They promoted the improvement of fermentation techniques among local residents, and thus eliminated the disease.
Canada
From 1985 to 2015 there were outbreaks causing 91 confirmed cases of foodborne botulism in Canada, 85% of which were in Inuit communities, especially Nunavik, as well as First Nations of the coast of British Columbia, following consumption of traditionally prepared marine mammal and fish products.
Ukraine
In 2017, there were 70 cases of botulism with 8 deaths in Ukraine. The previous year there were 115 cases with 12 deaths. Most cases were the result of dried fish, a snack commonly eaten locally while drinking.
Vietnam
In 2020, several cases of botulism were reported in Vietnam. All of them were related to a product containing contaminated vegetarian pâté. Some patients were put on life support.
Other susceptible species
Botulism can occur in many vertebrates and invertebrates. Botulism has been reported in such species as rats, mice, chickens, frogs, toads, goldfish, Aplysia, squid, crayfish, Drosophila and leeches. Death from botulism is common in waterfowl; an estimated 10,000 to 100,000 birds die of botulism annually. The disease is commonly called "limberneck". In some large outbreaks, a million or more birds may die. Ducks appear to be affected most often. An enzootic form of duck botulism in the Western USA and Canada is known as "western duck sickness". Botulism also affects commercially raised poultry. In chickens, the mortality rate varies from a few birds to 40% of the flock.
Botulism seems to be relatively uncommon in domestic mammals; however, in some parts of the world, epidemics with up to 65% mortality are seen in cattle. The prognosis is poor in large animals that are recumbent.
In cattle, the symptoms may include drooling, restlessness, incoordination, urine retention, dysphagia, and sternal recumbency. Laterally recumbent animals are usually very close to death. In sheep, the symptoms may include drooling, a serous nasal discharge, stiffness, and incoordination. Abdominal respiration may be observed and the tail may switch to the side. As the disease progresses, the limbs may become paralyzed and death may occur.
Phosphorus-deficient cattle, especially in southern Africa, are inclined to ingest bones and carrion containing clostridial toxins and consequently develop lame sickness or lamsiekte.
The clinical signs in horses are similar to cattle. The muscle paralysis is progressive; it usually begins at the hindquarters and gradually moves to the front limbs, neck, and head. Death generally occurs 24 to 72 hours after initial symptoms and results from respiratory paralysis. Some foals are found dead without other clinical signs.
Clostridium botulinum type C toxin has been incriminated as the cause of grass sickness, a condition in horses which occurs in rainy and hot summers in Northern Europe. The main symptom is pharynx paralysis. Domestic dogs may develop systemic toxemia after consuming C. botulinum type C exotoxin or spores within bird carcasses or other infected meat but are generally resistant to the more severe effects of Clostridium botulinum type C. Symptoms include flaccid muscle paralysis, which can lead to death due to cardiac and respiratory arrest. Pigs are relatively resistant to botulism. Reported symptoms include anorexia, refusal to drink, vomiting, pupillary dilation, and muscle paralysis. In poultry and wild birds, flaccid paralysis is usually seen in the legs, wings, neck and eyelids. Broiler chickens with the toxicoinfectious form may also have diarrhea with excess urates.
See also
List of foodborne illness outbreaks
References
Further reading
Rao AK, Sobel J, Chatham-Stephens K, Luquez C (May 2021). "Clinical Guidelines for Diagnosis and Treatment of Botulism, 2021" (PDF). MMWR Recomm Rep. 70 (2): 1–30. doi:10.15585/mmwr.rr7002a1. PMC 8112830. PMID 33956777.
External links
Botulism in the United States, 1889–1996. Handbook for Epidemiologists, Clinicians and Laboratory Technicians. Centers for Disease Control and Prevention. National Center for Infectious Diseases, Division of Bacterial and Mycotic Diseases 1998.
NHS choices
CDC Botulism: Control Measures Overview for Clinicians
University of California, Santa Cruz Environmental toxicology – Botulism Archived 9 May 2013 at the Wayback Machine
CDC Botulism FAQ
FDA Clostridium botulinum Bad Bug Book
USGS Avian Botulism Archived 20 October 2018 at the Wayback Machine |
Infantile hemangioma

An infantile hemangioma (IH), sometimes called a strawberry mark due to its appearance, is a type of benign vascular tumor or anomaly that affects babies. Other names include capillary hemangioma, strawberry hemangioma, strawberry birthmark and strawberry nevus; it was formerly known as a cavernous hemangioma. They appear as a red or blue raised lesion on the skin. Typically, they begin during the first four weeks of life, grow until about five months of life, and then shrink in size and disappear over the next few years. Often skin changes remain after they shrink. Complications may include pain, bleeding, ulcer formation, disfigurement, or heart failure. It is the most common tumor of the orbit and periorbital areas in childhood. It may occur in the skin, subcutaneous tissues and mucous membranes of the oral cavity and lips, as well as in extracutaneous locations including the liver and gastrointestinal tract.
The underlying reason for their occurrence is not clear. In about 10% of cases they appear to run in families. A few cases are associated with other abnormalities such as PHACE syndrome. Diagnosis is generally based on the symptoms and appearance. Occasionally medical imaging can assist in the diagnosis. In most cases no treatment is needed, other than close observation. It may grow rapidly, before stopping and slowly fading. Some are gone by the age of 2, about 60% by 5 years, and 90–95% by 9 years. While this birthmark may be alarming in appearance, physicians generally counsel that it be left to disappear on its own, unless it is in the way of vision or blocking the nostrils. Certain cases, however, may result in problems, and the use of medication such as propranolol or steroids is recommended. Occasionally surgery or laser treatment may be used. It is one of the most common benign tumors in babies, occurring in about 5–10% of all births. They occur more frequently in females, whites, preemies, and low birth weight babies. They can occur anywhere on the body, though 83% occur on the head or neck area. The word "hemangioma" comes from the Greek haima (αἷμα) meaning "blood"; angeion (ἀγγεῖον) meaning "vessel"; and -oma (-ωμα) meaning "tumor".
Signs and symptoms
Infantile hemangiomas typically develop in the first few weeks or months of life. They are more common in Caucasians, in premature children whose birth weight is less than 3 pounds (1.4 kg), in females, and in twin births. Early lesions may resemble a red scratch or patch, a white patch, or a bruise. The majority occur on the head and neck, but they can occur almost anywhere. The appearance and color of the IH depends on its location and depth within the level of the skin. Superficial IHs are situated higher in the skin and have a bright red, erythematous to reddish-purple appearance. Superficial lesions can be flat and telangiectatic, composed of a macule or patch of small, varied branching, capillary blood vessels. They can also be raised and elevated from the skin, forming papules and confluent bright-red plaques like raised islands. Infantile hemangiomas have historically been referred to as "strawberry marks" or "strawberry hemangiomas", as raised superficial hemangiomas can look like the side of a strawberry without seeds, and this remains a common lay term. Superficial IHs in certain locations, such as the posterior scalp, neck folds, and groin/perianal areas, are at potential risk of ulceration. Ulcerated hemangiomas can present as black crusted papules or plaques, or painful erosions or ulcers. Ulcerations are prone to secondary bacterial infections, which can present with yellow crusting, drainage, pain, or odor. Ulcerations are also at risk for bleeding, particularly deep lesions or in areas of friction. Multiple superficial hemangiomas (more than five) can be associated with extracutaneous hemangiomas, the most common being a liver (hepatic) hemangioma, and these infants warrant ultrasound examination. Deep IHs present as poorly defined, bluish macules that can proliferate into papules, nodules, or larger tumors. Proliferating lesions are often compressible, but fairly firm.
Many deep hemangiomas may have a few superficial capillaries visible over the primary deep component or surrounding venous prominence. Deep hemangiomas have a tendency to develop a little later than superficial hemangiomas, and may have longer and later proliferative phases, as well. Deep hemangiomas rarely ulcerate, but can cause issues depending on their location, size, and growth. Deep hemangiomas near sensitive structures can cause compression of softer surrounding structures during the proliferative phase, such as the external ear canal and the eyelid. Mixed hemangiomas are simply a combination of superficial and deep hemangiomas, and may not be evident for several months. Patients may have any combination of superficial, deep, or mixed IHs.
IHs are often classified as focal/localized, segmental, or indeterminate. Focal IHs appear localized to a specific location and appear to arise from a solitary spot. Segmental hemangiomas are larger, and appear to encompass a region of the body. Larger or segmental hemangiomas that span a large area can sometimes have underlying anomalies that may require investigation, especially when located on the face, sacrum, or pelvis.
Unless ulceration occurs, an IH does not tend to bleed and is not painful. Discomfort may arise if it is bulky and blocks a vital orifice.
Complications
Most IHs are not associated with complications. They may break down on the surface, called ulceration, which can be painful and problematic. If the ulceration is deep, significant bleeding and infection may occur on rare occasions. If a hemangioma develops in the larynx, breathing can be compromised. If located near the eye, a growing hemangioma may cause an occlusion or deviation of the eye that can lead to amblyopia. Very rarely, extremely large hemangiomas can cause high-output heart failure due to the amount of blood that must be pumped to excess blood vessels. Lesions adjacent to bone may cause erosion of the bone. The most frequent complaints about IHs stem from psychosocial complications. The condition can affect a person's appearance and provoke attention and malicious reactions from others. Particular problems occur if the lip or nose is involved, as distortions can be difficult to treat surgically. The potential for psychological injury develops from school age onward. Considering treatment before school begins is, therefore, important if adequate spontaneous improvement has not occurred. Large IHs can leave visible skin changes secondary to severe stretching that results in altered surface texture.
Large segmental hemangiomas of the head and neck can be associated with a disorder called PHACES syndrome. Large segmental hemangiomas over the lumbar spine can be associated with dysraphism, renal, and urogenital problems in association with a disorder called LUMBAR syndrome. Multiple cutaneous hemangiomas in infants may be an indicator for liver hemangiomas. Screening for liver involvement is often recommended in infants with five or more skin hemangiomas.
Causes
The cause of hemangioma is currently unknown, but several studies have suggested the importance of estrogen signaling in proliferation. Localized soft-tissue hypoxia coupled with increased circulating estrogen after birth may be the stimulus. Researchers have also hypothesized that maternal placental tissue embolizes to the fetal dermis during gestation, resulting in hemangiomagenesis. However, another group of researchers conducted genetic analyses of single-nucleotide polymorphisms in hemangioma tissue compared to the mother's DNA that contradicted this hypothesis. Other studies have revealed the role of increased angiogenesis and vasculogenesis in the etiology of hemangiomas.
Diagnosis
The majority of IHs can be diagnosed by history and physical examination. In rare cases, imaging (ultrasound with Doppler, magnetic resonance imaging), and/or cytology or histopathology are needed to confirm the diagnosis. IHs are usually absent at birth, or a small area of pallor, telangiectasias, or duskiness may be seen. A fully formed mass at birth usually indicates a diagnosis other than IH. Superficial hemangiomas in the upper dermis have a bright-red strawberry color, whereas those in the deep dermis and subcutis (deep hemangiomas) may appear blue and be firm or rubbery on palpation. Mixed hemangiomas can have both features. A minimally proliferative IH is an uncommon type that presents with fine macular telangiectasias with an occasional bright-red, papular, proliferative component. Minimally proliferative IHs are more common in the lower body. A precise history of the growth characteristics of the IH can be very helpful in making the diagnosis. In the first 4 to 8 weeks of life, IHs grow rapidly with primarily volumetric rather than radial growth. This is usually followed by a period of slower growth that can last 6–9 months, with 80% of the growth completed by 3 months. Finally, IHs involute over a period of years. The exceptions to these growth characteristics include minimally proliferative IHs, which do not substantially proliferate, and large, deep IHs in which noticeable growth starts later and lasts longer.
If the diagnosis is not clear based on physical examination and growth history (most often in deep hemangiomas with little cutaneous involvement), then either imaging or histopathology can help confirm the diagnosis. On Doppler ultrasound, an IH in the proliferative phase appears as a high-flow, soft-tissue mass usually without direct arteriovenous shunting. On MRI, IHs show a well-circumscribed lesion with intermediate and increased signal intensity on T1- and T2-weighted sequences, respectively, and strong enhancement after gadolinium injections, with fast-flow vessels. Tissue for diagnosis can be obtained via fine-needle aspiration, skin biopsy, or excisional biopsy. Under the microscope, IHs are unencapsulated aggregates of closely packed, thin-walled capillaries, usually with endothelial lining. Blood-filled vessels are separated by scant connective tissue. Their lumina may be thrombosed and organized. Hemosiderin pigment deposition due to vessel rupture may be observed. The GLUT-1 histochemical marker can be helpful in distinguishing IHs from other items on the differential diagnosis, such as vascular malformations.
Liver
Infantile haemangiomas account for 16% of all liver haemangiomas. They are usually less than 1–2 cm in diameter. They may show a "flash-filling" phenomenon, in which there is fast enhancement of the contrast material in the lesion instead of the slow, centripetal, nodular filling seen in usual hemangiomas. On CT and MRI, they show rapid filling during the arterial phase, with contrast retention in the venous and delayed phases.
Treatment
Most IHs disappear without treatment, leaving minimal to no visible marks. This may take many years, however, and a proportion of lesions may require some form of therapy. Multidisciplinary clinical practice guidelines for the management of infantile hemangiomas were recently published. Indications for treatment include functional impairment (i.e. visual or feeding compromise), bleeding, potentially life-threatening complications (airway, cardiac, or hepatic disease), and risk of long-term or permanent disfigurement. Large IHs can leave visible skin changes secondary to significant stretching of the skin or alteration of surface texture. When they interfere with vision, breathing, or threaten significant disfigurement (most notably facial lesions, and in particular, nose and lips), they are usually treated. Medical therapies are most effective when used during the period of most significant hemangioma growth, which corresponds to the first 5 months of life. Ulcerated hemangiomas, a subset of lesions requiring therapy, are usually treated by addressing wound care, pain, and hemangioma growth.
Medication
Treatment options for IHs include medical therapies (systemic, intralesional, and topical), surgery, and laser therapy. Prior to 2008, the mainstay of therapy for problematic hemangiomas was oral corticosteroids, which are effective and remain an option for patients in whom beta-blocker therapy is contraindicated or poorly tolerated. Following the serendipitous observation that propranolol, a nonselective beta blocker, is well tolerated and effective for treatment of hemangiomas, the agent was studied in a large, randomized, controlled trial and was approved by the U.S. Food and Drug Administration for this indication in 2014. Oral propranolol is more effective than placebo, observation without intervention, or oral corticosteroids. Propranolol has subsequently become the first-line systemic medical therapy for treatment of these lesions. Since that time, topical timolol maleate in addition to oral propranolol has become a common therapy for infantile hemangiomas. According to a 2018 Cochrane review, both of these therapies have demonstrated beneficial effects in terms of clearance of hemangiomas without an increase in harms. In addition, no difference was detected between these two agents in their ability to reduce hemangioma size; however, whether a difference in safety exists is not clear. All of these results were based on moderate- to low-quality evidence, thus further randomized, controlled trials with large populations of children are needed to further evaluate these therapies. This review concluded that for now, no evidence challenges oral propranolol as the standard systemic therapy for treatment of these lesions.
Other systemic therapies which may be effective for IH treatment include vincristine, interferon, and other agents with antiangiogenic properties. Vincristine, which requires central venous access for administration, is traditionally used as a chemotherapy agent, but has been demonstrated to have efficacy against hemangiomas and other childhood vascular tumors, such as kaposiform hemangioendothelioma and tufted angioma. Interferon-alpha 2a and 2b, given by subcutaneous injection, has shown efficacy against hemangiomas, but may result in spastic diplegia in up to 20% of treated children. These agents are rarely used now in the era of beta-blocker therapy.
Intralesional corticosteroid (usually triamcinolone) injection has been used for small, localized hemangiomas, where it has been demonstrated to be relatively safe and effective. Injection of upper eyelid hemangiomas is controversial, given the reported risk of retinal embolization, possibly related to high injection pressures. Topical timolol maleate, a nonselective beta blocker available in a gel-forming solution approved for the treatment of glaucoma, has been increasingly recognized as a safe and effective off-label alternative for treatment of small hemangiomas. It is generally applied two to three times daily.
Surgery
Surgical excision of hemangiomas is rarely indicated, and is limited to lesions that fail medical therapy (or in which it is contraindicated), that are anatomically located where resection is feasible, and for which resection would likely be necessary eventually, with a similar scar regardless of the timing of surgery. Surgery may also be useful for removal of residual fibrofatty tissue (following hemangioma involution) and reconstruction of damaged structures.
Laser
Laser therapy, most often the pulsed dye laser (PDL), plays a limited role in hemangioma management. PDL is most often used for treatment of ulcerated hemangiomas, often in conjunction with topical therapies and wound care, and may speed healing and diminish pain. Laser therapy may also be useful for early superficial IHs (although rapidly proliferating lesions may be more prone to ulceration following PDL treatment), and for the treatment of cutaneous telangiectasias which persist following involution.
Prognosis
In the involution phase, an IH finally begins to diminish in size. While IHs were previously thought to improve by about 10% each year, newer evidence suggests that maximal improvement and involution is typically reached by 3.5 years of age. Most IHs resolve by age 10, but in some patients, the hemangioma does not completely resolve. Residual redness may be noted and can be improved with laser therapy, most commonly PDL. Ablative fractional resurfacing may be considered for textural skin changes. Hemangiomas, especially those that have gotten very large during the growth phase, may leave behind stretched skin or fibrofatty tissue that may be disfiguring or require future surgical correction. Areas of prior ulceration may leave behind permanent scarring.
Additional long-term sequelae stem from the identification of extracutaneous manifestations in association with the IH. For example, a patient with a large facial hemangioma who is found to meet criteria for PHACE syndrome will require potentially ongoing neurologic, cardiac, and/or ophthalmologic monitoring. In cases of IHs that compromise vital structures, symptoms may improve with involution of the hemangioma. For example, respiratory distress would improve with involution of a space-occupying IH involving the airway, and high-output heart failure may lessen with involution of a hepatic hemangioma; ultimately, treatment may be tapered or discontinued. In other cases, such as an untreated eyelid hemangioma, resultant amblyopia does not improve with involution of the cutaneous lesion. For these reasons, infants with infantile hemangiomas should be evaluated by an appropriate clinician during the early proliferative phase so that risk monitoring and treatment can be individualized and outcomes can be optimized.
Terminology
The terminology used to define, describe, and categorize vascular tumors and malformations has changed over time. The term hemangioma was originally used to describe any vascular tumor-like structure, whether it was present at or around birth or appeared later in life. In 1982, Mulliken and Glowacki proposed a new classification system for vascular anomalies which has been widely accepted and adopted by the International Society for the Study of Vascular Anomalies. This classification system was recently updated in 2015. The classification of vascular anomalies is now based upon cellular features, natural history, and clinical behavior of the lesion. Vascular anomalies are divided into vascular tumors/neoplasms which include infantile hemangiomas, and vascular malformations that include entities with enlarged or abnormal vessels such as capillary malformations (port wine stains), venous malformations, and lymphatic malformations. In 2000, GLUT-1, a specific immunohistochemical marker, was found to be positive in IHs and negative in other vascular tumors or malformations. This marker has revolutionized the ability to distinguish between infantile hemangioma and other vascular anomalies.
See also
Hemangioma
List of cutaneous conditions
References
External links
Infantile Hemangiomas: About Strawberry Baby Birthmarks
ISSVA Classification of Vascular Anomalies
Hemangioma Investigator Group
Vascular Birthmarks Foundation |
Infection prevention and control

Infection prevention and control is the discipline concerned with preventing healthcare-associated infections; it is a practical rather than academic sub-discipline of epidemiology. In Northern Europe, infection prevention and control is expanded from healthcare into a component of public health, known as "infection protection" (smittevern, smittskydd, Infektionsschutz in the local languages). It is an essential part of the infrastructure of health care. Infection control and hospital epidemiology are akin to public health practice, practiced within the confines of a particular health-care delivery system rather than directed at society as a whole. Infection control addresses factors related to the spread of infections within the healthcare setting, whether among patients, from patients to staff, from staff to patients, or among staff. This includes preventive measures such as hand washing, cleaning, disinfecting, sterilizing, and vaccinating. Other aspects include surveillance, monitoring, and investigating and managing suspected outbreaks of infection within a healthcare setting. A subsidiary aspect of infection control involves preventing the spread of antimicrobial-resistant organisms such as MRSA. This in turn connects to the discipline of antimicrobial stewardship: limiting the use of antimicrobials to necessary cases, as increased usage inevitably results in the selection and dissemination of resistant organisms. Antimicrobial medications (also called antimicrobials or anti-infective agents) include antibiotics, antibacterials, antifungals, antivirals and antiprotozoals. The World Health Organization (WHO) has set up an Infection Prevention and Control (IPC) unit in its Service Delivery and Safety department that publishes related guidelines.
Infection prevention and control
Aseptic technique is a key component of all invasive medical procedures. Similar control measures are also recommended in any healthcare setting to prevent the spread of infection generally.
Hand hygiene
Hand hygiene is one of the basic, yet most important, steps in IPC (infection prevention and control). Hand hygiene drastically reduces the chances of HAIs (healthcare-associated infections) at very low cost. Hand hygiene consists of either hand washing (water-based) or hand rubs (alcohol-based). Hand washing follows a 7-step technique according to WHO standards, whereas hand rubs use 5 steps. The American Nurses Association (ANA) and American Association of Nurse Anesthesiology (AANA) have set specific checkpoints for nurses to clean their hands; the checkpoints for nurses include: before patient contact, before putting on protective equipment, before doing procedures, after contact with the patient's skin and surroundings, after contamination with foreign substances, after contact with bodily fluids and wounds, after taking off protective equipment, and after using the restroom. To ensure all before and after checkpoints for hand washing are met, precautions such as hand sanitizer dispensers filled with sodium hypochlorite, alcohol, or hydrogen peroxide (three approved disinfectants that kill bacteria) are placed at certain points, and nurses carrying mini hand sanitizer dispensers help increase sanitation in the work field. In cases where equipment is being placed in a container or bin and picked back up, nurses and doctors are required to wash their hands or use alcohol sanitizer before going back to the container to use the same equipment. Independent studies by Ignaz Semmelweis in 1846 in Vienna and Oliver Wendell Holmes, Sr. in 1843 in Boston established a link between the hands of health care workers and the spread of hospital-acquired disease. The U.S. Centers for Disease Control and Prevention (CDC) state that "It is well documented that the most important measure for preventing the spread of pathogens is effective handwashing".
In the developed world, hand washing is mandatory in most health care settings and required by many different regulators. In the United States, OSHA standards require that employers must provide readily accessible hand washing facilities, and must ensure that employees wash hands and any other skin with soap and water, or flush mucous membranes with water, as soon as feasible after contact with blood or other potentially infectious materials (OPIM). In the UK, healthcare professionals have adopted the Ayliffe Technique, based on the 6-step method developed by Graham Ayliffe, JR Babb and AH Quoraishi.
Drying is an essential part of the hand hygiene process. In November 2008, a non-peer-reviewed study was presented to the European Tissue Symposium by the University of Westminster, London, comparing the bacteria levels present after the use of paper towels, warm air hand dryers, and modern jet-air hand dryers. Of those three methods, only paper towels reduced the total number of bacteria on hands, with "through-air dried" towels the most effective. The presenters also carried out tests to establish whether there was the potential for cross-contamination of other washroom users and the washroom environment as a result of each type of drying method. They found that:
the jet air dryer, which blows air out of the unit at claimed speeds of 400 mph, was capable of blowing micro-organisms from the hands and the unit and potentially contaminating other washroom users and the washroom environment up to 2 metres away
use of a warm air hand dryer spread micro-organisms up to 0.25 metres from the dryer
paper towels showed no significant spread of micro-organisms.

In 2005, in a study conducted by TUV Produkt und Umwelt, different hand drying methods were evaluated. The following changes in the bacterial count after drying the hands were observed:
Cleaning, Disinfection, Sterilization
The field of infection prevention describes a hierarchy of removal of microorganisms from surfaces, including medical equipment and instruments. Cleaning is the lowest level, accomplishing substantial removal. Disinfection involves the removal of all pathogens other than bacterial spores. Sterilization is defined as the removal or destruction of all microorganisms, including bacterial spores.
Cleaning
Cleaning is the first and simplest step in preventing the spread of infection via surfaces and fomites. Cleaning reduces microbial burden by chemical desorption of organisms (loosening bioburden/organisms from surfaces via cleaning chemicals), simple mechanical removal (rinsing, wiping), as well as disinfection (killing of organisms by cleaning chemicals). To reduce their chances of contracting an infection, individuals are recommended to maintain good hygiene by washing their hands after every contact with questionable areas or bodily fluids, and by disposing of garbage at regular intervals to prevent germs from growing.
Disinfection
Disinfection uses liquid chemicals on surfaces at room temperature to kill disease-causing microorganisms. Ultraviolet light has also been used to disinfect the rooms of patients infected with Clostridium difficile after discharge. Disinfection is less effective than sterilization because it does not kill bacterial endospores. Along with ensuring proper hand washing techniques are followed, another major component of decreasing the spread of disease is the sanitation of all medical equipment. The ANA and AANA set guidelines for sterilization and disinfection based on the Spaulding Disinfection and Sterilization Classification Scheme (SDSCS). The SDSCS classifies sterilization techniques into three categories: critical, semi-critical, and non-critical. For critical situations, or situations involving contact with sterile tissue or the vascular system, sterilize devices with sterilants that destroy all bacteria, rinse with sterile water, and use chemical germicides. In semi-critical situations, or situations with contact with mucous membranes or non-intact skin, high-level disinfectants are required. Cleaning and disinfecting devices with high-level disinfectants, rinsing with sterile water, and drying all equipment surfaces to prevent microorganism growth are methods nurses and doctors must follow. For non-critical situations, or situations involving electronic devices, stethoscopes, blood pressure cuffs, beds, monitors and other general hospital equipment, intermediate-level disinfection is required. "Clean all equipment between patients with alcohol, use protective covering for non-critical surfaces that are difficult to clean, and hydrogen peroxide gas ... for reusable items that are difficult to clean."
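The three-tier Spaulding scheme described above is essentially a decision rule mapping a device's tissue contact to a minimum reprocessing level. A minimal sketch (the function name and contact strings are illustrative; the levels follow the text's summary):

```python
def reprocessing_level(contact: str) -> str:
    """Minimum reprocessing level per the Spaulding scheme as summarized above.
    'contact' describes the tissue the device touches."""
    if contact == "sterile tissue or vascular system":      # critical
        return "sterilization"
    if contact == "mucous membranes or non-intact skin":    # semi-critical
        return "high-level disinfection"
    return "intermediate-level disinfection"                # non-critical

print(reprocessing_level("mucous membranes or non-intact skin"))
# → high-level disinfection
```

In practice the category is assigned per device type (e.g. a stethoscope is non-critical, an endoscope semi-critical), not computed at the point of use.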
Sterilization
Sterilization is a process intended to kill all microorganisms and is the highest level of microbial kill possible. Sterilization, if performed properly, is an effective way of preventing infections from spreading. It should be used for cleaning medical instruments and any medical item that comes into contact with the bloodstream or sterile tissues. There are four main ways in which such items are usually sterilized: autoclave (using high-pressure steam), dry heat (in an oven), chemical sterilants such as glutaraldehyde or formaldehyde solutions, or exposure to ionizing radiation. The first two are the most widely used methods, mainly because of their accessibility and availability. Steam sterilization is one of the most effective types of sterilization, if done correctly, which is often hard to achieve; instruments used in health care facilities are usually sterilized with this method. The general rule is that for sterilization to be effective, the steam must come into contact with all the surfaces that are meant to be disinfected. Dry heat sterilization, performed with an oven, is also an accessible type of sterilization, although it can only be used for instruments made of metal or glass: the very high temperatures required can melt or damage instruments made of other materials. The effectiveness of a sterilizer, for example a steam autoclave, is determined in three ways.
First, mechanical indicators and gauges on the machine itself indicate proper operation. Second, heat-sensitive indicators or tape on the sterilizing bags change color to indicate proper levels of heat or steam. Third, and most importantly, biological testing is performed, in which a microorganism that is highly heat- and chemical-resistant (often a bacterial endospore) is selected as the standard challenge; if the process kills this microorganism, the sterilizer is considered effective. Steam sterilization is done at a temperature of 121 °C (250 °F) with a pressure of 209 kPa (~2 atm); in these conditions, rubber items must be sterilized for 20 minutes, while wrapped items are sterilized at 134 °C with a pressure of 310 kPa for 7 minutes. The time is counted once the required temperature has been reached. Steam sterilization requires four conditions in order to be efficient: adequate contact, sufficiently high temperature, correct time, and sufficient moisture. Sterilization using steam can also be done at a temperature of 132 °C (270 °F), at double the pressure. Dry heat sterilization is performed at 170 °C (340 °F) for one hour, or for two hours at 160 °C (320 °F); it can also be performed at 121 °C for at least 16 hours. Chemical sterilization, also referred to as cold sterilization, can be used to sterilize instruments that cannot normally be disinfected through the other two processes described above. The items sterilized with cold sterilization are usually those that would be damaged by regular sterilization. A variety of chemicals can be used, including aldehydes, hydrogen peroxide, and peroxyacetic acid. Commonly, glutaraldehyde and formaldehyde are used in this process, but in different ways: with the former, instruments are soaked in a 2–4% solution for at least 10 hours, while an 8% formaldehyde solution sterilizes the items in 24 hours or more.
Chemical sterilization is generally more expensive than steam sterilization, so it is used for instruments that cannot be disinfected otherwise. After the instruments have been soaked in the chemical solutions, they must be rinsed with sterile water to remove residues of the disinfectants. This is why needles and syringes are not sterilized in this way: the residues left by the chemical solution cannot be washed off with water and may interfere with the administered treatment. Although formaldehyde is less expensive than glutaraldehyde, it is also more irritating to the eyes, skin and respiratory tract and is classified as a potential carcinogen, so it is used much less commonly.
Ionizing radiation is typically used only for sterilizing items for which none of the above methods are practical, because of the risks involved in the process.
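The time, temperature, and pressure combinations quoted above can be collected into one small parameter table. The sketch below is a hedged illustration only (real sterilization cycles are validated per device and per manufacturer instructions); it also includes the Celsius-to-Fahrenheit conversion behind figures such as 121 °C ≈ 250 °F.

```python
# Cycle parameters quoted in the text, collected into one place.
# Figures are illustrative; real cycles are validated per device
# and per manufacturer instructions.

STEAM_CYCLES = [
    # (temperature in deg C, pressure in kPa, minimum minutes, load type)
    (121, 209, 20, "rubber items"),
    (134, 310, 7, "wrapped items"),
]

DRY_HEAT_CYCLES = [
    # (temperature in deg C, minimum hours)
    (170, 1),
    (160, 2),
    (121, 16),
]

CHEMICAL_SOAKS = [
    # (agent, concentration, minimum hours)
    ("glutaraldehyde", "2-4%", 10),
    ("formaldehyde", "8%", 24),
]

def c_to_f(celsius: float) -> float:
    """Convert Celsius to Fahrenheit, e.g. 121 C is roughly 250 F."""
    return celsius * 9 / 5 + 32
```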
Personal protective equipment
Personal protective equipment (PPE) is specialized clothing or equipment worn by a worker for protection against a hazard. The hazard in a health care setting is exposure to blood, saliva, or other bodily fluids or aerosols that may carry infectious materials such as hepatitis C, HIV, or other bloodborne or bodily fluid pathogens. PPE prevents contact with a potentially infectious material by creating a physical barrier between the potentially infectious material and the healthcare worker. The United States Occupational Safety and Health Administration (OSHA) requires the use of PPE by workers to guard against bloodborne pathogens if there is a reasonably anticipated exposure to blood or other potentially infectious materials. Components of PPE include gloves, gowns, bonnets, shoe covers, face shields, CPR masks, goggles, surgical masks, and respirators. How many components are used, and how, is often determined by regulations or the infection control protocol of the facility in question, which in turn derive from knowledge of the mechanism of transmission of the pathogen(s) of concern. Many or most of these items are disposable, to avoid carrying infectious materials from one patient to another and to avoid difficult or costly disinfection. In the US, OSHA requires the immediate removal and disinfection or disposal of a worker's PPE prior to leaving the work area where exposure to infectious material took place. For health care professionals who may come into contact with highly infectious bodily fluids, using personal protective coverings on exposed body parts improves protection. Breathable personal protective equipment improves user satisfaction and may offer a similar level of protection. In addition, adding tabs and other modifications to the protective equipment may reduce the risk of contamination during donning and doffing (putting on and taking off the equipment).
Implementing an evidence-based donning and doffing protocol, such as a one-step glove and gown removal technique, giving oral instructions while donning and doffing, double gloving, and the use of glove disinfection may also improve protection for health care professionals. Guidelines set by the ANA and AANA for proper use of disposable gloves include removing and replacing gloves frequently and whenever they are contaminated, damaged, or between treatment of multiple patients. When removing gloves, "grasp outer edge of glove near wrist, peel away from hand turning inside out, hold removed glove in opposite gloved hand, slide ungloved finger under wrist of gloved hand so finger is inside gloved area, peel off the glove from inside creating a 'bag' for both gloves, dispose of gloves in proper waste receptacle". The inappropriate use of PPE such as gloves has been linked to an increase in rates of transmission of infection, and its use must be compatible with the other hand hygiene agents used. Research studies in the form of randomized controlled trials and simulation studies are needed to determine the most effective types of PPE for preventing the transmission of infectious diseases to healthcare workers. There is low-quality evidence that supports making improvements or modifications to personal protective equipment in order to help decrease contamination. Examples of modifications include adding tabs to masks or gloves to ease removal and designing protective gowns so that gloves are removed at the same time. In addition, there is weak evidence that the following PPE approaches or techniques may lead to reduced contamination and improved compliance with PPE protocols: wearing double gloves, following specific doffing (removal) procedures such as those from the CDC, and providing people with spoken instructions while removing PPE.
Antimicrobial surfaces
Microorganisms are known to survive on non-antimicrobial inanimate touch surfaces (e.g., bedrails, over-the-bed trays, call buttons, bathroom hardware, etc.) for extended periods of time. This can be especially troublesome in hospital environments where patients with immunodeficiencies are at enhanced risk for contracting nosocomial infections.
Products made with antimicrobial copper alloy (brasses, bronzes, cupronickel, copper-nickel-zinc, and others) surfaces destroy a wide range of microorganisms in a short period of time.
The United States Environmental Protection Agency has approved the registration of 355 different antimicrobial copper alloys and one synthetic copper-infused hard surface that kill E. coli O157:H7, methicillin-resistant Staphylococcus aureus (MRSA), Staphylococcus, Enterobacter aerogenes, and Pseudomonas aeruginosa in less than 2 hours of contact. Other investigations have demonstrated the efficacy of antimicrobial copper alloys in destroying Clostridium difficile, influenza A virus, adenovirus, and fungi. As a public hygienic measure in addition to regular cleaning, antimicrobial copper alloys are being installed in healthcare facilities in the UK, Ireland, Japan, Korea, France, Denmark, and Brazil. The synthetic hard surface is being installed in the United States as well as in Israel.
Vaccination of health care workers
Health care workers may be exposed to certain infections in the course of their work. Vaccines are available to provide some protection to workers in a healthcare setting. Depending on regulation, recommendation, the specific work function, or personal preference, healthcare workers or first responders may receive vaccinations for hepatitis B; influenza; measles, mumps and rubella; tetanus, diphtheria and pertussis; N. meningitidis; and varicella.
Surveillance for infections
Surveillance is the act of infection investigation using the CDC definitions. Determining the presence of a hospital-acquired infection requires an infection control practitioner (ICP) to review a patient's chart and see if the patient had the signs and symptoms of an infection. Surveillance definitions exist for infections of the bloodstream, urinary tract, pneumonia, surgical sites and gastroenteritis.
Surveillance traditionally involved significant manual data assessment and entry in order to assess preventative actions such as isolation of patients with an infectious disease. Increasingly, computerized software solutions are becoming available that assess incoming risk messages from microbiology and other online sources. By reducing the need for data entry, software can reduce the data workload of ICPs, freeing them to concentrate on clinical surveillance.
As of 1998, approximately one third of healthcare acquired infections were preventable. Surveillance and preventative activities are increasingly a priority for hospital staff. The Study on the Efficacy of Nosocomial Infection Control (SENIC) project by the U.S. CDC found in the 1970s that hospitals reduced their nosocomial infection rates by approximately 32 per cent by focusing on surveillance activities and prevention efforts.
Isolation and quarantine
In healthcare facilities, medical isolation refers to various physical measures taken to interrupt nosocomial spread of contagious diseases. Various forms of isolation exist, and are applied depending on the type of infection and agent involved, and its route of transmission, to address the likelihood of spread via airborne particles or droplets, by direct skin contact, or via contact with body fluids. In cases where infection is merely suspected, individuals may be quarantined until the incubation period has passed and the disease manifests itself or the person remains healthy. Groups may undergo quarantine, or, in the case of communities, a cordon sanitaire may be imposed to prevent infection from spreading beyond the community, or, in the case of protective sequestration, into a community. Public health authorities may implement other forms of social distancing, such as school closings, when needing to control an epidemic.
Barriers and facilitators of implementing infection prevention and control guidelines
Barriers to the ability of healthcare workers to follow PPE and infection control guidelines include communication of the guidelines, workplace support (manager support), the culture of use at the workplace, adequate training, the amount of physical space in the facility, access to PPE, and healthcare worker motivation to provide good patient care. Facilitators include involving all the staff in a facility (healthcare workers and support staff) when guidelines are implemented.
Outbreak investigation
When an unusual cluster of illness is noted, infection control teams undertake an investigation to determine whether there is a true disease outbreak, a pseudo-outbreak (a result of contamination within the diagnostic testing process), or just random fluctuation in the frequency of illness. If a true outbreak is discovered, infection control practitioners try to determine what permitted the outbreak to occur, and to rearrange the conditions to prevent ongoing propagation of the infection. Often, breaches in good practice are responsible, although sometimes other factors (such as construction) may be the source of the problem. Outbreak investigations have more than a single purpose: they are carried out to prevent additional cases in the current outbreak, to prevent future outbreaks, and to learn about a new disease or something new about an old disease. Reassuring the public, minimizing economic and social disruption, and teaching epidemiology are some other objectives of outbreak investigations. According to the WHO, outbreak investigations are meant to detect what is causing the outbreak, how the pathogenic agent is transmitted, where it started, what the carrier is, which population is at risk of getting infected, and what the risk factors are.
Training in infection control and health care epidemiology
Practitioners can come from several different educational streams. Many begin as nurses, some as medical technologists (particularly in clinical microbiology), and some as physicians (typically infectious disease specialists). Specialized training in infection control and health care epidemiology is offered by the professional organizations described below. Physicians who desire to become infection control practitioners often are trained in the context of an infectious disease fellowship. Training that is conducted "face to face", via a computer, or via video conferencing may help improve compliance and reduce errors when compared with "folder based" training (providing health care professionals with written information or instructions). In the United States, the Certification Board of Infection Control and Epidemiology is a private company that certifies infection control practitioners based on their educational background and professional experience, in conjunction with testing their knowledge base with standardized exams. The credential awarded is the CIC, Certification in Infection Control and Epidemiology. It is recommended that candidates have two years of infection control experience before applying for the exam. Certification must be renewed every five years. A course in hospital epidemiology (infection control in the hospital setting) is offered jointly each year by the Centers for Disease Control and Prevention (CDC) and the Society for Healthcare Epidemiology of America.
Standardization
Australia
In 2002, the Royal Australian College of General Practitioners published a revised standard for office-based infection control which covers the sections of managing immunisation, sterilisation and disease surveillance. However, the document on the personal hygiene of health workers is limited to hand hygiene, waste and linen management, which may not be sufficient since some pathogens are airborne and can be spread through air flow. Since 1 November 2019, the Australian Commission on Safety and Quality in Health Care has managed the Hand Hygiene initiative in Australia, an initiative focused on improving hand hygiene practices to reduce the incidence of healthcare-associated infections.
United States
Currently, the federal regulation that describes infection control standards, as related to occupational exposure to potentially infectious blood and other materials, is found at 29 CFR Part 1910.1030 Bloodborne pathogens.
See also
Pandemic prevention – Organization and management of preventive measures against pandemics
Footnotes
External links
Association for Professionals in Infection Control and Epidemiology is primarily composed of infection prevention and control professionals with nursing or medical technology backgrounds
The Society for Healthcare Epidemiology of America is more heavily weighted towards practitioners who are physicians or doctoral-level epidemiologists.
Regional Infection Control Networks
The Certification Board of Infection Control and Epidemiology, Inc.
Association for Professionals in Infection Control and Epidemiology
Intermittent claudication | Intermittent claudication, also known as vascular claudication, is a symptom that describes muscle pain on mild exertion (ache, cramp, numbness or sense of fatigue), classically in the calf muscle, which occurs during exercise, such as walking, and is relieved by a short period of rest. It is classically associated with early-stage peripheral artery disease, and can progress to critical limb ischemia unless treated or risk factors are modified and maintained.
Claudication derives from the Latin verb claudicare, "to limp".
Signs and symptoms
One of the hallmarks of arterial claudication is that it occurs intermittently. It disappears after a very brief rest and the patient can start walking again until the pain recurs.
The following signs are general signs of atherosclerosis of the lower extremity arteries:
cyanosis
atrophic changes like loss of hair, shiny skin
decreased temperature
decreased pulse
redness when limb is returned to a "dependent" position (part of Buerger's test)
The six "P"s of ischemia
Pain
Pallor (increased)
Pulse (decreased)
Perishing cold
Paraesthesia
Paralysis
Causes
Most commonly, intermittent (or vascular or arterial) claudication is due to peripheral arterial disease, which implies significant atherosclerotic blockages resulting in arterial insufficiency. Other, uncommon causes are coarctation of the aorta, Trousseau disease, and Buerger's disease (thromboangiitis obliterans), in which vasculitis occurs.
Raynaud's phenomenon is a functional vasospasm. Intermittent claudication is distinct from neurogenic claudication, which is associated with lumbar spinal stenosis. It is strongly associated with smoking, hypertension, and diabetes.
Diagnosis
Intermittent claudication is a symptom and is by definition diagnosed by a patient reporting a history of leg pain with walking relieved by rest. However, as other conditions such as sciatica can mimic intermittent claudication, testing is often performed to confirm the diagnosis of peripheral artery disease. Magnetic resonance angiography and duplex ultrasonography appear to be slightly more cost-effective in diagnosing peripheral artery disease among people with intermittent claudication than projectional angiography.
Treatment
Exercise can improve symptoms, as can revascularization, and both together may be better than either intervention on its own. In people with stable leg pain, exercise such as strength training, polestriding, and upper- or lower-limb exercises improves maximum walking time, pain-free walking distance and maximum walking distance compared to usual care or placebo. Alternative exercise modes, such as cycling, strength training and upper-arm ergometry, showed no difference in maximum walking distance or pain-free walking distance compared to supervised walking programmes for people with intermittent claudication. Pharmacological options exist as well. Medicines that control lipid profile, diabetes, and hypertension may increase blood flow to the affected muscles and allow for increased activity levels. Angiotensin-converting enzyme inhibitors, adrenergic agents such as alpha-1 blockers, beta-blockers and alpha-2 agonists, antiplatelet agents (aspirin and clopidogrel), naftidrofuryl, pentoxifylline, and cilostazol (a selective PDE3 inhibitor) are used for the treatment of intermittent claudication. However, medications will not remove the blockages from the body; they simply improve blood flow to the affected area. Catheter-based intervention is also an option: atherectomy, stenting, and angioplasty to remove or push aside the arterial blockages are the most common such procedures. These procedures can be performed by interventional radiologists, interventional cardiologists, vascular surgeons, and thoracic surgeons, among others. Surgery is the last resort; vascular surgeons can perform either endarterectomies on arterial blockages or an arterial bypass. However, open surgery poses a host of risks not present with catheter-based interventions.
Epidemiology
Atherosclerosis affects up to 10% of the Western population older than 65 years; for intermittent claudication this number is around 5%. Intermittent claudication most commonly manifests in men older than 50 years. One in five of the middle-aged (65–75 years) population of the United Kingdom has evidence of peripheral arterial disease on clinical examination, although only a quarter of them have symptoms. The most common symptom is muscle pain in the lower limbs on exercise: intermittent claudication.
See also
Peripheral artery disease
References
Further reading
Burns P, Gough S, Bradbury AW (March 2003). "Management of peripheral arterial disease in primary care". BMJ. 326 (7389): 584–8. doi:10.1136/bmj.326.7389.584. PMC 1125476. PMID 12637405.
Shammas NW (2007). "Epidemiology, classification, and modifiable risk factors of peripheral arterial disease". Vasc Health Risk Manag. 3 (2): 229–34. doi:10.2147/vhrm.2007.3.2.229. PMC 1994028. PMID 17580733.
External links
Cochrane Peripheral Vascular Diseases Review Group |
Intermittent explosive disorder | Intermittent explosive disorder (sometimes abbreviated as IED) is a behavioral disorder characterized by explosive outbursts of anger and/or violence, often to the point of rage, that are disproportionate to the situation at hand (e.g., impulsive shouting, screaming or excessive reprimanding triggered by relatively inconsequential events). Impulsive aggression is not premeditated, and is defined by a disproportionate reaction to any provocation, real or perceived. Some individuals have reported affective changes prior to an outburst, such as tension, mood changes, and energy changes. The disorder is currently categorized in the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) under the "Disruptive, Impulse-Control, and Conduct Disorders" category. The disorder itself is not easily characterized and often exhibits comorbidity with other mood disorders, particularly bipolar disorder. Individuals diagnosed with IED report their outbursts as being brief (lasting less than an hour), with a variety of bodily symptoms (sweating, stuttering, chest tightness, twitching, palpitations) reported by a third of one sample. Aggressive acts are frequently reported to be accompanied by a sensation of relief and in some cases pleasure, but often followed by later remorse.
Pathophysiology
Impulsive behavior, and especially impulsive violence predisposition, has been correlated with a low brain serotonin turnover rate, indicated by a low concentration of 5-hydroxyindoleacetic acid (5-HIAA) in the cerebrospinal fluid (CSF). This substrate appears to act on the suprachiasmatic nucleus in the hypothalamus, which is the target for serotonergic output from the dorsal and median raphe nuclei, playing a role in maintaining the circadian rhythm and regulation of blood sugar. A tendency towards low 5-HIAA may be hereditary; a putative hereditary component to low CSF 5-HIAA, and concordantly possibly to impulsive violence, has been proposed. Other traits that correlate with IED are low vagal tone and increased insulin secretion. A suggested explanation for IED is a polymorphism of the gene for tryptophan hydroxylase, which produces a serotonin precursor; this genotype is found more commonly in individuals with impulsive behavior. IED may also be associated with damage or lesions in the prefrontal cortex, with damage to these areas, including the amygdala and hippocampus, increasing the incidence of impulsive and aggressive behavior and the inability to predict the outcomes of an individual's own actions. Lesions in these areas are also associated with improper blood sugar control, leading to decreased brain function in areas associated with planning and decision making. A national sample in the United States estimated that 16 million Americans may fit the criteria for IED.
Diagnosis
DSM-5 diagnosis
The current DSM-5 criteria for Intermittent Explosive Disorder include:
Recurrent outbursts that demonstrate an inability to control impulses, including either of the following:
Verbal aggression (tantrums, verbal arguments, or fights) or physical aggression that occurs twice in a week-long period for at least three months and does not lead to the destruction of property or physical injury (Criterion A1)
Three outbursts that involve injury or destruction within a year-long period (Criterion A2)
Aggressive behavior is grossly disproportionate to the magnitude of the psychosocial stressors (Criterion B)
The outbursts are not premeditated and serve no premeditated purpose (Criterion C)
The outbursts cause distress or impairment of functioning or lead to financial or legal consequences (Criterion D)
The individual must be at least six years old (Criterion E)
The recurrent outbursts cannot be explained by another mental disorder and are not the result of another medical disorder or substance use (Criterion F)
It is important to note that DSM-5 now includes two separate criteria for types of aggressive outbursts (A1 and A2), which have empirical support:
Criterion A1: Episodes of verbal and/or non-damaging, nondestructive, or non-injurious physical assault that occur, on average, twice weekly for three months. These could include temper tantrums, tirades, verbal arguments/fights, or assault without damage. This criterion includes high frequency/low-intensity outbursts.
Criterion A2: More severe destructive/assaultive episodes which are more infrequent and occur, on average, three times within a twelve-month period. These could be destroying an object without regard to value, assaulting an animal or individual. This criterion includes high-intensity/low-frequency outbursts.
DSM-IV diagnosis
The past DSM-IV criteria for IED were similar to the current criteria; however, verbal aggression was not considered part of the diagnostic criteria. The DSM-IV diagnosis was characterized by the occurrence of discrete episodes of failure to resist aggressive impulses that result in violent assault or destruction of property. Additionally, the degree of aggressiveness expressed during an episode should be grossly disproportionate to the provocation or precipitating psychosocial stressor, and, as previously stated, diagnosis is made only when certain other mental disorders have been ruled out, e.g., a head injury, Alzheimer's disease, etc., or substance use or medication effects. Diagnosis is made using a psychiatric interview comparing affective and behavioral symptoms to the criteria listed in the DSM-IV. The DSM-IV-TR was very specific in its definition of Intermittent Explosive Disorder, which was defined, essentially, by the exclusion of other conditions. The diagnosis required:
several episodes of impulsive behavior that result in serious damage to either persons or property, wherein
the degree of the aggressiveness is grossly disproportionate to the circumstances or provocation, and
the episodic violence cannot be better accounted for by another mental or physical medical condition.
Differential diagnosis
Many psychiatric disorders and some substance use disorders are associated with increased aggression and are frequently comorbid with IED, often making differential diagnosis difficult. Individuals with IED are, on average, four times more likely to develop depression or anxiety disorders, and three times more likely to develop substance use disorders.
Bipolar disorder has been linked to increased agitation and aggressive behavior in some individuals, but for these individuals aggressiveness is limited to manic and/or depressive episodes, whereas individuals with IED experience aggressive behavior even during periods with a neutral or positive mood. In one clinical study, the two disorders co-occurred 60% of the time. Patients report manic-like symptoms occurring just before outbursts and continuing throughout. According to one study, the average onset age of IED was around five years earlier than the onset age of bipolar disorder, indicating a possible correlation between the two. Similarly, alcohol and other substance use disorders may exhibit increased aggressiveness, but unless this aggression is experienced outside of periods of acute intoxication and withdrawal, no diagnosis of IED is given. For chronic disorders, such as PTSD, it is important to assess whether the level of aggression met IED criteria before the development of the other disorder. In antisocial personality disorder, interpersonal aggression is usually instrumental in nature (i.e., motivated by tangible rewards), whereas IED is more of an impulsive, unpremeditated reaction to situational stress.
Treatment
Although there is no cure, treatment is attempted through cognitive behavioral therapy and psychotropic medication regimens, though the pharmaceutical options have shown limited success. Therapy helps the patient recognize the impulses in hopes of achieving a level of awareness and control of the outbursts, along with treating the emotional stress that accompanies these episodes. Multiple drug regimens are frequently indicated for IED patients. Cognitive Relaxation and Coping Skills Therapy (CRCST) has shown preliminary success in both group and individual settings compared to waitlist control groups. This therapy consists of 12 sessions, the first three focusing on relaxation training, then cognitive restructuring, then exposure therapy; the final sessions focus on resisting aggressive impulses and other preventative measures. In France, antipsychotics such as cyamemazine, levomepromazine and loxapine are sometimes used. Tricyclic antidepressants and selective serotonin reuptake inhibitors (SSRIs, including fluoxetine, fluvoxamine, and sertraline) appear to alleviate some pathopsychological symptoms. GABAergic mood stabilizers and anticonvulsive drugs such as gabapentin, lithium, carbamazepine, and divalproex seem to aid in controlling the incidence of outbursts. Anxiolytics help alleviate tension and may help reduce explosive outbursts by increasing the provocative stimulus tolerance threshold, and are especially indicated in patients with comorbid obsessive-compulsive or other anxiety disorders. However, certain anxiolytics are known to increase anger and irritability in some individuals, especially benzodiazepines.
Epidemiology
Two epidemiological studies of community samples approximated the lifetime prevalence of IED to be 4–6%, depending on the criteria set used. A Ukrainian study found comparable rates of lifetime IED (4.2%), suggesting that a lifetime prevalence of IED of 4–6% is not limited to American samples. One-month and one-year point prevalences of IED in these studies were reported as 2.0% and 2.7%, respectively. Extrapolating to the national level, 16.2 million Americans would have IED during their lifetimes and as many as 10.5 million in any year and 6 million in any month.
Among a clinical population, a 2005 study found the lifetime prevalence of IED to be 6.3%. Prevalence appears to be higher in men than in women. Of US subjects with IED, 67.8% had engaged in direct interpersonal aggression, 20.9% in threatened interpersonal aggression, and 11.4% in aggression against objects. Subjects reported engaging in 27.8 high-severity aggressive acts during their worst year, with 2–3 outbursts requiring medical attention. Across the lifespan, the mean value of property damage due to aggressive outbursts was $1,603. A study in the March 2016 Journal of Clinical Psychiatry suggests a relationship between infection with the parasite Toxoplasma gondii and psychiatric aggression such as IED.
History
In the first edition of the American Psychiatric Association's Diagnostic and Statistical Manual (DSM-I), a disorder of impulsive aggression was referred to as passive-aggressive personality (aggressive type). This construct was characterized by a "persistent reaction to frustration"; such individuals are "generally excitable, aggressive, and over-responsive to environmental pressures" with "gross outbursts of rage or of verbal or physical aggressiveness different from their usual behavior". In the third edition (DSM-III), this was for the first time codified as intermittent explosive disorder and assigned clinical disorder status under Axis I. However, some researchers saw the criteria as poorly operationalized: about 80% of individuals who would now be diagnosed with the disorder would have been excluded. In the DSM-IV, the criteria were improved but still lacked objective measures for the intensity, frequency, and nature of aggressive acts required to meet criteria for IED. This led some researchers to adopt an alternate criteria set with which to conduct research, known as the IED-IR (Integrated Research). The severity and frequency of aggressive behavior required for the diagnosis were clearly operationalized, the aggressive acts were required to be impulsive in nature, subjective distress was required to precede the explosive outbursts, and the criteria allowed for comorbid diagnoses with borderline personality disorder and antisocial personality disorder. These research criteria became the basis for the DSM-5 diagnosis.
In the current version of the DSM (DSM-5), the disorder appears under the "Disruptive, Impulse-Control, and Conduct Disorders" category. In the DSM-IV, physical aggression was required to meet the criteria for the disorder, but these criteria were modified in the DSM-5 to include verbal aggression and non-destructive, non-injurious physical aggression. The listing was also updated to specify frequency criteria. Further, aggressive outbursts are now required to be impulsive in nature and must cause marked distress, impairment, or negative consequences for the individual. Individuals must be at least six years old to receive the diagnosis. The text also clarified the disorder's relationship to other disorders such as ADHD and disruptive mood dysregulation disorder.
See also
Episodic dyscontrol syndrome
Passive–aggressive personality disorder
References
== External links == |
Interstitial cystitis | Interstitial cystitis (IC), a type of bladder pain syndrome (BPS), is chronic pain in the bladder and pelvic floor of unknown cause. It is the urologic chronic pelvic pain syndrome of women. Symptoms include feeling the need to urinate right away, needing to urinate often, and pain with sex. IC/BPS is associated with depression and lower quality of life. Many of those affected also have irritable bowel syndrome and fibromyalgia. The cause of interstitial cystitis is unknown. While it can run in families, it typically does not. The diagnosis is usually based on the symptoms after ruling out other conditions. Typically, the urine culture is negative. Ulceration or inflammation may be seen on cystoscopy. Other conditions which can produce similar symptoms include overactive bladder, urinary tract infection (UTI), sexually transmitted infections, prostatitis, endometriosis in females, and bladder cancer. There is no cure for interstitial cystitis, and management of this condition can be challenging. Treatments that may improve symptoms include lifestyle changes, medications, or procedures. Lifestyle changes may include stopping smoking and reducing stress. Medications may include ibuprofen, pentosan polysulfate, or amitriptyline. Procedures may include bladder distention, nerve stimulation, or surgery. Pelvic floor exercises and long-term antibiotics are not recommended. In the United States and Europe, it is estimated that around 0.5% of people are affected. Women are affected about five times as often as men. Onset is typically in middle age. The term "interstitial cystitis" first came into use in 1887.
Signs and symptoms
The most common symptoms of IC/BPS are suprapubic pain, urinary frequency, painful sexual intercourse, and waking up from sleep to urinate. In general, symptoms may include painful urination described as a burning sensation in the urethra during urination, pelvic pain that is worsened by the consumption of certain foods or drinks, urinary urgency, and pressure in the bladder or pelvis. Other frequently described symptoms are urinary hesitancy (needing to wait for the urinary stream to begin, often caused by pelvic floor dysfunction and tension), and discomfort and difficulty driving, working, exercising, or traveling. Pelvic pain experienced by those with IC typically worsens with filling of the urinary bladder and may improve with urination. During cystoscopy, 5–10% of people with IC are found to have Hunner's ulcers. A person with IC may have discomfort only in the urethra, while another might struggle with pain in the entire pelvis. Interstitial cystitis symptoms usually fall into one of two patterns: significant suprapubic pain with little frequency, or a lesser amount of suprapubic pain but with increased urinary frequency.
Association with other conditions
Some people with IC/BPS have been diagnosed with other conditions such as irritable bowel syndrome (IBS), fibromyalgia, chronic fatigue syndrome, allergies, and Sjögren syndrome, which raises the possibility that interstitial cystitis may be caused by mechanisms that cause these other conditions. There is also some evidence of an association between urologic pain syndromes, such as IC/BPS and CP/CPPS, and non-celiac gluten sensitivity in some people. In addition, men with IC/BPS are frequently diagnosed as having chronic nonbacterial prostatitis, and there is an extensive overlap of symptoms and treatment between the two conditions, leading researchers to posit that the conditions may share the same cause and pathology.
Causes
The cause of IC/BPS is not known. However, several explanations have been proposed, including the autoimmune theory, nerve theory, mast cell theory, leaky lining theory, infection theory, and a theory of production of a toxic substance in the urine. Other suggested etiological causes are neurologic, allergic, genetic, and stress-psychological. In addition, recent research shows that those with IC may have a substance in the urine that inhibits the growth of cells in the bladder epithelium. An infection may then predispose those people to develop IC. Evidence from clinical and laboratory studies confirms that mast cells play a central role in IC/BPS, possibly due to their ability to release histamine and cause pain, swelling, and scarring, and to interfere with healing. Research has shown that a proliferation of nerve fibers is present in the bladders of people with IC which is absent in the bladders of people who have not been diagnosed with IC. Regardless of the origin, most people with IC/BPS struggle with a damaged urothelium, or bladder lining. When the surface glycosaminoglycan (GAG) layer is damaged (via a urinary tract infection (UTI), excessive consumption of coffee or sodas, traumatic injury, etc.), urinary chemicals can "leak" into surrounding tissues, causing pain, inflammation, and urinary symptoms. Oral medications like pentosan polysulfate and medications placed directly into the bladder via a catheter sometimes work to repair and rebuild this damaged or wounded lining, allowing for a reduction in symptoms. Most literature supports the belief that IC's symptoms are associated with a defect in the bladder epithelium lining that allows irritating substances in the urine to penetrate into the bladder, a breakdown of the bladder lining (also known as the adherence theory).
Deficiency in this glycosaminoglycan layer on the surface of the bladder results in increased permeability of the underlying submucosal tissues. GP51 has been identified as a possible urinary biomarker for IC, with significant variations in GP51 levels in those with IC when compared to individuals without interstitial cystitis. Numerous studies have noted the link between IC, anxiety, stress, hyper-responsiveness, and panic. Another proposed cause for interstitial cystitis is that the body's immune system attacks the bladder. Biopsies of the bladder walls of people with IC usually contain mast cells. Mast cells containing histamine packets gather when an allergic reaction is occurring. The body identifies the bladder wall as a foreign agent, and the histamine packets burst open and attack. The body attacks itself, which is the basis of autoimmune disorders. Additionally, IC may be triggered by an unknown toxin or stimulus which causes nerves in the bladder wall to fire uncontrollably. When they fire, they release substances called neuropeptides that induce a cascade of reactions causing pain in the bladder wall.
Genes
Some genetic subtypes, in some people, have been linked to the disorder.
An antiproliferative factor is secreted by the bladders of people with IC/BPS which inhibits bladder cell proliferation, thus possibly causing the missing bladder lining.
PAND, at gene map locus 13q22–q32, is associated with a constellation of disorders (a "pleiotropic syndrome") including IC/BPS and other bladder and kidney problems, thyroid diseases, serious headaches/migraines, panic disorder, and mitral valve prolapse.
Diagnosis
A diagnosis of IC/BPS is one of exclusion, as well as a review of clinical symptoms. The American Urological Association guidelines recommend starting with a careful history, physical examination, and laboratory tests to assess and document symptoms of interstitial cystitis, as well as other potential disorders.
The KCl test, also known as the potassium sensitivity test, is no longer recommended. The test uses a mild potassium solution to evaluate the integrity of the bladder wall. Though not specific for IC/BPS, it has been determined to be helpful in predicting the use of compounds, such as pentosan polysulfate, which are designed to help repair the GAG layer. For complicated cases, the use of hydrodistention with cystoscopy may be helpful. Researchers, however, determined that this visual examination of the bladder wall after stretching the bladder is not specific for IC/BPS and that the test itself can contribute to the development of the small glomerulations (petechial hemorrhages) often found in IC/BPS. Thus, a diagnosis of IC/BPS is one of exclusion, as well as a review of clinical symptoms.
In 2006, the ESSIC society proposed more rigorous and demanding diagnostic methods with specific classification criteria so that IC/BPS cannot be confused with other, similar conditions. Specifically, they require that a person must have pain associated with the bladder, accompanied by one other urinary symptom. Thus, a person with just frequency or urgency would be excluded from a diagnosis. Secondly, they strongly encourage the exclusion of confusable diseases through an extensive and expensive series of tests including (A) a medical history and physical exam, (B) a dipstick urinalysis, various urine cultures, and a serum PSA in men over 40, (C) flowmetry and post-void residual urine volume by ultrasound scanning, and (D) cystoscopy. A diagnosis of IC/BPS would be confirmed with a hydrodistention during cystoscopy with biopsy. They also propose a ranking system based upon the physical findings in the bladder. People receive a numeric and letter-based score based upon the severity of their disease as found during the hydrodistention. A score of 1–3 relates to the severity of the disease, and a rating of A–C represents biopsy findings. Thus, a person with 1A would have very mild symptoms and disease, while a person with 3C would have the worst possible symptoms. Widely recognized scoring systems such as the O'Leary–Sant symptom and problem scores have emerged to evaluate the severity of IC symptoms such as pain and urinary symptoms.
Differential diagnosis
The symptoms of IC/BPS are often misdiagnosed as a urinary tract infection. However, IC/BPS has not been shown to be caused by a bacterial infection and antibiotics are an ineffective treatment. IC/BPS is commonly misdiagnosed as chronic prostatitis/chronic pelvic pain syndrome (CP/CPPS) in men, and endometriosis and uterine fibroids (in women).
Treatment
In 2011, the American Urological Association released consensus-based guidelines for the diagnosis and treatment of interstitial cystitis. They include treatments ranging from conservative to more invasive:
First-line treatments — education, self care (diet modification), stress management
Second-line treatments — physical therapy, oral medications (amitriptyline, cimetidine or hydroxyzine, pentosan polysulfate), bladder instillations (DMSO, heparin, or lidocaine)
Third-line treatments — treatment of Hunners lesions (laser, fulguration or triamcinolone injection), hydrodistention (low pressure, short duration)
Fourth-line treatments — neuromodulation (sacral or pudendal nerve)
Fifth-line treatments — cyclosporine A, botulinum toxin (BTX-A)
Sixth-line treatments — surgical intervention (urinary diversion, augmentation, cystectomy)

The American Urological Association guidelines also listed several discontinued treatments, including long-term oral antibiotics, intravesical bacillus Calmette–Guérin, intravesical resiniferatoxin, high-pressure and long-duration hydrodistention, and systemic glucocorticoids.
Bladder distension
Bladder distension while under general anesthesia, also known as hydrodistention (a procedure which stretches the bladder capacity), has shown some success in reducing urinary frequency and giving short-term pain relief to those with IC. However, it is unknown exactly how this procedure causes pain relief. Recent studies show that pressure on pelvic trigger points can relieve symptoms. The relief achieved by bladder distensions is only temporary (weeks or months), so it is not viable as a long-term treatment for IC/BPS. The proportion of people with IC/BPS who experience relief from hydrodistention is currently unknown, and evidence for this modality is limited by a lack of properly controlled studies. Bladder rupture and sepsis may be associated with prolonged, high-pressure hydrodistention.
Bladder instillations
Bladder instillation of medication is one of the main forms of treatment of interstitial cystitis, but evidence for its effectiveness is currently limited. Advantages of this treatment approach include direct contact of the medication with the bladder and low systemic side effects due to poor absorption of the medication. Single medications or a mixture of medications are commonly used in bladder instillation preparations. Dimethyl sulfoxide (DMSO) is the only approved bladder instillation for IC/BPS, yet it is much less frequently used in urology clinics. A 50% solution of DMSO has the potential to create irreversible muscle contraction, whereas a weaker 25% solution was found to be reversible. Long-term use of DMSO is questionable, as its mechanism of action is not fully understood, though DMSO is thought to inhibit mast cells and may have anti-inflammatory, muscle-relaxing, and analgesic effects. Other agents used for bladder instillations to treat interstitial cystitis include heparin, lidocaine, chondroitin sulfate, hyaluronic acid, pentosan polysulfate, oxybutynin, and botulinum toxin A. Preliminary evidence suggests these agents are efficacious in reducing symptoms of interstitial cystitis, but further study with larger, randomized, controlled clinical trials is needed.
Diet
Diet modification is often recommended as a first-line method of self-treatment for interstitial cystitis, though rigorous controlled studies examining the impact diet has on interstitial cystitis signs and symptoms are currently lacking. An increase in fiber intake may alleviate symptoms. Individuals with interstitial cystitis often experience an increase in symptoms when they consume certain foods and beverages. Avoidance of potential trigger foods and beverages, such as caffeine-containing beverages (coffee, tea, and soda), alcoholic beverages, chocolate, citrus fruits, hot peppers, and artificial sweeteners, may help alleviate symptoms. Diet triggers vary between individuals with IC; the best way for a person to discover their own triggers is to use an elimination diet. Sensitivity to trigger foods may be reduced if calcium glycerophosphate and/or sodium bicarbonate is consumed. The foundation of therapy is a modification of diet to help people avoid foods which can further irritate the damaged bladder wall. The mechanism by which dietary modification benefits people with IC is unclear; integration of neural signals from pelvic organs may mediate the effects of diet on symptoms of IC.
Medications
The antihistamine hydroxyzine failed to demonstrate superiority over placebo in the treatment of people with IC in a randomized, controlled clinical trial. Amitriptyline has been shown to be effective in reducing symptoms such as chronic pelvic pain and nocturia in many people with IC/BPS, with a median dose of 75 mg daily. In one study, the antidepressant duloxetine was found to be ineffective as a treatment, although a patent exists for the use of duloxetine in the context of IC, and it is known to relieve neuropathic pain. The calcineurin inhibitor cyclosporine A has been studied as a treatment for interstitial cystitis due to its immunosuppressive properties. A prospective randomized study found cyclosporine A to be more effective at treating IC symptoms than pentosan polysulfate, but it also had more adverse effects. Oral pentosan polysulfate is believed to repair the protective glycosaminoglycan coating of the bladder, but studies have encountered mixed results when attempting to determine whether the effect is statistically significant compared to placebo.
Pelvic floor treatments
Urologic pelvic pain syndromes, such as IC/BPS and CP/CPPS, are characterized by pelvic muscle tenderness, and symptoms may be reduced with pelvic myofascial physical therapy. Chronic pelvic muscle tension may leave the pelvic area in a sensitized condition, resulting in a loop of muscle tension and heightened neurological feedback (neural wind-up), a form of myofascial pain syndrome. Current protocols, such as the Wise–Anderson Protocol, largely focus on stretches to release overtensed muscles in the pelvic or anal area (commonly referred to as trigger points), physical therapy to the area, and progressive relaxation therapy to reduce causative stress. Pelvic floor dysfunction is a fairly new area of specialty for physical therapists worldwide. The goal of therapy is to relax and lengthen the pelvic floor muscles, rather than to tighten and/or strengthen them as is the goal of therapy for people with urinary incontinence. Thus, traditional exercises such as Kegel exercises, which are used to strengthen pelvic muscles, can provoke pain and additional muscle tension. A specially trained physical therapist can provide direct, hands-on evaluation of the muscles, both externally and internally. A therapeutic wand can also be used to perform pelvic floor muscle myofascial release to provide relief.
Surgery
Surgery is rarely used for IC/BPS. Surgical intervention is very unpredictable and is considered a treatment of last resort for severe refractory cases of interstitial cystitis. Some people who opt for surgical intervention continue to experience pain afterward. Typical surgical interventions for refractory cases of IC/BPS include bladder augmentation, urinary diversion, transurethral fulguration and resection of ulcers, and bladder removal (cystectomy). Neuromodulation can be successful in treating IC/BPS symptoms, including pain. One electronic pain-killing option is transcutaneous electrical nerve stimulation (TENS). Percutaneous tibial nerve stimulation has also been used, with varying degrees of success. Percutaneous sacral nerve root stimulation was able to produce statistically significant improvements in several parameters, including pain.
Alternative medicine
There is little evidence on the effects of alternative medicine, though its use is common. There is tentative evidence that acupuncture may help pain associated with IC/BPS as part of other treatments. Despite a scarcity of controlled studies on alternative medicine and IC/BPS, "rather good results have been obtained" when acupuncture is combined with other treatments. Biofeedback, a relaxation technique aimed at helping people control functions of the autonomic nervous system, has shown some benefit in controlling pain associated with IC/BPS as part of a multimodal approach that may also include medication or hydrodistention of the bladder.
Prognosis
IC/BPS has a profound impact on quality of life. A 2007 Finnish epidemiologic study showed that two-thirds of women at moderate to high risk of having interstitial cystitis reported impairment in their quality of life, and 35% of people with IC reported an impact on their sexual life. A 2012 survey showed that among a group of adult women with symptoms of interstitial cystitis, 11% reported suicidal thoughts in the past two weeks. Other research has shown that the impact of IC/BPS on quality of life is severe and may be comparable to the quality of life experienced in end-stage kidney disease or rheumatoid arthritis. International recognition of interstitial cystitis has grown, and international urology conferences to address the heterogeneity in diagnostic criteria have recently been held. IC/BPS is now recognized with an official disability code in the United States of America.
Epidemiology
IC/BPS affects men and women of all cultures, socioeconomic backgrounds, and ages. Although the disease was previously believed to be a condition of menopausal women, growing numbers of men and women are being diagnosed in their twenties and younger. IC/BPS is not a rare condition. Early research suggested that the number of IC/BPS cases ranged from 1 in 100,000 to 5.1 in 1,000 of the general population. In recent years, the scientific community has achieved a much deeper understanding of the epidemiology of interstitial cystitis. Recent studies have revealed that between 2.7 and 6.53 million women in the USA have symptoms of IC, and up to 12% of women may have early symptoms of IC/BPS. Further study has estimated that the condition is far more prevalent in men than previously thought, with an estimated 1.8 to 4.2 million men having symptoms of interstitial cystitis. The condition is officially recognized as a disability in the United States.
History
Philadelphia surgeon Joseph Parrish published the earliest record of interstitial cystitis in 1836, describing three cases of severe lower urinary tract symptoms without the presence of a bladder stone. The term "interstitial cystitis" was coined by Dr. Alexander Skene in 1887 to describe the disease. In 2002, the United States amended the Social Security Act to include interstitial cystitis as a disability. The first guideline for the diagnosis and treatment of interstitial cystitis was released by a Japanese research team in 2009. The American Urological Association released the first American clinical practice guideline for diagnosing and treating IC/BPS in 2011.
Names
Originally called interstitial cystitis, this disorder was renamed interstitial cystitis/bladder pain syndrome (IC/BPS) in the 2002–2010 timeframe. In 2007, the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) began using the umbrella term urologic chronic pelvic pain syndrome (UCPPS) to refer to pelvic pain syndromes associated with the bladder (e.g., interstitial cystitis/bladder pain syndrome) and with the prostate gland or pelvis (e.g., chronic prostatitis/chronic pelvic pain syndrome). As of 2008, terms in use in addition to IC/BPS include painful bladder syndrome, bladder pain syndrome, and hypersensitive bladder syndrome, alone and in a variety of combinations. These different terms are used in different parts of the world. The term "interstitial cystitis" is the primary term used in ICD-10 and MeSH. Grover et al. said, "The International Continence Society named the disease interstitial cystitis/painful bladder syndrome (IC/PBS) in 2002 [Abrams et al. 2002], while the Multinational Interstitial Cystitis Association have labeled it as painful bladder syndrome/interstitial cystitis (PBS/IC) [Hanno et al. 2005]. Recently, the European Society for the study of Interstitial Cystitis (ESSIC) proposed the moniker, bladder pain syndrome (BPS) [van de Merwe et al. 2008]."
See also
Chronic prostatitis/chronic pelvic pain syndrome—women have vestigial prostate glands that may cause IC/BPS-like symptoms. Men with IC/BPS may have prostatitis, and vice versa.
Overactive bladder
Trigger point—a key to myofascial pain syndrome.
References
External links
Interstitial cystitis at Curlie
Parsons, J. Kellogg; Parsons, C. Lowell (2004). "The Historical Origins of Interstitial Cystitis". The Journal of Urology. 171 (1): 20–2. doi:10.1097/01.ju.0000099890.35040.8d. PMID 14665834.
The National Kidney and Urologic Diseases Information Clearinghouse (NKUDIC)
Homma, Yukio; Ueda, Tomohiro; Tomoe, Hikaru; Lin, Alex TL; Kuo, Hann-Chorng; Lee, Ming-Huei; Lee, Jeong Gu; Kim, Duk Yoon; Lee, Kyu-Sung (2009). "Clinical guidelines for interstitial cystitis and hypersensitive bladder syndrome". International Journal of Urology. 16 (7): 597–615. doi:10.1111/j.1442-2042.2009.02326.x. PMID 19548999. S2CID 20796904.
European Urology |
Iron poisoning | Iron poisoning typically occurs from the ingestion of excess iron, resulting in acute toxicity. Mild symptoms, which occur within hours, include vomiting, diarrhea, abdominal pain, and drowsiness. In more severe cases, symptoms can include tachypnea, low blood pressure, seizures, or coma. If left untreated, iron poisoning can lead to multi-organ failure, resulting in permanent organ damage or death. Iron is available over the counter as a single-entity supplement in an iron salt form or in combination with vitamin supplements, and is commonly used in the treatment of anemias. Overdoses of iron can be categorized as unintentional ingestion, predominantly associated with children, or intentional ingestion, involving suicide attempts in adults. Unintentional ingestion of iron-containing drug products is a major cause of mortality in children under six years of age in the United States. In response, in 1997 the US Food and Drug Administration (FDA) implemented a regulation requiring warning labels and unit-dose packaging for products containing more than 30 mg of elemental iron per dose. The diagnosis of iron poisoning is based on clinical presentation, including laboratory tests for serum iron concentrations and metabolic acidosis, along with physical examination. Treatment for iron poisoning involves providing fluid replacement, gastrointestinal decontamination, administering deferoxamine intravenously, liver transplantation, and monitoring the patient's condition. The degree of intervention required depends on whether the patient is at risk for serious toxicity.
Signs and symptoms
Manifestation of iron poisoning may vary depending on the amount of iron ingested by the individual and is further classified by five stages based on timing of signs and symptoms. In mild to moderate cases, individuals may be asymptomatic or only experience mild gastrointestinal symptoms that resolve within six hours. In serious cases, individuals may present with systemic signs and symptoms and require treatment. Clinical presentation of iron poisoning in the absence of treatment progresses in five stages: the gastrointestinal phase, latent phase, metabolic acidosis and shock phase, hepatotoxicity phase, and bowel obstruction due to scarring.
The first indication of iron poisoning occurs within the first six hours post-ingestion and involves gastrointestinal symptoms, including abdominal pain accompanied by nausea and vomiting with or without blood. Due to the disintegration of iron tablets, the stool may appear black, dark green, or gray. After the first stage, gastrointestinal symptoms appear to resolve in the latent phase, and individuals may show signs of improvement. Following this stage, the iron begins to affect the cells of the body's organs, which manifests as numerous systemic signs and symptoms developing 6 to 72 hours after ingestion, in the metabolic acidosis phase. Individuals may present with signs of cardiogenic shock, indicated by low blood pressure, rapid heart rate, and severe shortness of breath. Hypovolemic shock may also occur due to blood loss from the gastrointestinal bleeding caused by the iron. During this phase, metabolic acidosis may develop, damaging internal organs such as the brain and liver. In the fourth stage, taking place 12 to 96 hours after ingestion, liver toxicity and failure occur as the cells begin to die. In the last stage of iron poisoning, 2 to 8 weeks after ingestion, scarring of the gastrointestinal mucosal lining occurs, resulting in bowel obstruction.
Cause
Pathophysiology
Iron is essential for the production of hemoglobin in red blood cells which is responsible for transporting oxygen throughout the body. In normal physiologic conditions, nonionic forms of iron (Fe°) are converted into ferrous iron (Fe2+) by gastric acid in the stomach. Ferrous iron is then absorbed in the small intestine where it is oxidized into its ferric iron (Fe3+) form before being released into the bloodstream. Free iron in the blood is toxic to the body as it disrupts normal cell function, damaging organs such as the liver, stomach, and cardiovascular system. The human body has protective mechanisms in place to prevent excess free ferric iron from circulating the body. When being transported throughout the body, iron is bound to an iron transporting protein called transferrin to prevent iron from being absorbed into different cells. Any excess iron is stored as ferritin in the liver. In the event of iron overdose, iron stores become oversaturated and the bodys protective mechanisms fail resulting in excess free circulating iron.
Toxic Dose
Iron poisoning can occur when 20 to 60 mg/kg or more of elemental iron is ingested, with most such cases involving primarily gastrointestinal symptoms. The systemic signs and symptoms seen in serious toxicity occur at higher doses exceeding 60 mg/kg, and ingesting above 120 mg/kg may be fatal. The therapeutic dose for iron deficiency anemia is 3–6 mg/kg/day. Individuals who have ingested less than 20 mg/kg of elemental iron typically do not exhibit symptoms. It is unlikely to get iron poisoning from diet alone; iron supplements are the usual cause of overdose. The amount of elemental iron in an iron supplement can be calculated from the percentage of the salt that is elemental iron. For example, a 300 mg tablet of ferrous fumarate contains 33% elemental iron, or about 100 mg per tablet.
Ferrous sulfate contains 20% elemental iron per mg of mineral salt
Ferrous gluconate contains 12% elemental iron per mg of mineral salt
Ferrous fumarate contains 33% elemental iron per mg of mineral salt
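The percentages above can be turned into a small worked example. The following sketch is illustrative only (not medical advice, and not a clinical tool); the function names and the 325 mg tablet size in the usage example are hypothetical, while the salt percentages and the mg/kg thresholds are the figures quoted in this article.

```python
# Illustrative calculation of ingested elemental iron from the salt form,
# using the elemental-iron percentages listed above.
ELEMENTAL_FRACTION = {
    "ferrous sulfate": 0.20,
    "ferrous gluconate": 0.12,
    "ferrous fumarate": 0.33,
}

def elemental_iron_mg(salt: str, tablet_mg: float, tablets: int) -> float:
    """Total elemental iron (mg) in `tablets` tablets of a given salt."""
    return tablet_mg * tablets * ELEMENTAL_FRACTION[salt]

def dose_per_kg(salt: str, tablet_mg: float, tablets: int, weight_kg: float) -> float:
    """Ingested elemental iron in mg per kg of body weight."""
    return elemental_iron_mg(salt, tablet_mg, tablets) / weight_kg

# A 300 mg ferrous fumarate tablet: 0.33 * 300 = 99, i.e. about 100 mg
# of elemental iron, matching the example in the text.
print(round(elemental_iron_mg("ferrous fumarate", 300, 1)))  # 99

# Hypothetical scenario: a 15 kg child ingesting ten 325 mg ferrous sulfate
# tablets receives 325 * 10 * 0.20 / 15 ≈ 43 mg/kg, above the 20 mg/kg
# symptomatic threshold quoted in this article.
print(round(dose_per_kg("ferrous sulfate", 325, 10, 15)))  # 43
```

This is why, as the text notes, overdose essentially always involves supplements rather than diet: reaching 20 mg/kg from food iron alone is impractical.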
Diagnosis
Iron toxicity is primarily a clinical diagnosis that involves taking a detailed patient history and performing a physical examination of the individual's signs and symptoms. Information such as how much iron was ingested and the timing of ingestion should be gathered to assess the level of toxicity. Signs of severe iron poisoning should be evaluated, such as confusion or extreme lethargy, increased heart rate, and low blood pressure in adults. In children, signs of shock may be noted along with behavioral changes such as decreased responsiveness, crying, and inability to focus. Persistent vomiting is often associated with iron poisoning and is also used to determine the severity of poisoning. Laboratory tests, such as measuring the peak serum iron level 4 to 6 hours after ingestion, can be useful in determining the severity of iron toxicity. In general, levels below 350 mcg/dL are associated with milder iron poisoning, while levels above 500 mcg/dL are associated with more severe iron poisoning. Measuring electrolyte levels, kidney function, serum glucose, liver function tests (enzymes and bilirubin), complete blood count, clotting time via prothrombin and partial thromboplastin time, and the anion gap (for metabolic acidosis) should be conducted for clinical monitoring and confirmation of iron poisoning. The deferoxamine challenge test is a diagnostic test for confirming iron poisoning; however, it is no longer recommended for diagnostic purposes due to concerns regarding its accuracy. Deferoxamine can be administered intramuscularly as a single dose, where it binds to free iron in the blood and is excreted into the urine, turning it a "brick orange" or pink/red/orange color. Radiographs are no longer used for diagnosis due to the lack of connection between the severity of iron toxicity and the presence of radiopaque iron tablets in the stomach on X-rays. This method also requires the ingested tablet to be radiopaque, which most iron preparations are not.
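The serum-iron bands quoted above can be summarized as a lookup. This is a hedged sketch, not a clinical tool: the text gives only the "below 350" and "above 500" cut-offs, so the intermediate "moderate" label for the gap between them is my assumption, and real assessment is clinical, not a table lookup.

```python
# Rough severity bands from the peak serum iron level (mcg/dL), drawn
# 4-6 hours after ingestion, per the cut-offs quoted in the text.
# NOTE: the "moderate" band is an assumed label for the 350-500 gap.
def severity_from_peak_serum_iron(mcg_per_dl: float) -> str:
    if mcg_per_dl < 350:
        return "mild"        # below 350: milder poisoning
    elif mcg_per_dl <= 500:
        return "moderate"    # assumed intermediate band
    else:
        return "severe"      # above 500: more severe poisoning

print(severity_from_peak_serum_iron(600))  # severe
```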
Treatment
Management of acute iron poisoning involves providing a patient with respiratory support and intravenous deferoxamine. Patients exhibiting severe symptoms in the gastrointestinal phase should receive volume resuscitation to prevent hypovolemic shock from the loss of blood volume. Normal saline is administered intravenously to maintain adequate fluid volume in the body. Deferoxamine is a drug used in cases of serious iron poisoning. It is a chelating agent that binds to free iron in the body so it can be eliminated by the kidneys into urine. Dosing of deferoxamine should be determined through consultation with a toxicologist, but it is typically continuously infused at 15 mg/kg to 35 mg/kg per hour, not exceeding the maximum daily dose of 6 grams for adults. In pediatric patients, doses should not exceed 15 mg/kg per hour. The recommended duration of treatment is until symptoms have resolved, which is usually 24 hours. In non-fatal cases of iron poisoning where there is liver failure, liver transplantation may be necessary. Treatment of iron poisoning should be based on clinical presentation, peak serum iron levels, and other laboratory results. As a general guideline, patients who have ingested lower doses of elemental iron, have a peak serum iron level of less than 500 mcg/dL, and are asymptomatic or exhibit only mild gastrointestinal symptoms typically do not require treatment and should be monitored for 6 hours after ingestion. In cases where high doses of elemental iron have been ingested and the patient is exhibiting signs and symptoms of severe systemic iron poisoning, supportive care measures such as volume resuscitation and deferoxamine should be initiated immediately. A quick response to iron poisoning can significantly improve clinical outcomes.
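The dosing ceilings quoted above interact: at 35 mg/kg/h an adult of typical weight would exceed the 6 g daily maximum within hours. A minimal sketch of that arithmetic follows; it is illustrative only (the text itself says dosing should be set with a toxicologist), and the function name is hypothetical.

```python
# Sketch of the deferoxamine infusion ceilings quoted in the text:
# adults: 15-35 mg/kg/h continuous infusion, total <= 6 g per day;
# pediatric patients: <= 15 mg/kg/h. Not medical advice.
ADULT_MAX_DAILY_MG = 6000  # 6 grams

def deferoxamine_rate_mg_per_h(weight_kg: float, rate_mg_per_kg_h: float,
                               pediatric: bool = False) -> float:
    """Hourly infusion rate (mg/h) after applying the quoted caps."""
    per_kg_cap = 15 if pediatric else 35
    rate = min(rate_mg_per_kg_h, per_kg_cap) * weight_kg
    if not pediatric:
        # Running 24 h at this rate must not exceed the 6 g daily maximum.
        rate = min(rate, ADULT_MAX_DAILY_MG / 24)
    return rate

# A 70 kg adult at 35 mg/kg/h would be 2450 mg/h, but a continuous 24 h
# infusion capped at 6 g/day works out to at most 250 mg/h.
print(deferoxamine_rate_mg_per_h(70, 35))  # 250.0
```

The example shows why the daily maximum, not the per-kg rate, is usually the binding constraint for adults on a continuous infusion.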
See also
Overnutrition
Iron overload
References
External links
Iron poisoning in General Practice Notebook
Iron Poisoning at WebMD
Iron Poisoning Merck Manual |
Iron supplement | Iron supplements, also known as iron salts and iron pills, are a number of iron formulations used to treat and prevent iron deficiency, including iron deficiency anemia. For prevention they are only recommended in those with poor absorption, heavy menstrual periods, pregnancy, hemodialysis, or a diet low in iron. Prevention may also be used in low birth weight babies. They are taken by mouth, injection into a vein, or injection into a muscle. While benefits may be seen in days, up to two months may be required until iron levels return to normal. Common side effects include constipation, abdominal pain, dark stools, and diarrhea. Other side effects, which may occur with excessive use, include iron overload and iron toxicity. Ferrous salts used as supplements by mouth include ferrous fumarate, ferrous gluconate, ferrous succinate, and ferrous sulfate. Injectable forms include iron dextran and iron sucrose. They work by providing the iron needed for making red blood cells. Iron pills have been used medically since at least 1681, with an easy-to-use formulation being created in 1832. Ferrous salt is on the World Health Organization's List of Essential Medicines. Ferrous salts are available as a generic medication and over the counter. Slow release formulations, while available, are not recommended. In 2017, ferrous sulfate was the 92nd most commonly prescribed medication in the United States, with more than eight million prescriptions.
Medical uses
Iron supplements are used to treat iron deficiency and iron-deficiency anemia; parenteral irons can also be used to treat functional iron deficiency, where requirements for iron are greater than the body's ability to supply iron, such as in inflammatory states. The main criterion is that other causes of anemia have also been investigated, such as vitamin B12 or folate deficiency, drug-induced anemia, or anemia due to other poisons such as lead, as often the anemia has more than one underlying cause.
Iron deficiency anemia is classically a microcytic, hypochromic anemia. Generally, in the UK oral preparations are trialled before using parenteral delivery, unless there is a requirement for a rapid response, previous intolerance to oral iron, or likely failure to respond. Intravenous iron may decrease the need for blood transfusions; however, it increases the risk of infections when compared to oral iron. A 2015 Cochrane Collaboration review found that daily oral supplementation of iron during pregnancy reduces the risk of maternal anemia, and that effects on the infant and on other maternal outcomes are not clear. Another review found tentative evidence that intermittent iron supplements by mouth for mothers and babies are similar to daily supplementation with fewer side effects. Supplements by mouth should be taken on an empty stomach, optionally with a small amount of food to reduce discomfort.
Athletes
Athletes may be at elevated risk of iron deficiency and so benefit from supplementation, but the circumstances vary between individuals and dosage should be based on tested ferritin levels, since in some cases supplementation may be harmful.
Frequent blood donors
Frequent blood donors may be advised to take iron supplements. Canadian Blood Services recommends discussing "taking iron supplements with your doctor or pharmacist" as "the amount of iron in most multivitamins may not meet your needs and iron supplements may be necessary". The American Red Cross recommends "taking a multivitamin with 18 mg of iron or an iron supplement with 18-38 mg of elemental iron for 60 days after each blood donation, for 120 days after each power red donation or after frequent platelet donations". A 2014 Cochrane Review found that blood donors were less likely to be deferred for low hemoglobin levels if they were taking oral iron supplements, although 29% of those who took them experienced side effects in contrast to the 17% that took a placebo. It is unknown what the long-term effects of iron supplementation for blood donors may be.
Side effects
Side effects of therapy with oral iron are most often diarrhea or constipation and epigastric abdominal discomfort. Taken after a meal, side effects decrease, but there is an increased risk of interaction with other substances. Side effects are dose-dependent, and the dose may be adjusted.
The patient may notice that their stools become black. This is completely harmless, but patients must be warned about this to avoid unnecessary concern. When iron supplements are given in a liquid form, teeth may reversibly discolor (this can be avoided through the use of a straw). Intramuscular injection can be painful, and brown discoloration may be noticed.
Treatments with iron(II) sulfate have a higher incidence of adverse events than iron(III)-hydroxide polymaltose complex (IPC) or iron bis-glycinate chelate. Iron overdose has been one of the leading causes of death caused by toxicological agents in children younger than 6 years. Iron poisoning may result in mortality or short-term and long-term morbidity.
Infection risk
Because one of the functions of elevated ferritin (an acute phase reaction protein) in acute infections is thought to be to sequester iron from bacteria, it is generally thought that iron supplementation (which circumvents this mechanism) should be avoided in patients who have active bacterial infections. Replacement of iron stores is seldom such an emergency situation that it cannot wait for any such acute infection to be treated.
Some studies have found that iron supplementation can lead to an increase in infectious disease morbidity in areas where bacterial infections are common. For example, children receiving iron-enriched foods have demonstrated an increased rate of diarrhea overall and of enteropathogen shedding. Iron deficiency protects against infection by creating an unfavorable environment for bacterial growth. Nevertheless, while iron deficiency might lessen infections by certain pathogenic diseases, it also leads to a reduction in resistance to other strains of viral or bacterial infections, such as Salmonella typhimurium or Entamoeba histolytica. Overall, it is sometimes difficult to decide whether iron supplementation will be beneficial or harmful to an individual in an environment that is prone to many infectious diseases; however, this is a different question than that of supplementation in individuals who are already ill with a bacterial infection. Children living in areas prone to malarial infection are also at risk of developing anemia. It was thought that iron supplementation given to such children could increase their risk of malarial infection. A Cochrane systematic review published in 2016 found high quality evidence that iron supplementation does not increase the risk of clinical malaria in children.
Contraindications
Contraindications often depend on the substance in question. Documented hypersensitivity to any ingredients and anemias without proper work-up (i.e., documentation of iron deficiency) is true of all preparations. Some can be used in iron deficiency, others require iron deficiency anaemia to be present. Some are also contraindicated in rheumatoid arthritis.
Hemochromatosis
Individuals may be genetically predisposed to excessive iron absorption, as is the case with those with HFE hereditary hemochromatosis. Within the general population, 1 out of 400 people has the homozygous form of this genetic trait, and 1 out of every 10 people has its heterozygous form. Individuals with either the homozygous or the heterozygous form should not take iron supplements.
Interactions
Non-heme iron forms an insoluble complex with several other drugs, resulting in decreased absorption of both iron and the other drug. Examples include tetracycline, penicillamine, methyldopa, levodopa, bisphosphonates and quinolones. The same can occur with elements in food, such as calcium, which impacts both heme and non-heme iron absorption. Absorption of iron is better at a low pH (i.e. an acidic environment), and absorption is decreased if there is a simultaneous intake of antacids.
Many other substances decrease the rate of non-heme iron absorption; examples include tannins from foods such as tea, and phytic acid. Because iron from plant sources is less easily absorbed than the heme-bound iron of animal sources, vegetarians and vegans should have a somewhat higher total daily iron intake than those who eat meat, fish or poultry. Taken after a meal, there are fewer side effects but also less absorption because of interactions and pH alteration. Generally, an interval of 2–3 hours between the iron intake and that of other drugs seems advisable, but this is less convenient for patients and can affect compliance.
History
The first pills were commonly known as Blaud's pills, named after P. Blaud of Beaucaire, the French physician who introduced these medications as a treatment for patients with anemia.
Administration
By mouth
Iron can be supplemented by mouth in various forms, such as iron(II) sulfate, the most common and best-studied soluble iron salt, sold under brand names such as Feratab, Fer-Iron, and Slow-FE. Iron is also supplied in complex with gluconate, dextran, carbonyl iron, and other salts. Ascorbic acid (vitamin C) increases the absorption of non-heme sources of iron. Heme iron polypeptide (HIP) (e.g. Proferrin ES and Proferrin Forte) can be used when regular iron supplements such as ferrous sulfate or ferrous fumarate are not tolerated or absorbed. A clinical study demonstrated that HIP increased serum iron levels 23 times more than ferrous fumarate on a milligram-per-milligram basis. Another alternative is ferrous glycine sulfate (ferroglycine sulfate), which has fewer gastrointestinal side effects than standard preparations such as iron fumarate. It is unusual among oral iron preparations in that its iron has very high oral bioavailability, especially in the liquid formulation. This option should be evaluated before resorting to parenteral therapy. It is especially useful in iron deficiency anemia associated with autoimmune gastritis and Helicobacter pylori gastritis, where it generally has a satisfactory effect. Since iron stores in the body are generally depleted, and there is a limit to what the body can process (about 2–6 mg/kg of body mass per day; i.e., for a 100 kg/220 lb man this equals a maximum dose of 200–600 mg per day) without iron poisoning, this is a chronic therapy which may take 3–6 months. Due to the frequent intolerance of oral iron and the slow improvement, parenteral iron is recommended in many indications.
By injection
Iron therapy (intravenous or intramuscular) is given when therapy by mouth has failed (is not tolerated), oral absorption is seriously compromised (by illness, or when the person cannot swallow), benefit from oral therapy cannot be expected, or fast improvement is required (for example, prior to elective surgery). Parenteral therapy is more expensive than oral iron preparations and is not suitable during the first trimester of pregnancy. There are cases where parenteral iron is preferable over oral iron: where oral iron is not tolerated, where the haemoglobin needs to be increased quickly (e.g. post partum, post operatively, post transfusion), where there is an underlying inflammatory condition (e.g. inflammatory bowel disease), or in renal patients; in these cases the benefits of parenteral iron far outweigh the risks. In many cases, use of intravenous iron such as ferric carboxymaltose has lower risks of adverse events than a blood transfusion and, as long as the person is stable, is a better alternative. Ultimately this always remains a clinical decision based on local guidelines, although national guidelines increasingly stipulate IV iron in certain groups of patients. Soluble iron salts have a significant risk of adverse effects and can cause toxicity due to damage to cellular macromolecules. Delivering iron parenterally has utilised various different molecules to limit this, including dextrans, sucrose, carboxymaltose and, more recently, isomaltoside 1000. One formulation of parenteral iron is iron dextran, which covers the old high molecular weight form (trade name DexFerrum) and the much safer low molecular weight iron dextrans (trade names including Cosmofer and Infed). Iron sucrose has an occurrence of allergic reactions of less than 1 in 1000. A common side effect is taste changes, especially a metallic taste, occurring in between 1 in 10 and 1 in 100 treated patients.
It has a maximum dose of 200 mg on each occasion according to the SPC, but it has been given in doses of 500 mg. Doses can be given up to 3 times a week. Iron carboxymaltose is marketed as Ferinject, Injectafer, and Iroprem in various countries. The most common side effects are headaches, which occur in 3.3%, and hypophosphatemia, which occurs in more than 35%. Iron isomaltoside 1000 (trade name Monofer) is a newer formulation of parenteral iron with a matrix structure that results in very low levels of free iron and labile iron. It can be given at high doses – 20 mg/kg in a single visit – with no upper dose limit. This formulation has the benefit of giving a full iron correction in a single visit.
Follow-up
Follow-up is needed to ensure compliance and to detect adequate response to therapy. The interval of follow up can widely depend on both the method of administration, and the underlying pathology. For parenteral irons it is recommended that there be a period of 4 weeks before repeating blood test to allow the body to utilise the iron. For oral iron, this can take considerably longer, so waiting three months may be appropriate.
See also
Geritol
Human iron metabolism
Lucky iron fish
== References == |
Juvenile myoclonic epilepsy | Juvenile myoclonic epilepsy (JME), also known as Janz syndrome, is a common form of genetic generalized epilepsy (previously known as idiopathic generalized epilepsy), representing 5-10% of all epilepsy cases. This disorder typically first presents between the ages of 12 and 18 with myoclonic seizures, manifesting as sudden, brief, involuntary single or multiple episodes of muscle contraction caused by abnormal, excessive or synchronous neuronal activity in the brain. These events typically occur after awakening from sleep, during the evening, or upon sleep deprivation. JME is also characterized by generalized tonic-clonic seizures, and a minority also have absence seizures. The genetics of JME are complex and rapidly evolving, as over 20 chromosomal loci and multiple genes have been identified thus far. Given the genetic and clinical heterogeneity of JME, some authors have suggested that it should be thought of as a spectrum disorder.
Epidemiology
The prevalence of JME is approximately 0.1-0.2 per 100,000 and constitutes approximately 5-10% of all epilepsies. Some studies suggest that JME is slightly more common in females than males. The onset of symptoms ranges between the ages of 8 and 36 years and has a peak between the ages of 12 and 18 years. Approximately 15% of children with childhood absence epilepsy and juvenile absence epilepsy subsequently develop JME. In most cases, myoclonic jerks precede the first generalized tonic-clonic seizure by a mean of 3.3 years. A long-term population-based study suggested that at 25 years from seizure onset all seizure types in JME resolved in 17% and in 13% only myoclonus remained despite discontinuing medication. Thus, disabling seizures resolve in around one-third of patients.
Signs and symptoms
There are three principal seizure types which may occur in JME: myoclonus, generalized tonic-clonic seizures, and absence seizures. Approximately one-third of patients have all three seizure types. The majority of patients (58.2%) have frequent myoclonic jerks and uncommon generalized tonic-clonic seizures. Absence seizures are believed to be the least common, with studies estimating a prevalence of 10% to as high as 38%. Myoclonic status epilepticus may occur as a complication, but it is uncommon.
Patients typically first present to medical providers following their first generalized tonic-clonic seizure. It is often subsequently reported that the patient had been having myoclonus for several years prior. The first generalized tonic-clonic seizure usually occurs in the context of a particular provoking factor such as sleep deprivation, stress or alcohol consumption. There are other potential provoking factors, such as praxis induction, which refers to the precipitation of seizures or epileptiform discharges in the context of complex cognitive tasks. Patients with JME tend to perform worse on neuropsychological assessments in multiple cognitive domains and are also more likely to have psychiatric comorbidities such as depression and anxiety when compared to control populations. The majority of patients with JME report satisfaction with their health, work, friendships and social life.
Cause
JME is believed to be most often caused by a heterogeneous and complex interaction of multiple genes rather than an unidentified single genetic cause. Thus far, seven genes and over 20 chromosomal loci have been implicated in the pathogenesis of JME. A minority of cases are caused by single genes and are inherited in an autosomal dominant fashion. The majority of the genes which have been associated with JME encode ion channel subunits. More recently, variants in intestinal cell kinase, which is encoded by a gene on chromosome 6p12, were found to be associated with JME. This gene is involved in mitosis, cell-cycle exit and radial neuroblast migration as well as apoptosis. Another gene that is associated with JME, called EFHC1, has similar functions. These findings may explain subtle structural and functional brain abnormalities that are seen in patients with JME. JME is distinct from other forms of genetic generalized epilepsy due to the prominence of myoclonus. There is evidence that patients with JME have a hyperexcitable motor cortex, most pronounced in the morning and after sleep deprivation. In addition, there is evidence that patients with JME have hyperexcitable and hyperconnected cortical networks that are involved in ictogenesis.
Genetics
CACNB4
CACNB4 is a gene that encodes the calcium channel β subunit protein. β subunits are important regulators of calcium channel current amplitude and voltage dependence, and also regulate channel trafficking. In mice, a naturally occurring null mutation leads to the "lethargic" phenotype. This is characterized by ataxia and lethargic behavior at early stages of development, followed within days by the onset of both focal motor seizures and episodes of behavioral immobility which correlate with patterns of cortical spike-and-wave discharges on EEG. A premature-termination mutation, R482X, was identified in a patient with JME, while an additional missense mutation, C104F, was identified in a German family with generalized epilepsy and praxis-induced seizures. The R482X mutation results in increased current amplitudes and an accelerated fast time constant of inactivation. Whether these modest functional differences are responsible for JME remains to be established. The calcium channel β4 subunit (CACNB4) is not strictly considered a putative JME gene because its mutation did not segregate in affected family members, was found in only one member of a JME family from Germany, and has not been replicated.
GABRA1
GABRA1 is a gene that encodes for an α subunit of the GABA A receptor protein, which encodes one of the major inhibitory neurotransmitter receptors. There is one known mutation in this gene that is associated with JME, A322D, which is located in the third segment of the protein. This missense mutation results in channels with reduced peak GABA-evoked currents. Furthermore, the presence of such mutation alters the composition and reduces the expression of wild-type GABAA receptors.
GABRD
GABRD encodes the δ subunit of the GABA receptor, an important constituent of the GABAA receptor mediating tonic inhibition in neurons (extrasynaptic GABA receptors, i.e. receptors that are localized outside of the synapse). Among the mutations that have been reported in this gene, one (R220H) has been identified in a small family with JME. This mutation affects GABAergic transmission by altering the surface expression of the receptor as well as reducing the channel-opening duration.
Myoclonin1/EFHC1
The final known associated gene is EFHC1. Myoclonin1/EFHC1 encodes a protein known to play roles in a wide range of processes, including cell division, neuroblast migration, and synapse/dendrite formation. EFHC1 is expressed in many tissues, including the brain, where it is localized to the soma and dendrites of neurons, particularly in the hippocampal CA1 region, pyramidal neurons in the cerebral cortex, and Purkinje cells in the cerebellum. Four JME-causing mutations have been discovered (D210N, R221H, F229L and D253Y). The mutations do not seem to alter the ability of the protein to colocalize with centrosomes and mitotic spindles, but they induce mitotic spindle defects. Moreover, the mutations impair radial and tangential migration during brain development. As such, a theory has been put forward that JME may be the result of a brain developmental disorder.
Other loci
Three SNP alleles in BRD2, Cx-36 and ME2 and microdeletions in 15q13.3, 15q11.2 and 16p.13.11 also contribute risk to JME.
Diagnosis
Diagnosis is typically made based on patient history. The physical examination is usually normal. The primary basis for diagnosing JME is a good knowledge of the patient's history and the neurologist's familiarity with the myoclonic jerks, which are the hallmark of the syndrome. Additionally, an electroencephalogram (EEG) will show a characteristic pattern of waves and spikes associated with the syndrome, such as generalized 4–6 Hz polyspike and slow wave discharges. These discharges may be evoked by photic stimulation (blinking lights) and/or hyperventilation.
Both a magnetic resonance imaging scan (MRI) and computed tomography scan (CT scan) generally appear normal in JME patients. However a number of quantitative MRI studies have reported focal or regional abnormalities of the subcortical and cortical grey matter, particularly the thalamus and frontal cortex, in JME patients. Positron emission tomography reports in some patients may indicate local deviations in many transmitter systems.
Management
The most effective anti-epileptic medication for JME is valproic acid (Depakote). Due to valproic acid's high incidence of fetal malformations, women of child-bearing age are started on alternative medications such as lamotrigine or levetiracetam. Carbamazepine may aggravate genetic generalized epilepsies, and as such its use should be avoided in JME. Treatment is traditionally lifelong; however, recent follow-up studies of a subgroup of patients showed that they became seizure-free and discontinued anti-epileptic drugs over time, calling this dogma into question. Patients should be warned to avoid sleep deprivation.
History
The first citation of JME was made in 1857 when Théodore Herpin described a 13-year-old boy with myoclonic jerks, which progressed to tonic-clonic seizures three months later. In 1957, Janz and Christian published a journal article describing several patients with JME. The name Juvenile Myoclonic Epilepsy was proposed in 1975 and adopted by the International League Against Epilepsy.
Culture
Stand-up comedian Maisie Adam has JME, and her award-winning show "Vague" (2018) discussed it. The 2018 documentary film Separating The Strains dealt with the use of CBD oil to treat symptoms of JME.
Currently, no scientific evidence exists to support the use of CBD oil to treat symptoms of JME.
See also
Progressive myoclonus epilepsies
Spinal muscular atrophy with progressive myoclonic epilepsy
== References == |
Kaposi's sarcoma | Kaposi's sarcoma (KS) is a type of cancer that can form masses in the skin, in lymph nodes, in the mouth, or in other organs. The skin lesions are usually painless, purple, and may be flat or raised. Lesions can occur singly, multiply in a limited area, or may be widespread. Depending on the sub-type of disease and level of immune suppression, KS may worsen either gradually or quickly. KS is caused by a combination of immune suppression (such as due to HIV/AIDS) and infection by human herpesvirus 8 (HHV8 – also called KS-associated herpesvirus (KSHV)). Four sub-types are described: classic, endemic, immunosuppression therapy-related (also called iatrogenic), and epidemic (also called AIDS-related). Classic KS tends to affect older men in regions where KSHV is highly prevalent (Mediterranean, Eastern Europe, Middle East), is usually slow-growing, and most often affects only the legs. Endemic KS is most common in Sub-Saharan Africa and is more aggressive in children, while older adults present similarly to classic KS. Immunosuppression therapy-related KS generally occurs in people following organ transplantation and mostly affects the skin. Epidemic KS occurs in people with AIDS, and many parts of the body can be affected. KS is diagnosed by tissue biopsy, while the extent of disease may be determined by medical imaging. Treatment is based on the sub-type, whether the condition is localized or widespread, and the person's immune function. Localized skin lesions may be treated by surgery, injections of chemotherapy into the lesion, or radiation therapy. Widespread disease may be treated with chemotherapy or biologic therapy. In those with HIV/AIDS, highly active antiretroviral therapy (HAART) prevents and often treats KS. In certain cases the addition of chemotherapy may be required. With widespread disease, death may occur. The condition is relatively common in people with HIV/AIDS and following organ transplant.
Over 35% of people with AIDS may be affected. KS was first described by Moritz Kaposi in 1872. It became more widely known as one of the AIDS-defining illnesses in the 1980s. KSHV was discovered as a causative agent in 1994.
Signs and symptoms
KS lesions are nodules or blotches that may be red, purple, brown, or black, and are usually papular. They are typically found on the skin, but spread elsewhere is common, especially to the mouth, gastrointestinal tract, and respiratory tract. Growth can range from very slow to explosively fast, and is associated with significant mortality and morbidity. The lesions are painless, but can become cosmetically disfiguring or interfere with organ function.
Skin
Commonly affected areas include the lower limbs, back, face, mouth, and genitalia. The lesions are usually as described above, but may occasionally be plaque-like (often on the soles of the feet) or even involved in skin breakdown with resulting fungating lesions.
Associated swelling may be from either local inflammation or lymphoedema (obstruction of local lymphatic vessels by the lesion). Skin lesions may be quite disfiguring for the patient, and a cause of much psychosocial pathology.
Mouth
The mouth is involved in about 30% of cases, and is the initial site in 15% of AIDS-related KS. In the mouth, the hard palate is most frequently affected, followed by the gums. Lesions in the mouth may be easily damaged by chewing and bleed or develop secondary infection, and even interfere with eating or speaking.
Gastrointestinal tract
Involvement can be common in those with transplant-related or AIDS-related KS, and it may occur in the absence of skin involvement. The gastrointestinal lesions may be silent or cause weight loss, pain, nausea/vomiting, diarrhea, bleeding (either vomiting blood or passing it with bowel movements), malabsorption, or intestinal obstruction.
Respiratory tract
Involvement of the airway can present with shortness of breath, fever, cough, coughing up blood or chest pain, or as an incidental finding on chest x-ray. The diagnosis is usually confirmed by bronchoscopy, when the lesions are directly seen and often biopsied. Kaposi's sarcoma of the lung has a poor prognosis.
Cause
Kaposi sarcoma-associated herpesvirus (KSHV), also called HHV-8, is present in almost 100% of Kaposi sarcoma lesions, whether HIV-related, classic, endemic, or iatrogenic. KSHV encodes oncogenes, microRNAs and circular RNAs that promote cancer cell proliferation and escape from the immune system.
Transmission
In Europe and North America, KSHV is transmitted through saliva; thus, kissing is a risk factor for transmission. Higher rates of transmission among gay and bisexual men have been attributed to "deep kissing" sexual partners with KSHV. An alternative theory suggests that use of saliva as a sexual lubricant might be a major mode of transmission. Prudent advice is to use commercial lubricants when needed and to avoid deep kissing with partners who have KSHV infection or whose status is unknown. KSHV is also transmissible via organ transplantation and blood transfusion. Testing for the virus before these procedures is likely to effectively limit iatrogenic transmission.
Pathology
Despite its name, KS is generally not considered a true sarcoma, which is a tumor arising from mesenchymal tissue. The histogenesis of KS remains controversial. KS may arise as a cancer of lymphatic endothelium and forms vascular channels that fill with blood cells, giving the tumor its characteristic bruise-like appearance. KSHV proteins are uniformly detected in KS cancer cells.

KS lesions contain tumor cells with a characteristic abnormal elongated shape, called spindle cells. The most typical feature of Kaposi sarcoma is the presence of spindle cells forming slits containing red blood cells. Mitotic activity is only moderate and pleomorphism is usually absent. The tumor is highly vascular, containing abnormally dense and irregular blood vessels, which leak red blood cells into the surrounding tissue and give the tumor its dark color. Inflammation around the tumor may produce swelling and pain. Variously sized PAS-positive hyaline bodies are often seen in the cytoplasm or sometimes extracellularly.

The spindle cells of Kaposi sarcoma differentiate toward endothelial cells, probably of lymph vessel rather than blood vessel origin. The consistent immunoreactivity for podoplanin supports the lymphatic nature of the lesion.
Diagnosis
Although KS may be suspected from the appearance of lesions and the patient's risk factors, a definite diagnosis can be made only by biopsy and microscopic examination. Detection of the KSHV protein LANA in tumor cells confirms the diagnosis. In the differential diagnosis, arteriovenous malformations, pyogenic granuloma and other vascular proliferations can be microscopically confused with KS.
Differential diagnosis of Kaposis sarcoma
Source:
Naevus
Histiocytoma
Cryptococcosis
Histoplasmosis
Leishmaniasis
Pneumocystis lesions
Dermatophytosis
Angioma
Bacillary angiomatosis
Pyogenic granuloma
Melanoma
Classification
HHV-8 is responsible for all varieties of KS. Since Moritz Kaposi first described the cancer, the disease has been reported in five separate clinical settings, with different presentations, epidemiology, and prognoses. All forms are associated with KSHV infection and are different manifestations of the same disease, but they differ in clinical aggressiveness, prognosis, and treatment.
Classic Kaposi sarcoma most commonly appears early on the toes and soles as reddish, violaceous, or bluish-black macules and patches that spread and coalesce to form nodules or plaques. A small percentage of these patients may have visceral lesions. In most cases the treatment involves surgical removal of the lesion. The condition tends to be indolent and chronic, affecting elderly men from the Mediterranean region, Arab countries, or of Eastern European descent. Israeli Jews have a higher rate of KSHV/HHV-8 infection than European peoples.
Endemic KS has two types. Although it may occur worldwide, it was originally described in young African people, mainly those from sub-Saharan Africa. This variant is not related to HIV infection and is a more aggressive disease that infiltrates the skin extensively. African lymphadenopathic Kaposi sarcoma is aggressive, occurring in children under 10 years of age, presenting with lymph node involvement, with or without skin lesions.
African cutaneous Kaposi sarcoma presents with nodular, infiltrative, vascular masses on the extremities, mostly in men between the ages of 20 and 50, and is endemic in tropical Africa.
Immunosuppression-associated Kaposi sarcoma had been described, but only rarely until the advent of calcineurin inhibitors (such as ciclosporin, which inhibit T-cell function) for transplant patients in the 1980s, when its incidence grew rapidly. The tumor arises either when an HHV-8-infected organ is transplanted into someone who has not been exposed to the virus or when the transplant recipient already harbors pre-existing HHV-8 infection. Unlike classic Kaposi sarcoma, the site of presentation is more variable.
AIDS-associated Kaposi sarcoma typically presents with cutaneous lesions that begin as one or several red to purple-red macules, rapidly progressing to papules, nodules, and plaques, with a predilection for the head, back, neck, trunk, and mucous membranes. In more advanced cases, lesions can be found in the stomach and intestines, the lymph nodes, and the lungs. Compared to other forms of KS, KS-AIDS stimulated more interest in KS research, as it was one of the first illnesses associated with AIDS and was first described in 1981. This form of KS is over 300 times more common in AIDS patients than in renal transplant recipients. In this setting, HHV-8 is sexually transmitted among people also at risk for sexually transmitted HIV infection.
Prevention
Blood tests to detect antibodies against KSHV have been developed and can be used to determine whether a person is at risk of transmitting the infection to their sexual partner, or whether an organ is infected before transplantation. However, these tests are not available except as research tools, so there is little screening for persons at risk of becoming infected with KSHV, such as transplant recipients.
Treatment
Kaposi sarcoma is not curable, but it can often be treated effectively for many years. In KS associated with immunodeficiency or immunosuppression, treating the cause of the immune system dysfunction can slow or stop the progression of KS. In 40% or more of patients with AIDS-associated Kaposi sarcoma, the lesions will shrink upon first starting highly active antiretroviral therapy (HAART). Therefore, HAART is considered the cornerstone of therapy in AIDS-associated Kaposi sarcoma. However, in a certain percentage of such people, Kaposi sarcoma may recur after many years on HAART, especially if HIV is not completely suppressed.
People with a few local lesions can often be treated with local measures such as radiation therapy or cryosurgery. Weak evidence suggests that antiretroviral therapy in combination with chemotherapy is more effective than either of those two therapies individually. Limited basic and clinical evidence suggests that topical beta-blockers, such as timolol, may induce regression of localized lesions in classic as well as HIV-associated Kaposi sarcoma. In general, surgery is not recommended, as Kaposi sarcoma can appear in wound edges. More widespread disease, or disease affecting internal organs, is generally treated with systemic therapy with interferon alpha, liposomal anthracyclines (such as liposomal doxorubicin or daunorubicin), thalidomide, or paclitaxel. Alitretinoin, applied to the lesion, may be used when the lesion is not improving with standard treatment of HIV/AIDS and chemotherapy or radiation therapy cannot be used.
Epidemiology
With the decrease in the death rate among people with HIV/AIDS receiving new treatments in the 1990s, the rates and severity of epidemic KS also decreased. However, the number of people living with HIV/AIDS is increasing in the United States, and it is possible that the number of people with AIDS-associated Kaposi sarcoma will again rise as these people live longer with HIV infection.
Society
Because of their highly visible nature, external lesions are sometimes the presenting symptom of AIDS. Kaposi sarcoma entered the awareness of the general public with the release of the film Philadelphia, in which the main character was fired after his employers discovered, from his visible lesions, that he was HIV-positive. By the time KS lesions appear, the immune system has likely already been severely weakened. It has been reported that only 6% of men who have sex with men are aware that KS is caused by a virus different from HIV. Thus, there is little community effort to prevent KSHV infection. Likewise, no systematic screening of organ donations is in place.
In people with AIDS, Kaposi sarcoma is considered an opportunistic infection, a disease that can gain a foothold in the body because the immune system has been weakened. With the rise of HIV/AIDS in Africa, where KSHV is widespread, KS has become the most frequently reported cancer in some countries.
References
External links
Kaposi sarcoma photo library at Dermnet
Keloid
Keloid, also known as keloid disorder and keloidal scar, is the formation of a type of scar which, depending on its maturity, is composed mainly of either type III (early) or type I (late) collagen. It is the result of an overgrowth of granulation tissue (collagen type III) at the site of a healed skin injury, which is then slowly replaced by collagen type I. Keloids are firm, rubbery lesions or shiny, fibrous nodules, and can vary in color from pink to the color of the person's skin, or from red to dark brown. A keloid scar is benign and not contagious, but is sometimes accompanied by severe itchiness, pain, and changes in texture. In severe cases, it can affect movement of the skin. Worldwide, men and women of African, Asian, Hispanic and European descent can develop these raised scars. In the United States, keloid scars are seen 15 times more frequently in people of sub-Saharan African descent than in people of European descent. There is a higher tendency to develop keloids among those with a family history of keloids and in people between the ages of 10 and 30 years.
Keloids should not be confused with hypertrophic scars, which are raised scars that do not grow beyond the boundaries of the original wound.
Signs and symptoms
Keloids expand in claw-like growths over normal skin. They can cause needle-like pain or itching, with the degree of sensation varying from person to person. Keloids form within scar tissue. Collagen, used in wound repair, tends to overgrow in this area, sometimes producing a lump many times larger than the original scar. They can also range in color from pink to red. Although they usually occur at the site of an injury, keloids can also arise spontaneously. They can occur at the site of a piercing and even from something as simple as a pimple or scratch. They can occur as a result of severe acne or chickenpox scarring, infection at a wound site, repeated trauma to an area, excessive skin tension during wound closure or a foreign body in a wound. Keloids can sometimes be sensitive to chlorine. If a keloid appears while someone is still growing, the keloid can continue to grow as well.
Location
Keloids can develop in any place where skin trauma has occurred. They can be the result of pimples, insect bites, scratching, burns, or other skin injury. Keloid scars can develop after surgery.
They are more common in some sites, such as the central chest (from a sternotomy), the back and shoulders (usually resulting from acne), and the ear lobes (from ear piercings). They can also occur on other body piercings. The most common spots are earlobes, arms, the pelvic region, and over the collar bone.
Cause
Most skin injury types can contribute to scarring. This includes burns, acne scars, chickenpox scars, ear piercing, scratches, surgical incisions, and vaccination sites.
According to the (US) National Center for Biotechnology Information, keloid scarring is common in young people between the ages of 10 and 20. Studies have shown that those with darker complexions are at a higher risk of keloid scarring as a result of skin trauma. Keloids occur in 15–20% of individuals with sub-Saharan African, Asian or Latino ancestry, and significantly less often in those of a Caucasian background. Although it was previously believed that people with albinism did not get keloids, a recent report described the incidence of keloids in Africans with albinism. Keloids tend to have a genetic component, which means one is more likely to have keloids if one or both parents has them. No single gene has yet been identified as a causative factor in keloid scarring, but several susceptibility loci have been discovered, most notably on chromosome 15.
Genetics
People who have ancestry from Sub-Saharan Africa, Asia, or Latin America are more likely to develop a keloid. Among ethnic Chinese in Asia, the keloid is the most common skin condition. In the United States, keloids are more common in African Americans and Hispanic Americans than European Americans. Those who have a family history of keloids are also susceptible since about 1/3 of people who get keloids have a first-degree blood relative (mother, father, sister, brother, or child) who also gets keloids. This family trait is most common in people of African and/or Asian descent.
Development of keloids among twins also lends credibility to the existence of a genetic susceptibility to develop keloids. Marneros et al. reported four sets of identical twins with keloids; Ramakrishnan et al. also described a pair of twins who developed keloids at the same time after vaccination. Case series have reported clinically severe forms of keloids in individuals with a positive family history and black African ethnic origin.
Pathology
Histologically, keloids are fibrotic tumors characterized by a collection of atypical fibroblasts with excessive deposition of extracellular matrix components, especially collagen, fibronectin, elastin, and proteoglycans. Generally, they contain relatively acellular centers and thick, abundant collagen bundles that form nodules in the deep dermal portion of the lesion. Keloids present a therapeutic challenge, as these lesions can cause significant pain, pruritus (itching), and physical disfigurement. They may not improve in appearance over time and can limit mobility if located over a joint. Keloids affect both sexes equally, although the incidence in young female patients has been reported to be higher than in young males, probably reflecting the greater frequency of earlobe piercing among women.
The frequency of occurrence is 15 times higher in highly pigmented people, and people of African descent have an increased risk of keloid occurrence.
Treatments
Prevention of keloid scars in patients with a known predisposition to them includes preventing unnecessary trauma or surgery (such as ear piercing and elective mole removal) whenever possible. Any skin problems in predisposed individuals (e.g., acne, infections) should be treated as early as possible to minimize areas of inflammation.
Treatments (both preventive and therapeutic) available include pressure therapy, silicone gel sheeting, intralesional triamcinolone acetonide (TAC), cryosurgery (freezing), radiation, laser therapy (PDL), interferon (IFN), 5-fluorouracil (5-FU) and surgical excision, as well as a multitude of extracts and topical agents. Appropriate treatment of a keloid scar is age-dependent: radiotherapy, anti-metabolites and corticosteroids are not recommended for use in children, in order to avoid harmful side effects such as growth abnormalities. In adults, corticosteroids combined with 5-FU and PDL in a triple therapy enhance results and diminish side effects.

Cryotherapy (or cryosurgery) refers to the application of extreme cold to treat keloids. This treatment method is easy to perform, effective and safe, and has the lowest chance of recurrence.

Surgical excision is currently still the most common treatment for a significant number of keloid lesions. However, when used as the sole form of treatment, there is a high recurrence rate of between 70 and 100%. It has also been known to cause larger lesion formation on recurrence. While not always successful alone, surgical excision combined with other therapies dramatically decreases the recurrence rate. Examples of these therapies include, but are not limited to, radiation therapy, pressure therapy and laser ablation. Pressure therapy following surgical excision has shown promising results, especially in keloids of the ear and earlobe. The mechanism by which pressure therapy works is unknown at present, but many patients with keloid scars and lesions have benefited from it.

Intralesional injection with a corticosteroid such as Kenalog (triamcinolone acetonide) does appear to aid in the reduction of fibroblast activity, inflammation and pruritus. Tea tree oil, salt and other topical oils have no effect on keloid lesions.

A 2022 systematic review included multiple studies on laser therapy for treating keloid scars. There was not enough evidence for the review authors to determine whether laser therapy was more effective than other treatments. They were also unable to conclude whether laser therapy leads to more harm than benefit compared with no treatment or other kinds of treatment.
Epidemiology
Persons of any age can develop a keloid. Children under 10 are less likely to develop keloids, even from ear piercing. Keloids may also develop from pseudofolliculitis barbae; continued shaving when one has razor bumps will cause irritation of the bumps, infection, and, over time, keloid formation. Persons with razor bumps are advised to stop shaving so the skin can repair itself before undertaking any form of hair removal. The tendency to form keloids is speculated to be hereditary. Keloids can appear to grow over time without the skin even being pierced, almost like a slow tumorous growth; the reason for this tendency is unknown.
Extensive burns, either thermal or radiological, can lead to unusually large keloids; these are especially common in firebombing casualties, and were a signature effect of the atomic bombings of Hiroshima and Nagasaki.
The true incidence and prevalence of keloids in the United States are not known. Indeed, there has never been a population study to assess the epidemiology of this disorder. In his 2001 publication, Marneros stated that "reported incidence of keloids in the general population ranges from a high of 16% among the adults in the Democratic Republic of the Congo to a low of 0.09% in England," quoting from Bloom's 1956 publication on the heredity of keloids. Clinical observations show that the disorder is more common among sub-Saharan Africans, African Americans and Asians, with unreliable and very wide estimated prevalence rates ranging from 4.5 to 16%.
History
Keloids were described by Egyptian surgeons around 1700 BC, recorded in the Smith papyrus, regarding surgical techniques. Baron Jean-Louis Alibert (1768–1837) identified the keloid as an entity in 1806. He called them cancroïde, later changing the name to chéloïde to avoid confusion with cancer. The word is derived from the Ancient Greek χηλή, chele, meaning "crab pincers", and the suffix -oid, meaning "like".
The famous American Civil War-era photograph "Whipped Peter" depicts an escaped former slave with extensive keloid scarring as a result of numerous brutal beatings from his former overseer.
Intralesional corticosteroid injections were introduced as a treatment in the mid-1960s as a method to attenuate scarring. Pressure therapy has been used for prophylaxis and treatment of keloids since the 1970s. Topical silicone gel sheeting was introduced as a treatment in the early 1980s.
References
Further reading
Roßmann, Nico (2005). Beitrag zur Pathogenese des Keloids und seine Beeinflussbarkeit durch Steroidinjektionen [Contribution to the pathogenesis of the keloid and its influence by steroid injections] (PhD Thesis) (in German). OCLC 179740918.
Ogawa, Rei; Mitsuhashi, Kiyoshi; Hyakusoku, Hiko; Miyashita, Tuguhiro (2003). "Postoperative Electron-Beam Irradiation Therapy for Keloids and Hypertrophic Scars: Retrospective Study of 147 Cases Followed for More Than 18 Months". Plastic and Reconstructive Surgery. 111 (2): 547–53, discussion 554–5. doi:10.1097/01.PRS.0000040466.55214.35. PMID 12560675. S2CID 8411788.
Okada, Emi; Maruyama, Yu (2007). "Are Keloids and Hypertrophic Scars Caused by Fungal Infection?". Plastic and Reconstructive Surgery. 120 (3): 814–5. doi:10.1097/01.prs.0000278813.23244.3f. PMID 17700144.
External links
Keratoconus
Keratoconus (KC) is a disorder of the eye that results in progressive thinning of the cornea. This may result in blurry vision, double vision, nearsightedness, irregular astigmatism, and light sensitivity, leading to poor quality of life. Usually both eyes are affected. In more severe cases, scarring or a ring may be seen within the cornea. While the cause is unknown, it is believed to occur due to a combination of genetic, environmental, and hormonal factors. Patients with a parent, sibling, or child who has keratoconus have a 15 to 67 times higher risk of developing corneal ectasia compared to patients with no affected relatives. Proposed environmental factors include rubbing the eyes and allergies. The underlying mechanism involves a change of the cornea to a cone shape. Diagnosis is most often by topography, which measures the curvature of the cornea and creates a colored "map" of it. Keratoconus causes very distinctive changes in the appearance of these maps, which allows doctors to make the diagnosis.
Initially the condition can typically be corrected with glasses or soft contact lenses. As the disease progresses, special contact lenses (such as scleral contact lenses) may be required. In most people the disease stabilizes after a few years without severe vision problems. In 2016, the FDA approved corneal collagen cross-linking to halt the progression of keratoconus. In some cases, when the cornea becomes dangerously thin or when sufficient vision can no longer be achieved by contact lenses due to steepening of the cornea, scarring or lens intolerance, corneal cross-linking is not an option and a corneal transplant may be required.
Keratoconus affects about 1 in 2,000 people. However, some estimates suggest that the incidence may be as high as 1 in 400 individuals. It occurs most commonly in late childhood to early adulthood. While it occurs in all populations it may be more frequent in certain ethnic groups such as those of Asian descent. The word is from the Greek kéras meaning cornea and the Latin cōnus meaning cone.
Signs and symptoms
People with early keratoconus often notice a minor blurring or distortion of their vision, as well as an increased sensitivity to light, and visit their clinician seeking corrective lenses for reading or driving. At early stages, the symptoms of keratoconus may be no different from those of any other refractive defect of the eye. As the disease progresses, vision deteriorates, sometimes rapidly due to irregular astigmatism. Visual acuity becomes impaired at all distances, and night vision is often poor. Some individuals have vision in one eye that is markedly worse than the other eye. The disease is often bilateral, though asymmetrical. Some develop photophobia (sensitivity to bright light), eye strain from squinting in order to read, or itching in the eye, but there is normally little or no sensation of pain. It may cause luminous objects to appear as cylindrical pipes with the same intensity at all points.
The classic symptom of keratoconus is the perception of multiple "ghost" images, known as monocular polyopia. This effect is most clearly seen with a high contrast field, such as a point of light on a dark background. Instead of seeing just one point, a person with keratoconus sees many images of the point, spread out in a chaotic pattern. This pattern does not typically change from day to day, but over time, it often takes on new forms. People also commonly notice streaking and flaring distortion around light sources. Some even notice the images moving relative to one another in time with their heartbeat.
The predominant optical aberration of the eye in keratoconus is coma. The visual distortion experienced by the person comes from two sources, one being the irregular deformation of the surface of the cornea, and the other being scarring that occurs on its exposed highpoints. These factors act to form regions on the cornea that map an image to different locations on the retina. The effect can worsen in low light conditions, as the dark-adapted pupil dilates to expose more of the irregular surface of the cornea.
Genetics
Six genes have been found to be associated with the condition: BANP-ZNF469, COL4A4, FOXO1, FNDC3B, IMMP2L and RXRA-COL5A1. Others likely also exist. Patients with a parent, sibling, or child who has keratoconus have a 15 to 67 times higher risk of developing corneal ectasia compared to patients with no affected relatives.
Pathophysiology
Despite considerable research, the cause of keratoconus remains unclear. Several sources suggest that keratoconus likely arises from a number of different factors: genetic, environmental or cellular, any of which may form the trigger for the onset of the disease. Once initiated, the disease normally develops by progressive dissolution of Bowman's layer, which lies between the corneal epithelium and stroma. As the two come into contact, cellular and structural changes in the cornea adversely affect its integrity and lead to the bulging and scarring characteristic of the disorder. Within any individual keratoconic cornea, regions of degenerative thinning may be found coexisting with regions undergoing wound healing. Scarring appears to be an aspect of the corneal degradation; however, a recent, large, multicenter study suggests abrasion by contact lenses may increase the likelihood of this finding by a factor of over two.

A number of studies have indicated that keratoconic corneas show signs of increased activity by proteases, a class of enzymes that break some of the collagen cross-linkages in the stroma, with a simultaneous reduced expression of protease inhibitors. Other studies have suggested that reduced activity by the enzyme aldehyde dehydrogenase may be responsible for a build-up of free radicals and oxidising species in the cornea. Whatever the pathogenetic process, the damage caused by activity within the cornea likely results in a reduction in its thickness and biomechanical strength. At an ultrastructural level, the weakening of the corneal tissue is associated with a disruption of the regular arrangement of the collagen layers and collagen fibril orientation.
While keratoconus is considered a noninflammatory disorder, one study shows that wearing rigid contact lenses leads to overexpression of proinflammatory cytokines, such as IL-6, TNF-alpha, ICAM-1, and VCAM-1, in the tear fluid.

A genetic predisposition to keratoconus has been observed, with the disease running in certain families, and incidences of concordance reported in identical twins. The frequency of occurrence in close family members is not clearly defined, though it is known to be considerably higher than that in the general population, and studies have obtained estimates ranging between 6% and 19%. Two studies involving isolated, largely homogenetic communities have contrarily mapped putative gene locations to chromosomes 16q and 20q. Most genetic studies agree on an autosomal dominant model of inheritance. A rare, autosomal dominant form of severe keratoconus with anterior polar cataract is caused by a mutation in the seed region of mir-184, a microRNA that is highly expressed in the cornea and anterior lens. Keratoconus is diagnosed more often in people with Down syndrome, though the reasons for this link have not yet been determined.

Keratoconus has been associated with atopic diseases, which include asthma, allergies, and eczema, and it is not uncommon for several or all of these diseases to affect one person. Keratoconus is also associated with Alport syndrome, Down syndrome and Marfan syndrome. A number of studies suggest vigorous eye rubbing contributes to the progression of keratoconus, and people should be discouraged from the practice. Keratoconus differs from post-LASIK ectasia, which is caused by LASIK eye surgery and has been associated with excessive removal of the eye's stromal bed tissue during surgery.
Diagnosis
Prior to any physical examination, the diagnosis of keratoconus frequently begins with an ophthalmologist's or optometrist's assessment of the person's medical history, particularly the chief complaint and other visual symptoms, the presence of any history of ocular disease or injury that might affect vision, and the presence of any family history of ocular disease. An eye chart, such as a standard Snellen chart of progressively smaller letters, is then used to determine the person's visual acuity. The eye examination may proceed to measurement of the localized curvature of the cornea with a manual keratometer, with detection of irregular astigmatism suggesting a possibility of keratoconus. Severe cases can exceed the instrument's measuring ability. A further indication can be provided by retinoscopy, in which a light beam is focused on the person's retina and the reflection, or reflex, observed as the examiner tilts the light source back and forth. Keratoconus is amongst the ophthalmic conditions that exhibit a scissor reflex action of two bands moving toward and away from each other like the blades of a pair of scissors.

If keratoconus is suspected, the ophthalmologist or optometrist will search for other characteristic findings of the disease by means of slit-lamp examination of the cornea. An advanced case is usually readily apparent to the examiner, and can provide for an unambiguous diagnosis prior to more specialized testing. Under close examination, a ring of yellow-brown to olive-green pigmentation known as a Fleischer ring can be observed in around half of keratoconic eyes. The Fleischer ring, caused by deposition of the iron oxide hemosiderin within the corneal epithelium, is subtle and may not be readily detectable in all cases, but becomes more evident when viewed under a cobalt blue filter. Similarly, around 50% of subjects exhibit Vogt's striae, fine stress lines within the cornea caused by stretching and thinning.
The striae temporarily disappear while slight pressure is applied to the eyeball. A highly pronounced cone can create a V-shaped indentation in the lower eyelid when the person's gaze is directed downwards, known as Munson's sign. Other clinical signs of keratoconus will normally have presented themselves long before Munson's sign becomes apparent, and so this finding, though a classic sign of the disease, tends not to be of primary diagnostic importance.
A handheld keratoscope, sometimes known as "Placido's disk", can provide a simple noninvasive visualization of the surface of the cornea by projecting a series of concentric rings of light onto the cornea. A more definitive diagnosis can be obtained using corneal topography, in which an automated instrument projects the illuminated pattern onto the cornea and determines its topography from analysis of the digital image. The topographical map indicates any distortions or scarring in the cornea, with keratoconus revealed by a characteristic steepening of curvature that is usually below the centerline of the eye. The technique can record a snapshot of the degree and extent of the deformation as a benchmark for assessing its rate of progression. It is of particular value in detecting the disorder in its early stages when other signs have not yet presented.
Stages
Once keratoconus has been diagnosed, its degree may be classified by several metrics:
The steepness of greatest curvature, from mild (< 45 D) to advanced (up to 52 D) or severe (> 52 D);
The morphology of the cone: nipple (small: 5 mm and near-central), oval (larger, below-center and often sagging), or globus (more than 75% of cornea affected);
The corneal thickness, from mild (> 506 μm) to advanced (< 446 μm).

Increasing use of corneal topography has led to a decline in use of these terms.
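The threshold-based grading above can be sketched as two simple classifiers. This is an illustrative sketch, not clinical software: the curvature and thickness cut-offs come from the text, while the function names and the "moderate" label for thicknesses between the two stated thresholds are assumptions.

```python
def grade_curvature(diopters: float) -> str:
    """Grade keratoconus by steepest corneal curvature, in diopters (D)."""
    if diopters < 45:
        return "mild"
    elif diopters <= 52:
        return "advanced"
    return "severe"

def grade_thickness(microns: float) -> str:
    """Grade keratoconus by corneal thickness, in micrometers."""
    if microns > 506:
        return "mild"
    elif microns < 446:
        return "advanced"
    # The text names no grade between 446 and 506 μm; "moderate" is assumed.
    return "moderate"

print(grade_curvature(47.5))  # advanced
print(grade_thickness(430))   # advanced
```

Note that the two metrics need not agree for a given eye, which is one reason a single composite grade is hard to define and topography-based description has displaced these terms.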
Treatment
Lenses
In early stages of keratoconus, glasses or soft contact lenses can suffice to correct the mild astigmatism. As the condition progresses, these may no longer provide the person with a satisfactory degree of visual acuity, and most practitioners will move to manage the condition with rigid gas-permeable (RGP) contact lenses. RGP lenses provide a good level of visual correction, but do not arrest progression of the condition.

In people with keratoconus, rigid contact lenses improve vision by means of tear fluid filling the gap between the irregular corneal surface and the smooth regular inner surface of the lens, thereby creating the effect of a smoother cornea. Many specialized types of contact lenses have been developed for keratoconus, and affected people may seek out both doctors specialized in conditions of the cornea and contact lens fitters who have experience managing people with keratoconus. The irregular cone presents a challenge, and the fitter will endeavor to produce a lens with the optimal contact, stability, and steepness. Some trial-and-error fitting may prove necessary.
Hybrid lenses
Traditionally, contact lenses for keratoconus have been the hard or RGP variety, although manufacturers have also produced specialized soft or hydrophilic lenses and, most recently, silicone hydrogel lenses. A soft lens has a tendency to conform to the conical shape of the cornea, thus diminishing its effect. To counter this, hybrid lenses have been developed that are hard in the centre and encompassed by a soft skirt. However, soft or earlier generation hybrid lenses did not prove effective for every person. Early generation lenses have been discontinued. The fourth generation of hybrid lens technology has improved, giving more people an option that combines the comfort of a soft lens with the visual acuity of an RGP lens.
Scleral lenses
Scleral lenses are sometimes prescribed for cases of advanced or very irregular keratoconus; these lenses cover a greater proportion of the surface of the eye and hence can offer improved stability. Easier handling can find favor with people with reduced dexterity, such as the elderly.
Piggybacking
Some people find good vision correction and comfort with a "piggyback" lens combination, in which RGP lenses are worn over soft lenses, both providing a degree of vision correction. One form of piggyback lens makes use of a soft lens with a countersunk central area to accept the rigid lens. Fitting a piggyback lens combination requires experience on the part of the lens fitter, and tolerance on the part of the person with keratoconus.
Surgery
Corneal transplant
Between 11% and 27% of cases of keratoconus will progress to a point where vision correction is no longer possible, thinning of the cornea becomes excessive, or scarring as a result of contact lens wear causes problems of its own, and a corneal transplantation or penetrating keratoplasty becomes required. Keratoconus is the most common grounds for conducting a penetrating keratoplasty, generally accounting for around a quarter of such procedures. The corneal transplant surgeon trephines a lenticule of corneal tissue and then grafts the donor cornea to the existing eye tissue, usually using a combination of running and individual sutures. The cornea does not have a direct blood supply, so the donor tissue is not required to be blood type matched. Eye banks check the donor corneas for any disease or cellular irregularities.
The acute recovery period can take four to six weeks, and full postoperative vision stabilization often takes a year or more, but most transplants are very stable in the long term. The National Keratoconus Foundation reports that penetrating keratoplasty has the most successful outcome of all transplant procedures, and when performed for keratoconus in an otherwise healthy eye, its success rate can be 95% or greater. The sutures used usually dissolve over a period of three to five years, but individual sutures can be removed during the healing process if they are causing irritation to the person.
In the US, corneal transplants (also known as corneal grafts) for keratoconus are usually performed under sedation as outpatient surgery. In other countries, such as Australia and the UK, the operation is commonly performed with the person undergoing a general anaesthetic. All cases require a careful follow-up with an eye doctor (ophthalmologist or optometrist) for a number of years. Frequently, vision is greatly improved after the surgery, but even if the actual visual acuity does not improve, because the cornea is a more normal shape after the healing is completed, people can more easily be fitted with corrective lenses. Complications of corneal transplants are mostly related to vascularization of the corneal tissue and rejection of the donor cornea. Vision loss is very rare, though difficult-to-correct vision is possible. When rejection is severe, repeat transplants are often attempted, and are frequently successful. Keratoconus will not normally reoccur in the transplanted cornea; incidences of this have been observed, but are usually attributed to incomplete excision of the original cornea or inadequate screening of the donor tissue. The long-term outlook for corneal transplants performed for keratoconus is usually favorable once the initial healing period is completed and a few years have elapsed without problems.
One way of reducing the risk of rejection is to use a technique called deep anterior lamellar keratoplasty (DALK). In a DALK graft, only the outermost epithelium and the main bulk of the cornea, the stroma, are replaced; the person's rearmost endothelium layer and Descemet's membrane are left, giving some additional structural integrity to the postgraft cornea. Furthermore, it is possible to transplant freeze-dried donor tissue. The freeze-drying process ensures this tissue is dead, so there is no chance of rejection. Research from two trials in Iran provides low to moderate evidence that graft rejection is more likely to occur in penetrating keratoplasty than in DALK, though the likelihood of graft failure was similar with both procedures.
Epikeratophakia
Rarely, a nonpenetrating keratoplasty known as an epikeratophakia (or epikeratoplasty) may be performed in cases of keratoconus. The corneal epithelium is removed and a lenticule of donor cornea is grafted on top of it. The procedure requires a greater level of skill on the part of the surgeon, and is less frequently performed than a penetrating keratoplasty, as the outcome is generally less favorable. However, it may be seen as an option in a number of cases, particularly for young people.
Corneal ring implants
A possible surgical alternative to corneal transplant is the insertion of intrastromal corneal ring segments. A small incision is made in the periphery of the cornea and two thin arcs of polymethyl methacrylate are slid between the layers of the stroma on either side of the pupil before the incision is closed by a suture. The segments push out against the curvature of the cornea, flattening the peak of the cone and returning it to a more natural shape. The procedure offers the benefit of being reversible and even potentially exchangeable as it involves no removal of eye tissue.

Corneal intrastromal implantation surgery involving the implantation of a full ring is also available as a treatment option for keratoconus. Evidence supports that the full-ring implant improves vision outcomes for at least a year.
Cross-linking
Corneal collagen cross-linking is a developing treatment that aims to strengthen the cornea; however, according to a 2015 Cochrane review, there is insufficient evidence to determine whether it is useful in keratoconus. In 2016, the FDA approved cross-linking surgery as a treatment for keratoconus and recommended that a registry system be set up to evaluate the long-term treatment effect. The Save Sight Keratoconus Registry is an international database of keratoconus patients that is tracking outcomes of cross-linking in patients with keratoconus.
Radial keratotomy
Radial keratotomy is a refractive surgery procedure where the surgeon makes a spoke-like pattern of incisions into the cornea to modify its shape. This early surgical option for myopia has been largely superseded by LASIK and other similar procedures. LASIK is absolutely contraindicated in keratoconus and other corneal thinning conditions as removal of corneal stromal tissue will further damage an already thin and weak cornea. For similar reasons, radial keratotomy has also generally not been used for people with keratoconus.
Prognosis
Patients with keratoconus typically present initially with mild astigmatism and myopia, commonly at the onset of puberty, and are diagnosed by the late teenage years or early 20s. The disease can, however, present or progress at any age; in rare cases, keratoconus can present in children or not until later adulthood. A diagnosis of the disease at an early age may indicate a greater risk of severity in later life. Patients' vision will seem to fluctuate over a period of months, driving them to change lens prescriptions frequently, but as the condition worsens, contact lenses are required in the majority of cases. The course of the disorder can be quite variable, with some patients remaining stable for years or indefinitely, while others progress rapidly or experience occasional exacerbations over a long and otherwise steady course. Most commonly, keratoconus progresses for a period of 10 to 20 years before the course of the disease generally ceases in the third and fourth decades of life.
Corneal hydrops
In advanced cases, bulging of the cornea can result in a localized rupture of Descemet's membrane, an inner layer of the cornea. Aqueous humor from the eye's anterior chamber seeps into the cornea before Descemet's membrane reseals. The patient experiences pain and a sudden severe clouding of vision, with the cornea taking on a translucent milky-white appearance known as a corneal hydrops.

Although disconcerting to the patient, the effect is normally temporary and after a period of six to eight weeks, the cornea usually returns to its former transparency. The recovery can be aided nonsurgically by bandaging with an osmotic saline solution. Although a hydrops usually causes increased scarring of the cornea, occasionally it will benefit a patient by creating a flatter cone, aiding the fitting of contact lenses. Corneal transplantation is not usually indicated during corneal hydrops.
Epidemiology
The National Eye Institute reports keratoconus is the most common corneal dystrophy in the United States, affecting about one in 2,000 Americans, but some reports place the figure as high as one in 500. The inconsistency may be due to variations in diagnostic criteria, with some cases of severe astigmatism interpreted as those of keratoconus, and vice versa. A long-term study found a mean incidence rate of 2.0 new cases per 100,000 population per year. Some studies have suggested a higher prevalence amongst females, or that people of South Asian ethnicity are 4.4 times as likely to develop keratoconus as Caucasians, and are also more likely to be affected with the condition earlier.

Keratoconus is normally bilateral (affecting both eyes) although the distortion is usually asymmetric and is rarely completely identical in both corneas. Unilateral cases tend to be uncommon, and may in fact be very rare if a very mild condition in the better eye is simply below the limit of clinical detection. It is common for keratoconus to be diagnosed first in one eye and not until later in the other. As the condition then progresses in both eyes, the vision in the earlier-diagnosed eye will often remain poorer than that in its fellow.
History
The German oculist Burchard Mauchart provided an early description in a 1748 doctoral dissertation of a case of keratoconus, which he called staphyloma diaphanum. However, it was not until 1854 that British physician John Nottingham (1801–1856) clearly described keratoconus and distinguished it from other ectasias of the cornea. Nottingham reported the cases of "conical cornea" that had come to his attention, and described several classic features of the disease, including polyopia, weakness of the cornea, and difficulty matching corrective lenses to the patients' vision. In 1859, British surgeon William Bowman used an ophthalmoscope (recently invented by Hermann von Helmholtz) to diagnose keratoconus, and described how to angle the instrument's mirror so as to best see the conical shape of the cornea. Bowman also attempted to restore vision by pulling on the iris with a fine hook inserted through the cornea and stretching the pupil into a vertical slit, like that of a cat. He reported that he had had a measure of success with the technique, restoring vision to an 18-year-old woman who had previously been unable to count fingers at a distance of 8 inches (20 cm).
By 1869, when the pioneering Swiss ophthalmologist Johann Horner wrote a thesis entitled On the treatment of keratoconus, the disorder had acquired its current name. The treatment at that time, endorsed by the leading German ophthalmologist Albrecht von Graefe, was an attempt to physically reshape the cornea by chemical cauterization with a silver nitrate solution and application of a miosis-causing agent with a pressure dressing. In 1888, the treatment of keratoconus became one of the first practical applications of the then newly invented contact lens, when the French physician Eugène Kalt manufactured a glass scleral shell that improved vision by compressing the cornea into a more regular shape. Since the start of the 20th century, research on keratoconus has both improved understanding of the disease and greatly expanded the range of treatment options. The first successful corneal transplantation to treat keratoconus was done in 1936 by Ramón Castroviejo.
Society and culture
According to the findings of the Collaborative Longitudinal Evaluation of Keratoconus (CLEK), people who have keratoconus could be expected to pay more than $25,000 over their lifetime post-diagnosis, with a standard deviation of $19,396. There is limited evidence on the costs of corneal cross-linking; a cost-effectiveness study estimated the cost of the total treatment for one person as £928 ($1,392 US) in the UK National Health Service, but this may be as high as $6,500 per eye in other countries. A 2013 cost-benefit analysis by the Lewin Group for the Eye Bank Association of America estimated an average cost of $16,500 for each corneal transplant.
Related disorders
Several other corneal ectatic disorders also cause thinning of the cornea:
Keratoglobus is a very rare condition that causes corneal thinning primarily at the margins, resulting in a spherical, slightly enlarged eye. It may be genetically related to keratoconus.
Pellucid marginal degeneration causes thinning of a narrow (1–2 mm) band of the cornea, usually along the inferior corneal margin. It causes irregular astigmatism that, in the early stages of the disease, can be corrected by spectacles. Differential diagnosis may be made by slit-lamp examination.
Posterior keratoconus, a distinct disorder despite its similar name, is a rare abnormality, usually congenital, which causes a nonprogressive thinning of the inner surface of the cornea, while the curvature of the anterior surface remains normal. Usually only a single eye is affected.
Post-LASIK ectasia is a complication of LASIK eye surgery.
References
External links
Keratoconus at Curlie |
Keratosis follicularis | Keratosis follicularis may refer to:
Darier's disease
Focal palmoplantar keratoderma with oral mucosal hyperkeratosis

See also:
Isolated dyskeratosis follicularis
Keratosis follicularis spinulosa decalvans |
Labor induction | Labor induction is the process or treatment that stimulates childbirth and delivery. Inducing (starting) labor can be accomplished with pharmaceutical or non-pharmaceutical methods. In Western countries, it is estimated that one-quarter of pregnant women have their labor medically induced with drug treatment. Inductions are most often performed either with prostaglandin drug treatment alone, or with a combination of prostaglandin and intravenous oxytocin treatment.
Medical uses
Commonly accepted medical reasons for induction include:
Postterm pregnancy, i.e. if the pregnancy has gone past the end of the 42nd week.
Intrauterine fetal growth restriction (IUGR).
There are health risks to the woman in continuing the pregnancy (e.g. she has pre-eclampsia).
Premature rupture of the membranes (PROM); this is when the membranes have ruptured, but labor does not start within a specific amount of time.
Premature termination of the pregnancy (abortion).
Fetal death in utero and previous history of stillbirth.
Twin pregnancy continuing beyond 38 weeks.
Previous health conditions that put the woman and/or her child at risk, such as diabetes or high blood pressure
High BMI

Induction of labor in those who are either at or after term improves outcomes for newborns and decreases the number of C-sections performed.
Methods of induction
Methods of inducing labor include both pharmacological medication and mechanical or physical approaches.

Mechanical and physical approaches can include artificial rupture of membranes or membrane sweeping. Membrane sweeping may lead to more women spontaneously going into labor (and fewer women having labor induction), but it may make little difference to the risk of maternal or neonatal death, or to the number of women having c-sections or spontaneous vaginal births.

The use of intrauterine catheters is also indicated. These work by compressing the cervix mechanically to generate release of prostaglandins in local tissues. There is no direct effect on the uterus.

Results from a 2021 systematic review found no differences in cesarean delivery or neonatal outcomes in women with low-risk pregnancies between inpatient and outpatient cervical ripening.
Medication
Intravaginal, endocervical or extra-amniotic administration of prostaglandin, such as dinoprostone or misoprostol. Prostaglandin E2 is the most studied compound, with the most evidence behind it. A range of different dosage forms are available, with a variety of routes possible. The use of misoprostol has been extensively studied, but normally in small, poorly defined studies. Only a very few countries have approved misoprostol for use in induction of labor.
Intravenous (IV) administration of synthetic oxytocin preparations is used to artificially induce labor if it is deemed medically necessary. A high dose of oxytocin does not seem to have greater benefits than a standard dose. There are risks associated with IV oxytocin-induced labor. Risks include the woman having induced contractions that are too vigorous, too close together (frequent), or that last too long, which may lead to added stress on the baby (changes in the baby's heart rate) and may require the mother to have an emergency caesarean section. There is no high-quality evidence to indicate if IV oxytocin should be stopped once a woman reaches active labor in order to reduce the incidence of women requiring caesarean sections.
Use of mifepristone has been described but is rarely used in practice.
Relaxin has been investigated, but is not currently commonly used.
Mnemonic, ARNOP: antiprogesterone, relaxin, nitric oxide donors, oxytocin, prostaglandins
Non-pharmaceutical
Membrane sweep, also known as membrane stripping, Hamilton maneuver, or "stretch and sweep". The procedure is carried out by a midwife or doctor as part of an internal vaginal examination. The midwife or doctor puts a couple of lubricated, gloved fingers into the woman's vagina and inserts their index finger into the opening of the cervix or neck of the womb. They then use a circular movement to try to separate the membranes of the amniotic sac, containing the baby, from the cervix. This action, which releases hormones called prostaglandins, may prepare the cervix for birth and may initiate labour.
Artificial rupture of the membranes (AROM or ARM) ("breaking the waters")
Extra-amniotic saline infusion (EASI), in which a Foley catheter is inserted into the cervix and the distal portion expanded to dilate it and to release prostaglandins.
The Cook Medical Double Balloon, known as the Cervical Ripening Balloon with Stylet for assisted placement, is FDA approved. The double balloon provides one balloon to be inflated with saline on the uterine side of the cervix and a second balloon to be inflated with saline on the vaginal side of the cervix.
When to induce
The American Congress of Obstetricians and Gynecologists has recommended against elective induction before 39 weeks if there is no medical indication and the cervix is unfavorable. One recent study indicates that labor induction at term (41 weeks) or post-term reduces the rate of caesarean section by 12 per cent, and also reduces fetal death.
Some observational/retrospective studies have shown that non-indicated, elective inductions before the 41st week of gestation are associated with an increased risk of requiring a caesarean section. Randomized clinical trials have not addressed this question. However, researchers have found that multiparous women who undergo labor induction without medical indicators are not predisposed to caesarean sections. Doctors and pregnant women should have a discussion of risks and benefits when considering an induction of labor in the absence of an accepted medical indication. There is insufficient evidence to determine if inducing a woman's labor at home is a safe and effective approach for both the woman and the baby.

Studies have shown a slight increase in risk of infant mortality for births in the 41st and particularly 42nd week of gestation, as well as a higher risk of injury to the mother and child. Due to the increasing risks of advanced gestation, induction appears to reduce the risk of caesarean delivery after 41 weeks' gestation and possibly earlier. Inducing labour after 41 weeks of completed gestation is likely to reduce the risk of perinatal death and stillbirth compared with waiting for labour to start spontaneously.

Inducing labour before 39 weeks in the absence of a medical indication (such as hypertension, IUGR, or pre-eclampsia) increases the risk of complications of prematurity, including difficulties with respiration, infection, feeding, jaundice, neonatal intensive care unit admissions, and perinatal death.

Inducing labour after 34 weeks and before 37 weeks in women with hypertensive disorders (pre-eclampsia, eclampsia, pregnancy-induced hypertension) may lead to better outcomes for the woman but does not improve or worsen outcomes for the baby. More research is needed to produce more certain results.
If waters break (membranes rupture) between 24 and 37 weeks' gestation, waiting for the labour to start naturally with careful monitoring of the woman and baby is more likely to lead to healthier outcomes. For women over 37 weeks pregnant whose babies are suspected of not coping well in the womb, it is not yet clear from research whether it is best to have an induction or caesarean immediately, or to wait until labour happens by itself. Similarly, there is not yet enough research to show whether it is best to deliver babies prematurely if they are not coping in the womb or whether to wait so that they are less premature when they are born.

Clinicians assess the odds of having a vaginal delivery after labor induction by a "Bishop score". However, recent research has questioned the relationship between the Bishop score and a successful induction, finding that a poor Bishop score actually may improve the chance for a vaginal delivery after induction. A Bishop score is done to assess the progression of the cervix prior to an induction. In order to do this, the cervix is checked to see how much it has effaced (thinned out) and how far it has dilated. The score uses a points system based on five factors. Each factor is scored on a scale of either 0–2 or 0–3; any total score less than 5 holds a higher risk of delivering by caesarean section.

Sometimes when a woman's waters break after 37 weeks she is induced instead of waiting for labour to start naturally. This may decrease the risks of infection for the woman and baby, but more research is needed to find out whether inducing is good for women and babies longer term.

Women who have had a caesarean section for a previous pregnancy are at risk of having a uterine rupture, when their caesarean scar re-opens. Uterine rupture is very serious for the woman and the baby, and induction of labour increases this risk further.
There is not yet enough research to determine which method of induction is safest for a woman who has had a caesarean section before. There is also no research to say whether it is better for these women and their babies to have an elective caesarean section instead of being induced.
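The Bishop scoring described above can be sketched as a simple sum-and-threshold rule. This is an illustrative sketch only: the text states five factors scored 0–2 or 0–3 with a total below 5 indicating higher caesarean risk; the assignment of the 0–3 range to dilation, effacement, and station and the 0–2 range to consistency and position follows the standard clinical scheme, and the function names are invented for the example.

```python
def bishop_score(dilation: int, effacement: int, station: int,
                 consistency: int, position: int) -> int:
    """Sum the five component scores of a Bishop assessment.

    dilation, effacement, station are scored 0-3; consistency and
    position are scored 0-2 (standard scheme, assumed here).
    """
    for value, top in [(dilation, 3), (effacement, 3), (station, 3),
                       (consistency, 2), (position, 2)]:
        if not 0 <= value <= top:
            raise ValueError("component score out of range")
    return dilation + effacement + station + consistency + position

def higher_caesarean_risk(total: int) -> bool:
    """Per the text, any total score less than 5 carries a higher
    risk of delivering by caesarean section."""
    return total < 5

total = bishop_score(2, 1, 1, 1, 1)
print(total, higher_caesarean_risk(total))  # 6 False
```

The threshold check is deliberately separate from the scoring so that the cut-off, which research has questioned, can be adjusted without touching the assessment itself.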
Criticisms of induction
Induced labor may be more painful for the woman, as one of the side effects of intravenous oxytocin is increased contraction pains, mainly due to the rigid onset. This may lead to the increased use of analgesics and other pain-relieving pharmaceuticals. These interventions may also lead to an increased likelihood of caesarean section delivery for the baby. However, studies into this matter show differing results. One study indicated that while overall caesarean section rates from 1990 to 1997 remained at or below 20 per cent, elective induction was associated with a doubling of the rate of caesarean section. Another study showed that elective induction in women who were not post-term increased a woman's chance of a C-section by two to three times. A more recent study indicated that induction may increase the risk of caesarean section if performed before the 40th week of gestation, but it has no effect or actually lowers the risk if performed after the 40th week.

A 2014 systematic review and meta-analysis on the subject of induction and its effect on cesarean section indicates that after 41 weeks of gestation there is a reduction of cesarean deliveries when the labour is induced.

The Institute for Safe Medication Practices labeled pitocin a "high-alert medication" because of the high likelihood of "significant patient harm when it is used in error."
See also
Tocolytic, labor suppressant
References
External links
Harman, Kim (1999). "Current Trends in Cervical Ripening and Labor Induction". American Family Physician. 60 (2): 477–84. PMID 10465223.
Inducing Labor – WebMD.com
Induction of labour. Clinical guideline, UK National Institute for Health and Clinical Excellence, June 2001.
Josie L. Tenore: Methods for cervical ripening and induction of labor Archived 2008-05-16 at the Wayback Machine. American Family Physician, 15 May 2003.
"Catecholamines – blood." National Library of Medicine. n.d. Web. 28 Mar. 2011. <https://www.nlm.nih.gov/medlineplus>. |
Childbirth | Childbirth, also known as labour and delivery, is the ending of pregnancy where one or more babies exit the internal environment of the mother via vaginal delivery or Caesarean section. In 2019, there were about 140.11 million births globally. In the developed world most deliveries occur in hospitals, while in the developing world most occur at home.

The most common childbirth method is vaginal delivery. It involves four stages of labour: the shortening and opening of the cervix during the first stage, descent and birth of the baby during the second, the delivery of the placenta during the third, and the recovery of the mother and infant during the fourth stage, which is referred to as the postpartum. The first stage is characterized by abdominal cramping or back pain that typically lasts half a minute and occurs every 10 to 30 minutes. Contractions gradually become stronger and closer together. Since the pain of childbirth correlates with contractions, the pain becomes more frequent and stronger as the labor progresses. The second stage ends when the infant is fully expelled. The third stage is the delivery of the placenta. The fourth stage of labour involves the recovery of the mother, delayed clamping of the umbilical cord, and monitoring of the neonate. As of 2014, all major health organizations advise that immediately following a live birth, regardless of the delivery method, the infant be placed on the mother's chest (termed skin-to-skin contact), and that neonate procedures be delayed for at least one to two hours or until the baby has had its first breastfeeding.

A vaginal delivery is recommended over a cesarean section due to the increased risk of complications of a cesarean section and the natural benefits of a vaginal delivery for both mother and baby. Various methods may help with pain, such as relaxation techniques, opioids, and spinal blocks.
It is best practice to limit the number of interventions that occur during labour and delivery, such as an elective cesarean section; however, in some cases a scheduled cesarean section must be planned for a successful delivery and recovery of the mother. An emergency cesarean section may be recommended if unexpected complications occur or little to no progression through the birthing canal is observed in a vaginal delivery.
Each year, complications from pregnancy and childbirth result in about 500,000 birthing deaths, seven million women have serious long-term problems, and 50 million women giving birth have negative health outcomes following delivery, most of which occur in the developing world. Complications in the mother include obstructed labour, postpartum bleeding, eclampsia, and postpartum infection. Complications in the baby include lack of oxygen at birth, birth trauma, and prematurity.
Signs and symptoms
The most prominent sign of labour is strong repetitive uterine contractions. Pain in contractions has been described as feeling similar to very strong menstrual cramps. Women giving birth are often encouraged to refrain from screaming. However, moaning and grunting may be encouraged to help lessen pain. Crowning may be experienced as an intense stretching and burning.
Back labour is a term for specific pain occurring in the lower back, just above the tailbone, during childbirth.

Another prominent sign of labour is the rupture of membranes, commonly known as "water breaking". This is the leaking of fluid from the amniotic sac that surrounds a fetus in the uterus and helps provide cushion and thermoregulation. However, it is common for water to break long before contractions begin, in which case it is not a sign of immediate labor, and hospitalization is generally required for monitoring the fetus and prevention of preterm birth.
Psychological
During the later stages of gestation there is an increase in abundance of oxytocin, a hormone that is known to evoke feelings of contentment, reductions in anxiety, and feelings of calmness and security around the mate. Oxytocin is further released during labour when the fetus stimulates the cervix and vagina, and it is believed that it plays a major role in the bonding of a mother to her infant and in the establishment of maternal behavior. The act of nursing a child also causes a release of oxytocin to help the baby get milk more easily from the nipple.
Vaginal birth
Station refers to the relationship of the fetal presenting part to the level of the ischial spines. When the presenting part is at the ischial spines the station is 0 (synonymous with engagement). If the presenting fetal part is above the spines, the distance is measured and described as minus stations, which range from −1 to −4 cm. If the presenting part is below the ischial spines, the distance is stated as plus stations (+1 to +4 cm). At +3 and +4 the presenting part is at the perineum and can be seen.

The fetal head may temporarily change shape (becoming more elongated or cone shaped) as it moves through the birth canal. This change in the shape of the fetal head is called molding and is much more prominent in women having their first vaginal delivery.

Cervical ripening is the physical and chemical changes in the cervix to prepare it for the stretching that will take place as the fetus moves out of the uterus and into the birth canal. A scoring system called a Bishop score can be used to judge the degree of cervical ripening in order to predict the timing of labour and delivery of the infant or for women at risk for preterm labour. It is also used to judge when a woman will respond to induction of labour for a postdate pregnancy or other medical reasons. There are several methods of inducing cervical ripening which will allow the uterine contractions to effectively dilate the cervix.

Vaginal delivery involves four stages of labour: the shortening and opening of the cervix during the first stage, descent and birth of the baby during the second, the delivery of the placenta during the third, and the fourth stage of recovery, which lasts until two hours after the delivery. The first stage is characterized by abdominal cramping or back pain that typically lasts around half a minute and occurs every 10 to 30 minutes. The contractions (and pain) gradually become stronger and closer together. The second stage ends when the infant is fully expelled.
The third stage is the delivery of the placenta. The fourth stage of labour involves recovery: the uterus beginning to contract back to its pre-pregnancy state, delayed clamping of the umbilical cord, and monitoring of neonatal tone and vital signs. As of 2014, all major health organizations advise that immediately following a live birth, regardless of the delivery method, the infant be placed on the mother's chest, termed skin-to-skin contact, delaying routine procedures for at least one to two hours or until the baby has had its first breastfeeding.
Onset of labour
Definitions of the onset of labour include:
Regular uterine contractions at least every six minutes with evidence of change in cervical dilation or cervical effacement between consecutive digital examinations.
Regular contractions occurring less than 10 minutes apart and progressive cervical dilation or cervical effacement.
At least three painful regular uterine contractions during a 10-minute period, each lasting more than 45 seconds.

Many women are known to experience what has been termed the "nesting instinct": a spurt of energy shortly before going into labour. Common signs that labour is about to begin may include what is known as lightening, which is the process of the baby moving down from the rib cage with the head of the baby engaging deep in the pelvis. The pregnant woman may then find breathing easier, since her lungs have more room for expansion, but pressure on her bladder may cause a more frequent need to void (urinate). Lightening may occur a few weeks or a few hours before labour begins, or even not until labour has begun. Some women also experience an increase in vaginal discharge several days before labour begins when the "mucus plug", a thick plug of mucus that blocks the opening to the uterus, is pushed out into the vagina. The mucus plug may become dislodged days before labour begins or not until the start of labour.

While inside the uterus the baby is enclosed in a fluid-filled membrane called the amniotic sac. Shortly before, at the beginning of, or during labour the sac ruptures. Once the sac ruptures, termed "the water breaks", the baby is at risk for infection and the mother's medical team will assess the need to induce labour if it has not started within the time they believe to be safe for the infant.

The first stage of labour is divided into latent and active phases, where the latent phase is sometimes included in the definition of labour, and sometimes not.
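The third definition of labour onset listed above is arithmetic enough to sketch in code. This is a hedged illustration under stated assumptions (the input is the list of contraction durations, in seconds, observed in one 10-minute window); pain, regularity, and cervical change require clinical assessment, not a function call.

```python
def meets_onset_definition(durations_s):
    """Check the third onset definition above: at least three painful,
    regular uterine contractions in a 10-minute period, each lasting
    more than 45 seconds. `durations_s` is assumed to hold the durations
    (in seconds) of contractions seen in one 10-minute window.
    Illustrative sketch only, not a diagnostic tool."""
    qualifying = [d for d in durations_s if d > 45]
    return len(qualifying) >= 3

print(meets_onset_definition([50, 60, 55]))  # True: three contractions > 45 s
print(meets_onset_definition([30, 60, 55]))  # False: only two qualify
```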
First stage: latent phase
The latent phase is generally defined as beginning at the point at which the woman perceives regular uterine contractions. In contrast, Braxton Hicks contractions, which are contractions that may start around 26 weeks' gestation and are sometimes called "false labour", are infrequent, irregular, and involve only mild cramping.

Cervical effacement, which is the thinning and stretching of the cervix, and cervical dilation occur during the closing weeks of pregnancy. Effacement is usually complete or near-complete and dilation is about 5 cm by the end of the latent phase. The degree of cervical effacement and dilation may be felt during a vaginal examination.
First stage: active phase
The active stage of labour (or "active phase of first stage" if the previous phase is termed "latent phase of first stage") has geographically differing definitions. The World Health Organization describes the active first stage as "a period of time characterized by regular painful uterine contractions, a substantial degree of cervical effacement and more rapid cervical dilatation from 5 cm until full dilatation for first and subsequent labours". In the US, the definition of active labour was changed from 3 to 4 cm, to 5 cm of cervical dilation for multiparous women (mothers who had given birth previously) and to 6 cm for nulliparous women (those who had not given birth before). This was done in an effort to increase the rates of vaginal delivery.

Health care providers may assess a labouring mother's progress by performing a cervical exam to evaluate cervical dilation, effacement, and station. These factors form the Bishop score. The Bishop score can also be used as a means to predict the success of an induction of labour.
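The Bishop score mentioned above is a simple additive score. The full score has five components: dilation, effacement, and station (named in the text) plus cervical consistency and position. A sketch of one commonly published version of the table follows; cut-off values vary slightly between sources, so treat this as an illustration rather than a clinical reference.

```python
def bishop_score(dilation_cm, effacement_pct, station, consistency, position):
    """Add up the five Bishop score components.
    Cut-offs follow a commonly published version of the table;
    sources differ slightly. Illustrative sketch, not clinical software."""
    score = 0
    # Cervical dilation (cm): 0 for closed, up to 3 for >= 5 cm
    if dilation_cm >= 5:
        score += 3
    elif dilation_cm >= 3:
        score += 2
    elif dilation_cm >= 1:
        score += 1
    # Cervical effacement (%): 0 for <= 30%, up to 3 for >= 80%
    if effacement_pct >= 80:
        score += 3
    elif effacement_pct >= 60:
        score += 2
    elif effacement_pct >= 40:
        score += 1
    # Fetal station (cm relative to the ischial spines)
    if station >= 1:
        score += 3
    elif station >= -1:
        score += 2
    elif station == -2:
        score += 1
    # Cervical consistency and position
    score += {"firm": 0, "medium": 1, "soft": 2}[consistency]
    score += {"posterior": 0, "mid": 1, "anterior": 2}[position]
    return score

# A higher score favours successful induction of labour:
print(bishop_score(4, 80, -1, "soft", "anterior"))  # 2+3+2+2+2 = 11
```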
During effacement, the cervix becomes incorporated into the lower segment of the uterus. During a contraction, uterine muscles contract causing shortening of the upper segment and drawing upwards of the lower segment, in a gradual expulsive motion. The presenting fetal part then is permitted to descend. Full dilation is reached when the cervix has widened enough to allow passage of the baby's head, around 10 cm dilation for a term baby.
A standard duration of the latent first stage has not been established and can vary widely from one woman to another. However, the duration of the active first stage (from 5 cm until full cervical dilatation) usually does not extend beyond 12 hours in first labours ("primiparae"), and usually does not extend beyond 10 hours in subsequent labours ("multiparae").

Dystocia of labour, also called "dysfunctional labour" or "failure to progress", is difficult labour or abnormally slow progress of labour, involving a lack of progressive cervical dilatation or a lack of descent of the fetus. Friedman's curve, developed in 1955, was for many years used to determine labour dystocia. However, more recent medical research suggests that the Friedman curve may not be currently applicable.
Second stage: fetal expulsion
The expulsion stage begins when the cervix is fully dilated, and ends when the baby is born. As pressure on the cervix increases, a sensation of pelvic pressure is experienced, and, with it, an urge to begin pushing. At the beginning of the normal second stage, the head is fully engaged in the pelvis; the widest diameter of the head has passed below the level of the pelvic inlet. The fetal head then continues descent into the pelvis, below the pubic arch and out through the vaginal introitus (opening). This is assisted by the additional maternal efforts of "bearing down" or pushing, similar to defecation. The appearance of the fetal head at the vaginal orifice is termed the "crowning". At this point, the mother will feel an intense burning or stinging sensation.
When the amniotic sac has not ruptured during labour or pushing, the infant can be born with the membranes intact. This is referred to as "delivery en caul".
Complete expulsion of the baby signals the successful completion of the second stage of labour. Some babies, especially preterm infants, are born covered with a waxy or cheese-like white substance called vernix. It is thought to have some protective roles during fetal development and for a few hours after birth.
The second stage varies from one woman to another. In first labours, birth is usually completed within three hours whereas in subsequent
labours, birth is usually completed within two hours. Second-stage labours longer than three hours are associated with declining rates of spontaneous vaginal delivery and increasing rates of infection, perineal tears, and obstetric haemorrhage, as well as the need for intensive care of the neonate.
Third stage: placenta delivery
The period from just after the fetus is expelled until just after the placenta is expelled is called the third stage of labour, or the involution stage. Placental expulsion begins as a physiological separation from the wall of the uterus. The average time from delivery of the baby until complete expulsion of the placenta is estimated to be 10–12 minutes, dependent on whether active or expectant management is employed. In as many as 3% of all vaginal deliveries, the duration of the third stage is longer than 30 minutes, which raises concern for retained placenta.

Placental expulsion can be managed actively or expectantly, the latter allowing the placenta to be expelled without medical assistance. Active management is the administration of a uterotonic drug within one minute of fetal delivery, controlled traction of the umbilical cord and fundal massage after delivery of the placenta, followed by performance of uterine massage every 15 minutes for two hours. In a joint statement, the World Health Organization, the International Federation of Gynaecology and Obstetrics and the International Confederation of Midwives recommend active management of the third stage of labour in all vaginal deliveries to help prevent postpartum haemorrhage.

Delaying the clamping of the umbilical cord for at least one minute, or until it ceases to pulsate (which may take several minutes), improves outcomes as long as there is the ability to treat jaundice if it occurs. For many years it was believed that late cord cutting increased a mother's risk of experiencing significant bleeding after giving birth, called postpartum bleeding. However, a recent review found that delayed cord cutting in healthy full-term infants resulted in higher early haemoglobin concentration, higher birthweight, and increased iron reserves up to six months after birth, with no change in the rate of postpartum bleeding.
Fourth stage
The "fourth stage of labour" is the period beginning immediately after the birth of a child and extending for about six weeks. The terms postpartum and postnatal are often used for this period. The woman's body, including hormone levels and uterus size, returns to a non-pregnant state and the newborn adjusts to life outside the mother's body. The World Health Organization (WHO) describes the postnatal period as the most critical and yet the most neglected phase in the lives of mothers and babies; most deaths occur during the postnatal period.

Following the birth, if the mother had an episiotomy or a tearing of the perineum, it is stitched. This is also an optimal time for uptake of long-acting reversible contraception (LARC), such as the contraceptive implant or intrauterine device (IUD), both of which can be inserted immediately after delivery while the woman is still in the delivery room. The mother has regular assessments for uterine contraction and fundal height, vaginal bleeding, heart rate and blood pressure, and temperature, for the first 24 hours after birth. Some women may experience an uncontrolled episode of shivering, or postpartum chills, following the birth. The first passing of urine should be documented within six hours. Afterpains (pains similar to menstrual cramps), contractions of the uterus to prevent excessive blood flow, continue for several days. Vaginal discharge, termed "lochia", can be expected to continue for several weeks; initially bright red, it gradually becomes pink, changing to brown, and finally to yellow or white.

At one time babies born in hospitals were removed from their mothers shortly after birth and brought to the mother only at feeding times. Mothers were told that their newborn would be safer in the nursery and that the separation would offer the mother more time to rest.
As attitudes began to change, some hospitals offered a "rooming in" option wherein, after a period of routine hospital procedures and observation, the infant could be allowed to share the mother's room. As of 2020, rooming in has increasingly become standard practice in maternity wards.
Cardinal movements of birth
Humans are bipedal with an erect stance. The erect posture causes the weight of the abdominal contents to thrust on the pelvic floor, a complex structure which must not only support this weight but allow, in women, three channels to pass through it: the urethra, the vagina and the rectum. The infant's head and shoulders must go through a specific sequence of maneuvers in order to pass through the ring of the mother's pelvis. Range of motion and ambulation are typically unaffected during labour, and the mother is encouraged to move to help facilitate progression of labour. The vagina is called a birth canal when the baby enters this passage. The six phases of a typical vertex or cephalic (head-first) delivery are:
Engagement of the fetal head in the transverse position. The baby's head is facing across the pelvis at one or other of the mother's hips.
Descent and flexion of the fetal head. The baby's head moves down the birth canal and the baby tucks its chin on its chest so that the back or crown of its head leads the way through the birth canal.
Internal rotation. The fetal head rotates 90 degrees to the occipito-anterior position so that the baby's face is towards the mother's rectum.
Delivery by extension. The back of the neck presses against the pubic bone and the baby's chin leaves its chest, extending the neck as if to look up, and the rest of its head passes out of the birth canal.
Restitution. The fetal head turns through 45 degrees to restore its normal relationship with the shoulders, which are still at an angle.
External rotation. The shoulders repeat the corkscrew movements of the head, which can be seen in the final movements of the fetal head.

Failure to complete the cardinal movements of birth in the correct order may result in complications of labour and birth injuries.
Early skin-to-skin contact
Skin-to-skin contact (SSC), sometimes also called kangaroo care, is a technique of newborn care where babies are kept chest-to-chest and skin-to-skin with a parent, typically their mother, though more recently (2022) their father as well. This means without a shirt or undergarments on the chest of both the baby and parent. A 2011 medical review found that early skin-to-skin contact resulted in a decrease in infant crying, improved cardio-respiratory stability and blood glucose levels, and improved breastfeeding duration. A 2016 Cochrane review also found that SSC at birth promotes the likelihood and effectiveness of breastfeeding.

As of 2014, early postpartum SSC is endorsed by all major organizations that are responsible for the well-being of infants, including the American Academy of Pediatrics. The World Health Organization (WHO) states that "the process of childbirth is not finished until the baby has safely transferred from placental to mammary nutrition." It is advised that the newborn be placed skin-to-skin with the mother following vaginal birth, or as soon as the mother is alert and responsive after a Caesarean section, postponing any routine procedures for at least one to two hours. The baby's father or other support person may also choose to hold the baby SSC until the mother recovers from the anesthetic.

The WHO suggests that any initial observations of the infant can be done while the infant remains close to the mother, saying that even a brief separation before the baby has had its first feed can disturb the bonding process. They further advise frequent skin-to-skin contact as much as possible during the first days after delivery, especially if it was interrupted for some reason after the delivery.

La Leche League advises a woman to have a delivery team which includes a support person who will advocate to assure that:
The mother and her baby are not separated unnecessarily
The baby will receive only her milk
The baby will receive no supplementation without a medical reason
All testing, bathing or other procedures are done in the parents' room.

It has long been known that a mother's level of the hormone oxytocin, sometimes called "the love hormone", rises when she interacts with her infant. In 2019, a large review of the effects of oxytocin found that the oxytocin level in fathers who engage in SSC is increased as well. Two studies found that "when the infant is clothed only in a diaper and placed in between the mother or father's breasts, chest-to-chest [elevated paternal oxytocin levels were] shown to reduce stress and anxiety in parents after interaction."
Discharge
For births that occur in hospitals, the WHO recommends a hospital stay of at least 24 hours following an uncomplicated vaginal delivery and 96 hours for a Caesarean section. Lengths of stay for an uncomplicated delivery around the world (in 2016) range from an average of less than one day in Egypt to six days in (pre-war) Ukraine. Averages are 2.8 days for Australia and 1.5 days in the UK. While this number is low, two-thirds of women in the UK have midwife-assisted births, and in some cases the mother may choose a hospital setting for the birth to be closer to the wide range of assistance available in an emergency. Women with midwife care may, however, leave the hospital shortly after birth, and their midwife will continue their care at home.
In the U.S. the average length of stay has gradually dropped from 4.1 days in 1970 to a current stay of 2 days. The CDC attributed the drop to the rise in health care costs, saying people could not afford to stay in the hospital any longer. To keep it from dropping any lower, in 1996 Congress passed the Newborns' and Mothers' Health Protection Act, which requires insurers to cover at least 48 hours for an uncomplicated delivery.
Labour induction and Caesarean section
In many cases and with increasing frequency, childbirth is achieved through labour induction or caesarean section. Labour induction is the process or treatment that stimulates childbirth and delivery. Inducing labour can be accomplished with pharmaceutical or non-pharmaceutical methods. Inductions are most often performed either with prostaglandin drug treatment alone, or with a combination of prostaglandin and intravenous oxytocin treatment.
Caesarean section is the removal of the neonate through a surgical incision in the abdomen, rather than through vaginal birth. Childbirth by C-section increased 50% in the US from 1996 to 2006. In 2012, about 23 million deliveries occurred by Caesarean section. Induced births and elective caesareans before 39 weeks can be harmful to the neonate as well as harmful or without benefit to the mother. Therefore, many guidelines recommend against non-medically required induced births and elective caesareans before 39 weeks. The 2012 rate of labour induction in the United States was 23.3 per cent, and had more than doubled from 1990 to 2010.
The American Congress of Obstetricians and Gynecologists (ACOG) guidelines recommend a full evaluation of the maternal-fetal status, the status of the cervix, and at least 39 completed weeks (full term) of gestation for optimal health of the newborn when considering elective induction of labour. Per these guidelines, indications for induction may include:
Abruptio placentae
Chorioamnionitis
Fetal compromise such as isoimmunisation leading to haemolytic disease of the newborn or oligohydramnios
Fetal demise
Gestational hypertension
Maternal conditions such as gestational diabetes or chronic kidney disease
Preeclampsia or eclampsia
Premature rupture of membranes
Post-term pregnancy

Induction is also considered for logistical reasons, such as the distance from hospital or psychosocial conditions, but in these instances gestational age confirmation must be done, and the maturity of the fetal lung must be confirmed by testing. ACOG also notes that contraindications for induced labour are the same as for spontaneous vaginal delivery, including vasa previa, complete placenta praevia, umbilical cord prolapse, or active genital herpes simplex infection.

A Caesarean section, also called a C-section, can be the safest option for delivery in some pregnancies. During a C-section, the patient is usually numbed with an epidural or a spinal block, but general anesthesia can be used as well. A cut is made in the patient's abdomen and then in the uterus to remove the baby. A C-section may be the best option when the small size or shape of the mother's pelvis makes delivery of the baby impossible, or when the lie or presentation of the baby as it prepares to enter the birth canal is dangerous. Other medical reasons for a C-section are placenta previa (the placenta blocks the baby's path to the birth canal), uterine rupture, or fetal distress, such as endangerment of the baby's oxygen supply. Before the 1970s, once a patient delivered one baby via C-section, it was recommended that all of her future babies be delivered by C-section, but that recommendation has changed. Unless there is some other indication, mothers can attempt a trial of labour, and most are able to have a vaginal birth after C-section (VBAC).

Like any procedure, a C-section is not without risks. Having a C-section puts the mother at greater risk for uterine rupture and abnormal attachment of the placenta to the uterus in future pregnancies (placenta accreta spectrum). The rate of deliveries occurring via C-section instead of vaginal delivery has been increasing since the 1970s.
The WHO recommends a C-section rate of between 10 and 15 per cent, because C-section rates higher than 10 per cent are not associated with a decrease in morbidity and mortality.
Management
Obstetric care frequently subjects women to institutional routines, which may have adverse effects on the progress of labour. Supportive care during labour may involve emotional support, comfort measures, and information and advocacy, which may promote the physical process of labour as well as women's feelings of control and competence, thus reducing the need for obstetric intervention. The continuous support may be provided either by hospital staff such as nurses or midwives, by doulas, or by companions of the woman's choice from her social network.

There is increasing evidence to show that the participation of the child's father in the birth leads to better birth and post-birth outcomes, providing the father does not exhibit excessive anxiety.

Continuous labour support may help women to give birth spontaneously, that is, without caesarean or vacuum or forceps, with slightly shorter labours, and to have more positive feelings regarding their experience of giving birth. Continuous labour support may also reduce women's use of pain medication during labour and reduce the risk of babies having low five-minute Apgar scores.
Preparation
Eating or drinking during labour is an area of ongoing debate. While some have argued that eating in labour has no harmful effects on outcomes, others continue to have concern regarding the increased possibility of an aspiration event (choking on recently eaten foods) in the event of an emergency delivery, due to the increased relaxation of the oesophagus in pregnancy, upward pressure of the uterus on the stomach, and the possibility of general anaesthetic in the event of an emergency cesarean. A 2013 Cochrane review found that, with good obstetrical anaesthesia, there is no change in harms from allowing eating and drinking during labour in those who are unlikely to need surgery. They additionally acknowledge that not eating does not mean there is an empty stomach or that its contents are not as acidic. They therefore conclude that "women should be free to eat and drink in labour, or not, as they wish."

At one time shaving of the area around the vagina was common practice, due to the belief that hair removal reduced the risk of infection, made an episiotomy (a surgical cut to enlarge the vaginal entrance) easier, and helped with instrumental deliveries. It is currently less common, though it is still a routine procedure in some countries, even though a systematic review found no evidence to recommend shaving. Side effects appear later, including irritation, redness, and multiple superficial scratches from the razor. Another effort to prevent infection has been the use of the antiseptic chlorhexidine or povidone-iodine solution in the vagina. Evidence of benefit with chlorhexidine is lacking. A decreased risk is found with povidone-iodine when a cesarean section is to be performed.
Forceps or vacuum assisted delivery
An assisted delivery is used in about 1 in 8 births, and may be needed if either mother or infant appears to be at risk during a vaginal delivery. The methods used are termed obstetrical forceps extraction and vacuum extraction, also called ventouse extraction. Done properly, they are both safe with some preference for forceps rather than vacuum, and both are seen as preferable to an unexpected C-section. While considered safe, some risks for the mother include vaginal tearing, including a higher chance of having a more major vaginal tear that involves the muscle or wall of the anus or rectum. For women undergoing operative vaginal delivery with vacuum extraction or forceps, there is strong evidence that prophylactic antibiotics help to reduce the risk of infection. There is a higher risk of blood clots forming in the legs or pelvis – anti-clot stockings or medication may be ordered to avoid clots. Urinary incontinence is not unusual after childbirth but it is more common after an instrument delivery. Certain exercises and physiotherapy will help the condition to improve.
Pain control
Non pharmaceutical
Some women prefer to avoid analgesic medication during childbirth. Psychological preparation may be beneficial. Relaxation techniques, immersion in water, massage, and acupuncture may provide pain relief. Acupuncture and relaxation were found to decrease the number of caesarean sections required. Immersion in water has been found to relieve pain during the first stage of labour, to reduce the need for anaesthesia, and to shorten the duration of labour; however, the safety and efficacy of immersion during the birth itself (water birth) has not been established or associated with maternal or fetal benefit.

Most women like to have someone to support them during labour and birth, such as a midwife, nurse, or doula, or a lay person such as the father of the baby, a family member, or a close friend. Studies have found that continuous support during labour and delivery reduces the need for medication and for a caesarean or operative vaginal delivery, and results in an improved Apgar score for the infant.
Pharmaceutical
Different measures for pain control have varying degrees of success and side effects for the woman and her baby. In some countries of Europe, doctors commonly prescribe inhaled nitrous oxide gas for pain control, especially as 53% nitrous oxide, 47% oxygen, known as Entonox; in the UK, midwives may use this gas without a doctor's prescription. Opioids such as fentanyl may be used, but if given too close to birth there is a risk of respiratory depression in the infant.

Popular methods of medical pain control in hospitals include the regional anaesthetics: epidural analgesia (EDA) and spinal anaesthesia. Epidural analgesia is a generally safe and effective method of relieving pain in labour, but has been associated with longer labour, more operative intervention (particularly instrument delivery), and increases in cost. However, a more recent (2017) Cochrane review suggests that the new epidural techniques have no effect on labour time, the use of instruments, or the need for C-section deliveries. Generally, pain and stress hormones rise throughout labour for women without epidurals, while pain, fear, and stress hormones decrease upon administration of epidural analgesia, but rise again later.
Medicine administered via epidural can cross the placenta and enter the bloodstream of the fetus. Epidural analgesia has no statistically significant impact on the risk of caesarean section, and does not appear to have an immediate effect on neonatal status as determined by Apgar scores.
Augmentation
Augmentation is the process of stimulating the uterus to increase the intensity and duration of contractions after labour has begun. Several methods of augmentation are commonly used to treat slow progress of labour (dystocia) when uterine contractions are assessed to be too weak. Oxytocin is the most common method used to increase the rate of vaginal delivery. The World Health Organization recommends its use either alone or with amniotomy (rupture of the amniotic membrane), but advises that it must be used only after it has been correctly confirmed that labour is not proceeding properly, if harm is to be avoided. The WHO does not recommend the use of antispasmodic agents for prevention of delay in labour.
Episiotomy
For years an episiotomy was thought to help prevent more extensive vaginal tears and to heal better than a natural tear. Perineal tears can occur at the vaginal opening as the baby's head passes through, especially if the baby descends quickly. Tears can involve the perineal skin or extend to the muscles and the anal sphincter and anus. Once common, episiotomies are now recognised as generally not needed. When needed, the midwife or obstetrician makes a surgical cut in the perineum to prevent severe tears that can be difficult to repair. A 2017 Cochrane review compared episiotomy as needed (restrictive) with routine episiotomy to determine the possible benefits and harms for mother and baby. The review found that restrictive episiotomy policies appeared to give a number of benefits compared with routine episiotomy: women experienced less severe perineal trauma, less posterior perineal trauma, less suturing and fewer healing complications at seven days, with no difference in occurrence of pain, urinary incontinence, painful sex or severe vaginal/perineal trauma after birth.
Multiple births
In cases of a head-first-presenting first twin, twins can often be delivered vaginally. In some cases twin delivery is done in a larger delivery room or in an operating theatre, in case complications arise. Possible outcomes include:
Both twins born vaginally – this can occur both presented head first or where one comes head first and the other is breech and/or helped by a forceps/ventouse delivery
One twin born vaginally and the other by caesarean section.
If the twins are joined at any part of the body (conjoined twins), delivery is mostly by caesarean section.
Fetal monitoring
For external monitoring of the fetus during childbirth, a simple Pinard stethoscope or Doppler fetal monitor ("doptone") can be used.
A method of external (noninvasive) fetal monitoring (EFM) during childbirth is cardiotocography (CTG), using a cardiotocograph that consists of two sensors. The heart (cardio) sensor is an ultrasonic sensor, similar to a Doppler fetal monitor, that continuously emits ultrasound and detects motion of the fetal heart by the characteristics of the reflected sound. The pressure-sensitive contraction transducer, called a tocodynamometer (toco), has a flat area that is fixed to the skin by a band around the belly. The pressure required to flatten a section of the wall correlates with the internal pressure, thereby providing an estimate of contraction strength.
Monitoring with a cardiotocograph can be either intermittent or continuous. The World Health Organization (WHO) advises that, for healthy women undergoing spontaneous labour, continuous cardiotocography is not recommended for assessment of fetal well-being. The WHO states: "In countries and settings where continuous CTG is used defensively to protect against litigation, all stakeholders should be made aware that this practice is not evidence-based and does not improve birth outcomes."

A mother's water has to break before internal (invasive) monitoring can be used. More invasive monitoring can involve a fetal scalp electrode to give an additional measure of fetal heart activity, and/or an intrauterine pressure catheter (IUPC). It can also involve fetal scalp pH testing.
Complications
Per figures retrieved in 2015, since 1990 there has been a 44 per cent decline in the maternal death rate. However, according to 2015 figures, 830 women die every day from causes related to pregnancy or childbirth, and for every woman who dies, 20 or 30 encounter injuries, infections or disabilities. Most of these deaths and injuries are preventable.

In 2008, noting that each year more than 100,000 women die of complications of pregnancy and childbirth, at least seven million experience serious health problems, and 50 million more have adverse health consequences after childbirth, the World Health Organization (WHO) urged midwife training to strengthen maternal and newborn health services. To support the upgrading of midwifery skills the WHO established a midwife training program, Action for Safe Motherhood.

The rising maternal death rate in the US is of concern. In 1990 the US ranked 12th of the 14 developed countries that were analysed. However, since that time the rates of every country have steadily continued to improve while the US rate has spiked dramatically. While every other developed nation of the 14 analysed in 1990 shows a 2017 death rate of less than 10 deaths per every 100,000 live births, the US rate has risen to 26.4. By comparison, the United Kingdom ranks second highest at 9.2 and Finland is the safest at 3.8. Furthermore, for every one of the 700 to 900 US women who die each year during pregnancy or childbirth, 70 experience significant complications such as haemorrhage and organ failure, totalling more than one per cent of all births.

Compared to other developed nations, the United States also has high infant mortality rates. The Trust for America's Health reports that as of 2011, about one-third of American births have some complications; many are directly related to the mother's health, including increasing rates of obesity, type 2 diabetes, and physical inactivity. The U.S. Centers for Disease Control and Prevention (CDC) has led an initiative to improve women's health prior to conception in an effort to improve both neonatal and maternal death rates.
Labour and delivery complications
Obstructed labour
The second stage of labour may be delayed or lengthy due to poor or uncoordinated uterine action, an abnormal fetal position such as breech presentation, shoulder dystocia, or cephalopelvic disproportion (a small pelvis or large infant). Prolonged labour may result in maternal exhaustion, fetal distress, and other complications including obstetric fistula.
Eclampsia
Eclampsia is the onset of seizures (convulsions) in a woman with pre-eclampsia. Pre-eclampsia is a disorder of pregnancy in which there is high blood pressure and either large amounts of protein in the urine or other organ dysfunction. Pre-eclampsia is routinely screened for during prenatal care. Onset may be before, during, or, rarely, after delivery. Around one per cent of women with eclampsia die.
Maternal complications
A puerperal disorder or postpartum disorder is a complication which presents primarily during the puerperium, or postpartum period. The postpartum period can be divided into three distinct stages: the initial or acute phase, the first six to 12 hours after childbirth; the subacute postpartum period, which lasts two to six weeks; and the delayed postpartum period, which can last up to six months. In the subacute postpartum period, 87 to 94 per cent of women report at least one health problem. Long-term health problems (persisting after the delayed postpartum period) are reported by 31 per cent of women.
Postpartum bleeding
According to the WHO, haemorrhage is the leading cause of maternal death worldwide, accounting for approximately 27.1% of maternal deaths. Of maternal deaths due to haemorrhage, two-thirds are caused by postpartum haemorrhage. The causes of postpartum haemorrhage can be separated into four main categories: tone, trauma, tissue, and thrombin. Tone represents uterine atony, the failure of the uterus to contract adequately following delivery. Trauma includes lacerations or uterine rupture. Tissue includes conditions that can lead to a retained placenta. Thrombin, a molecule central to the body's blood clotting system, represents all coagulopathies.
Postpartum infections
Postpartum infections, also historically known as childbed fever and medically as puerperal fever, are any bacterial infections of the reproductive tract following childbirth or miscarriage. Signs and symptoms usually include a fever greater than 38.0 °C (100.4 °F), chills, lower abdominal pain, and possibly bad-smelling vaginal discharge. The infection usually occurs after the first 24 hours and within the first ten days following delivery. Infection remains a major cause of maternal deaths and morbidity in the developing world. The work of Ignaz Semmelweis was seminal in the pathophysiology and treatment of childbed fever and his work saved many lives.
Psychological complications
Childbirth can be an intense event, and strong emotions, both positive and negative, can be brought to the surface. Abnormal and persistent fear of childbirth is known as tokophobia. The prevalence of fear of childbirth around the world ranges between 4 and 25 per cent, with 3 to 7 per cent of pregnant women having clinical fear of childbirth.

Most new mothers may experience mild feelings of unhappiness and worry after giving birth. Babies require a lot of care, so it is normal for mothers to be worried about, or tired from, providing that care. These feelings, often termed the "baby blues", affect up to 80 per cent of mothers. They are somewhat mild, last a week or two, and usually go away on their own.

Postpartum depression is different from the "baby blues". With postpartum depression, feelings of sadness and anxiety can be extreme and might interfere with a woman's ability to care for herself or her family. Because of the severity of the symptoms, postpartum depression usually requires treatment. The condition, which occurs in nearly 15 per cent of births, may begin shortly before or any time after childbirth, but commonly begins between a week and a month after delivery.

Childbirth-related posttraumatic stress disorder is a psychological disorder that can develop in women who have recently given birth. Causes include issues such as an emergency C-section, preterm labour, inadequate care during labour, lack of social support following childbirth, and others. Symptoms include intrusive symptoms such as flashbacks and nightmares, as well as symptoms of avoidance (including amnesia for the whole or parts of the event), problems in developing a mother-child attachment, and others similar to those commonly experienced in posttraumatic stress disorder (PTSD). Many women who are experiencing symptoms of PTSD after childbirth are misdiagnosed with postpartum depression or adjustment disorders. These diagnoses can lead to inadequate treatment.

Postpartum psychosis is a rare psychiatric emergency in which symptoms of high mood and racing thoughts (mania), depression, severe confusion, loss of inhibition, paranoia, hallucinations and delusions set in suddenly in the first two weeks after childbirth. The symptoms vary and can change quickly. It usually requires hospitalisation. The most severe symptoms last from two to 12 weeks, and recovery takes six months to a year.
Fetal complications
Five causes make up about 80 per cent of newborn deaths globally: prematurity, low birth weight, infections, lack of oxygen at birth, and trauma during birth.
Stillbirth
Stillbirth is typically defined as fetal death at or after 20 to 28 weeks of pregnancy. It results in a baby born without signs of life. Worldwide, prevention of most stillbirths is possible with improved health systems. About half of stillbirths occur during childbirth, and stillbirth is more common in the developing world than in the developed world. Depending on how far along the pregnancy is, medications may be used to start labour, or a type of surgery known as dilation and evacuation may be carried out. Following a stillbirth, women are at higher risk of another one; however, most subsequent pregnancies do not have similar problems. Worldwide in 2019 there were about 2 million stillbirths that occurred after 28 weeks of pregnancy, equating to one in 72 total births, or one every 16 seconds. Stillbirths are more common in South Asia and Sub-Saharan Africa. Stillbirth rates have declined, though more slowly since the 2000s.
Preterm birth
Preterm birth is the birth of an infant at fewer than 37 weeks gestational age. Globally, about 15 million infants are born before 37 weeks of gestation each year. Premature birth is the leading cause of death in children under five years of age, though many who survive experience disabilities, including learning difficulties and visual and hearing problems. Causes of early birth may be unknown or may be related to certain chronic conditions such as diabetes, infections, and other known causes. The World Health Organization has developed guidelines with recommendations to improve the chances of survival and health outcomes for preterm infants. If a pregnant woman enters preterm labour, delivery can be delayed by giving medications called tocolytics. Tocolytics delay labour by inhibiting contractions of the uterine muscles that progress labour. The most widely used tocolytics include beta agonists, calcium channel blockers, and magnesium sulfate. The goal of administering tocolytics is not to delay delivery to the point that the child can be delivered at term, but to postpone delivery long enough for the administration of glucocorticoids, which can help the fetal lungs mature enough to reduce morbidity and mortality from infant respiratory distress syndrome.
Post-term birth
The term postterm pregnancy is used to describe a condition in which a woman has not yet delivered her baby after 42 weeks of gestation, two weeks beyond the usual 40-week duration of pregnancy. Postmature births carry risks for both the mother and the baby, including meconium aspiration syndrome, fetal malnutrition, and stillbirth. The placenta, which supplies the baby with oxygen and nutrients, begins to age and will eventually fail after the 42nd week of gestation. Induced labour is indicated for postterm pregnancy.
Neonatal infection
Newborns are prone to infection in the first month of life. The organism S. agalactiae (group B streptococcus, or GBS) is most often the cause of these occasionally fatal infections. The baby contracts the infection from the mother during labour. In 2014 it was estimated that about one in 2,000 newborn babies have GBS bacterial infections within the first week of life, usually evident as respiratory disease, general sepsis, or meningitis. Untreated sexually transmitted infections (STIs) are associated with congenital infections and infections in newborn babies, particularly in areas where rates of infection remain high. The majority of STIs have no symptoms or only mild symptoms that may not be recognised. Mortality rates resulting from some infections may be high; for example, the overall perinatal mortality rate associated with untreated syphilis is 30 per cent.
Perinatal asphyxia
Perinatal asphyxia is the medical condition resulting from deprivation of oxygen to a newborn infant that lasts long enough during the birth process to cause physical harm. Hypoxic damage can occur to most of the infant's organs (heart, lungs, liver, gut, kidneys), but brain damage is of most concern and is perhaps the least likely to heal quickly or completely. Oxygen deprivation can lead to permanent disabilities in the child, such as cerebral palsy.
Mechanical fetal injury
Risk factors for fetal birth injury include fetal macrosomia (big baby), maternal obesity, the need for instrumental delivery, and an inexperienced attendant. Specific situations that can contribute to birth injury include breech presentation and shoulder dystocia. Most fetal birth injuries resolve without long-term harm, but brachial plexus injury may lead to Erb's palsy or Klumpke's paralysis.
History
Role of males
Historically, women have been attended and supported by other women during labour and birth. Midwife training in European cities began in the 1400s, but rural women were usually assisted by female family or friends. However, it was not simply a ladies' social bonding event, as some historians have portrayed; fear and pain often filled the atmosphere, as death during childbirth was a common occurrence. In the United States before the 1950s, a father would not be in the birthing room. It did not matter if it was a home birth; the father would be waiting downstairs or in another room in the home. If it was in a hospital, then the father would wait in the waiting room. Fathers were only permitted in the room if the life of the mother or baby was severely at risk. In 1522, a German physician was sentenced to death for sneaking into a delivery room dressed as a woman.

The majority of guidebooks related to pregnancy and childbirth were written by men who had never been involved in the birthing process. A Greek physician, Soranus of Ephesus, wrote a book about obstetrics and gynaecology in the second century, which was referenced for the next thousand years. The book contained endless home remedies for pregnancy and childbirth, many of which would be considered heinous by modern women and medical professionals.

Both preterm and full-term infants benefit from skin-to-skin contact, sometimes called kangaroo care, immediately following birth and for the first few weeks of life. Some fathers have begun to hold their newborns skin to skin; the new baby is familiar with the father's voice, and it is believed that contact with the father helps the infant to stabilise and promotes father-to-infant bonding. A 2019 review found that the level of oxytocin increased not only in mothers who had experienced early skin-to-skin contact with their infants but in the fathers as well, suggesting a neurobiological connection.
If the infant's mother had a caesarean birth, the father can hold the baby in skin-to-skin contact while the mother recovers from the anaesthetic.
Hospitals
Historically, most women gave birth at home without emergency medical care available. In the early days of the hospitalisation of childbirth, a 17th-century maternity ward in Paris was incredibly congested, with up to five pregnant women sharing one bed. At this hospital, one in five women died during the birthing process. At the onset of the Industrial Revolution, giving birth at home became more difficult due to congested living spaces and dirty living conditions. That drove urban and lower-class women to newly available hospitals, while wealthy and middle-class women continued to labour at home. Consequently, wealthier women experienced lower maternal mortality rates than those of a lower social class. Throughout the 1900s, there was an increasing availability of hospitals, and more women began going into hospital for labour and delivery. In the United States, 5% of women gave birth in hospitals in 1900. By 1930, 50% of all women and 75% of urban-dwelling women delivered in hospitals. By 1960, this number increased to 96%. By the 1970s, home birth rates fell to approximately 1%. In the United States, the middle classes were especially receptive to the medicalisation of childbirth, which promised a safer and less painful labour.

Accompanied by the shift from home to hospital was the shift from midwife to physician. Male physicians began to replace female midwives in Europe and the United States in the 1700s. The rise in status and popularity of this new position was accompanied by a drop in status for midwives. By the 1800s, affluent families were primarily calling male doctors to assist with their deliveries, and female midwives were seen as a resource for women who could not afford better care. That completely removed women from assisting in labour, as only men were eligible to become doctors at the time.
Additionally, it privatised the birthing process, as family members and friends were often banned from the delivery room. There was opposition to the change from both progressive feminists and religious conservatives. The feminists were concerned about job security for a role that had traditionally been held by women. The conservatives argued that it was immoral for a woman to be exposed in such a way in front of a man. For that reason, many male obstetricians performed deliveries in dark rooms or with their patient fully covered with a drape.
Baby Friendly Hospitals
In 1991 the WHO launched a global program, the Baby Friendly Hospital Initiative (BFHI), that encourages birthing centers and hospitals to institute procedures that encourage mother/baby bonding and breastfeeding. The Johns Hopkins Hospital describes the process of receiving the Baby Friendly designation:
It involves changing long-standing policies, protocols and behaviors. The Baby-Friendly Hospital Initiative includes a very rigorous credentialing process that includes a two-day site visit, where assessors evaluate policies, community partnerships and education plans, as well as interview patients, physicians and staff members.
Major health organizations, including the CDC, support the BFHI. As of 2019, 28% of hospitals in the US had been accredited by the WHO.
Medication
The use of pain medication in labour has been a controversial issue for hundreds of years. A Scottish woman was burned at the stake in 1591 for requesting pain relief in the delivery of twins. Medication became more acceptable in 1853, when Queen Victoria used chloroform as pain relief during labour. The use of morphine and scopolamine, a combination known as "twilight sleep", originated in Germany and was popularised by the German physicians Bernard Kronig and Karl Gauss. This concoction offered minor pain relief but mostly allowed women to completely forget the entire delivery process. Under twilight sleep, mothers were often blindfolded and restrained as they experienced the immense pain of childbirth. The cocktail came with severe side effects, such as decreased uterine contractions and an altered mental state. Additionally, babies delivered with the use of childbirth drugs often experienced temporarily ceased breathing. The feminist movement in the United States openly and actively supported the use of twilight sleep, which was introduced to the country in 1914. Some physicians, many of whom had been using painkillers such as opium, cocaine, and quinine for the previous fifty years, embraced the new drug; others were hesitant.
Caesarean sections
There are many conflicting stories of the first successful caesarean section (or C-section) in which both mother and baby survived. It is, however, known that the procedure had been attempted for hundreds of years before it became accepted at the beginning of the twentieth century. While forceps have gone through periods of high popularity, today they are used in only approximately 10 per cent of deliveries. The C-section has become the more popular solution for difficult deliveries. In 2005, one-third of babies were born via C-section. Historically, surgical delivery was a last-resort method of extracting a baby from its deceased or dying mother. Today, caesarean delivery on maternal request is a medically unnecessary caesarean section, in which the infant is born by caesarean section at the request of the parent even though there is no medical indication for the surgery.
Natural childbirth
The reemergence of "natural childbirth" began in Europe and was adopted by some in the US as early as the late 1940s. Early supporters believed that the drugs used during deliveries interfered with "happy childbirth" and could negatively impact the newborns "emotional wellbeing". By the 1970s, the call for natural childbirth was spread nationwide, in conjunction with the second-wave of the feminist movement. While it is still most common for American women to deliver in the hospital, supporters of natural birth still widely exist, especially in the UK where midwife-assisted home births have gained popularity.
Epidemiology
The United Nations Population Fund estimated that 303,000 women died of pregnancy or childbirth related causes in 2015. These causes range from severe bleeding to obstructed labour, for which there are highly effective interventions. As women have gained access to family planning and skilled birth attendants with backup emergency obstetric care, the global maternal mortality ratio has fallen from 385 maternal deaths per 100,000 live births in 1990 to 216 deaths per 100,000 live births in 2015, and it was reported in 2017 that many countries had halved their maternal death rates in the last 10 years.

Outcomes for mothers in childbirth were especially poor before antibiotics were discovered in the 1930s, because of high rates of puerperal fever. Until germ theory was accepted in the mid-1800s, it was assumed that puerperal fever was caused by a variety of sources, including the leakage of breast milk into the body and anxiety. Later, it was discovered that puerperal fever was transmitted by the dirty hands and tools of doctors. Home births facilitated by trained midwives produced the best outcomes from 1880 to 1930 in the US and Europe, whereas physician-facilitated hospital births produced the worst. The change in the trend of maternal mortality can be attributed to the widespread use of antibiotics, along with the progression of medical technology, more extensive physician training, and less medical interference with normal deliveries. Since the US began recording childbirth statistics in 1915, it has had historically poor maternal mortality rates in comparison to other developed countries. Britain started recording maternal mortality data from 1880 onward.
Society and culture
Distress levels vary widely during pregnancy as well as during labour and delivery. They appear to be influenced by fear and anxiety levels, experience with prior childbirth, cultural ideas of childbirth pain, mobility during labour, and the support received during labour. Personal expectations, the amount of support from caregivers, the quality of the caregiver-patient relationship, and involvement in decision-making are more important to mothers' overall satisfaction with the birthing experience than other factors such as age, socioeconomic status, ethnicity, preparation, physical environment, pain, immobility, or medical interventions.
Costs
According to a 2013 analysis commissioned by The New York Times and performed by Truven Health Analytics, the cost of childbirth varies dramatically by country. In the United States, the amount actually paid by insurance companies or other payers in 2012 averaged $9,775 for an uncomplicated conventional delivery and $15,041 for a caesarean birth. The aggregate charges of healthcare facilities for the four million annual births in the United States were estimated at over $50 billion. The summed cost of prenatal care, childbirth, and newborn care came to $30,000 for a vaginal delivery and $50,000 for a caesarean section.

In the United States, childbirth hospital stays have some of the lowest ICU utilisation rates. Vaginal delivery with and without complicating diagnoses and caesarean section with and without comorbidities or major comorbidities account for four of the 15 types of hospital stays with low rates of ICU utilisation (where less than 20% of visits were admitted to the ICU). During stays with ICU services, approximately 20% of costs were attributable to the ICU.

A 2013 study found varying costs by facility for childbirth expenses in California, ranging from $3,296 to $37,227 for a vaginal birth and from $8,312 to $70,908 for a caesarean birth. Beginning in 2014, the National Institute for Health and Care Excellence began recommending that many women give birth at home under the care of a midwife rather than an obstetrician, citing lower expenses and better healthcare outcomes. The median cost associated with home birth was estimated to be about $1,500, vs. about $2,500 in hospital.
Location
Childbirth routinely occurs in hospitals in many developed countries. Before the 20th century, and in some countries to the present day, such as the Netherlands, it has more typically occurred at home.

In rural and remote communities of many countries, hospitalised childbirth may not be readily available or the best option. Maternal evacuation is the predominant risk-management method for assisting mothers in these communities: the process of relocating pregnant women in remote communities to deliver their babies in a nearby urban hospital setting. This practice is common in Indigenous Inuit and Northern Manitoban communities in Canada as well as Australian Aboriginal communities. Research has considered the negative effects of maternal evacuation due to the lack of social support provided to these women; these effects include an increase in maternal and newborn complications and postpartum depression, and decreased breastfeeding rates.

The exact location in which childbirth takes place is an important factor in determining nationality, in particular for birth aboard aircraft and ships.
Facilities
Facilities for childbirth include:
A labour ward, also called a delivery ward or labour and delivery, is generally a department of a hospital that focuses on providing health care to women and their children during childbirth. It is generally closely linked to the hospital's neonatal intensive care unit and/or obstetric surgery unit, if present. A maternity ward or maternity unit may include facilities both for childbirth and for postpartum rest and observation of mothers in normal as well as complicated cases.
A maternity hospital is a hospital that specialises in caring for women while they are pregnant and during childbirth, and provides care for newborn babies.
A birthing center generally presents a simulated home-like environment. Birthing centers may be located on hospital grounds or "free standing" (that is, not affiliated with a hospital).
A home birth is usually accomplished with the assistance of a midwife. Some women choose to give birth at home without any professionals present, termed an unassisted childbirth.
Associated occupations
Different categories of birth attendants may provide support and care during pregnancy and childbirth, although there are important differences across categories based on professional training and skills, practice regulations, and the nature of care delivered. Many of these occupations are highly professionalised, but other roles exist on a less formal basis.
"Childbirth educators" are instructors who aim to teach pregnant women and their partners about the nature of pregnancy, labour signs and stages, techniques for giving birth, breastfeeding and newborn baby care. Training for this role can be found in hospital settings or through independent certifying organisations. Each organisation teaches its own curriculum and each emphasises different techniques. The Lamaze technique is one well-known example.
Doulas are assistants who support mothers during pregnancy, labour, birth, and postpartum. They are not medical attendants; rather, they provide emotional support and non-medical pain relief for women during labour. Like childbirth educators and other unlicensed assistive personnel, certification to become a doula is not compulsory; thus, anyone can call themselves a doula or a childbirth educator.

Confinement nannies are individuals who are employed to provide assistance and stay with the mothers at their home after childbirth. They are usually experienced mothers who have taken courses on how to take care of mothers and newborn babies.

Midwives are autonomous practitioners who provide basic and emergency health care before, during and after pregnancy and childbirth, generally to women with low-risk pregnancies. Midwives are trained to assist during labour and birth, either through direct-entry or nurse-midwifery education programs. Jurisdictions where midwifery is a regulated profession will typically have a registering and disciplinary body for quality control, such as the American Midwifery Certification Board in the United States, the College of Midwives of British Columbia in Canada or the Nursing and Midwifery Council in the United Kingdom. In the past, midwifery played a crucial role in childbirth throughout most indigenous societies. Although western civilisations attempted to assimilate their birthing technologies into certain indigenous societies, such as Turtle Island, and displace midwifery, the National Aboriginal Council of Midwives brought back the cultural ideas and midwifery practices that were once associated with indigenous birthing. In jurisdictions where midwifery is not a regulated profession, traditional birth attendants, also known as traditional or lay midwives, may assist women during childbirth, although they do not typically receive formal health care education and training.
Medical doctors who practise in the field of childbirth include categorically specialised obstetricians, family practitioners and general practitioners whose training, skills and practices include obstetrics, and in some contexts general surgeons. These physicians and surgeons variously provide care across the whole spectrum of normal and abnormal births and pathological labour conditions. Categorically specialised obstetricians are qualified surgeons, so they can undertake surgical procedures relating to childbirth. Some family practitioners or general practitioners also perform obstetrical surgery. Obstetrical procedures include cesarean sections, episiotomies, and assisted delivery. Categorical specialists in obstetrics are commonly trained in both obstetrics and gynaecology (OB/GYN), and may provide other medical and surgical gynaecological care, and may incorporate more general, well-woman, primary care elements in their practices. Maternal–fetal medicine specialists are obstetrician/gynecologists subspecialised in managing and treating high-risk pregnancy and delivery.
Anaesthetists or anesthesiologists are medical doctors who specialise in pain relief and the use of drugs to facilitate surgery and other painful procedures. They may contribute to the care of a woman in labour by performing an epidural or by providing anaesthesia (often spinal anaesthesia) for Cesarean section or forceps delivery. They are experts in pain management during childbirth.
Obstetric nurses assist midwives, doctors, women, and babies before, during, and after the birth process, in the hospital system. They hold various nursing certifications and typically undergo additional obstetric training in addition to standard nursing training.
Paramedics are healthcare providers who are able to provide emergency care to both the mother and infant during and after delivery, using a wide range of medications and tools in an ambulance. They are capable of delivering babies but can do very little for infants that become "stuck" and are unable to be delivered vaginally.
Lactation consultants assist the mother and newborn to breastfeed successfully. A health visitor comes to see the mother and baby at home, usually within 24 hours of discharge, and checks the infant's adaptation to extrauterine life and the mother's postpartum physiological changes.
Non-western communities
Cultural values, assumptions, and practices of pregnancy and childbirth vary across cultures. For example, some Maya women who work in the agricultural fields of some rural communities will usually continue to work in a similar function to how they normally would throughout pregnancy, in some cases working until labour begins.

Comfort and proximity to extended family and social support systems may be a childbirth priority of many communities in developing countries, such as the Chillihuani in Peru and the Mayan town of San Pedro La Laguna. Home births can help women in these cultures feel more comfortable, as they are in their own home with their family around them helping out in different ways. Traditionally, it has been rare in these cultures for the mother to lie down during childbirth, opting instead for standing, kneeling, or walking around prior to and during birthing.

Some communities rely heavily on religion for their birthing practices. It is believed that if certain acts are carried out, the child will have a healthier and happier future. One example is the belief among the Chillihuani that if a knife or scissors are used for cutting the umbilical cord, the child will go through clothes very quickly. To prevent this, a jagged ceramic tile is used to cut the umbilical cord. In Mayan societies, ceremonial gifts are presented to the mother throughout pregnancy and childbirth in order to help her into the beginning of her child's life. Ceremonies and customs can vary greatly between countries.
Collecting stem cells
It is currently possible to collect two types of stem cells during childbirth: amniotic stem cells and umbilical cord blood stem cells. They are being studied as possible treatments of a number of conditions.
Placentophagy
Some animal mothers are known to eat their afterbirth, a behaviour called placentophagy. In some cultures the placenta may be consumed as a nutritional boost, but it may also be seen as a special part of birth and eaten by the newborn's family ceremonially. In the developed world the placenta may be eaten in the belief that it reduces postpartum bleeding, increases milk supply, provides micronutrients such as iron, and improves mood and boosts energy. The CDC advises against this practice, saying it has not been shown to promote health but may transmit disease organisms from the placenta into the mother's breastmilk, infecting the baby.
See also
References
External links
Spontaneous Vaginal Delivery, Video by Merck Manual Professional Edition
Maternal Morbidity/Mortality in the Media |
Lambert–Eaton myasthenic syndrome | Lambert–Eaton myasthenic syndrome (LEMS) is a rare autoimmune disorder characterized by muscle weakness of the limbs.
Around 60% of those with LEMS have an underlying malignancy, most commonly small-cell lung cancer; it is therefore regarded as a paraneoplastic syndrome (a condition that arises as a result of cancer elsewhere in the body). It is the result of antibodies against presynaptic voltage-gated calcium channels, and likely other nerve terminal proteins, in the neuromuscular junction (the connection between nerves and the muscles that they supply). The diagnosis is usually confirmed with electromyography and blood tests; these also distinguish it from myasthenia gravis, a related autoimmune neuromuscular disease. If the disease is associated with cancer, direct treatment of the cancer often relieves the symptoms of LEMS. Other treatments often used are steroids and azathioprine, which suppress the immune system; intravenous immunoglobulin, which outcompetes autoreactive antibodies for Fc receptors; and pyridostigmine and 3,4-diaminopyridine, which enhance neuromuscular transmission. Occasionally, plasma exchange is required to remove the antibodies. The condition affects about 3.4 per million people. LEMS usually occurs in people over 40 years of age, but may occur at any age.
Signs and symptoms
The weakness from LEMS typically involves the muscles of the proximal arms and legs (the muscles closer to the trunk). In contrast to myasthenia gravis, the weakness affects the legs more than the arms. This leads to difficulties climbing stairs and rising from a sitting position. Weakness is often relieved temporarily after exertion or physical exercise. High temperatures can worsen the symptoms. Weakness of the bulbar muscles (muscles of the mouth and throat) is occasionally encountered. Weakness of the eye muscles is uncommon. Some may have double vision, drooping of the eyelids, and difficulty swallowing, but generally only together with leg weakness; this too distinguishes LEMS from myasthenia gravis, in which eye signs are much more common. In the advanced stages of the disease, weakness of the respiratory muscles may occur. Some may also experience problems with coordination (ataxia). Three-quarters of people with LEMS also have disruption of the autonomic nervous system. This may be experienced as a dry mouth, constipation, blurred vision, impaired sweating, and orthostatic hypotension (falls in blood pressure on standing, potentially leading to blackouts). Some report a metallic taste in the mouth. On neurological examination, the weakness demonstrated with normal testing of power is often less severe than would be expected on the basis of the symptoms. Strength improves further with repeated testing, e.g. improvement of power on repeated hand grip (a phenomenon known as "Lambert's sign"). At rest, reflexes are typically reduced; with muscle use, reflex strength increases. This is a characteristic feature of LEMS. The pupillary light reflex may be sluggish. In LEMS associated with lung cancer, most have no suggestive symptoms of cancer at the time, such as cough, coughing up blood, and unintentional weight loss. LEMS associated with lung cancer may be more severe.
Causes
LEMS is often associated with lung cancer (50–70%), specifically small-cell carcinoma, making LEMS a paraneoplastic syndrome. Of the people with small-cell lung cancer, 1–3% have LEMS. In most of these cases, LEMS is the first symptom of the lung cancer, which is otherwise asymptomatic. LEMS may also be associated with endocrine diseases, such as hypothyroidism (an underactive thyroid gland) or diabetes mellitus type 1. Myasthenia gravis, too, may occur in the presence of tumors (thymoma, a tumor of the thymus in the chest); people with MG without a tumor and people with LEMS without a tumor have similar genetic variations that seem to predispose them to these diseases. HLA-DR3-B8 (an HLA subtype), in particular, seems to predispose to LEMS.
Mechanism
In normal neuromuscular function, a nerve impulse is carried down the axon (the long projection of a nerve cell) from the spinal cord. At the nerve ending in the neuromuscular junction, where the impulse is transferred to the muscle cell, the nerve impulse leads to the opening of voltage-gated calcium channels (VGCC), the influx of calcium ions into the nerve terminal, and the calcium-dependent triggering of synaptic vesicle fusion with the plasma membrane. These synaptic vesicles contain acetylcholine, which is released into the synaptic cleft and stimulates the acetylcholine receptors on the muscle. The muscle then contracts. In LEMS, antibodies against VGCC, particularly the P/Q-type VGCC, decrease the amount of calcium that can enter the nerve ending, hence less acetylcholine can be released at the neuromuscular junction. Apart from skeletal muscle, the autonomic nervous system also requires acetylcholine neurotransmission; this explains the occurrence of autonomic symptoms in LEMS. P/Q voltage-gated calcium channels are also found in the cerebellum, explaining why some experience problems with coordination. The antibodies bind particularly to the part of the receptor known as the "domain III S5–S6 linker peptide". Antibodies may also bind other VGCCs. Some have antibodies that bind synaptotagmin, the protein sensor for calcium-regulated vesicle fusion. Many people with LEMS, both with and without VGCC antibodies, have detectable antibodies against the M1 subtype of the acetylcholine receptor; their presence may participate in a lack of compensation for the weak calcium influx. Apart from the decreased calcium influx, a disruption of active zone vesicle release sites also occurs, which may also be antibody-dependent, since people with LEMS have antibodies to components of these active zones (including voltage-dependent calcium channels). Together, these abnormalities lead to the decrease in muscle contractility.
Repeated stimuli over a period of about 10 seconds eventually lead to sufficient delivery of calcium, and an increase in muscle contraction to normal levels, which can be demonstrated in an electrodiagnostic medicine study as an increasing amplitude of repeated compound muscle action potentials. The antibodies found in LEMS associated with lung cancer also bind to calcium channels in the cancer cells, and it is presumed that the antibodies originally develop as a reaction to these cells. It has been suggested that the immune reaction to the cancer cells suppresses their growth and improves the prognosis of the cancer.
Diagnosis
The diagnosis is usually made with nerve conduction study (NCS) and electromyography (EMG), which is one of the standard tests in the investigation of otherwise unexplained muscle weakness. EMG involves the insertion of small needles into the muscles. NCS involves administering small electrical impulses to the nerves, on the surface of the skin, and measuring the electrical response of the muscle in question. NCS investigation in LEMS primarily involves evaluation of compound motor action potentials (CMAPs) of affected muscles, and sometimes EMG single-fiber examination can be used. CMAPs show small amplitudes but normal latency and conduction velocities. If repeated impulses are administered (2 per second, or 2 Hz), it is normal for CMAP amplitudes to become smaller as the acetylcholine in the motor end plate is depleted. In LEMS, this decrease is larger than observed normally. Eventually, stored acetylcholine is made available, and the amplitudes increase again. In LEMS, this remains insufficient to reach a level sufficient for transmission of an impulse from nerve to muscle; all of this can be attributed to insufficient calcium in the nerve terminal. A similar pattern is witnessed in myasthenia gravis. In LEMS, in response to exercising the muscle, the CMAP amplitude increases greatly (over 200%, often much more). This also occurs on the administration of a rapid burst of electrical stimuli (20 impulses per second for 10 seconds), and is attributed to the influx of calcium in response to these stimuli. On single-fiber examination, features may include increased jitter (seen in other diseases of neuromuscular transmission) and blocking. Blood tests may be performed to exclude other causes of muscle disease (elevated creatine kinase may indicate a myositis, and abnormal thyroid function tests may indicate thyrotoxic myopathy). Antibodies against voltage-gated calcium channels can be identified in 85% of people with EMG-confirmed LEMS.
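The post-exercise increment described above is a simple percentage calculation from two amplitude measurements. The sketch below illustrates the arithmetic only; the function name and the sample amplitudes are invented for illustration, and this is not a clinical tool:

```python
def cmap_increment_percent(baseline_mv: float, post_exercise_mv: float) -> float:
    """Percent increase in CMAP amplitude from a baseline measurement
    to one taken just after brief exercise (or 20 Hz stimulation)."""
    return (post_exercise_mv - baseline_mv) / baseline_mv * 100.0

# Hypothetical amplitudes in millivolts: a rise from 1.0 mV to 4.5 mV
# is a 350% increment, well above the >200% figure cited above.
print(cmap_increment_percent(1.0, 4.5))  # 350.0
```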
Once LEMS is diagnosed, investigations such as a CT scan of the chest are usually performed to identify any possible underlying lung tumors. Around 50–60% of these are discovered immediately after the diagnosis of LEMS. The remainder are diagnosed later, usually within two years and typically within four years. As a result, scans are typically repeated every six months for the first two years after diagnosis. While CT of the lungs is usually adequate, a positron emission tomography scan of the body may also be performed to search for an occult tumour, particularly of the lung.
Treatment
If LEMS is caused by an underlying cancer, treatment of the cancer usually leads to resolution of the symptoms. Treatment usually consists of chemotherapy, with radiation therapy in those with limited disease.
Immunosuppression
Some evidence supports the use of intravenous immunoglobulin (IVIG). Immune suppression tends to be less effective than in other autoimmune diseases. Prednisolone (a glucocorticoid or steroid) suppresses the immune response, and the steroid-sparing agent azathioprine may replace it once therapeutic effect has been achieved. IVIG may be used with a degree of effectiveness. Plasma exchange (or plasmapheresis), the removal of plasma proteins such as antibodies and replacement with normal plasma, may provide improvement in acute severe weakness. Again, plasma exchange is less effective than in other related conditions such as myasthenia gravis, and additional immunosuppressive medication is often needed.
Other
Three other treatment modalities also aim at improving LEMS symptoms, namely pyridostigmine, 3,4-diaminopyridine (amifampridine), and guanidine. They work to improve neuromuscular transmission.
Tentative evidence supports 3,4-diaminopyridine, at least for a few weeks. The 3,4-diaminopyridine base or the water-soluble 3,4-diaminopyridine phosphate may be used. Both 3,4-diaminopyridine formulations delay the repolarization of nerve terminals after a discharge, thereby allowing more calcium to accumulate in the nerve terminal. Pyridostigmine decreases the degradation of acetylcholine after release into the synaptic cleft, and thereby improves muscle contraction. An older agent, guanidine, causes many side effects and is not recommended. 4-Aminopyridine (dalfampridine), an agent related to 3,4-diaminopyridine, causes more side effects than 3,4-DAP and is also not recommended.
History
Anderson and colleagues from St Thomas' Hospital, London, were the first to mention a case with possible clinical findings of LEMS in 1953, but Edward H. Lambert, Lee Eaton, and E.D. Rooke at the Mayo Clinic were the first physicians to substantially describe the clinical and electrophysiological findings of the disease in 1956. In 1972, the clustering of LEMS with other autoimmune diseases led to the hypothesis that it was caused by autoimmunity. Studies in the 1980s confirmed its autoimmune nature, and research in the 1990s demonstrated the link with antibodies against P/Q-type voltage-gated calcium channels.
References
== External links == |
Larva migrans | Larva migrans can refer to:
Cutaneous larva migrans, a skin disease in humans, caused by the larvae of various nematode parasites
Visceral larva migrans, a condition in children caused by the migratory larvae of nematodes
Ocular larva migrans, an ocular form of the larva migrans syndrome that occurs when larvae invade the eye
Larva migrans profundus, also known as Gnathostomiasis |
Lead poisoning | Lead poisoning, also known as plumbism and saturnism, is a type of metal poisoning caused by lead in the body. The brain is the most sensitive organ. Symptoms may include abdominal pain, constipation, headaches, irritability, memory problems, infertility, and tingling in the hands and feet. It causes almost 10% of intellectual disability of otherwise unknown cause and can result in behavioral problems. Some of the effects are permanent. In severe cases, anemia, seizures, coma, or death may occur. Exposure to lead can occur through contaminated air, water, dust, food, or consumer products. Children are at greater risk, as they are more likely to put objects in their mouth, such as those that contain lead paint, and they absorb a greater proportion of the lead that they eat. Exposure at work is a common cause of lead poisoning in adults, with certain occupations at particular risk. Diagnosis is typically by measurement of the blood lead level. The Centers for Disease Control and Prevention (US) has set the upper limit for blood lead for adults at 10 µg/dL (10 µg/100 g) and for children at 3.5 µg/dL (before October 2021, 5 µg/dL). Elevated lead may also be detected by changes in red blood cells or dense lines in the bones of children as seen on X-ray. Lead poisoning is preventable. This includes individual efforts such as removing lead-containing items from the home; workplace efforts such as improved ventilation and monitoring; and state and national policies that ban lead in products such as paint, gasoline, ammunition, wheel weights, and fishing weights, reduce allowable levels in water or soil, and provide for cleanup of contaminated soil. Worker education can help as well. The major treatments are removal of the source of lead and the use of medications that bind lead so it can be eliminated from the body, known as chelation therapy.
Chelation therapy in children is recommended when blood levels are greater than 40–45 µg/dL. Medications used include dimercaprol, edetate calcium disodium, and succimer. In 2016, lead is believed to have resulted in 540,000 deaths worldwide. It occurs most commonly in the developing world, but there are also numerous cases in the developed world, with thousands of American communities carrying higher lead burdens than seen during the peak of the Flint water crisis. Those who are poor are at greater risk. Lead is believed to account for 0.6% of the world's disease burden. According to one study, half of the US population was exposed to substantially detrimental lead levels in early childhood, mainly from car exhaust, whose lead pollution peaked in the 1970s and caused widespread loss in cognitive ability. People have been mining and using lead for thousands of years. Descriptions of lead poisoning date to at least 2000 BC, while efforts to limit lead's use date back to at least the 16th century. Concerns about low levels of exposure began in the 1970s, as no safe threshold for lead exposure has been found.
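The reference values quoted above can be expressed as a simple threshold check. This is an illustrative sketch only, using the CDC figures cited in the text (3.5 µg/dL for children, 10 µg/dL for adults); the function name is invented, and real screening decisions involve much more context:

```python
def exceeds_cdc_reference(blood_lead_ug_dl: float, is_child: bool) -> bool:
    """Compare a blood lead level (in µg/dL) against the CDC reference
    values quoted above: 3.5 µg/dL for children, 10 µg/dL for adults."""
    threshold = 3.5 if is_child else 10.0
    return blood_lead_ug_dl >= threshold

print(exceeds_cdc_reference(4.0, is_child=True))   # True: above the 3.5 child value
print(exceeds_cdc_reference(4.0, is_child=False))  # False: below the 10 adult value
```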
Classification
Classically, "lead poisoning" or "lead intoxication" has been defined as exposure to high levels of lead typically associated with severe health effects. Poisoning is a pattern of symptoms that occur with toxic effects from mid to high levels of exposure; toxicity is a wider spectrum of effects, including subclinical ones (those that do not cause symptoms). However, professionals often use "lead poisoning" and "lead toxicity" interchangeably, and official sources do not always restrict the use of "lead poisoning" to refer only to symptomatic effects of lead. The amount of lead in the blood and tissues, as well as the time course of exposure, determine toxicity.
Lead poisoning may be acute (from intense exposure of short duration) or chronic (from repeat low-level exposure over a prolonged period), but the latter is much more common.
Diagnosis and treatment of lead exposure are based on blood lead level (the amount of lead in the blood), measured in micrograms of lead per deciliter of blood (μg/dL). Urine lead levels may be used as well, though less commonly. In cases of chronic exposure, lead often sequesters in the highest concentrations first in the bones, then in the kidneys. If a provider is performing a provocative excretion test, or "chelation challenge", a measurement obtained from urine rather than blood is likely to provide a more accurate representation of total lead burden to a skilled interpreter. The US Centers for Disease Control and Prevention and the World Health Organization state that a blood lead level of 10 μg/dL or above is a cause for concern; however, lead may impair development and have harmful health effects even at lower levels, and there is no known safe exposure level. Authorities such as the American Academy of Pediatrics define lead poisoning as blood lead levels higher than 10 μg/dL. Lead forms a variety of compounds and exists in the environment in various forms. Features of poisoning differ depending on whether the agent is an organic compound (one that contains carbon), or an inorganic one. Organic lead poisoning is now very rare, because countries across the world have phased out the use of organic lead compounds as gasoline additives, but such compounds are still used in industrial settings. Organic lead compounds, which cross the skin and respiratory tract easily, affect the central nervous system predominantly.
Signs and symptoms
Lead poisoning can cause a variety of symptoms and signs which vary depending on the individual and the duration of lead exposure. Symptoms are nonspecific and may be subtle, and someone with elevated lead levels may have no symptoms. Symptoms usually develop over weeks to months as lead builds up in the body during a chronic exposure, but acute symptoms from brief, intense exposures also occur.
Symptoms from exposure to organic lead, which is probably more toxic than inorganic lead due to its lipid solubility, occur rapidly. Poisoning by organic lead compounds has symptoms predominantly in the central nervous system, such as insomnia, delirium, cognitive deficits, tremor, hallucinations, and convulsions. Symptoms may be different in adults and children; the main symptoms in adults are headache, abdominal pain, memory loss, kidney failure, male reproductive problems, and weakness, pain, or tingling in the extremities. Early symptoms of lead poisoning in adults are commonly nonspecific and include depression, loss of appetite, intermittent abdominal pain, nausea, diarrhea, constipation, and muscle pain. Other early signs in adults include malaise, fatigue, decreased libido, and problems with sleep. An unusual taste in the mouth and personality changes are also early signs. In adults, symptoms can occur at levels above 40 μg/dL, but are more likely to occur only above 50–60 μg/dL. Symptoms begin to appear in children generally at around 60 μg/dL. However, the lead levels at which symptoms appear vary widely depending on unknown characteristics of each individual. At blood lead levels between 25 and 60 μg/dL, neuropsychiatric effects such as delayed reaction times, irritability, and difficulty concentrating, as well as slowed motor nerve conduction and headache can occur. Anemia may appear at blood lead levels higher than 50 μg/dL. In adults, abdominal colic, involving paroxysms of pain, may appear at blood lead levels greater than 80 μg/dL. Signs that occur in adults at blood lead levels exceeding 100 μg/dL include wrist drop and foot drop, and signs of encephalopathy (a condition characterized by brain swelling), such as those that accompany increased pressure within the skull, delirium, coma, seizures, and headache. In children, signs of encephalopathy such as bizarre behavior, discoordination, and apathy occur at lead levels exceeding 70 μg/dL.
For both adults and children, it is rare to be asymptomatic if blood lead levels exceed 100 μg/dL.
Acute poisoning
In acute poisoning, typical neurological signs are pain, muscle weakness, numbness and tingling, and, rarely, symptoms associated with inflammation of the brain. Abdominal pain, nausea, vomiting, diarrhea, and constipation are other acute symptoms. Lead's effects on the mouth include astringency and a metallic taste. Gastrointestinal problems, such as constipation, diarrhea, poor appetite, or weight loss, are common in acute poisoning. Absorption of large amounts of lead over a short time can cause shock (insufficient fluid in the circulatory system) due to loss of water from the gastrointestinal tract. Hemolysis (the rupture of red blood cells) due to acute poisoning can cause anemia and hemoglobin in the urine. Damage to the kidneys can cause changes in urination, such as acquired Fanconi syndrome and decreased urine output. People who survive acute poisoning often go on to display symptoms of chronic poisoning.
Chronic poisoning
Chronic poisoning usually presents with symptoms affecting multiple systems, but is associated with three main types of symptoms: gastrointestinal, neuromuscular, and neurological. Central nervous system and neuromuscular symptoms usually result from intense exposure, while gastrointestinal symptoms usually result from exposure over longer periods. Signs of chronic exposure include loss of short-term memory or concentration, depression, nausea, abdominal pain, loss of coordination, and numbness and tingling in the extremities. Fatigue, problems with sleep, headaches, stupor, slurred speech, and anemia are also found in chronic lead poisoning. A "lead hue" of the skin with pallor and/or lividity is another feature. A blue line along the gum with bluish black edging to the teeth, known as a Burton line, is another indication of chronic lead poisoning. Children with chronic poisoning may refuse to play or may have hyperkinetic or aggressive behavior disorders. Visual disturbance may present with gradually progressing blurred vision as a result of central scotoma, caused by toxic optic neuritis.
Effects on children
A pregnant woman with elevated blood lead levels is at greater risk of premature birth or of delivering a baby with low birth weight. Children are more at risk for lead poisoning because their smaller bodies are in a continuous state of growth and development. Young children are much more vulnerable to lead poisoning, as they absorb 4 to 5 times more lead than an adult from a given source. Furthermore, children, especially as they are learning to crawl and walk, are constantly on the floor and therefore more prone to ingesting and inhaling dust that is contaminated with lead. The classic signs and symptoms in children are loss of appetite, abdominal pain, vomiting, weight loss, constipation, anemia, kidney failure, irritability, lethargy, learning disabilities, and behavioral problems. Slow development of normal childhood behaviors, such as talking and use of words, and permanent intellectual disability are both commonly seen. Although less common, it is possible for fingernails to develop leukonychia striata if exposed to abnormally high lead concentrations. On July 30, 2020, a report by UNICEF and Pure Earth revealed that lead poisoning is affecting children on a "massive and previously unknown scale". According to the report, one in three children, up to 800 million globally, have blood lead levels at or above 5 micrograms per decilitre (µg/dL), the amount at which action is required.
By organ system
Lead affects every one of the body's organ systems, especially the nervous system, but also the bones and teeth, the kidneys, and the cardiovascular, immune, and reproductive systems. Hearing loss and tooth decay have been linked to lead exposure, as have cataracts. Intrauterine and neonatal lead exposure promote tooth decay. Aside from the developmental effects unique to young children, the health effects experienced by adults are similar to those in children, although the thresholds are generally higher.
Kidneys
Kidney damage occurs with exposure to high levels of lead, and evidence suggests that lower levels can damage kidneys as well. The toxic effect of lead causes nephropathy and may cause Fanconi syndrome, in which the proximal tubular function of the kidney is impaired. Long-term exposure at levels lower than those that cause lead nephropathy have also been reported as nephrotoxic in patients from developed countries that had chronic kidney disease or were at risk because of hypertension or diabetes mellitus.
Lead poisoning inhibits excretion of the waste product urate and causes a predisposition for gout, in which urate builds up. This condition is known as saturnine gout.
Cardiovascular system
Evidence suggests lead exposure is associated with high blood pressure, and studies have also found connections between lead exposure and coronary heart disease, heart rate variability, and death from stroke, but this evidence is more limited. People who have been exposed to higher concentrations of lead may be at a higher risk for cardiac autonomic dysfunction on days when ozone and fine particles are higher.
Reproductive system
Lead affects both the male and female reproductive systems. In men, when blood lead levels exceed 40 μg/dL, sperm count is reduced and changes occur in volume of sperm, their motility, and their morphology.
A pregnant woman's elevated blood lead level can lead to miscarriage, prematurity, low birth weight, and problems with development during childhood. Lead is able to pass through the placenta and into breast milk, and blood lead levels in mothers and infants are usually similar. A fetus may be poisoned in utero if lead from the mother's bones is subsequently mobilized by the changes in metabolism due to pregnancy; increased calcium intake in pregnancy may help mitigate this phenomenon.
Nervous system
Lead affects the peripheral nervous system (especially motor nerves) and the central nervous system. Peripheral nervous system effects are more prominent in adults and central nervous system effects are more prominent in children. Lead causes the axons of nerve cells to degenerate and lose their myelin coats. Lead exposure in young children has been linked to learning disabilities, and children with blood lead concentrations greater than 10 μg/dL are in danger of developmental disabilities. Increased blood lead level in children has been correlated with decreases in intelligence, nonverbal reasoning, short-term memory, attention, reading and arithmetic ability, fine motor skills, emotional regulation, and social engagement. The effect of lead on children's cognitive abilities takes place at very low levels. There is apparently no lower threshold to the dose-response relationship (unlike other heavy metals such as mercury). Reduced academic performance has been associated with lead exposure even at blood lead levels lower than 5 μg/dL. Blood lead levels below 10 μg/dL have been reported to be associated with lower IQ and behavior problems such as aggression, in proportion with blood lead levels. Between blood lead levels of 5 and 35 μg/dL, an IQ decrease of 2–4 points for each μg/dL increase is reported in children. However, studies that show associations between low-level lead exposure and health effects in children may be affected by confounding and overestimate the effects of low-level lead exposure. High blood lead levels in adults are also associated with decreases in cognitive performance and with psychiatric symptoms such as depression and anxiety. It was found in a large group of current and former inorganic lead workers in Korea that blood lead levels in the range of 20–50 μg/dL were correlated with neurocognitive defects.
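The reported dose-response figure above (2–4 IQ points lost per μg/dL between 5 and 35 μg/dL) amounts to simple linear arithmetic over a clamped range. The following is a back-of-envelope sketch of that arithmetic only, not an epidemiological model; the function name and the midpoint default of 3 points are assumptions made for illustration:

```python
def estimated_iq_decrease(blood_lead_ug_dl: float,
                          points_per_ug_dl: float = 3.0) -> float:
    """Rough IQ decrement implied by the reported 2-4 points per ug/dL,
    applied only over the 5-35 ug/dL range cited above
    (midpoint of 3.0 points used by default)."""
    clamped = min(max(blood_lead_ug_dl, 5.0), 35.0)
    return (clamped - 5.0) * points_per_ug_dl

print(estimated_iq_decrease(10.0))  # 15.0: 5 ug/dL above the 5 ug/dL floor, times 3 points
```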
Increases in blood lead levels from about 50 to about 100 μg/dL in adults have been found to be associated with persistent, and possibly permanent, impairment of central nervous system function. Lead exposure in children is also correlated with neuropsychiatric disorders such as attention deficit hyperactivity disorder and anti-social behaviour. Elevated lead levels in children are correlated with higher scores on aggression and delinquency measures. A correlation has also been found between prenatal and early childhood lead exposure and violent crime in adulthood. Countries with the highest air lead levels have also been found to have the highest murder rates, after adjusting for confounding factors. A May 2000 study by economic consultant Rick Nevin theorizes that lead exposure explains 65% to 90% of the variation in violent crime rates in the US. A 2007 paper by the same author claims to show a strong association between preschool blood lead and subsequent crime rate trends over several decades across nine countries. Lead exposure in childhood appears to increase school suspensions and juvenile detention among boys. It is believed that the U.S. ban on lead paint in buildings in the late 1970s, as well as the phaseout of leaded gasoline in the 1970s and 1980s, partially contributed to the decline of violent crime in the United States since the early 1990s.
Exposure routes
Lead is a common environmental pollutant. Causes of environmental contamination include industrial use of lead, such as in facilities that process lead-acid batteries or produce lead wire or pipes, and metal recycling and foundries. Storage batteries and ammunition accounted for the largest amounts of lead consumed in the US economy each year, as of 2013. Children living near facilities that process lead, such as lead smelters, have been found to have unusually high blood lead levels. In August 2009, parents rioted in China after lead poisoning was found in nearly 2000 children living near zinc and manganese smelters. Lead exposure can occur from contact with lead in air, household dust, soil, water, and commercial products. Leaded gasoline has also been linked to increases in lead pollution, and some research has suggested a link between leaded gasoline and crime rates. Man-made lead pollution has been elevated in the air for the past 2000 years; lead pollution in the air is entirely due to human activity (mining and smelting, as well as leaded gasoline).
Occupational exposure
In adults, occupational exposure is the main cause of lead poisoning. People can be exposed when working in facilities that produce a variety of lead-containing products; these include radiation shields, ammunition, certain surgical equipment, developing dental X-ray films prior to digital X-rays (each film packet had a lead liner to prevent the radiation from going through), fetal monitors, plumbing, circuit boards, jet engines, and ceramic glazes. In addition, lead miners and smelters, plumbers and fitters, auto mechanics, glass manufacturers, construction workers, battery manufacturers and recyclers, firing range workers, and plastic manufacturers are at risk for lead exposure. Other occupations that present lead exposure risks include welding, manufacture of rubber, printing, zinc and copper smelting, processing of ore, combustion of solid waste, and production of paints and pigments. Lead exposure can also occur with intense use of gun ranges, regardless of whether these ranges are indoor or out. Parents who are exposed to lead in the workplace can bring lead dust home on clothes or skin and expose their children. Occupational exposure to lead increases the risk of cardiovascular disease, in particular: stroke, and high blood pressure.
Food
Lead may be found in food when food is grown in soil that is high in lead, when airborne lead contaminates the crops, when animals eat lead in their diet, or when lead enters the food from the containers it was stored or cooked in. Ingestion of lead paint and batteries is also a route of exposure for livestock, which can subsequently affect humans. Milk produced by contaminated cattle can be diluted to a lower lead concentration and sold for consumption. In Bangladesh, lead compounds have been added to turmeric to make it more yellow. This is believed to have started in the 1980s and continued as of 2019, and it is thought to be one of the main sources of high lead levels in the country. In Hong Kong, the maximum allowed lead content is 6 parts per million in solid foods and 1 part per million in liquid foods.
Paint
Some lead compounds are colorful and are used widely in paints, and lead paint is a major route of lead exposure in children. A study conducted in 1998–2000 found that 38 million housing units in the US had lead-based paint, down from a 1990 estimate of 64 million. Deteriorating lead paint can produce dangerous lead levels in household dust and soil. Deteriorating lead paint and lead-containing household dust are the main causes of chronic lead poisoning. The lead breaks down into the dust and, since children are more prone to crawling on the floor, it is easily ingested. Many young children display pica, eating things that are not food. Even a small amount of a lead-containing product such as a paint chip or a sip of glaze can contain tens or hundreds of milligrams of lead. Eating chips of lead paint presents a particular hazard to children, generally producing more severe poisoning than occurs from dust. Because removing lead paint from dwellings, e.g. by sanding or torching, creates lead-containing dust and fumes, it is generally safer to seal the lead paint under new paint (excepting moveable windows and doors, which create paint dust when operated). Alternatively, special precautions must be taken if the lead paint is to be removed. In oil painting, it was once common for colours such as yellow or white to be made with lead carbonate. Lead white oil colour was the main white of oil painters until superseded by compounds containing zinc or titanium in the mid-20th century. It is speculated that the painter Caravaggio, and possibly Francisco Goya and Vincent Van Gogh, had lead poisoning due to overexposure or carelessness when handling this colour.
Soil
Residual lead in soil contributes to lead exposure in urban areas. It has been thought that the more polluted an area is with various contaminants, the more likely it is to contain lead. However, this is not always the case, as there are several other causes of lead contamination in soil. Lead in soil may come from broken-down lead paint, residues from leaded gasoline, used engine oil, tire weights, or pesticides used in the past, from contaminated landfills, or from nearby industries such as foundries or smelters. For example, in the Montevideo neighborhood of La Teja, former industrial sites became important sources of exposure in local communities in the early 2000s. Although leaded soil is less of a problem in countries that no longer have leaded gasoline, it remains prevalent, raising concerns about the safety of urban agriculture; eating food grown in contaminated soil can present a lead hazard. Interfacial solar evaporation, which involves the evaporation of heavy-metal ions from moist soil, has recently been studied as a technique for remediating lead-contaminated sites.
Water
Lead from the atmosphere or soil can end up in groundwater and surface water. It is also potentially in drinking water, e.g. from plumbing and fixtures that are either made of lead or have lead solder. Since acidic water breaks down lead in plumbing more readily, chemicals can be added to municipal water to increase the pH and thus reduce the corrosivity of the public water supply. Chloramines, which were adopted as a substitute for chlorine disinfectants because they pose fewer health concerns, increase corrosivity. In the US, 14–20% of total lead exposure is attributed to drinking water. In 2004, a team of seven reporters from The Washington Post discovered high levels of lead in the drinking water in Washington, DC, and won an award for investigative reporting for a series of articles about this contamination. In the water crisis in Flint, Michigan, a switch to a more corrosive municipal water source caused elevated lead levels in domestic tap water. A similar situation affects the state of Wisconsin, where estimates call for replacement of up to 176,000 underground lead pipes known as lead service lines. The city of Madison, Wisconsin, addressed the issue and replaced all of its lead service lines, but other cities have yet to follow suit. While there are chemical methods that could help reduce the amount of lead in the water distributed, a permanent fix would be to replace the pipes completely. While the state may replace the pipes below ground, it will be up to homeowners to replace the pipes on their property, at an average cost of $3,000. Experts say that if a city were to replace its pipes while residents kept the old pipes within their homes, there would be a potential for more lead to dissolve into the drinking water. Collected rainwater from roof runoff used as potable water may contain lead if there are lead contaminants on the roof or in the storage tank.
The Australian Drinking Water Guidelines allow a maximum of 0.01 mg/L (10 ppb) lead in water. Lead wheel weights have been found to accumulate on roads and interstates and erode in traffic, entering water runoff through drains. Leaded fishing weights accumulate in rivers, streams, ponds, and lakes.
Gasoline
Lead was first added to gasoline in 1923 as an antiknock agent to improve engine performance. Automotive exhaust represented a major way for lead to be inhaled, enter the bloodstream, and pass into the brain. The use of lead in gasoline peaked in the 1970s, and by the following decade most high-income countries had prohibited leaded petrol. As late as 2002, however, almost all low- and middle-income countries, including some OECD members, still used it. The UN Environment Programme (UNEP) therefore launched a campaign in 2002 to eliminate its use; Algeria became the last country to stop using leaded gasoline, in July 2021.
Lead-containing products
Lead can be found in products such as kohl, an ancient cosmetic from the Middle East, South Asia, and parts of Africa that has many other names, and in some toys. In 2007, millions of toys made in China were recalled from multiple countries owing to safety hazards including lead paint. Vinyl mini-blinds, found especially in older housing, may contain lead.
Lead is commonly incorporated into herbal remedies such as Indian Ayurvedic preparations and remedies of Chinese origin. There are also risks of elevated blood lead levels caused by folk remedies like azarcon and greta, which each contain about 95% lead. Ingestion of metallic lead, such as small lead fishing lures, increases blood lead levels and can be fatal. Ingestion of lead-contaminated food is also a threat. Ceramic glaze often contains lead, and dishes that have been improperly fired can leach the metal into food, potentially causing severe poisoning. In some places, the solder in cans used for food contains lead. Solder containing lead may also be present in medical instruments and hardware. People who eat animals hunted with lead bullets may be at risk for lead exposure. Bullets lodged in the body rarely cause significant levels of lead, but bullets lodged in the joints are the exception, as they deteriorate and release lead into the body over time. In May 2015, Indian food safety regulators in the state of Uttar Pradesh found that samples of Maggi 2 Minute Noodles contained lead at up to 17 times the permissible limit. On 3 June 2015, the New Delhi Government banned the sale of Maggi noodles in New Delhi stores for 15 days because it was found to contain lead beyond the permissible limit. The Gujarat FDA on 4 June 2015 banned the noodles for 30 days after 27 out of 39 samples were found to contain objectionable levels of metallic lead, among other things. Some of India's biggest retailers, such as Future Group, Big Bazaar, Easyday and Nilgiris, imposed a nationwide ban on Maggi noodles, and many other states also banned them.
Bullets
Contact with ammunition is a source of lead exposure. As of 2013, lead-based ammunition production was the second largest annual use of lead in the US, accounting for over 84,800 metric tons consumed in 2013, second only to the manufacture of storage batteries. The Environmental Protection Agency (EPA) cannot regulate cartridges and shells, as a matter of law. Lead birdshot is banned in some areas, but this is primarily for the benefit of the birds and their predators, rather than humans. Contamination from heavily used gun ranges is of concern to those who live nearby. Non-lead alternatives include copper, zinc, steel, tungsten-nickel-iron, bismuth-tin, and polymer blends such as tungsten-polymer and copper-polymer.
Because game animals can be shot using lead bullets, the potential for lead ingestion from consumption of game meat has been studied clinically and epidemiologically. In a study conducted by the CDC, a cohort from North Dakota was enrolled and asked to self-report historical consumption of game meat and participation in other activities that could cause lead exposure. The study found that participants' age, sex, housing age, current hobbies with potential for lead exposure, and game consumption were all associated with blood lead level (PbB).
According to a study published in 2008, 1.1% of the 736 persons consuming wild game meat who were tested had PbB ≥5 μg/dL. In November 2015, the US HHS/CDC/NIOSH designated 5 µg/dL (five micrograms per deciliter) of whole blood, in a venous blood sample, as the reference blood lead level for adults. An elevated BLL is defined as a BLL ≥5 µg/dL. This case definition is used by the ABLES program, the Council of State and Territorial Epidemiologists (CSTE), and CDC's National Notifiable Diseases Surveillance System (NNDSS). Previously (i.e. from 2009 until November 2015), the case definition for an elevated BLL was a BLL ≥10 µg/dL.
To virtually eliminate the potential for lead contamination, some researchers have suggested the use of lead-free, non-fragmenting copper bullets. Bismuth is used as a lead replacement in shotgun pellets for waterfowl hunting, although bismuth shotshells cost nearly ten times as much as lead ones.
Opium
Lead-contaminated opium has been a source of poisoning in Iran and other Middle Eastern countries. It has also appeared in the illicit narcotic supply in North America, resulting in confirmed cases of lead poisoning.
Pathophysiology
Exposure occurs through inhalation, ingestion, or occasionally skin contact. Lead may be taken in through direct contact with the mouth, nose, and eyes (mucous membranes), and through breaks in the skin. Tetraethyllead, which was a gasoline additive and is still used in aviation gasoline, passes through the skin; other forms of lead, including inorganic lead, are also absorbed through the skin. The main routes of absorption of inorganic lead are ingestion and inhalation. In adults, about 35–40% of inhaled lead dust is deposited in the lungs, and about 95% of that goes into the bloodstream. Of ingested inorganic lead, about 15% is absorbed, but this percentage is higher in children, pregnant women, and people with deficiencies of calcium, zinc, or iron. Infants may absorb about 50% of ingested lead, but little is known about absorption rates in children. The main body tissues that store lead are the blood, soft tissues, and bone; the half-life of lead in these tissues is measured in weeks for blood, months for soft tissues, and years for bone. Lead in the bones, teeth, hair, and nails is bound tightly and not available to other tissues, and is generally thought not to be harmful. In adults, 94% of absorbed lead is deposited in the bones and teeth, but children only store 70% in this manner, a fact which may partially account for the more serious health effects on children. The half-life of lead in bone has been estimated as years to decades, and bone can introduce lead into the bloodstream long after the initial exposure is gone. The half-life of lead in the blood in men is about 40 days, but it may be longer in children and pregnant women, whose bones are undergoing remodeling, which allows lead to be continuously re-introduced into the bloodstream. Also, if lead exposure takes place over years, clearance is much slower, partly due to the re-release of lead from bone.
Many other tissues store lead, but those with the highest concentrations (other than blood, bone, and teeth) are the brain, spleen, kidneys, liver, and lungs.
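The half-lives above lend themselves to simple back-of-the-envelope arithmetic for the blood compartment. The sketch below assumes first-order (exponential) elimination with the roughly 40-day blood half-life mentioned in the text; the single-compartment model is an illustrative simplification, since real lead kinetics involve exchange with soft tissue and bone (including re-release from bone, which this model ignores).

```python
# Illustrative single-compartment decay sketch; real lead kinetics are
# multi-compartment, so this is an assumption for illustration only.
def remaining_fraction(days: float, half_life_days: float) -> float:
    """Fraction of an initial blood lead burden remaining after `days`,
    assuming simple first-order elimination."""
    return 0.5 ** (days / half_life_days)

# Blood half-life in adult men is roughly 40 days (per the text).
BLOOD_HALF_LIFE_DAYS = 40.0

print(remaining_fraction(40, BLOOD_HALF_LIFE_DAYS))   # 0.5
print(remaining_fraction(120, BLOOD_HALF_LIFE_DAYS))  # 0.125
```

Under these assumptions, three half-lives (about 120 days) would leave roughly an eighth of the initial blood burden; ongoing re-release from bone would make the true decline slower.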
Lead is removed from the body very slowly, mainly through urine. Smaller amounts of lead are also eliminated through the feces, and very small amounts in hair, nails, and sweat. Lead has no known physiologically relevant role in the body, and its harmful effects are myriad. Lead and other heavy metals create reactive radicals which damage cell structures including DNA and cell membranes. Lead also interferes with DNA transcription, enzymes that help in the synthesis of vitamin D, and enzymes that maintain the integrity of the cell membrane. Anemia may result when the membranes of red blood cells become more fragile as a result of this damage. Lead interferes with the metabolism of bones and teeth and alters the permeability of blood vessels and collagen synthesis. Lead may also be harmful to the developing immune system, causing production of excessive inflammatory proteins; this mechanism may mean that lead exposure is a risk factor for asthma in children. Lead exposure has also been associated with a decrease in the activity of immune cells such as polymorphonuclear leukocytes. Lead also interferes with the normal metabolism of calcium in cells and causes it to build up within them.
Enzymes
The primary cause of lead's toxicity is its interference with a variety of enzymes, because it binds to sulfhydryl groups found on many of them. Part of lead's toxicity results from its ability to mimic other metals that take part in biological processes as cofactors in many enzymatic reactions, displacing them at the enzymes on which they act. Lead is able to bind to and interact with many of the same enzymes as these metals but, due to its differing chemistry, does not properly function as a cofactor, thus interfering with the enzyme's ability to catalyze its normal reaction or reactions. Among the essential metals with which lead interacts are calcium, iron, and zinc. The lead ion has a lone pair in its electronic structure, which can result in a distortion in the coordination of ligands, and in 2007 was hypothesized to be important in lead poisoning's effects on enzymes (see Lone pair § Unusual lone pairs). One of the main causes of the pathology of lead is that it interferes with the activity of an essential enzyme called delta-aminolevulinic acid dehydratase, or ALAD, which is important in the biosynthesis of heme, the cofactor found in hemoglobin. Lead also inhibits ferrochelatase, another enzyme involved in the formation of heme. Ferrochelatase catalyzes the joining of protoporphyrin and Fe2+ to form heme. Lead's interference with heme synthesis results in production of zinc protoporphyrin and the development of anemia. Another effect of lead's interference with heme synthesis is the buildup of heme precursors, such as aminolevulinic acid, which may be directly or indirectly harmful to neurons. Elevation of aminolevulinic acid results in lead poisoning having symptoms similar to acute porphyria.
Neurons
The brain is the organ most sensitive to lead exposure. Lead is able to pass through the endothelial cells at the blood–brain barrier because it can substitute for calcium ions and be taken up by calcium-ATPase pumps. Lead poisoning interferes with the normal development of a child's brain and nervous system; therefore children are at greater risk of lead neurotoxicity than adults are. In a child's developing brain, lead interferes with synapse formation in the cerebral cortex, neurochemical development (including that of neurotransmitters), and the organization of ion channels. It causes loss of neurons' myelin sheaths, reduces numbers of neurons, interferes with neurotransmission, and decreases neuronal growth. Lead ions (Pb2+), like magnesium ions (Mg2+), block NMDA receptors. Therefore, an increase in Pb2+ concentration will effectively inhibit ongoing long-term potentiation (LTP) and lead to an abnormal increase in long-term depression (LTD) in neurons in the affected parts of the nervous system. These abnormalities lead to the indirect downregulation of NMDA receptors, effectively initiating a positive feedback loop for LTD. The targeting of NMDA receptors is thought to be one of the main causes of lead's toxicity to neurons.
Diagnosis
Diagnosis includes determining the clinical signs and the medical history, with inquiry into possible routes of exposure. Clinical toxicologists, medical specialists in the area of poisoning, may be involved in diagnosis and treatment.
The main tool in diagnosing and assessing the severity of lead poisoning is laboratory analysis of the blood lead level (BLL).
Blood film examination may reveal basophilic stippling of red blood cells (dots in red blood cells visible through a microscope), as well as the changes normally associated with iron-deficiency anemia (microcytosis and hypochromasia). This picture may be described as sideroblastic anemia. However, basophilic stippling is also seen in unrelated conditions, such as megaloblastic anemia caused by vitamin B12 (cobalamin) and folate deficiencies.
In contrast to other sideroblastic anemias, no ring sideroblasts are seen in a bone marrow smear. Exposure to lead can also be evaluated by measuring erythrocyte protoporphyrin (EP) in blood samples. EP is a component of red blood cells known to increase when the amount of lead in the blood is high, with a delay of a few weeks. Thus EP levels in conjunction with blood lead levels can suggest the time period of exposure; if blood lead levels are high but EP is still normal, this finding suggests exposure was recent. However, the EP level alone is not sensitive enough to identify elevated blood lead levels below about 35 μg/dL. Due to this higher threshold for detection and the fact that EP levels also increase in iron deficiency, use of this method for detecting lead exposure has decreased. Blood lead levels are an indicator mainly of recent or current lead exposure, not of total body burden. Lead in bones can be measured noninvasively by X-ray fluorescence; this may be the best measure of cumulative exposure and total body burden. However, this method is not widely available and is mainly used for research rather than routine diagnosis. Another radiographic sign of elevated lead levels is the presence of radiodense lines called lead lines at the metaphyses of the long bones of growing children, especially around the knees. These lead lines, caused by increased calcification due to disrupted metabolism in the growing bones, become wider as the duration of lead exposure increases. X-rays may also reveal lead-containing foreign materials such as paint chips in the gastrointestinal tract. Fecal lead content measured over the course of a few days may also be an accurate way to estimate the overall amount of childhood lead intake. This form of measurement may serve as a useful way to see the extent of oral lead exposure from all dietary and environmental sources of lead. Lead poisoning shares symptoms with other conditions and may be easily missed.
Conditions that present similarly and must be ruled out in diagnosing lead poisoning include carpal tunnel syndrome, Guillain–Barré syndrome, renal colic, appendicitis, encephalitis in adults, and viral gastroenteritis in children. Other differential diagnoses in children include constipation, abdominal colic, iron deficiency, subdural hematoma, neoplasms of the central nervous system, emotional and behavior disorders, and intellectual disability.
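The interpretation of erythrocyte protoporphyrin alongside blood lead levels described above can be summarized as a toy decision rule. The roughly 35 μg/dL EP sensitivity floor and the weeks-long EP lag come from the text; the rule itself is an illustrative simplification for exposition, not a clinical algorithm.

```python
# Toy interpretation of blood lead level (BLL) vs erythrocyte
# protoporphyrin (EP); illustrative only, not clinical guidance.
def interpret_exposure(bll_ug_dl: float, ep_elevated: bool) -> str:
    if bll_ug_dl < 35 and not ep_elevated:
        # EP is insensitive below roughly 35 µg/dL, so a normal EP
        # at this level says little on its own.
        return "EP uninformative at this level"
    if ep_elevated:
        # EP rises with a lag of a few weeks, so an elevated EP
        # suggests exposure has been going on for some time.
        return "suggests sustained exposure"
    # A high BLL with still-normal EP suggests the exposure was recent.
    return "suggests recent exposure"

print(interpret_exposure(50, ep_elevated=False))  # suggests recent exposure
print(interpret_exposure(50, ep_elevated=True))   # suggests sustained exposure
```

Note the caveat from the text: EP also rises in iron deficiency, so the second branch is ambiguous in practice, one reason this method has fallen out of use.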
Reference levels
The current reference range for acceptable blood lead concentrations in healthy persons without excessive exposure to environmental sources of lead is less than 3.5 µg/dL for children and less than 25 µg/dL for adults. Prior to 2012, the value for children was 10 µg/dL. Lead-exposed workers in the U.S. are required to be removed from work when their level is greater than 50 µg/dL if they work in construction, and otherwise when it is greater than 60 µg/dL. In 2015, US HHS/CDC/NIOSH designated 5 µg/dL (five micrograms per deciliter) of whole blood, in a venous blood sample, as the reference blood lead level for adults. An elevated BLL is defined as a BLL ≥5 µg/dL. This case definition is used by the ABLES program, the Council of State and Territorial Epidemiologists (CSTE), and CDC's National Notifiable Diseases Surveillance System (NNDSS). Previously (i.e. from 2009 until November 2015), the case definition for an elevated BLL was a BLL ≥10 µg/dL. The U.S. national BLL geometric mean among adults was 1.2 μg/dL in 2009–2010. Blood lead concentrations in poisoning victims have ranged from 30 to 80 µg/dL in children exposed to lead paint in older houses, 77–104 µg/dL in persons working with pottery glazes, 90–137 µg/dL in individuals consuming contaminated herbal medicines, 109–139 µg/dL in indoor shooting range instructors, and as high as 330 µg/dL in those drinking fruit juices from glazed earthenware containers.
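As a rough sketch of how the thresholds above fit together, the triage logic can be expressed as a small function. The cut-offs (3.5 µg/dL child reference value, the ≥5 µg/dL adult elevated-BLL case definition, and the 50/60 µg/dL occupational removal levels) come from the text; the function itself and its labels are illustrative, not a regulatory or clinical tool.

```python
# Rough triage of a blood lead level (µg/dL) against the thresholds
# described in the text; the labels are illustrative assumptions.
def classify_bll(bll_ug_dl: float, *, child: bool = False,
                 construction_worker: bool = False) -> str:
    if child:
        # Child blood lead reference value (per the text): 3.5 µg/dL.
        if bll_ug_dl >= 3.5:
            return "above reference value"
        return "below reference value"
    # Adult case definition for an elevated BLL: >= 5 µg/dL.
    if bll_ug_dl < 5:
        return "not elevated"
    # Occupational removal thresholds: >50 µg/dL in construction,
    # >60 µg/dL otherwise.
    removal = 50 if construction_worker else 60
    if bll_ug_dl > removal:
        return "elevated; removal from work required"
    return "elevated"

print(classify_bll(2, child=True))                       # below reference value
print(classify_bll(7))                                   # elevated
print(classify_bll(55, construction_worker=True))        # elevated; removal from work required
```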
Prevention
In most cases, lead poisoning is preventable by avoiding exposure to lead. Prevention strategies can be divided into individual (measures taken by a family), preventive medicine (identifying and intervening with high-risk individuals), and public health (reducing risk on a population level). Recommended steps by individuals to reduce the blood lead levels of children include increasing their frequency of hand washing and their intake of calcium and iron, discouraging them from putting their hands to their mouths, vacuuming frequently, and eliminating the presence of lead-containing objects such as blinds and jewellery in the house. In houses with lead pipes or plumbing solder, these can be replaced. Less permanent but cheaper methods include running water in the morning to flush out the most contaminated water, or adjusting the water's chemistry to prevent corrosion of pipes. Lead testing kits are commercially available for detecting the presence of lead in the household. Testing kit accuracy depends on the user testing all layers of paint and the quality of the kit; the US Environmental Protection Agency (EPA) only approves kits with an accuracy rating of at least 95%. Professional lead testing companies caution that DIY test kits can create health risks for users who do not understand their limitations, and liability issues for employers with regard to worker protection. As hot water is more likely than cold water to contain higher amounts of lead, only cold tap water should be used for drinking, cooking, and making baby formula. Since most of the lead in household water usually comes from plumbing in the house and not from the local water supply, using cold water can avoid lead exposure. Measures such as dust control and household education do not appear to be effective in changing children's blood lead levels. Prevention measures also exist on national and municipal levels.
Recommendations by health professionals for lowering childhood exposures include banning the use of lead where it is not essential and strengthening regulations that limit the amount of lead in soil, water, air, household dust, and products. Regulations exist to limit the amount of lead in paint; for example, a 1978 law in the US restricted the lead in paint for residences, furniture, and toys to 0.06% or less. In October 2008, the US EPA reduced the allowable lead level by a factor of ten to 0.15 micrograms per cubic meter of air, giving states five years to comply with the standards. The European Union's Restriction of Hazardous Substances Directive limits amounts of lead and other toxic substances in electronics and electrical equipment. In some places, remediation programs exist to reduce the presence of lead when it is found to be high, for example in drinking water. As a more radical solution, entire towns located near former lead mines have been "closed" by the government, and the population resettled elsewhere, as was the case with Picher, Oklahoma, in 2009. Removing lead from airplane fuel would also be useful.
Screening
Screening may be an important method of prevention for those at high risk, such as those who live near lead-related industries. The USPSTF has stated that general screening of asymptomatic individuals, including children and pregnant women, is of unclear benefit as of 2019. ACOG and the AAP, however, recommend asking about risk factors and testing those who have them.
Education
Educating workers about lead, its dangers, and ways to decrease workplace exposure, especially when initial blood and urine lead levels are high, could help reduce the risk of lead poisoning in the workplace.
Treatment
The mainstays of treatment are removal from the source of lead and, for people who have significantly high blood lead levels or who have symptoms of poisoning, chelation therapy. Treatment of iron, calcium, and zinc deficiencies, which are associated with increased lead absorption, is another part of treatment for lead poisoning. When lead-containing materials are present in the gastrointestinal tract (as evidenced by abdominal X-rays), whole bowel irrigation, cathartics, endoscopy, or even surgical removal may be used to eliminate it from the gut and prevent further exposure. Lead-containing bullets and shrapnel may also present a threat of further exposure and may need to be surgically removed if they are in or near fluid-filled or synovial spaces. If lead encephalopathy is present, anticonvulsants may be given to control seizures, and treatments to control swelling of the brain include corticosteroids and mannitol. Treatment of organic lead poisoning involves removing the lead compound from the skin, preventing further exposure, treating seizures, and possibly chelation therapy for people with high blood lead concentrations.
A chelating agent is a molecule with at least two negatively charged groups that allow it to form complexes with metal ions with multiple positive charges, such as lead. The chelate that is thus formed is nontoxic and can be excreted in the urine, initially at up to 50 times the normal rate. The chelating agents used for treatment of lead poisoning are edetate disodium calcium (CaNa2EDTA) and dimercaprol (BAL), which are injected, and succimer and d-penicillamine, which are administered orally. Chelation therapy is used in cases of acute lead poisoning, severe poisoning, and encephalopathy, and is considered for people with blood lead levels above 25 µg/dL. While the use of chelation for people with symptoms of lead poisoning is widely supported, use in asymptomatic people with high blood lead levels is more controversial. Chelation therapy is of limited value for cases of chronic exposure to low levels of lead. Chelation therapy is usually stopped when symptoms resolve or when blood lead levels return to premorbid levels. When lead exposure has taken place over a long period, blood lead levels may rise after chelation is stopped because lead is leached into blood from stores in the bone; thus repeated treatments are often necessary. People receiving dimercaprol need to be assessed for peanut allergies since the commercial formulation contains peanut oil. Calcium EDTA is also effective if administered four hours after the administration of dimercaprol. Administering dimercaprol, DMSA (succimer), or DMPS prior to calcium EDTA is necessary to prevent the redistribution of lead into the central nervous system. Dimercaprol used alone may also redistribute lead to the brain and testes. An adverse side effect of calcium EDTA is renal toxicity. Succimer (DMSA) is the preferred agent in mild to moderate lead poisoning cases, such as when children have a blood lead level >25 μg/dL.
The most reported adverse side effect for succimer is gastrointestinal disturbances. It is also important to note that chelation therapy only lowers blood lead levels and may not prevent the lead-induced cognitive problems associated with lower lead levels in tissue. This may be because of the inability of these agents to remove sufficient amounts of lead from tissue or inability to reverse preexisting damage.
Chelating agents can have adverse effects; for example, chelation therapy can lower the body's levels of necessary nutrients like zinc. Chelating agents taken orally can increase the body's absorption of lead through the intestine. Chelation challenge, also known as provocation testing, is used to indicate an elevated and mobilizable body burden of heavy metals including lead. This testing involves collecting urine before and after administering a one-off dose of chelating agent to mobilize heavy metals into the urine. The urine is then analyzed by a laboratory for levels of heavy metals, and from this analysis overall body burden is inferred. Chelation challenge mainly measures the burden of lead in soft tissues, though whether it accurately reflects long-term exposure or the amount of lead stored in bone remains controversial. Although the technique has been used to determine whether chelation therapy is indicated and to diagnose heavy metal exposure, some evidence does not support these uses, as blood levels after chelation are not comparable to the reference range typically used to diagnose heavy metal poisoning. The single chelation dose could also redistribute the heavy metals to more sensitive areas such as central nervous system tissue.
Epidemiology
Since lead has been used widely for centuries, the effects of exposure are worldwide. Environmental lead is ubiquitous, and everyone has some measurable blood lead level. Atmospheric lead pollution increased dramatically beginning in the 1950s as a result of the widespread use of leaded gasoline. Lead is one of the largest environmental medicine problems in terms of numbers of people exposed and the public health toll it takes. Lead exposure accounts for about 0.2% of all deaths and 0.6% of disability-adjusted life years globally. Although regulation reducing lead in products has greatly reduced exposure in the developed world since the 1970s, lead is still allowed in products in many developing countries. Despite phase-out in many parts of the Global North, exposure in the Global South has increased by nearly three times. In all countries that have banned leaded gasoline, average blood lead levels have fallen sharply. However, some developing countries still allow leaded gasoline, which is the primary source of lead exposure in most developing countries. Beyond exposure from gasoline, the frequent use of pesticides in developing countries adds a risk of lead exposure and subsequent poisoning. Poor children in developing countries are at especially high risk for lead poisoning. Of North American children, 7% have blood lead levels above 10 μg/dL, whereas among Central and South American children the percentage is 33–34%. About one fifth of the world's disease burden from lead poisoning occurs in the Western Pacific, and another fifth is in Southeast Asia. In developed countries, people with low levels of education living in poorer areas are most at risk for elevated lead. In the US, the groups most at risk for lead exposure are the impoverished, city-dwellers, and immigrants. African-American children and those living in old housing have also been found to be at elevated risk for high blood lead levels in the US.
Low-income people often live in old housing with lead paint, which may begin to peel, exposing residents to high levels of lead-containing dust.
Risk factors for elevated lead exposure include alcohol consumption and smoking (possibly because of contamination of tobacco leaves with lead-containing pesticides). Adults with certain risk factors might be more susceptible to toxicity; these include calcium and iron deficiencies, old age, disease of organs targeted by lead (e.g. the brain, the kidneys), and possibly genetic susceptibility.
Differences in vulnerability to lead-induced neurological damage between males and females have also been found, with some studies finding males to be at greater risk and others females.
In adults, blood lead levels steadily increase with increasing age. In adults of all ages, men have higher blood lead levels than women do. Children are more sensitive to elevated blood lead levels than adults are. Children may also have a higher intake of lead than adults; they breathe faster and may be more likely to have contact with and ingest soil. Children of ages one to three tend to have the highest blood lead levels, possibly because at that age they begin to walk and explore their environment, and they use their mouths in their exploration. Blood lead levels usually peak at about 18–24 months of age. In many countries, including the US, household paint and dust are the major routes of exposure in children.
Notable cases
Cases of mass lead poisoning can occur. 15,000 people were relocated from Jiyuan in central Henan province to other locations after 1,000 children living around China's largest smelter plant (owned and operated by Yuguang Gold and Lead) were found to have excess lead in their blood. The total cost of the project is estimated at around 1 billion yuan ($150 million); 70% of the cost will be paid by the local government and the smelter company, while the rest will be paid by the residents themselves. The government has suspended production at 32 of 35 lead plants. The affected area includes people from 10 different villages.
The Zamfara State lead poisoning epidemic occurred in Nigeria in 2010. As of 5 October 2010, at least 400 children had died from the effects of lead poisoning.
Prognosis
Reversibility
Outcome is related to the extent and duration of lead exposure. Effects of lead on the physiology of the kidneys and blood are generally reversible; its effects on the central nervous system are not. While peripheral effects in adults often go away when lead exposure ceases, evidence suggests that most of lead's effects on a child's central nervous system are irreversible. Children with lead poisoning may thus have adverse health, cognitive, and behavioral effects that follow them into adulthood.
Encephalopathy
Lead encephalopathy is a medical emergency and causes permanent brain damage in 70–80% of affected children, even in those who receive the best treatment. The mortality rate for people who develop cerebral involvement is about 25%, and of survivors who had lead encephalopathy symptoms by the time chelation therapy was begun, about 40% have permanent neurological problems such as cerebral palsy.
Long-term
Exposure to lead may also decrease lifespan and have health effects in the long term. Death rates from a variety of causes have been found to be higher in people with elevated blood lead levels; these include cancer, stroke, and heart disease, as well as death rates from all causes combined. Lead is considered a possible human carcinogen based on evidence from animal studies. Evidence also suggests that age-related mental decline and psychiatric symptoms are correlated with lead exposure. Cumulative exposure over a prolonged period may have a more important effect on some aspects of health than recent exposure. Some health effects, such as high blood pressure, are only significant risks when lead exposure is prolonged (over about one year). Furthermore, the neurological effects of lead exposure have been shown to be more severe and longer lasting in low-income children than in those of higher economic standing, although wealth does not prevent lead from causing long-term mental health issues.
Violence
Lead poisoning in children has been linked to changes in brain function that can result in low IQ, and increased impulsivity and aggression. These traits of childhood lead exposure are associated with crimes of passion, such as aggravated assault in young adults. An increase in lead exposure in children was linked to an increase in aggravated assault rates 22 years later. For instance, the peak in leaded gasoline use in the late 1970s corresponds to a peak in aggravated assault rates in the late 1990s in urban areas across the United States.
History
Lead poisoning was among the first known and most widely studied environmental hazards. One of the first metals to be smelted and used, lead is thought to have been discovered and first mined in Anatolia around 6500 BC. Its density, workability, and corrosion resistance were among the metal's attractions.
In the 2nd century BC the Greek botanist Nicander described the colic and paralysis seen in lead-poisoned people. Dioscorides, a Greek physician who lived in the 1st century AD, wrote that lead makes the mind "give way".
Lead was used extensively in Roman aqueducts from about 500 BC to 300 AD. Julius Caesar's engineer, Vitruvius, reported, "water is much more wholesome from earthenware pipes than from lead pipes. For it seems to be made injurious by lead, because white lead is produced by it, and this is said to be harmful to the human body." Gout, prevalent in affluent Rome, is thought to have resulted from lead in food and from leaded eating and drinking vessels. Sugar of lead (lead(II) acetate) was used to sweeten wine, and the gout that resulted from this was known as "saturnine" gout. It has even been hypothesized that lead poisoning contributed to the decline of the Roman Empire, a hypothesis thoroughly disputed:
The great disadvantage of lead has always been that it is poisonous. This was fully recognised by the ancients, and Vitruvius specifically warns against its use. Because it was nevertheless used in profusion for carrying drinking water, the conclusion has often been drawn that the Romans must therefore have suffered from lead poisoning; sometimes conclusions are carried even further and it is inferred that this caused infertility and other unwelcome conditions, and that lead plumbing was largely responsible for the decline and fall of Rome.
Two things make this otherwise attractive hypothesis impossible. First, the calcium carbonate deposit that formed so thickly inside the aqueduct channels also formed inside the pipes, effectively insulating the water from the lead, so that the two never touched. Second, because the Romans had so few taps and the water was constantly running, it was never inside the pipes for more than a few minutes, and certainly not long enough to become contaminated.
However, recent research supports the idea that the lead found in the water came from the supply pipes, rather than another source of contamination. It was not unknown for locals to punch holes in the pipes to draw water off, increasing the number of people exposed to the lead.
Thirty years ago, Jerome Nriagu argued in a milestone paper that Roman civilization collapsed as a result of lead poisoning. Clair Patterson, the scientist who convinced governments to ban lead from gasoline, enthusiastically endorsed this idea, which nevertheless triggered a volley of publications aimed at refuting it. Although today lead is no longer seen as the prime culprit of Rome's demise, its status in the system of water distribution by lead pipes (fistulæ) still stands as a major public health issue. By measuring Pb isotope compositions of sediments from the Tiber River and the Trajanic Harbor, the present work shows that "tap water" from ancient Rome had 100 times more lead than local spring waters.
Romans also consumed lead through the consumption of defrutum, carenum, and sapa, musts made by boiling down fruit in lead cookware. Defrutum and its relatives were used in ancient Roman cuisine and cosmetics, including as a food preservative. The use of leaden cookware, though popular, was not universal, and copper cookware was used far more widely. There is also no indication of how often sapa was added or in what quantity.
The consumption of sapa figured in a theory proposed by the geochemist Jerome Nriagu, who argued that "lead poisoning contributed to the decline of the Roman Empire". In 1984, John Scarborough, a pharmacologist and classicist, criticized the conclusions drawn by Nriagu's book as "so full of false evidence, miscitations, typographical errors, and a blatant flippancy regarding primary sources that the reader cannot trust the basic arguments."
After antiquity, mention of lead poisoning was absent from medical literature until the end of the Middle Ages. In 1656 the German physician Samuel Stockhausen recognized dust and fumes containing lead compounds as the cause of diseases, called since ancient Roman times morbi metallici, that were known to afflict miners, smelter workers, potters, and others whose work exposed them to the metal.
The painter Caravaggio might have died of lead poisoning. Bones with high lead levels were recently found in a grave thought likely to be his. Paints used at the time contained high amounts of lead salts. Caravaggio is known to have exhibited violent behavior, a symptom commonly associated with lead poisoning.
In 17th-century Germany, the physician Eberhard Gockel discovered lead-contaminated wine to be the cause of an epidemic of colic. He had noticed that monks who did not drink wine were healthy, while wine drinkers developed colic, and traced the cause to sugar of lead, made by simmering litharge with vinegar. As a result, Eberhard Ludwig, Duke of Württemberg, issued an edict in 1696 banning the adulteration of wines with litharge.
In the 18th century lead poisoning was fairly frequent on account of the widespread drinking of rum, which was made in stills with a lead component (the "worm"). It was a significant cause of mortality among slaves and sailors in the colonial West Indies. Lead poisoning from rum was also noted in Boston, and Benjamin Franklin suspected lead to be a risk in 1786. Also in the 18th century, "Devonshire colic" was the name given to the symptoms experienced by people of Devon who drank cider made in presses that were lined with lead. Lead was added illegally to cheap wine as a sweetener in the 18th and early 19th centuries. The composer Beethoven, a heavy wine drinker, had elevated lead levels (as later detected in his hair), possibly for this reason; the cause of his death is controversial, but lead poisoning is a contender as a factor.
With the Industrial Revolution in the 19th century, lead poisoning became common in the work setting. The introduction of lead paint for residential use in the 19th century increased childhood exposure to lead; for millennia before this, most lead exposure had been occupational. William James Furnival (1853–1928), research ceramist of the City & Guilds London Institute, appeared before Parliament in 1901 and presented a decade's evidence to convince the nation's leaders to remove lead completely from the British ceramic industry. His 852-page treatise Leadless Decorative Tiles, Faience, and Mosaic (1904) documented that campaign and provided recipes to promote lead-free ceramics.
At the request of the Illinois state government, Alice Hamilton (1869–1970) documented lead toxicity in Illinois industry and in 1911 presented the results to the 23rd Annual Meeting of the American Economic Association. Hamilton was a founder of the field of occupational safety and health; the first edition of her manual, Industrial Toxicology, appeared in 1934 and remains in print in revised forms.
An important step in the understanding of childhood lead poisoning occurred when toxicity in children from lead paint was recognized in Australia in 1897. France, Belgium, and Austria banned white lead interior paints in 1909; the League of Nations followed suit in 1922. In the United States, however, laws banning lead house paint were not passed until 1971, and the paint was phased out and not fully banned until 1978.
The 20th century saw an increase in worldwide lead exposure levels due to the increased widespread use of the metal. Beginning in the 1920s, lead was added to gasoline to improve its combustion; lead from this exhaust persists today in soil and dust in buildings. The midcentury ceramicist Carol Janeway provides a case history of lead poisoning in an artist using lead glazes to decorate tiles in the 1940s; her monograph suggests that other artists' potential for lead poisoning be investigated, for example Vally Wieselthier and Dora Carrington. Blood lead levels worldwide have been declining sharply since the 1980s, when leaded gasoline began to be phased out. In countries that have banned lead in solder for food and drink cans and have banned leaded gasoline additives, blood lead levels have fallen sharply since the mid-1980s. The levels found today in most people are still orders of magnitude greater than those of pre-industrial society. Due to reductions of lead in products and the workplace, acute lead poisoning is rare in most countries today, but low-level lead exposure is still common.
It was not until the second half of the 20th century that subclinical lead exposure became understood to be a problem. During the end of the 20th century, the blood lead levels deemed acceptable steadily declined. Blood lead levels once considered safe are now considered hazardous, with no known safe threshold.
From the late 1950s through the 1970s, Herbert Needleman and Clair Cameron Patterson conducted research to establish lead's toxicity to humans. In the 1980s Needleman was falsely accused of scientific misconduct by lead industry associates. In 2002 Tommy Thompson, secretary of Health and Human Services, appointed at least two people with conflicts of interest to the CDC's Lead Advisory Committee.
In 2014 a case brought by the state of California against a number of companies was decided against Sherwin-Williams, NL Industries, and ConAgra, which were ordered to pay $1.15 billion. The disposition of The People v. ConAgra Grocery Products Company et al. in the California 6th Appellate District Court on 14 November 2017 was that: "... the judgment is reversed, and the matter is remanded to the trial court with directions to (1) recalculate the amount of the abatement fund to limit it to the amount necessary to cover the cost of remediating pre-1951 homes, and (2) hold an evidentiary hearing regarding the appointment of a suitable receiver. The Plaintiff shall recover its costs on appeal."
On 6 December 2017, the petitions for rehearing from NL Industries, Inc., ConAgra Grocery Products Company, and The Sherwin-Williams Company were denied. Studies have found a weak link between lead from leaded gasoline and crime rates.
As of 2022, lead paint in rental housing in the United States remains a hazard to children. Both landlords and insurance companies have adopted strategies that limit the chance of recovering damages for lead poisoning: insurance companies by excluding coverage for lead poisoning from policies, and landlords by crafting barriers to the collection of money damages compensating plaintiffs.
Other species
Humans are not alone in suffering from lead's effects; plants and animals are also affected by lead toxicity to varying degrees depending on species. Animals experience many of the same effects of lead exposure as humans do, such as abdominal pain, peripheral neuropathy, and behavioral changes such as increased aggression. Much of what is known about human lead toxicity and its effects is derived from animal studies. Animals are used to test the effects of treatments, such as chelating agents, and to provide information on the pathophysiology of lead, such as how it is absorbed and distributed in the body.
Farm animals such as cows and horses, as well as pets, are also susceptible to the effects of lead toxicity. Sources of lead exposure in pets can be the same as those that present health threats to humans sharing the environment, such as paint and blinds, and there is sometimes lead in toys made for pets. Lead poisoning in a pet dog may indicate that children in the same household are at increased risk for elevated lead levels.
Wildlife
Lead, one of the leading causes of toxicity in waterfowl, has been known to cause die-offs of wild bird populations. When hunters use lead shot, waterfowl such as ducks can ingest the spent pellets later and be poisoned; predators that eat these birds are also at risk. Lead shot-related waterfowl poisonings were first documented in the US in the 1880s. By 1919, spent lead pellets from waterfowl hunting were positively identified as a source of waterfowl deaths. Lead shot has been banned for hunting waterfowl in several countries, including the US in 1991 and Canada in 1997. Other threats to wildlife include lead paint, sediment from lead mines and smelters, and lead weights from fishing lines. Lead in some fishing gear has been banned in several countries.
The critically endangered California condor has also been affected by lead poisoning. As scavengers, condors eat carcasses of game that have been shot but not retrieved, and with them fragments of lead bullets; this increases their lead levels. Among condors around the Grand Canyon, lead poisoning due to eating lead shot is the most frequently diagnosed cause of death. In an effort to protect the species, the use of projectiles containing lead to hunt deer, feral pigs, elk, pronghorn antelope, coyotes, ground squirrels, and other non-game wildlife has been banned in areas designated as the California condor's range. Conservation programs also routinely capture condors, check their blood lead levels, and treat cases of poisoning.
Notes
References
Further reading
External links
Leishmaniasis | Leishmaniasis is a wide array of clinical manifestations caused by parasites of the trypanosomatid genus Leishmania. It is generally spread through the bite of phlebotomine sandflies, Phlebotomus and Lutzomyia, and occurs most frequently in the tropics and sub-tropics of Africa, Asia, the Americas, and southern Europe. The disease can present in three main ways: cutaneous, mucocutaneous, or visceral. The cutaneous form presents with skin ulcers, while the mucocutaneous form presents with ulcers of the skin, mouth, and nose. The visceral form starts with skin ulcers and later presents with fever, low red blood cell count, and enlarged spleen and liver.
Infections in humans are caused by more than 20 species of Leishmania. Risk factors include poverty, malnutrition, deforestation, and urbanization. All three types can be diagnosed by seeing the parasites under microscopy. Additionally, visceral disease can be diagnosed by blood tests.
Leishmaniasis can be partly prevented by sleeping under nets treated with insecticide. Other measures include spraying insecticides to kill sandflies and treating people with the disease early to prevent further spread. The treatment needed is determined by where the disease is acquired, the species of Leishmania, and the type of infection. Some possible medications used for visceral disease include liposomal amphotericin B, a combination of pentavalent antimonials and paromomycin, and miltefosine. For cutaneous disease, paromomycin, fluconazole, or pentamidine may be effective.
About 4 to 12 million people are currently infected in some 98 countries. About 2 million new cases and between 20 and 50 thousand deaths occur each year. About 200 million people in Asia, Africa, South and Central America, and southern Europe live in areas where the disease is common. The World Health Organization has obtained discounts on some medications to treat the disease. It is classified as a neglected tropical disease.
The disease may occur in a number of other animals, including dogs and rodents.
Signs and symptoms
The symptoms of leishmaniasis are skin sores which erupt weeks to months after the person is bitten by infected sand flies.
Leishmaniasis may be divided into the following types:
Cutaneous leishmaniasis is the most common form; it causes an open sore at the bite site, which heals in a few months to a year and a half, leaving an unpleasant-looking scar. Diffuse cutaneous leishmaniasis produces widespread skin lesions which resemble leprosy, and may not heal on its own.
Mucocutaneous leishmaniasis causes both skin and mucosal ulcers with damage primarily of the nose and mouth.
Visceral leishmaniasis or kala-azar (black fever) is the most serious form, and is generally fatal if untreated. Other consequences, which can occur a few months to years after infection, include fever, damage to the spleen and liver, and anemia.
Leishmaniasis is considered one of the classic causes of a markedly enlarged (and therefore palpable) spleen; the organ, which is not normally felt during examination of the abdomen, may even become larger than the liver in severe cases.
Cause
Leishmaniasis is transmitted by the bite of infected female phlebotomine sandflies which can transmit the protozoa Leishmania. (1) The sandflies inject the infective stage, metacyclic promastigotes, during blood meals. (2) Metacyclic promastigotes in the puncture wound are phagocytized by macrophages, and (3) transform into amastigotes. (4) Amastigotes multiply in infected cells and affect different tissues, depending in part on the host, and in part on which Leishmania species is involved. These differing tissue specificities cause the differing clinical manifestations of the various forms of leishmaniasis. (5,6) Sandflies become infected during blood meals on infected hosts when they ingest macrophages infected with amastigotes. (7) In the sandfly's midgut, the parasites differentiate into promastigotes, (8) which multiply, differentiate into metacyclic promastigotes, and migrate to the proboscis.
The genomes of three Leishmania species (L. major, L. infantum, and L. braziliensis) have been sequenced, and this has provided much information about the biology of the parasite. For example, in Leishmania, protein-coding genes are understood to be organized as large polycistronic units in a head-to-head or tail-to-tail manner; RNA polymerase II transcribes long polycistronic messages in the absence of defined RNA pol II promoters, and Leishmania has unique features with respect to the regulation of gene expression in response to changes in the environment. The new knowledge from these studies may help identify new targets for urgently needed drugs and aid the development of vaccines.
Vector
Although most of the literature mentions only one genus transmitting Leishmania to humans (Lutzomyia) in the New World, a 2003 study by Galati suggested a new classification for New World sand flies, elevating several subgenera to the genus level. Elsewhere in the world, the genus Phlebotomus is considered the vector of leishmaniasis.
Possible non-human reservoirs
Some cases of infection of non-human animals by human-infecting species of Leishmania have been observed. In one study, L. major was identified in twelve of ninety-one fecal samples from wild western lowland gorillas. In another study of fifty-two non-human primates held in a zoo in a leishmaniasis-endemic area, eight (all three chimpanzees, three golden lion tamarins, a tufted capuchin, and an Angolan talapoin) were found to be infected with L. infantum and capable of infecting Lutzomyia longipalpis sand flies, although "parasite loads in infected sand flies observed in this study were considered low".
Organisms
Visceral disease is usually caused by Leishmania donovani, L. infantum, or L. chagasi, but occasionally these species may cause other forms of disease. The cutaneous form of the disease is caused by more than 15 species of Leishmania.
Risk factors
Risk factors include malnutrition, deforestation, lack of sanitation, suppressed immune system and urbanization.
Diagnosis
Leishmaniasis is diagnosed in the hematology laboratory by direct visualization of the amastigotes (Leishman–Donovan bodies). Buffy-coat preparations of peripheral blood or aspirates from marrow, spleen, lymph nodes, or skin lesions should be spread on a slide to make a thin smear and stained with Leishman stain or Giemsa stain (pH 7.2) for 20 minutes. Amastigotes are seen within blood and spleen monocytes or, less commonly, in circulating neutrophils and in aspirated tissue macrophages. They are small, round bodies 2–4 μm in diameter with indistinct cytoplasm, a nucleus, and a small, rod-shaped kinetoplast. Occasionally, amastigotes may be seen lying free between cells. However, the retrieval of tissue samples is often painful for the patient, and identification of the infected cells can be difficult. Therefore, other indirect immunological methods of diagnosis have been developed, including the enzyme-linked immunosorbent assay, antigen-coated dipsticks, and the direct agglutination test. Although these tests are readily available, they are not standard diagnostic tests due to their insufficient sensitivity and specificity.
Several different polymerase chain reaction (PCR) tests are available for the detection of Leishmania DNA. With this assay, a specific and sensitive diagnostic procedure is finally possible. The most sensitive PCR tests target minicircle kinetoplast DNA found in the parasite. Kinetoplast DNA contains sequences for mitochondrial proteins in its maxicircles (~25–50 per parasite) and guide RNA in its minicircles (~10,000 per parasite). With this specific method, Leishmania can still be detected even with a very low parasite load. When the infecting species needs to be identified, as opposed to only detected, other PCR methods have been superior.
Most forms of the disease are transmitted only from nonhuman animals, but some can be spread between humans. Infections in humans are caused by about 21 of 30 species that infect mammals; the different species look the same, but they can be differentiated by isoenzyme analysis, DNA sequence analysis, or monoclonal antibodies.
Prevention
Applying insect repellent to exposed skin and under the ends of sleeves and pant legs, following the instructions on the label. The most effective repellents generally are those that contain the chemical DEET (N,N-diethyl-meta-toluamide).
Leishmaniasis can be partly prevented by using nets treated with insecticide or insect repellent while sleeping. To provide good protection against sandflies, fine mesh sizes of 0.6 mm or less are required, but a mosquito net with 1.2 mm mesh will provide a limited reduction in the number of sandfly bites. Finer mesh sizes have the downside of higher cost and reduced air circulation, which can cause overheating. Many phlebotomine sandfly attacks occur at sunset rather than at night, so it may also be useful to put nets over doors and windows or to use insect repellents.
Use of insecticide-impregnated dog collars and treatment or culling of infected dogs.
Spraying houses and animal shelters with insecticides.
Treatment
The treatment is determined by where the disease is acquired, the species of Leishmania, and the type of infection.
For visceral leishmaniasis in India, South America, and the Mediterranean, liposomal amphotericin B is the recommended treatment and is often used as a single dose. Rates of cure with a single dose of amphotericin have been reported as 95%. In India, almost all infections are resistant to pentavalent antimonials. In Africa, a combination of pentavalent antimonials and paromomycin is recommended. These, however, can have significant side effects. Miltefosine, an oral medication, is effective against both visceral and cutaneous leishmaniasis. Side effects are generally mild, though it can cause birth defects if taken within 3 months of getting pregnant. It does not appear to work for L. major or L. braziliensis.
The evidence around the treatment of cutaneous leishmaniasis is poor. A number of topical treatments may be used for cutaneous leishmaniasis. Which treatments are effective depends on the strain, with topical paromomycin effective for L. major, L. tropica, L. mexicana, L. panamensis, and L. braziliensis. Pentamidine is effective for L. guyanensis. Oral fluconazole or itraconazole appears effective in L. major and L. tropica. There is limited evidence to support the use of heat therapy in cutaneous leishmaniasis as of 2015. There are no studies determining the effect of oral nutritional supplements on visceral leishmaniasis being treated with anti-leishmanial drug therapy.
Epidemiology
Out of 200 countries and territories reporting to WHO, 97 are endemic for leishmaniasis. The settings in which leishmaniasis is found range from rainforests in Central and South America to deserts in western Asia and the Middle East. It affects as many as 12 million people worldwide, with 1.5–2.0 million new cases each year. The visceral form of leishmaniasis has an estimated incidence of 500,000 new cases. In 2014, more than 90% of new cases reported to WHO occurred in six countries: Brazil, Ethiopia, India, Somalia, South Sudan, and Sudan. As of 2010, it caused about 52,000 deaths, down from 87,000 in 1990.
Different types of the disease occur in different regions of the world. Cutaneous disease is most common in Afghanistan, Algeria, Brazil, Colombia, and Iran, while mucocutaneous disease is most common in Bolivia, Brazil, and Peru, and visceral disease is most common in Bangladesh, Brazil, Ethiopia, India, and Sudan.
Leishmaniasis is found through much of the Americas from northern Argentina to South Texas, though not in Uruguay or Chile, and has recently been shown to be spreading to North Texas and Oklahoma; further expansion to the north may be facilitated by climate change as more habitat becomes suitable for vector and reservoir species. Leishmaniasis is also known as papalomoyo, papa lo moyo, úlcera de los chicleros, and chiclera in Latin America. During 2004, an estimated 3,400 troops from the Colombian army, operating in the jungles near the south of the country (in particular around the Meta and Guaviare departments), were infected with leishmaniasis. Allegedly, a contributing factor was that many of the affected soldiers did not use the officially provided insect repellent because of its disturbing odor. Nearly 13,000 cases of the disease were recorded in all of Colombia throughout 2004, and about 360 new instances of the disease among soldiers had been reported by February 2005.
The disease is found across much of Asia and in the Middle East. Within Afghanistan, leishmaniasis occurs commonly in Kabul, partly due to poor sanitation and waste left uncollected in streets, allowing parasite-spreading sand flies an environment they find favorable. In Kabul, the number of people infected was estimated to be at least 200,000, and in three other towns (Herat, Kandahar, and Mazar-i-Sharif) about 70,000 more cases occurred, according to WHO figures from 2002. Kabul is estimated to be the largest center of cutaneous leishmaniasis in the world, with around 67,500 cases as of 2004.
Africa, in particular the East and North, is also home to cases of leishmaniasis. Leishmaniasis is also considered endemic in parts of southern Europe and has been spreading northward in recent years. For example, an outbreak of cutaneous and visceral leishmaniasis was reported in Madrid, Spain, between 2010 and 2012.
Leishmaniasis is mostly a disease of the developing world and is rarely seen in the developed world outside a small number of cases, mostly in instances where troops are stationed away from their home countries. Leishmaniasis has been reported by U.S. troops stationed in Saudi Arabia and Iraq since the Gulf War of 1990, including cases of visceral leishmaniasis.
In September 2005, the disease was contracted by at least four Dutch marines who were stationed in Mazar-i-Sharif, Afghanistan, and subsequently repatriated for treatment.
History
Descriptions of conspicuous lesions similar to cutaneous leishmaniasis appear on tablets from King Ashurbanipal from the seventh century BCE, some of which may have derived from even earlier texts from 2500 to 1500 BCE. Persian physicians, including Avicenna in the 10th century CE, gave detailed descriptions of what was called balkh sore. In 1756, Alexander Russell, after examining a Turkish patient, gave one of the most detailed clinical descriptions of the disease. Physicians in the Indian subcontinent would describe it as kala-azar (pronounced kālā āzār, the Urdu, Hindi, and Hindustani phrase for "black fever", kālā meaning black and āzār meaning fever or disease). In the Americas, evidence of the cutaneous form of the disease in Ecuador and Peru appears in pre-Inca pottery depicting skin lesions and deformed faces dating back to the first century CE. Some 15th- and 16th-century texts from the Inca period and from Spanish colonials mention "valley sickness", "Andean sickness", or "white leprosy", which are likely to be the cutaneous form.
It remains unclear who first discovered the organism. David Douglas Cunningham, Surgeon Major of the British Indian army, may have seen it in 1885 without being able to relate it to the disease. Peter Borovsky, a Russian military surgeon working in Tashkent, conducted research into the etiology of "oriental sore", locally known as sart sore, and in 1898 published the first accurate description of the causative agent, correctly described the parasite's relation to host tissues, and correctly referred it to the protozoa. However, because his results were published in Russian in a journal with low circulation, they were not internationally acknowledged during his lifetime. In 1901, William Boog Leishman identified certain organisms in smears taken from the spleen of a patient who had died from "dum-dum fever" (Dum Dum is an area close to Calcutta) and proposed them to be trypanosomes, found for the first time in India.
A few months later, Captain Charles Donovan (1863–1951) confirmed the finding of what became known as Leishman–Donovan bodies in smears taken from people in Madras in southern India. But it was Ronald Ross who proposed that Leishman–Donovan bodies were the intracellular stages of a new parasite, which he named Leishmania donovani. The link with the disease kala-azar was first suggested by Charles Donovan, and was conclusively demonstrated by Charles Bentley's discovery of L. donovani in patients with kala-azar. Transmission by the sandfly was hypothesized by Lionel Napier and Ernest Struthers at the School of Tropical Medicine at Calcutta and later proven by their colleagues. The disease became a major problem for Allied troops fighting in Sicily during the Second World War; research by Leonard Goodwin then showed that pentostam was an effective treatment.
Society and culture
The Institute for OneWorld Health has reintroduced the drug paromomycin for the treatment of leishmaniasis, with results that led to its approval as an orphan drug. The Drugs for Neglected Diseases Initiative is also actively facilitating the search for novel therapeutics. A treatment with paromomycin will cost about US$10. The drug had originally been identified in the 1960s, but had been abandoned because it would not be profitable, as the disease mostly affects poor people. The Indian government approved paromomycin for sale in August 2006.

By 2012 the World Health Organization had successfully negotiated with the manufacturers to achieve a reduced cost for liposomal amphotericin B, to US$18 a vial, but a number of vials are needed for treatment and the drug must be kept at a stable, cool temperature.
Research
As of 2017, no leishmaniasis vaccine for humans was available, and research to produce one is ongoing. Currently some effective leishmaniasis vaccines for dogs exist. There is also consideration that public health practices can control or eliminate leishmaniasis without a vaccine.
See also
Canine vector-borne disease
Tropical disease
References
External links
Leishmaniasis at Curlie
Doctors Without Borders Leishmaniasis Information Page
CDC Leishmaniasis Page |
Lentigo | A lentigo (plural: lentigines) is a small pigmented spot on the skin with a clearly defined edge, surrounded by normal-appearing skin. It is a harmless (benign) hyperplasia of melanocytes which is linear in its spread, meaning the hyperplasia of melanocytes is restricted to the cell layer directly above the basement membrane of the epidermis, where melanocytes normally reside. This is in contrast to the "nests" of multi-layer melanocytes found in moles (melanocytic nevi). Because of this characteristic feature, the adjective "lentiginous" is used to describe other skin lesions that similarly proliferate linearly within the basal cell layer.
Diagnosis
Conditions characterized by lentigines include:
Lentigo simplex
Solar lentigo (Liver spots)
PUVA lentigines
Ink spot lentigo
LEOPARD syndrome
Mucosal lentigines
Multiple lentigines syndrome
Moynahan syndrome
Generalized lentiginosis
Centrofacial lentiginosis
Carney complex
Inherited patterned lentiginosis in black persons
Partial unilateral lentiginosis
Peutz–Jeghers syndrome
Lentigo maligna
Lentigo maligna melanoma
Acral lentiginous melanoma
Differential diagnosis
Lentigines are distinguished from freckles (ephelides) based on the proliferation of melanocytes. Freckles have a relatively normal number of melanocytes but an increased amount of melanin; a lentigo has an increased number of melanocytes. Freckles increase in number and darkness with sunlight exposure, whereas lentigines stay stable in color regardless of sunlight exposure.
Treatment
Lentigines by themselves are benign; however, removal or treatment may be desired for cosmetic purposes. In this case they can be removed surgically, or lightened with topical depigmentation agents. Some common depigmentation agents, such as azelaic acid and kojic acid, seem to be ineffective in this case, but other agents may work well (4% hydroquinone, 5% topical cysteamine, 10% topical ascorbic acid).
See also
Freckle
List of skin diseases
Mole
Skin disease
Skin lesion
References
== External links == |
Leprosy | Leprosy, also known as Hansen's disease (HD), is a long-term infection by the bacteria Mycobacterium leprae or Mycobacterium lepromatosis. Infection can lead to damage of the nerves, respiratory tract, skin, and eyes. This nerve damage may result in a lack of ability to feel pain, which can lead to the loss of parts of a person's extremities from repeated injuries or infection through unnoticed wounds. An infected person may also experience muscle weakness and poor eyesight. Leprosy symptoms may begin within one year, but, for some people, symptoms may take 20 years or more to occur. Leprosy is spread between people, although extensive contact is necessary. Leprosy has a low pathogenicity, and 95% of people who contract M. leprae do not develop the disease. Spread is thought to occur through a cough or contact with fluid from the nose of a person infected by leprosy. Genetic factors and immune function play a role in how easily a person catches the disease. Leprosy does not spread during pregnancy to the unborn child or through sexual contact. Leprosy occurs more commonly among people living in poverty. There are two main types of the disease – paucibacillary and multibacillary – which differ in the number of bacteria present. A person with paucibacillary disease has five or fewer poorly pigmented, numb skin patches, while a person with multibacillary disease has more than five skin patches. The diagnosis is confirmed by finding acid-fast bacilli in a biopsy of the skin. Leprosy is curable with multidrug therapy. Treatment of paucibacillary leprosy is with the medications dapsone, rifampicin, and clofazimine for six months. Treatment for multibacillary leprosy uses the same medications for 12 months. A number of other antibiotics may also be used. These treatments are provided free of charge by the World Health Organization. Leprosy is not highly contagious. People with leprosy can live with their families and go to school and work.
In the 1980s, there were 5.2 million cases globally, but the number fell to less than 0.2 million by 2020. Most new cases occur in 14 countries, with India accounting for more than half. In the 20 years from 1994 to 2014, 16 million people worldwide were cured of leprosy. About 200 cases per year are reported in the United States. Separating people affected by leprosy by placing them in leper colonies still occurs in some areas of India, China, areas of the African continent, and Thailand.

Leprosy has affected humanity for thousands of years. The disease takes its name from the Greek word λέπρᾱ (léprā), from λεπῐ́ς (lepís; scale), while the term "Hansen's disease" is named after the Norwegian physician Gerhard Armauer Hansen. Leprosy has historically been associated with social stigma, which continues to be a barrier to self-reporting and early treatment. Some consider the word leper offensive, preferring the phrase "person affected with leprosy". Leprosy is classified as a neglected tropical disease. World Leprosy Day was started in 1954 to draw awareness to those affected by leprosy.
Signs and symptoms
Common symptoms present in the different types of leprosy include a runny nose; dry scalp; eye problems; skin lesions; muscle weakness; reddish skin; smooth, shiny, diffuse thickening of facial skin, ear, and hand; loss of sensation in fingers and toes; thickening of peripheral nerves; a flat nose from destruction of nasal cartilage; and changes in phonation and other aspects of speech production. In addition, atrophy of the testes and impotence may occur.

Leprosy can affect people in different ways. The average incubation period is five years. People may begin to notice symptoms within the first year or up to 20 years after infection. The first noticeable sign of leprosy is often the development of pale or pink coloured patches of skin that may be insensitive to temperature or pain. Patches of discolored skin are sometimes accompanied or preceded by nerve problems including numbness or tenderness in the hands or feet. Secondary infections (additional bacterial or viral infections) can result in tissue loss, causing fingers and toes to become shortened and deformed, as cartilage is absorbed into the body. A person's immune response differs depending on the form of leprosy.

Approximately 30% of people affected with leprosy experience nerve damage. The nerve damage sustained is reversible when treated early, but becomes permanent when appropriate treatment is delayed by several months. Damage to nerves may cause loss of muscle function, leading to paralysis. It may also lead to sensation abnormalities or numbness, which may lead to additional infections, ulcerations, and joint deformities.
Cause
M. leprae and M. lepromatosis
M. leprae and M. lepromatosis are the mycobacteria that cause leprosy. M. lepromatosis is a relatively newly identified mycobacterium, isolated from a fatal case of diffuse lepromatous leprosy in 2008. M. lepromatosis is clinically indistinguishable from M. leprae.

M. leprae is an intracellular, acid-fast bacterium that is aerobic and rod-shaped. M. leprae is surrounded by the waxy cell envelope coating characteristic of the genus Mycobacterium. Genetically, M. leprae and M. lepromatosis lack the genes that are necessary for independent growth. M. leprae and M. lepromatosis are obligate intracellular pathogens and cannot be grown (cultured) in the laboratory. The inability to culture M. leprae and M. lepromatosis has made it difficult to definitively identify the bacterial organism under a strict interpretation of Koch's postulates. While the causative organisms have to date been impossible to culture in vitro, it has been possible to grow them in animals such as mice and armadillos.

Naturally occurring infection has been reported in nonhuman primates (including the African chimpanzee, the sooty mangabey, and the cynomolgus macaque), armadillos, and red squirrels. Multilocus sequence typing of the armadillo M. leprae strains suggests that they derived from human strains no more than a few hundred years ago. Thus, it is suspected that armadillos first acquired the organism incidentally from early American explorers. This incidental transmission was sustained in the armadillo population, and it may be transmitted back to humans, making leprosy a zoonotic disease (spread between humans and animals). Red squirrels (Sciurus vulgaris), a threatened species in Great Britain, were found to carry leprosy in November 2016. It has been suggested that the trade in red squirrel fur, highly prized in the medieval period and intensively traded, may have been responsible for the leprosy epidemic in medieval Europe.
A pre-Norman-era skull excavated in Hoxne, Suffolk, in 2017 was found to carry DNA from a strain of Mycobacterium leprae, which closely matched the strain carried by modern red squirrels on Brownsea Island, UK.
Risk factors
The greatest risk factor for developing leprosy is contact with another person infected by leprosy. People who are exposed to a person who has leprosy are 5–8 times more likely to develop leprosy than members of the general population. Leprosy also occurs more commonly among those living in poverty. Not all people who are infected with M. leprae develop symptoms.

Conditions that reduce immune function, such as malnutrition, other illnesses, or genetic mutations, may increase the risk of developing leprosy. Infection with HIV does not appear to increase the risk of developing leprosy. Certain genetic factors in the person exposed have been associated with developing lepromatous or tuberculoid leprosy.
Transmission
Transmission of leprosy occurs during close contact with those who are infected. Transmission of leprosy is through the upper respiratory tract. Older research suggested the skin as the main route of transmission, but recent research has increasingly favored the respiratory route.

Leprosy is not sexually transmitted and is not spread through pregnancy to the unborn child. The majority (95%) of people who are exposed to M. leprae do not develop leprosy; casual contact such as shaking hands and sitting next to someone with leprosy does not lead to transmission. People are considered non-infectious 72 hours after starting appropriate multi-drug therapy.

Two exit routes of M. leprae from the human body often described are the skin and the nasal mucosa, although their relative importance is not clear. Lepromatous cases show large numbers of organisms deep in the dermis, but whether they reach the skin surface in sufficient numbers is doubtful. Leprosy may also be transmitted to humans by armadillos, although the mechanism is not fully understood.
Genetics
Not all people who are infected or exposed to M. leprae develop leprosy, and genetic factors are suspected to play a role in susceptibility to infection. Cases of leprosy often cluster in families, and several genetic variants have been identified. In many people who are exposed, the immune system is able to eliminate the leprosy bacteria during the early infection stage, before severe symptoms develop. A genetic defect in cell-mediated immunity may make a person susceptible to developing leprosy symptoms after exposure to the bacteria. The region of DNA responsible for this variability is also involved in Parkinson's disease, giving rise to current speculation that the two disorders may be linked at the biochemical level.
Mechanism
Most leprosy complications are the result of nerve damage. The nerve damage occurs from direct invasion by the M. leprae bacteria and from the person's immune response, which results in inflammation. The molecular mechanism underlying how M. leprae produces the symptoms of leprosy is not clear, but M. leprae has been shown to bind to Schwann cells, which may lead to nerve injury including demyelination and a loss of nerve function (specifically a loss of axonal conductance). Numerous molecular mechanisms have been associated with this nerve damage, including the presence of a laminin-binding protein and the glycoconjugate (PGL-1) on the surface of M. leprae that can bind to laminin on peripheral nerves.

As part of the human immune response, white blood cell-derived macrophages may engulf M. leprae by phagocytosis.

In the initial stages, small sensory and autonomic nerve fibers in the skin of a person with leprosy are damaged. This damage usually results in hair loss in the area, a loss of the ability to sweat, and numbness (decreased ability to detect sensations such as temperature and touch). Further peripheral nerve damage may result in skin dryness, more numbness, and muscle weakness or paralysis in the area affected. The skin can crack, and if the skin injuries are not carefully cared for, there is a risk of a secondary infection that can lead to more severe damage.
Diagnosis
In countries where people are frequently infected, a person is considered to have leprosy if they have one of the following two signs:
Skin lesion consistent with leprosy and with definite sensory loss.
Positive skin smears.
Skin lesions can be single or many, and are usually hypopigmented, although occasionally reddish or copper-colored. The lesions may be flat (macules), raised (papules), or solid elevated areas (nodular). Sensory loss at the skin lesion is a feature that can help determine whether the lesion is caused by leprosy or by another disorder such as tinea versicolor. Thickened nerves are associated with leprosy and can be accompanied by loss of sensation or muscle weakness, but muscle weakness without the characteristic skin lesion and sensory loss is not considered a reliable sign of leprosy.

In some cases, acid-fast leprosy bacilli in skin smears are considered diagnostic; however, the diagnosis is typically made without laboratory tests, based on symptoms. If a person has a new leprosy diagnosis and already has a visible disability caused by leprosy, the diagnosis is considered late. In countries or areas where leprosy is uncommon, such as the United States, diagnosis of leprosy is often delayed because healthcare providers are unaware of leprosy and its symptoms. Early diagnosis and treatment prevent nerve involvement, the hallmark of leprosy, and the disability it causes.

There is no recommended test to diagnose latent leprosy in people without symptoms. Few people with latent leprosy test positive for anti-PGL-1. The presence of M. leprae bacterial DNA can be identified using a polymerase chain reaction (PCR)-based technique. This molecular test alone is not sufficient to diagnose a person, but it may be used to identify someone who is at high risk of developing or transmitting leprosy, such as those with few lesions or an atypical clinical presentation.
Classification
Several different approaches for classifying leprosy exist. There are similarities between the classification approaches.
The World Health Organization system distinguishes "paucibacillary" and "multibacillary" based upon the proliferation of bacteria. ("pauci-" refers to a low quantity.)
The Ridley-Jopling scale provides five gradations.
The ICD-10, though developed by the WHO, uses Ridley-Jopling and not the WHO system. It also adds an indeterminate ("I") entry.
In MeSH, three groupings are used.
Leprosy may also occur with only neural involvement, without skin lesions.
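The WHO two-group system above reduces to a simple lesion count, per the clinical rule given earlier in this article: five or fewer skin patches is paucibacillary, more than five is multibacillary. A minimal, purely illustrative sketch of that rule (the function name and interface are my own, not a standard API):

```python
def who_classification(num_skin_patches: int) -> str:
    """Classify leprosy under the WHO two-group system.

    Based on the clinical rule that paucibacillary disease shows five or
    fewer skin patches and multibacillary disease more than five.
    Illustrative only -- real diagnosis also relies on skin smears,
    sensory testing, and nerve findings.
    """
    if num_skin_patches < 0:
        raise ValueError("patch count cannot be negative")
    return "paucibacillary" if num_skin_patches <= 5 else "multibacillary"

# Example: five patches falls on the paucibacillary side of the cutoff.
print(who_classification(5))   # paucibacillary
print(who_classification(12))  # multibacillary
```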
Prevention
Early detection of the disease is important, since physical and neurological damage may be irreversible even if the infection is cured. Medications can decrease the risk of acquiring the disease for those living with people who have leprosy, and likely also for those with whom people with leprosy come into contact outside the home. The WHO recommends that preventive medicine be given to people who are in close contact with someone who has leprosy. The suggested preventive treatment is a single dose of rifampicin (SDR) in adults and children over 2 years old who do not already have leprosy or tuberculosis. Preventive treatment is associated with a 57% reduction in infections within 2 years and a 30% reduction in infections within 6 years.

The Bacillus Calmette–Guérin (BCG) vaccine offers a variable amount of protection against leprosy in addition to protection against its closely related target, tuberculosis. It appears to be 26% to 41% effective (based on controlled trials) and about 60% effective based on observational studies, with two doses possibly working better than one. The WHO concluded in 2018 that the BCG vaccine at birth reduces leprosy risk and recommended it in countries with a high incidence of TB and leprosy. People living in the same home as a person with leprosy are advised to take a BCG booster, which may improve their immunity by 56%. Development of a more effective vaccine is ongoing.

A novel vaccine called LepVax entered clinical trials in 2017, with the first encouraging results on 24 participants published in 2020. If successful, this would be the first leprosy-specific vaccine available.
Treatment
Anti-leprosy medication
A number of leprostatic agents are available for treatment. A three-drug regimen of rifampicin, dapsone, and clofazimine is recommended for all people with leprosy: six months for paucibacillary leprosy and 12 months for multibacillary leprosy.

Multidrug therapy (MDT) remains highly effective, and people are no longer infectious after the first monthly dose. It is safe and easy to use under field conditions because of its presentation in calendar blister packs. Post-treatment relapse rates remain low. Resistance has been reported in several countries, although the number of cases is small. People with rifampicin-resistant leprosy may be treated with second-line drugs such as fluoroquinolones, minocycline, or clarithromycin, but the treatment duration is 24 months because of their lower bactericidal activity. Evidence on the potential benefits and harms of alternative regimens for drug-resistant leprosy is not yet available.
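The first-line regimen described above uses the same three drugs for both disease types and differs only in duration. A tiny lookup capturing that structure (the drug names and durations come from the text; the data layout and function are my own sketch):

```python
# First-line MDT as described in the text: one fixed drug combination,
# with treatment duration depending on the WHO classification.
MDT_DRUGS = ("rifampicin", "dapsone", "clofazimine")
MDT_DURATION_MONTHS = {"paucibacillary": 6, "multibacillary": 12}

def mdt_regimen(classification: str) -> dict:
    """Return the sketched first-line regimen for a WHO classification."""
    if classification not in MDT_DURATION_MONTHS:
        raise KeyError(f"unknown classification: {classification!r}")
    return {
        "drugs": MDT_DRUGS,
        "duration_months": MDT_DURATION_MONTHS[classification],
    }

print(mdt_regimen("multibacillary"))
```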
Skin changes
For people with nerve damage, protective footwear may help prevent ulcers and secondary infection. Canvas shoes may be better than PVC boots. There may be no difference between double rocker shoes and below-knee plaster. Topical ketanserin seems to have a better effect on ulcer healing than clioquinol cream or zinc paste, but the evidence for this is weak. Phenytoin applied to the skin improves skin changes to a greater degree when compared to saline dressings.
Outcomes
Although leprosy has been curable since the mid-20th century, left untreated it can cause permanent physical impairments and damage to a person's nerves, skin, eyes, and limbs. Despite leprosy not being very infectious and having a low pathogenicity, there is still significant stigma and prejudice associated with the disease. Because of this stigma, leprosy can affect a person's participation in social activities and may also affect the lives of their family and friends. People with leprosy are also at a higher risk for problems with their mental well-being. The social stigma may contribute to problems obtaining employment, financial difficulties, and social isolation. Efforts to reduce discrimination and the stigma surrounding leprosy may help improve outcomes for people with leprosy.
Epidemiology
In 2018, there were 208,619 new cases of leprosy recorded, a slight decrease from 2017. In 2015, 94% of the new leprosy cases were confined to 14 countries. India reported the greatest number of new cases (60% of reported cases), followed by Brazil (13%) and Indonesia (8%). Although the number of cases worldwide continues to fall, there are parts of the world where leprosy is more common, including Brazil, South Asia (India, Nepal, Bhutan), some parts of Africa (Tanzania, Madagascar, Mozambique), and the western Pacific. About 150 to 250 cases are diagnosed in the United States each year.

In the 1960s, there were tens of millions of leprosy cases recorded, when the bacteria started to develop resistance to dapsone, the most common treatment option at the time. International (e.g., the WHO's "Global Strategy for Reducing Disease Burden Due to Leprosy") and national (e.g., the International Federation of Anti-Leprosy Associations) initiatives have reduced the total number of cases and the number of new cases of the disease.
Disease burden
The number of new leprosy cases is difficult to measure and monitor because of leprosy's long incubation period, delays in diagnosis after onset of the disease, and lack of medical care in affected areas. The registered prevalence of the disease is used to determine disease burden. Registered prevalence is a useful proxy indicator of the disease burden, as it reflects the number of active leprosy cases diagnosed with the disease and receiving treatment with MDT at a given point in time. The prevalence rate is defined as the number of cases registered for MDT treatment among the population in which the cases have occurred, again at a given point in time.
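The registered-prevalence indicator described above is simply a ratio at a point in time: cases on MDT divided by the population in which they occurred. A minimal sketch, assuming the common convention of expressing the rate per 10,000 population (that scaling convention is my assumption, not stated in this section):

```python
def registered_prevalence(cases_on_mdt: int, population: int,
                          per: int = 10_000) -> float:
    """Registered prevalence: cases registered for MDT treatment among
    the population in which they occurred, at a given point in time.

    Expressed per `per` people; per-10,000 is assumed here as the
    conventional scaling, not taken from the text.
    """
    if population <= 0:
        raise ValueError("population must be positive")
    return cases_on_mdt / population * per

# Example: 50 registered cases in a population of 1,000,000.
print(registered_prevalence(50, 1_000_000))  # 0.5 per 10,000
```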
History
Historical distribution
Using comparative genomics, in 2005, geneticists traced the origins and worldwide distribution of leprosy from East Africa or the Near East along human migration routes. They found four strains of M. leprae with specific regional locations. Strain 1 occurs predominantly in Asia, the Pacific region, and East Africa; strain 4, in West Africa and the Caribbean; strain 3 in Europe, North Africa, and the Americas; and strain 2 only in Ethiopia, Malawi, Nepal, north India, and New Caledonia.
This confirms the spread of the disease along the migration, colonisation, and slave trade routes taken from East Africa to India, from West Africa to the New World, and from Africa into Europe and vice versa.

Skeletal remains discovered in 2009 represent the oldest documented evidence for leprosy, dating to the 2nd millennium BC. Located at Balathal, Rajasthan, in northwest India, the discoverers suggest that, if the disease did migrate from Africa to India during the 3rd millennium BC, "at a time when there was substantial interaction among the Indus Civilization, Mesopotamia, and Egypt, there needs to be additional skeletal and molecular evidence of leprosy in India and Africa to confirm the African origin of the disease." A proven human case was verified by DNA taken from the shrouded remains of a man discovered in a tomb next to the Old City of Jerusalem, Israel, dated by radiocarbon methods to the first half of the 1st century.

The oldest strains of leprosy known from Europe are from Great Chesterford in southeast England, dating back to AD 415–545. These findings suggest a different path for the spread of leprosy, meaning it may have originated in Western Eurasia. This study also indicates that there were more strains in Europe at the time than previously determined.
Discovery and scientific progress
Literary attestation of leprosy is unclear because of the ambiguity of many early sources, including the Indian Atharvaveda and Kausika Sutra, the Egyptian Ebers papyrus, and the Hebrew Bible's various sections regarding signs of impurity (tzaraath). Clearly leprotic symptoms are attested in the Indian doctor Sushruta's Compendium, originally dating to c. 600 BC but only surviving in emended texts no earlier than the 5th century. They were separately described by Hippocrates in 460 BC. However, Hansen's disease probably did not exist in Greece or the Middle East before the Common Era. In 1846, Francis Adams produced The Seven Books of Paulus Aegineta, which included a commentary on all medical and surgical knowledge as well as descriptions and remedies to do with leprosy from the Romans, Greeks, and Arabs.

Leprosy did not exist in the Americas before colonization by modern Europeans, nor did it exist in Polynesia until the middle of the 19th century.
The causative agent of leprosy, M. leprae, was discovered by G. H. Armauer Hansen in Norway in 1873, making it the first bacterium to be identified as causing disease in humans.
Treatment
The first effective treatment (promin) became available in the 1940s. In the 1950s, dapsone was introduced. The search for further effective antileprosy drugs led to the use of clofazimine and rifampicin in the 1960s and 1970s. Later, Indian scientist Shantaram Yawalkar and his colleagues formulated a combined therapy using rifampicin and dapsone, intended to mitigate bacterial resistance. Multi-drug therapy (MDT) combining all three drugs was first recommended by the WHO in 1981. These three antileprosy drugs are still used in the standard MDT regimens.

Leprosy was once believed to be highly contagious and was treated with mercury, as was syphilis, which was first described in 1530. Many early cases thought to be leprosy could actually have been syphilis. Resistance developed to the initial treatments. Until the introduction of MDT in the early 1980s, leprosy could not be diagnosed and treated successfully within the community. Japan still has sanatoriums (although Japan's sanatoriums no longer have active leprosy cases, nor are survivors held in them by law).

The importance of the nasal mucosa in the transmission of M. leprae was recognized as early as 1898 by Schäffer, in particular that of the ulcerated mucosa. The mechanism of plantar ulceration in leprosy and its treatment were first described by Dr Ernest W Price.
Etymology
The word "leprosy" comes from the Greek words λέπος (lépos), "skin", and λεπερός (leperós), "scaly man".
Society and culture
India
British India enacted the Leprosy Act of 1898, which institutionalized those affected and segregated them by sex to prevent reproduction. The act was difficult to enforce, and was repealed in 1983 only after multidrug therapy had become widely available. In 1983, the National Leprosy Elimination Programme, previously the National Leprosy Control Programme, changed its methods from surveillance to the treatment of people with leprosy. India still accounts for over half of the global disease burden. According to the WHO, new cases in India during 2019 diminished to 114,451 patients (57% of the world's total new cases).
Until 2019, a spouse's diagnosis of leprosy could justify a petition for divorce.
Treatment cost
Between 1995 and 1999, the WHO, with the aid of the Nippon Foundation, supplied all endemic countries with free multidrug therapy in blister packs, channeled through ministries of health. This free provision was extended in 2000 and again in 2005, 2010 and 2015 with donations by the multidrug therapy manufacturer Novartis through the WHO. In the latest agreement signed between the company and the WHO in October 2015, the provision of free multidrug therapy by the WHO to all endemic countries will run until the end of 2025. At the national level, nongovernment organizations affiliated with the national program will continue to be provided with an appropriate free supply of multidrug therapy by the WHO.
Historical texts
Written accounts of leprosy date back thousands of years. Various skin diseases translated as leprosy appear in the ancient Indian text, the Atharva Veda, by 600 BC. Another Indian text, the Manusmriti (200 BC), prohibited contact with those infected with the disease and made marriage to a person infected with leprosy punishable.

The Hebraic root tsara or tsaraath (צָרַע, – tsaw-rah – to be struck with leprosy, to be leprous) and the Greek λεπρός (lepros) are of broader classification than the more narrow use of the term related to Hansen's disease. Any progressive skin disease (a whitening or splotchy bleaching of the skin, raised manifestations of scales, scabs, infections, rashes, etc.), as well as generalized molds and surface discoloration of any clothing or leather, or discoloration on walls or surfaces throughout homes, came under the "law of leprosy" (Leviticus 14:54–57). Ancient sources such as the Talmud (Sifra 63) make clear that tzaraath refers to various types of lesions or stains associated with ritual impurity and occurring on cloth, leather, or houses, as well as skin. Traditional Judaism and Jewish rabbinical authorities, both historical and modern, emphasize that the tsaraath of Leviticus is a spiritual ailment with no direct relationship to Hansen's disease or physical contagions. The relation of tsaraath to "leprosy" comes from translations of Hebrew Biblical texts into Greek and ensuing misconceptions.

All three Synoptic Gospels of the New Testament describe instances of Jesus healing people with leprosy (Matthew 8:1–4, Mark 1:40–45, and Luke 5:12–16). The Bible's description of leprosy is congruous (if lacking detail) with the symptoms of modern leprosy, but the relationship between this disease, tzaraath, and Hansen's disease has been disputed. The biblical perception that people with leprosy were unclean can be found in a passage from Leviticus 13:44–46.
While this text defines the leper as impure, it does not explicitly make a moral judgement on those with leprosy. Some early Christians believed that those affected by leprosy were being punished by God for sinful behavior. Moral associations have persisted throughout history. Pope Gregory the Great (540–604) and Isidore of Seville (560–636) considered people with the disease to be heretics.
Middle Ages
It is likely that a rise in leprosy in Western Europe occurred in the Middle Ages, based on the increased number of hospitals created to treat people with leprosy in the 12th and 13th centuries. France alone had nearly 2,000 leprosariums during this period.

The social perception of leprosy in medieval communities was generally one of fear, and people infected with the disease were thought to be unclean, untrustworthy, and morally corrupt. Segregation from mainstream society was common, and people with leprosy were often required to wear clothing that identified them as such or to carry a bell announcing their presence. The Third Lateran Council of 1179 and a 1346 edict by King Edward expelled lepers from city limits. Because of the moral stigma of the disease, methods of treatment were both physical and spiritual, and leprosariums were established under the purview of the Roman Catholic Church.
19th century
Norway
Norway took a progressive stance on leprosy tracking and treatment and played an influential role in European understanding of the disease. In 1832, Dr. JJ Hjort conducted the first leprosy survey, establishing a basis for epidemiological surveys. Subsequent surveys resulted in the establishment of a national leprosy registry to study the causes of leprosy and to track the rate of infection.
Early leprosy research throughout Europe was conducted by the Norwegian scientists Daniel Cornelius Danielssen and Carl Wilhelm Boeck. Their work resulted in the establishment of the National Leprosy Research and Treatment Center. Danielssen and Boeck believed leprosy was hereditary, a stance that was influential in advocating for the isolation of infected individuals by sex to prevent reproduction.
Colonialism and imperialism
Though leprosy in Europe was again on the decline by the 1860s, Western countries embraced isolation treatment out of fear of the spread of disease from developing countries, minimal understanding of bacteriology, lack of diagnostic ability or knowledge of how contagious the disease was, and missionary activity. Growing imperialism and the pressures of the industrial revolution resulted in a Western presence in countries where leprosy was endemic, namely the British presence in India. Isolation treatment methods were observed by Surgeon-Major Henry Vandyke Carter of the British colony in India while visiting Norway, and these methods were applied in India with the financial and logistical assistance of religious missionaries. Colonial and religious influence, and the associated stigma, continued to be a major factor in the treatment and public perception of leprosy in endemic developing countries until the mid-twentieth century.
20th century
United States
The National Leprosarium at Carville, Louisiana, formerly known as the Louisiana Leper Home, was the only leprosy hospital in the mainland United States. Leprosy patients from all over the United States were sent to Carville to be kept in isolation away from the public, as little about leprosy transmission was known at the time and stigma against those with leprosy was high (see Leprosy stigma). The Carville leprosarium was known for its innovations in reconstructive surgery for those with leprosy. In 1941, 22 patients at Carville underwent trials for a new drug called promin. The results were described as miraculous, and soon after the success of promin came dapsone, a medicine even more effective in the fight against leprosy.
Stigma
Despite effective treatment and education efforts, leprosy stigma continues to be problematic in developing countries where the disease is common. Leprosy is most common amongst impoverished populations, where social stigma is likely to be compounded by poverty. Fears of ostracism, loss of employment, or expulsion from family and society may contribute to delayed diagnosis and treatment. Folk beliefs, lack of education, and religious connotations of the disease continue to influence social perceptions of those affected in many parts of the world. In Brazil, for example, folklore holds that leprosy is a disease transmitted by dogs, or that it is associated with sexual promiscuity, or that it is a punishment for sins or moral transgressions (distinct from other diseases and misfortunes, which are in general thought of as being according to the will of God). Socioeconomic factors also have a direct impact. Lower-class domestic workers who are often employed by those in a higher socioeconomic class may find their employment in jeopardy as physical manifestations of the disease become apparent. Skin discoloration and darker pigmentation resulting from the disease also have social repercussions. In extreme cases in northern India, leprosy is equated with an "untouchable" status that "often persists long after individuals with leprosy have been cured of the disease, creating lifelong prospects of divorce, eviction, loss of employment, and ostracism from family and social networks."
Public policy
A goal of the World Health Organization is to "eliminate leprosy", and in 2016 the organization launched the "Global Leprosy Strategy 2016–2020: Accelerating towards a leprosy-free world". Elimination of leprosy is defined as "reducing the proportion of leprosy patients in the community to very low levels, specifically to below one case per 10 000 population". Diagnosis and treatment with multidrug therapy are effective, and a 45% decline in disease burden has occurred since multidrug therapy has become more widely available. The organization emphasizes the importance of fully integrating leprosy treatment into public health services, effective diagnosis and treatment, and access to information. The approach includes supporting an increase in health care professionals who understand the disease, and a coordinated and renewed political commitment that includes coordination between countries and improvements in the methodology for collecting and analysing data. Interventions in the "Global Leprosy Strategy 2016–2020: Accelerating towards a leprosy-free world" include:
Early detection of cases focusing on children to reduce transmission and disabilities
Enhanced healthcare services and improved access for people who may be marginalized
For countries where leprosy is endemic, further interventions include an improved screening of close contacts, improved treatment regimens, and interventions to reduce stigma and discrimination against people who have leprosy.
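The elimination threshold quoted above, fewer than one registered case per 10 000 population, is a simple rate calculation. A minimal Python sketch with made-up case and population figures (the numbers are illustrative, not real surveillance data):

```python
def prevalence_per_10k(cases, population):
    """Registered cases per 10,000 population."""
    return cases / population * 10_000

def meets_elimination_target(cases, population):
    """WHO 'elimination' threshold: below one case per 10,000 people."""
    return prevalence_per_10k(cases, population) < 1.0

# Illustrative, made-up numbers: 4,500 registered cases in a
# population of 50 million gives 0.9 cases per 10,000, which is
# below the elimination threshold.
rate = prevalence_per_10k(4_500, 50_000_000)
```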
Community-based interventions
In some instances in India, community-based rehabilitation is embraced by local governments and NGOs alike. Often, the identity cultivated by a community environment is preferable to reintegration, and models of self-management and collective agency independent of NGOs and government support have been desirable and successful.
Notable cases
Josephine Cafrine of Seychelles had leprosy from the age of 12 and kept a personal journal that documented her struggles and suffering. It was published as an autobiography in 1923.
Saint Damien De Veuster, a Roman Catholic priest from Belgium, ministered to people with leprosy who had been placed under a government-sanctioned medical quarantine on the island of Molokaʻi in the Kingdom of Hawaiʻi, eventually contracting the disease himself.
Baldwin IV of Jerusalem was a Christian king of Latin Jerusalem who had leprosy.
Josefina Guerrero was a Filipino spy during World War II, who used the Japanese fear of her leprosy to listen to their battle plans and deliver the information to the American forces under Douglas MacArthur.
King Henry IV of England (reigned 1399 to 1413) possibly had leprosy.
Vietnamese poet Hàn Mặc Tử
Ōtani Yoshitsugu, a Japanese daimyō
Leprosy in the media
English author Graham Greene's novel A Burnt-Out Case is set in a leper colony in the Belgian Congo. The story is predominantly about a disillusioned architect working with a doctor on devising new cures and amenities for mutilated victims of leprosy; the title, too, refers to the condition of mutilation and disfigurement in the disease.
Forugh Farrokhzad made a 22-minute documentary about a leprosy colony in Iran in 1962 titled The House Is Black. The film humanizes the people affected and opens by saying that "there is no shortage of ugliness in the world, but by closing our eyes on ugliness, we will intensify it."
Molokai is a novel by Alan Brennert about a leper colony in Hawaii. It follows the story of a seven-year-old girl taken from her family and sent to the leper settlement on the small Hawaiian island of Molokai. Although a work of fiction, the novel is based on real and revealing incidents that occurred at the settlement.
The lead character in The Chronicles of Thomas Covenant by Stephen R. Donaldson suffers from leprosy. His condition seems to be cured by the magic of the fantasy land he finds himself in, but he resists believing in its reality, for example by continuing to perform a regular visual surveillance of his extremities as a safety check. Donaldson gained experience with the disease as a young man in India, where his father worked at a mission for people with leprosy.
Infection of animals
Wild nine-banded armadillos (Dasypus novemcinctus) in the south central United States often carry Mycobacterium leprae. This is believed to be because armadillos have a low body temperature. Leprosy lesions appear mainly in cooler body regions such as the skin and the mucous membranes of the upper respiratory tract. Because of armadillos' armor, skin lesions are hard to see; abrasions around the eyes, nose, and feet are the most common signs. Infected armadillos make up a large reservoir of M. leprae and may be a source of infection for some humans in the United States or other locations in the armadillos' home range. In armadillo leprosy, lesions do not persist at the site of entry; instead, M. leprae multiplies in macrophages at the site of inoculation and in lymph nodes. A recent outbreak in chimpanzees in West Africa showed that the bacterium can infect another species and may also have additional rodent hosts. Recent studies have demonstrated that the disease is endemic in the UK's Eurasian red squirrel population, with Mycobacterium leprae and Mycobacterium lepromatosis appearing in different populations. The M. leprae strain discovered on Brownsea Island matches one thought to have died out in the human population in mediaeval times. Despite this, and speculation regarding past transmission through trade in squirrel furs, there does not seem to be a high risk of squirrel-to-human transmission from the wild population: although leprosy continues to be diagnosed in immigrants to the UK, the last known human case of leprosy arising in the UK was recorded over 200 years ago.
See also
Leper Colony
Alice Ball
Maurice Born
Kate Marsden
References
Further reading
Pam Fessler (2020). Carville's Cure: Leprosy, Stigma, and the Fight for Justice. Liveright. ISBN 978-1631495038.
External links
Leprosy at Curlie
Links and resources to information about leprosy selected by the World Health Organization |
Liver abscess | A liver abscess is a mass filled with pus inside the liver. Common causes are abdominal conditions such as appendicitis or diverticulitis due to haematogenous spread through the portal vein. It can also develop as a complication of a liver injury.
Causes
Risk factors for developing a liver abscess include infection, post-procedural complications, and malignancy, such as primary liver tumours, liver metastases, biliary procedures, biliary injuries, biliary tract disease, appendicitis, and diverticulitis. Major bacterial causes of liver abscess include the following:
Streptococcus species (including Enterococcus)
Escherichia species
Staphylococcus species
Klebsiella species (Higher rates in the Far East)
Anaerobes (including Bacteroides species)
Pseudomonas species
Proteus species
Entamoeba histolytica
However, many cases are polymicrobial.
Diagnosis
Types
There are several major forms of liver abscess, classified by cause:
Pyogenic liver abscess, which is most often polymicrobial, accounts for 80% of hepatic abscess cases in the United States.
Amoebic liver abscess due to Entamoeba histolytica accounts for 10% of cases. The incidence is much higher in developing countries.
Fungal abscess, most often due to Candida species, accounts for less than 10% of cases.
Iatrogenic abscess, caused by medical interventions
Management
Antibiotics: IV metronidazole and third-generation cephalosporins/quinolones, β-lactam antibiotics, and aminoglycosides are effective.
Prognosis
The prognosis of liver abscess has improved. In-hospital mortality is about 2.5–19%. Prognosis is worse in the elderly and in those with ICU admission, shock, cancer, fungal infection, cirrhosis, chronic kidney disease, acute respiratory failure, severe disease, or disease of biliary origin.
References
External links
Liver Abscess CT Images at CTCases
Liver fluke | Liver fluke is a collective name of a polyphyletic group of parasitic trematodes under the phylum Platyhelminthes.
They are principally parasites of the liver of various mammals, including humans. Capable of moving along the blood circulation, they can occur also in bile ducts, gallbladder, and liver parenchyma. In these organs, they produce pathological lesions leading to parasitic diseases. They have complex life cycles requiring two or three different hosts, with free-living larval stages in water.
Biology
The body of liver flukes is leaf-like and flattened, and is covered with a tegument. They are hermaphrodites, having complete sets of both male and female reproductive systems. They have simple digestive systems and primarily feed on blood. The anterior end bears the oral sucker opening into the mouth. The mouth leads to a small pharynx, which is followed by an extended intestine that runs through the entire length of the body. The intestine is heavily branched, and an anus is absent; instead, the intestine runs along an excretory canal that opens at the posterior end. Adult flukes produce eggs that are passed out through the excretory pore. The eggs infect different species of snails (as intermediate hosts), in which they grow into larvae. The larvae are released into the environment, from where the definitive hosts (humans and other mammals) acquire the infection. In some species another intermediate host is required, generally a cyprinid fish; in this case the definitive hosts are infected by eating infected fish, making these flukes food-borne parasites.
Pathogenicity
Liver fluke infections cause serious medical and veterinary diseases. Fasciolosis of sheep, goats, and cattle is a major cause of economic losses in the dairy and meat industries. Fasciolosis of humans produces clinical symptoms such as fever, nausea, a swollen liver, extreme abdominal pain, jaundice, and anemia. Clonorchiasis and opisthorchiasis (due to Opisthorchis viverrini) are particularly dangerous. These flukes can survive for several decades in humans, causing chronic inflammation of the bile ducts, epithelial hyperplasia, periductal fibrosis, and bile duct dilatation. In many infections these symptoms cause further complications such as stone formation, recurrent pyogenic cholangitis, and cancer (cholangiocarcinoma). Opisthorchiasis is the leading cause of cholangiocarcinoma in Thailand and Laos. Both clonorchiasis and opisthorchiasis are classified as Group 1 human biological agents (carcinogens) by the International Agency for Research on Cancer (IARC).
Species
Species of liver fluke include:
Clonorchis sinensis (the Chinese liver fluke, or the Oriental liver fluke)
Dicrocoelium dendriticum (the lancet liver fluke)
Dicrocoelium hospes
Fasciola hepatica (the sheep liver fluke)
Fascioloides magna (the giant liver fluke)
Fasciola gigantica
Fasciola jacksoni
Metorchis conjunctus
Metorchis albidus
Protofasciola robusta
Parafasciolopsis fasciomorphae
Opisthorchis viverrini (Southeast Asian liver fluke)
Opisthorchis felineus (Cat liver fluke)
Opisthorchis guayaquilensis
See also
The Integrated Opisthorchiasis Control Program
== References == |
Lymphogranuloma venereum | Lymphogranuloma venereum (LGV; also known as climatic bubo, Durand–Nicolas–Favre disease, poradenitis inguinale, lymphogranuloma inguinale, and strumous bubo) is a sexually transmitted disease caused by the invasive serovars L1, L2, L2a, L2b, or L3 of Chlamydia trachomatis. LGV is primarily an infection of lymphatics and lymph nodes. Chlamydia trachomatis is the bacterium responsible for LGV. It gains entrance through breaks in the skin, or it can cross the epithelial cell layer of mucous membranes. The organism travels from the site of inoculation down the lymphatic channels to multiply within mononuclear phagocytes of the lymph nodes it passes.
In developed nations, it was considered rare before 2003. However, an outbreak in the Netherlands among gay men has led to an increase of LGV in Europe and the United States. LGV was first described by Wallace in 1833 and again by Durand, Nicolas, and Favre in 1913. Since the 2004 Dutch outbreak, many additional cases have been reported, leading to greater surveillance. Soon after the initial Dutch report, national and international health authorities launched warning initiatives, and multiple LGV cases were identified in several more European countries (Belgium, France, the UK, Germany, Sweden, Italy and Switzerland) and in the US and Canada. All cases reported in Amsterdam and France, and a considerable percentage of LGV infections in the UK and Germany, were caused by a newly discovered Chlamydia variant, L2b, a.k.a. the Amsterdam variant. The L2b variant could be traced back to, and was isolated from, anal swabs of men who have sex with men (MSM) who visited the STI city clinic of San Francisco in 1981. This finding suggests that the recent LGV outbreak among MSM in industrialised countries is a slowly evolving epidemic. The L2b serovar has also been identified in Australia.
Signs and symptoms
The clinical manifestation of LGV depends on the site of entry of the infectious organism (the sex contact site) and the stage of disease progression.
Inoculation at the mucous lining of external sex organs (penis and vagina) can lead to the inguinal syndrome named after the formation of buboes or abscesses in the groin (inguinal) region where draining lymph nodes are located. These signs usually appear from 3 days to a month after exposure.
The rectal syndrome (lymphogranuloma venereum proctitis, or LGVP) arises if the infection takes place via the rectal mucosa (through anal sex) and is mainly characterized by proctocolitis or proctitis symptoms.
The pharyngeal syndrome is rare. It starts after infection of pharyngeal tissue, and buboes in the neck region can occur.
Primary stage
LGV may begin as a self-limited painless genital ulcer that occurs at the contact site 3–12 days after infection. Women rarely notice a primary infection because the initial ulceration where the organism penetrates the mucosal layer is often located out of sight, in the vaginal wall. In men fewer than one-third of those infected notice the first signs of LGV. This primary stage heals in a few days. Erythema nodosum occurs in 10% of cases.
Secondary stage
The secondary stage most often occurs 10–30 days later, but can present up to six months later. The infection spreads to the lymph nodes through lymphatic drainage pathways. The most frequent presenting clinical manifestation of LGV among males whose primary exposure was genital is unilateral (in two-thirds of cases) lymphadenitis and lymphangitis, often with tender inguinal and/or femoral lymphadenopathy because of the drainage pathway for their likely infected areas. Lymphangitis of the dorsal penis may also occur and resembles a string or cord. If the route was anal sex, the infected person may experience the lymphadenitis and lymphangitis noted above. They may instead develop proctitis, inflammation limited to the rectum (the distal 10–12 cm) that may be associated with anorectal pain, tenesmus, and rectal discharge, or proctocolitis, inflammation of the colonic mucosa extending to 12 cm above the anus and associated with symptoms of proctitis plus diarrhea or abdominal cramps. In addition, symptoms may include inflammatory involvement of the perirectal or perianal lymphatic tissues. In females, cervicitis, perimetritis, or salpingitis may occur, as well as lymphangitis and lymphadenitis in deeper nodes. Because of lymphatic drainage pathways, some patients develop an abdominal mass which seldom suppurates, and 20–30% develop inguinal lymphadenopathy. Systemic signs which can appear include fever, decreased appetite, and malaise. Diagnosis is more difficult in women and in men who have sex with men (MSM), who may not have the inguinal symptoms. Over the course of the disease, lymph nodes enlarge, as may occur in any infection of the same areas. Enlarged nodes are called buboes and are commonly painful. Nodes commonly become inflamed, with thinning and fixation of the overlying skin. These changes may progress to necrosis, fluctuant and suppurative lymph nodes, abscesses, fistulas, strictures, and sinus tracts.
During the infection and when it subsides and healing takes place, fibrosis may occur. This can result in varying degrees of lymphatic obstruction, chronic edema, and strictures. These late stages characterised by fibrosis and edema are also known as the third stage of LGV, and are mainly permanent.
Diagnosis
The diagnosis usually is made serologically (through complement fixation) and by exclusion of other causes of inguinal lymphadenopathy or genital ulcers. Serologic testing has a sensitivity of 80% after two weeks. Serologic testing may not be specific for serovar (it has some cross-reactivity with other Chlamydia species), but can suggest LGV over other forms because of the difference in dilution: a titre of 1:64 or higher is more likely to be LGV, and lower than 1:16 is likely to be another chlamydial form (eMedicine). For identification of serovars, culture is often used. Culture is difficult, requiring a special medium (cycloheximide-treated McCoy or HeLa cells), and yields are still only 30–50%. DFA (direct fluorescent antibody) testing and PCR of likely infected areas and pus are also sometimes used. The DFA test for the L-type serovar of C. trachomatis is the most sensitive and specific test, but is not readily available. If polymerase chain reaction (PCR) tests on infected material are positive, subsequent restriction endonuclease pattern analysis of the amplified outer membrane protein A gene can be done to determine the genotype. Recently a fast real-time PCR (TaqMan analysis) has been developed to diagnose LGV; with this method an accurate diagnosis is feasible within a day. It has been noted that one type of testing may not be thorough enough.
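The dilution thresholds above (1:64 or higher suggesting LGV, below 1:16 suggesting another chlamydial form) amount to a simple decision rule. A sketch in Python; the function name and the "indeterminate" label for the in-between range are this example's own choices, not standard clinical nomenclature:

```python
def interpret_cf_titer(titer_denominator):
    """Classify a complement-fixation titer by the denominator of 1:N,
    following the thresholds given in the text. The 'indeterminate'
    label for the in-between range is illustrative only."""
    if titer_denominator >= 64:
        return "suggests LGV"
    if titer_denominator < 16:
        return "suggests non-LGV chlamydial infection"
    return "indeterminate"
```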
Treatment
Treatment involves antibiotics and may involve drainage of the buboes or abscesses by needle aspiration or incision. Further supportive measures may need to be taken: dilatation of the rectal stricture, repair of rectovaginal fistulae, or colostomy for rectal obstruction. Common antibiotic treatments include tetracyclines (doxycycline; all tetracyclines, including doxycycline, are contraindicated during pregnancy and in children due to effects on bone development and tooth discoloration) and erythromycin. Azithromycin is also a drug of choice in LGV.
Further recommendations
As with all STIs, sex partners of patients who have LGV should be examined and tested for urethral or cervical chlamydial infection. After a positive culture for chlamydia, clinical suspicion should be confirmed with testing to distinguish serotype. Antibiotic treatment should be started if they had sexual contact with the patient during the 30 days preceding onset of symptoms in the patient. Patients with a sexually transmitted disease should be tested for other STDs due to high rates of comorbid infections. Antibiotics are not without risks and prophylactic broad antibiotic coverage is not recommended.
Prognosis
Prognosis is highly variable. Spontaneous remission is common. Complete cure can be obtained with proper antibiotic treatments to kill the causative bacteria, such as tetracycline, doxycycline, or erythromycin. Prognosis is more favorable with early treatment. Bacterial superinfections may complicate course. Death can occur from bowel obstruction or perforation, and follicular conjunctivitis due to autoinoculation of infectious discharge can occur.
Long-term complications
Genital elephantiasis or esthiomene, the dramatic end-result of lymphatic obstruction, which may occur because of the strictures themselves or because of fistulas. This is usually seen in females, may ulcerate, and often occurs 1–20 years after primary infection.
Fistulas of, but not limited to, the penis, urethra, vagina, uterus, or rectum. Also, surrounding edema often occurs. Rectal or other strictures and scarring. Systemic spread may occur, possible results are arthritis, pneumonitis, hepatitis, or perihepatitis.
Notes
References
External links
Sexually transmitted infections (BMJ publishing) |
Lysosomal acid lipase deficiency | Lysosomal acid lipase deficiency (LAL deficiency or LAL-D) is an autosomal recessive inborn error of metabolism that results in the body not producing enough active lysosomal acid lipase (LAL) enzyme. This enzyme plays an important role in breaking down fatty material (cholesteryl esters and triglycerides) in the body. Infants, children and adults that have LAL deficiency experience a range of serious health problems. The lack of the LAL enzyme can lead to a build-up of fatty material in a number of body organs including the liver, spleen, gut, in the wall of blood vessels and other important organs.
Very low levels of the LAL enzyme lead to LAL deficiency. LAL deficiency typically affects infants in the first year of life. The accumulation of fat in the walls of the gut in early-onset disease leads to serious digestive problems, including malabsorption, a condition in which the gut fails to absorb nutrients and calories from food. Because of these digestive complications, affected infants usually fail to grow and gain weight at the expected rate for their age (failure to thrive). As the disease progresses, it can cause life-threatening liver dysfunction or liver failure. Until 2015, there was no treatment, and very few infants with LAL-D survived beyond the first year of life. In 2015, an enzyme replacement therapy, sebelipase alfa, was approved in the US and EU. The therapy was additionally approved in Japan in 2016.
Symptoms and signs
Infants may present with feeding difficulties, frequent vomiting, diarrhea, swelling of the abdomen, and failure to gain weight or sometimes weight loss. As the disease progresses in infants, increasing fat accumulation in the liver leads to other complications, including yellowing of the skin and whites of the eyes (jaundice) and a persistent low-grade fever. An ultrasound examination shows accumulation of chalky material (calcification) in the adrenal gland in about half of infants with LAL-D. Complications of LAL-D progress over time, eventually leading to life-threatening problems such as extremely low levels of circulating red blood cells (severe anemia), liver dysfunction or failure, and physical wasting (cachexia). Older children and adults generally present with a wide range of signs and symptoms that overlap with other disorders. They may have diarrhoea, stomach pain, vomiting, or poor growth, a sign of malabsorption. They may have signs of bile duct problems, like itchiness, jaundice, pale stool, or dark urine. Their feces may be excessively greasy. They often have an enlarged liver and liver disease, and may have yellowish deposits of fat underneath the skin, usually around their eyelids. The disease is often undiagnosed in adults. The person may have a history of premature cardiac disease or premature stroke.
Cause
Lysosomal acid lipase deficiency is an autosomal recessive genetic disease. It is an inborn error of metabolism that causes a lysosomal storage disease. The condition is caused by a mutation of the LIPA gene, which codes for the lysosomal lipase protein (also called lysosomal acid lipase or LAL), resulting in a loss of the protein's normal function. When LAL functions normally, it breaks down cholesteryl esters and triglycerides in low-density lipoprotein particles into free cholesterol and free fatty acids that the body can reuse; when LAL doesn't function, cholesteryl esters and triglycerides build up in the liver, spleen, and other organs. The accumulation of fat in the walls of the gut and other organs leads to serious digestive problems, including malabsorption (a condition in which the gut fails to absorb nutrients and calories from food), persistent and often forceful vomiting, frequent diarrhea, foul-smelling and fatty stools (steatorrhea), and failure to grow. Lysosomal acid lipase deficiency occurs when a person has defects (mutations) in both copies of the LIPA gene. Each parent of a person with LAL deficiency carries one copy of the defective LIPA gene. With every pregnancy, parents with a son or daughter affected by LAL deficiency have a 1 in 4 (25%) chance of having another affected child. A person born with defects in both LIPA genes is not able to produce adequate amounts of the LAL enzyme.
Diagnosis
Blood tests may show anaemia, and lipid profiles are generally similar to those of people with the more common familial hypercholesterolemia, including elevated total cholesterol, elevated low-density lipoprotein cholesterol, decreased high-density lipoprotein cholesterol, and elevated serum transaminases. Liver biopsy findings will generally show a bright yellow-orange color; enlarged, lipid-laden hepatocytes and Kupffer cells; microvesicular and macrovesicular steatosis; fibrosis; and cirrhosis. The only definitive tests are genetic, which may be conducted in any number of ways.
Screening
Because LAL deficiency is inherited, each sibling of an affected individual has a 25% chance of having pathological mutations in LAL genes from both their mother and their father, a 50% chance of having a pathological mutation in only one gene, and a 25% chance of having no pathological mutations. Genetic testing for family members and genetic prenatal diagnosis of pregnancies for women who are at increased risk are possible if family members carrying pathological mutations have been identified.
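The 25%/50%/25% figures follow directly from enumerating the four equally likely allele combinations a child of two carrier parents can inherit. A small illustrative Python sketch (the function name and the "A"/"a" allele labels are made up for this example):

```python
from itertools import product

def offspring_genotype_probs():
    """Enumerate the four equally likely allele combinations for a
    child of two carrier parents ('A' = normal allele, 'a' = defective
    LIPA allele) and tally the genotype probabilities."""
    parent1 = parent2 = ("A", "a")   # both parents are carriers
    counts = {"affected": 0, "carrier": 0, "unaffected": 0}
    for a1, a2 in product(parent1, parent2):
        n_defective = (a1 == "a") + (a2 == "a")
        if n_defective == 2:
            counts["affected"] += 1      # two defective copies
        elif n_defective == 1:
            counts["carrier"] += 1       # one defective copy
        else:
            counts["unaffected"] += 1    # no defective copies
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}
```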
Management
LAL deficiency can be treated with sebelipase alfa, a recombinant form of LAL that was approved in 2015 in the US and EU. The disease affects fewer than 0.2 in 10,000 people in the EU. According to an estimate by a Barclays analyst, the drug will be priced at about US$375,000 per year. It is administered once a week via intravenous infusion in people with rapidly progressing disease in the first six months of life. In people with less aggressive disease, it is given every other week. Before the drug was approved, treatment of infants was mainly focused on reducing specific complications and was provided in specialized centers. Specific interventions for infants included changing from breast or normal bottle formula to a specialized low-fat formula, intravenous feeding, antibiotics for infections, and steroid replacement therapy because of concerns about adrenal function. Statins were used in people with LAL-D prior to the approval of sebelipase alfa; they helped control cholesterol but did not appear to slow liver damage, and liver transplantation was necessary in most patients.
Prognosis
Infants with LAL deficiency typically show signs of disease in the first weeks of life and, if untreated, die within 6–12 months due to multi-organ failure. Older children or adults with LAL-D may remain undiagnosed or be misdiagnosed until they die early from a heart attack or stroke, or die suddenly of liver failure. The first enzyme replacement therapy was approved in 2015. In its clinical trials, nine infants were followed for one year; six of them lived beyond one year. Older children and adults were followed for 36 weeks.
Epidemiology
Depending on ethnicity and geography, prevalence has been estimated to be between 1 in 40,000 and 1 in 300,000; based on these estimates the disease may be underdiagnosed. Jewish infants of Iraqi or Iranian origin appear to be most at risk based on a study of a community in Los Angeles in which there was a prevalence of 1 in 4200.
History
In 1956, Moshe Wolman, along with two other doctors, published the first case study of a LAL deficiency in a child born to closely related Persian Jews; 12 years later a case study on an older boy was published, which turned out to be the first case study of LAL-D. LAL-D was historically referred to as two separate disorders:
Wolman disease, presenting in infant patients
Cholesteryl ester storage disease, presenting in pediatric and adult patients
Around 2010, both presentations came to be known as LAL-D, as both are due to a deficiency of the LAL enzyme. In 2015, an enzyme replacement therapy, sebelipase alfa, was approved in the US and EU for the treatment of human LAL enzyme deficiency. Before the approval of that drug, as of 2009 the two oldest survivors of LAL-D in the world were then aged 4 and 11; both of them had been treated with hematopoietic stem cell transplantation.
Research directions
Some children with LAL-D have had an experimental therapy called hematopoietic stem cell transplantation (HSCT), also known as bone marrow transplant, to try to prevent the disease from getting worse. Data are sparse, but there is a known high risk of serious complications, including graft-versus-host disease and death.
References
External links
National Organization for Rare Disorders (NORD)
Lysosomal acid lipase – NIH.gov
Lysosomal acid lipase deficiency – NIH.gov
Lipid Storage Diseases Fact Sheet at ninds.nih.gov
Maceration | Maceration may refer to:
Maceration (food), in food preparation
Maceration (wine), a step in wine-making
Carbonic maceration, a wine-making technique
Maceration (sewage), in sewage treatment
Maceration (bone), a method of preparing bones
Acid maceration, the use of an acid to extract micro-fossils from rock
Maceration, in chemistry, the preparation of an extract by solvent extraction
Maceration, in biology, the mechanical breakdown of ingested food into chyme
Skin maceration, in dermatology, the softening and whitening of skin that is kept constantly wet
Maceration, in poultry farming, a method of chick culling
Malaria | Malaria is a mosquito-borne infectious disease that affects humans and other animals. Malaria causes symptoms that typically include fever, tiredness, vomiting, and headaches. In severe cases, it can cause jaundice, seizures, coma, or death. Symptoms usually begin ten to fifteen days after being bitten by an infected mosquito. If not properly treated, people may have recurrences of the disease months later. In those who have recently survived an infection, reinfection usually causes milder symptoms. This partial resistance disappears over months to years if the person has no continuing exposure to malaria. Malaria is caused by single-celled microorganisms of the Plasmodium group. It is spread exclusively through bites of infected Anopheles mosquitoes. The mosquito bite introduces the parasites from the mosquito's saliva into a person's blood. The parasites travel to the liver where they mature and reproduce. Five species of Plasmodium can infect and be spread by humans. Most deaths are caused by P. falciparum, whereas P. vivax, P. ovale, and P. malariae generally cause a milder form of malaria. The species P. knowlesi rarely causes disease in humans. Malaria is typically diagnosed by the microscopic examination of blood using blood films, or with antigen-based rapid diagnostic tests. Methods that use the polymerase chain reaction to detect the parasite's DNA have been developed, but are not widely used in areas where malaria is common due to their cost and complexity. The risk of disease can be reduced by preventing mosquito bites through the use of mosquito nets and insect repellents or with mosquito-control measures such as spraying insecticides and draining standing water. Several medications are available to prevent malaria for travellers in areas where the disease is common. Occasional doses of the combination medication sulfadoxine/pyrimethamine are recommended in infants and after the first trimester of pregnancy in areas with high rates of malaria.
As of 2020, there is one vaccine which has been shown to reduce the risk of malaria by about 40% in children in Africa. A pre-print study of another vaccine has shown 77% vaccine efficacy, but this study has not yet passed peer review. Efforts to develop more effective vaccines are ongoing. The recommended treatment for malaria is a combination of antimalarial medications that includes artemisinin. The second medication may be either mefloquine, lumefantrine, or sulfadoxine/pyrimethamine. Quinine, along with doxycycline, may be used if artemisinin is not available. It is recommended that in areas where the disease is common, malaria is confirmed if possible before treatment is started due to concerns of increasing drug resistance. Resistance among the parasites has developed to several antimalarial medications; for example, chloroquine-resistant P. falciparum has spread to most malarial areas, and resistance to artemisinin has become a problem in some parts of Southeast Asia. The disease is widespread in the tropical and subtropical regions that exist in a broad band around the equator. This includes much of sub-Saharan Africa, Asia, and Latin America. In 2020, there were 241 million cases of malaria worldwide resulting in an estimated 627,000 deaths. Approximately 95% of the cases and deaths occurred in sub-Saharan Africa. Rates of disease decreased from 2010 to 2014 but increased from 2015 to 2020. Malaria is commonly associated with poverty and has a significant negative effect on economic development. In Africa, it is estimated to result in losses of US$12 billion a year due to increased healthcare costs, lost ability to work, and adverse effects on tourism.
Signs and symptoms
Adults with malaria tend to experience chills and fever – classically in periodic intense bouts lasting around six hours, followed by a period of sweating and fever relief – as well as headache, fatigue, abdominal discomfort, and muscle pain. Children tend to have more general symptoms: fever, cough, vomiting, and diarrhea. Initial manifestations of the disease—common to all malaria species—are similar to flu-like symptoms, and can resemble other conditions such as sepsis, gastroenteritis, and viral diseases. The presentation may include headache, fever, shivering, joint pain, vomiting, hemolytic anemia, jaundice, hemoglobin in the urine, retinal damage, and convulsions. The classic symptom of malaria is paroxysm—a cyclical occurrence of sudden coldness followed by shivering and then fever and sweating, occurring every two days (tertian fever) in P. vivax and P. ovale infections, and every three days (quartan fever) for P. malariae. P. falciparum infection can cause recurrent fever every 36–48 hours, or a less pronounced and almost continuous fever. Symptoms typically begin 10–15 days after the initial mosquito bite, but can occur as late as several months after infection with some P. vivax strains. Travellers taking preventative malaria medications may develop symptoms once they stop taking the drugs. Severe malaria is usually caused by P. falciparum (often referred to as falciparum malaria). Symptoms of falciparum malaria arise 9–30 days after infection. Individuals with cerebral malaria frequently exhibit neurological symptoms, including abnormal posturing, nystagmus, conjugate gaze palsy (failure of the eyes to turn together in the same direction), opisthotonus, seizures, or coma.
Complications
Malaria has several serious complications. Among these is the development of respiratory distress, which occurs in up to 25% of adults and 40% of children with severe P. falciparum malaria. Possible causes include respiratory compensation of metabolic acidosis, noncardiogenic pulmonary oedema, concomitant pneumonia, and severe anaemia. Although rare in young children with severe malaria, acute respiratory distress syndrome occurs in 5–25% of adults and up to 29% of pregnant women. Coinfection of HIV with malaria increases mortality. Kidney failure is a feature of blackwater fever, where haemoglobin from lysed red blood cells leaks into the urine. Infection with P. falciparum may result in cerebral malaria, a form of severe malaria that involves encephalopathy. It is associated with retinal whitening, which may be a useful clinical sign in distinguishing malaria from other causes of fever. An enlarged spleen, enlarged liver or both of these, severe headache, low blood sugar, and haemoglobin in the urine with kidney failure may occur. Complications may include spontaneous bleeding, coagulopathy, and shock. Malaria in pregnant women is an important cause of stillbirths, infant mortality, miscarriage and low birth weight, particularly in P. falciparum infection, but also with P. vivax.
Cause
Malaria is caused by infection with parasites in the genus Plasmodium. In humans, malaria is caused by six Plasmodium species: P. falciparum, P. malariae, P. ovale curtisi, P. ovale wallikeri, P. vivax and P. knowlesi. Among those infected, P. falciparum is the most common species identified (~75%) followed by P. vivax (~20%). Although P. falciparum traditionally accounts for the majority of deaths, recent evidence suggests that P. vivax malaria is associated with potentially life-threatening conditions about as often as with a diagnosis of P. falciparum infection. P. vivax proportionally is more common outside Africa. There have been documented human infections with several species of Plasmodium from higher apes; however, except for P. knowlesi—a zoonotic species that causes malaria in macaques—these are mostly of limited public health importance.
Parasites are typically introduced by the bite of an infected Anopheles mosquito. What exactly these inoculated parasites, called "sporozoites", do in the skin and lymphatics has yet to be accurately determined. However, a percentage of sporozoites follow the bloodstream to the liver, where they invade hepatocytes. They grow and divide in the liver for 2–10 days, with each infected hepatocyte eventually harboring up to 40,000 parasites. The infected hepatocytes break down, releasing this invasive form of Plasmodium cells, called "merozoites", into the bloodstream. In the blood, the merozoites rapidly invade individual red blood cells, replicating over 24–72 hours to form 16–32 new merozoites. The infected red blood cell lyses, and the new merozoites infect new red blood cells, resulting in a cycle that continuously amplifies the number of parasites in an infected person. However, most of the replicating P. vivax merozoite biomass has been known since 2021 to be hidden in the spleen and bone marrow (and perhaps elsewhere), supporting the long-standing (since 2011) but previously neglected hypothesis that non-circulating merozoites are the source of many P. vivax malarial recurrences (see "Recurrent malaria" section below). Over rounds of this red blood cell infection cycle in the bloodstream and elsewhere, a small portion of parasites do not replicate, but instead develop into early sexual stage parasites called male and female "gametocytes". These gametocytes develop in the bone marrow for 11 days, then return to the blood circulation to await uptake by the bite of another mosquito. Once inside a mosquito, the gametocytes undergo sexual reproduction, and eventually form daughter sporozoites that migrate to the mosquito's salivary glands to be injected into a new host when the mosquito bites. The liver infection causes no symptoms; all symptoms of malaria result from the infection of red blood cells.
Symptoms develop once there are more than around 100,000 parasites per milliliter of blood. Many of the symptoms associated with severe malaria are caused by the tendency of P. falciparum to bind to blood vessel walls, resulting in damage to the affected vessels and surrounding tissue. Parasites sequestered in the blood vessels of the lung contribute to respiratory failure. In the brain, they contribute to coma. In the placenta they contribute to low birthweight and preterm labor, and increase the risk of abortion and stillbirth. The destruction of red blood cells during infection often results in anemia, exacerbated by reduced production of new red blood cells during infection. Only female mosquitoes feed on blood; male mosquitoes feed on plant nectar and do not transmit the disease. Females of the mosquito genus Anopheles prefer to feed at night. They usually start searching for a meal at dusk, and continue through the night until they succeed. Malaria parasites can also be transmitted by blood transfusions, although this is rare.
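The amplification arithmetic implied by the figures above can be sketched in a few lines. This is only an illustration: the 5 L blood volume, the single burst hepatocyte as a starting point, and a fixed 20-fold net multiplication per cycle are assumptions made for the example, not figures from this article.

```python
# Rough sketch of blood-stage parasite amplification (illustrative assumptions).
# Article figures: each infected hepatocyte releases up to ~40,000 merozoites,
# each red-cell cycle (24-72 h) yields 16-32 new merozoites, and symptoms
# appear above roughly 100,000 parasites per millilitre of blood.
# Assumed (not from the article): ~5 L of blood and a 20-fold net
# multiplication per cycle.

BLOOD_ML = 5000                       # assumed adult blood volume, millilitres
SYMPTOM_THRESHOLD_PER_ML = 100_000    # symptomatic density from the article

def cycles_until_symptomatic(initial_parasites, fold_per_cycle=20):
    """Count red-cell cycles until parasite density crosses the threshold."""
    parasites = initial_parasites
    cycles = 0
    while parasites / BLOOD_ML < SYMPTOM_THRESHOLD_PER_ML:
        parasites *= fold_per_cycle
        cycles += 1
    return cycles, parasites

# Start from the burst of a single infected hepatocyte (~40,000 merozoites).
cycles, total = cycles_until_symptomatic(40_000)
print(cycles, total)  # prints: 4 6400000000
```

Under these assumptions, roughly four erythrocytic cycles (about 4–12 days at 24–72 hours per cycle) carry a single hepatocyte's burst past the symptomatic threshold, which fits within the 10–15 day incubation period quoted earlier once the 2–10 day liver stage is added.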
Recurrent malaria
Symptoms of malaria can recur after varying symptom-free periods. Depending upon the cause, recurrence can be classified as recrudescence, relapse, or reinfection. Recrudescence is when symptoms return after a symptom-free period and the origin is parasites that survived in the blood as a result of inadequate or ineffective treatment. Relapse is when symptoms reappear after the parasites have been eliminated from the blood and the recurrence source is activated parasites which had persisted as dormant hypnozoites in liver cells. Relapse commonly occurs after 8–24 weeks and is often seen in P. vivax and P. ovale infections. However, relapse-like P. vivax recurrences are probably being over-attributed to hypnozoite activation. Some of them might have an extra-vascular or sequestered merozoite origin, making those recurrences recrudescences, not relapses. Newly recognised, non-hypnozoite, possible contributing sources to recurrent peripheral P. vivax parasitemia are erythrocytic forms in the bone marrow and spleen. P. vivax malaria cases in temperate areas often involve overwintering by hypnozoites, with relapses beginning the year after the mosquito bite. Reinfection means that the parasites responsible for the past infection were eliminated from the body but new parasites were introduced. Reinfection cannot readily be distinguished from relapse and recrudescence; recurrence of infection within two weeks of treatment for the initial malarial manifestations is typically attributed to treatment failure, though this attribution is not necessarily correct. People may develop some immunity when exposed to frequent infections.
Pathophysiology
Malaria infection develops via two phases: one that involves the liver (exoerythrocytic phase), and one that involves red blood cells, or erythrocytes (erythrocytic phase). When an infected mosquito pierces a person's skin to take a blood meal, sporozoites in the mosquito's saliva enter the bloodstream and migrate to the liver where they infect hepatocytes, multiplying asexually and asymptomatically for a period of 8–30 days. After a potential dormant period in the liver, these organisms differentiate to yield thousands of merozoites, which, following rupture of their host cells, escape into the blood and infect red blood cells to begin the erythrocytic stage of the life cycle. The parasite escapes from the liver undetected by wrapping itself in the cell membrane of the infected host liver cell. Within the red blood cells, the parasites multiply further, again asexually, periodically breaking out of their host cells to invade fresh red blood cells. Several such amplification cycles occur. Thus, classical descriptions of waves of fever arise from simultaneous waves of merozoites escaping and infecting red blood cells. Some P. vivax sporozoites do not immediately develop into exoerythrocytic-phase merozoites, but instead produce hypnozoites that remain dormant for periods ranging from several months (7–10 months is typical) to several years. After a period of dormancy, they reactivate and produce merozoites. Hypnozoites are responsible for long incubation and late relapses in P. vivax infections, although their existence in P. ovale is uncertain. The parasite is relatively protected from attack by the body's immune system because for most of its human life cycle it resides within the liver and blood cells and is relatively invisible to immune surveillance. However, circulating infected blood cells are destroyed in the spleen. To avoid this fate, the P.
falciparum parasite displays adhesive proteins on the surface of the infected blood cells, causing the blood cells to stick to the walls of small blood vessels, thereby sequestering the parasite from passage through the general circulation and the spleen. The blockage of the microvasculature causes symptoms such as those in placental malaria. Sequestered red blood cells can breach the blood–brain barrier and cause cerebral malaria.
Genetic resistance
According to a 2005 review, malaria—especially the P. falciparum species—has placed the greatest selective pressure on the human genome in recent history, due to the high levels of mortality and morbidity it causes. Several genetic factors provide some resistance to it, including sickle cell trait, thalassaemia traits, glucose-6-phosphate dehydrogenase deficiency, and the absence of Duffy antigens on red blood cells. The impact of sickle cell trait on malaria immunity illustrates some evolutionary trade-offs that have occurred because of endemic malaria. Sickle cell trait causes a change in the haemoglobin molecule in the blood. Normally, red blood cells have a very flexible, biconcave shape that allows them to move through narrow capillaries; however, when the modified haemoglobin S molecules are exposed to low amounts of oxygen, or crowd together due to dehydration, they can stick together forming strands that cause the cell to distort into a curved sickle shape. In these strands, the molecule is not as effective at taking up or releasing oxygen, and the cell is not flexible enough to circulate freely. In the early stages of malaria, the parasite can cause infected red cells to sickle, and so they are removed from circulation sooner. This reduces the frequency with which malaria parasites complete their life cycle in the cell. Individuals who are homozygous (with two copies of the abnormal haemoglobin beta allele) have sickle-cell anaemia, while those who are heterozygous (with one abnormal allele and one normal allele) experience resistance to malaria without severe anaemia. Although the shorter life expectancy for those with the homozygous condition would tend to disfavour the trait's survival, the trait is preserved in malaria-prone regions because of the benefits provided by the heterozygous form.
Liver dysfunction
Liver dysfunction as a result of malaria is uncommon and usually only occurs in those with another liver condition such as viral hepatitis or chronic liver disease. The syndrome is sometimes called malarial hepatitis. While it has been considered a rare occurrence, malarial hepatopathy has seen an increase, particularly in Southeast Asia and India. Liver compromise in people with malaria correlates with a greater likelihood of complications and death.
Diagnosis
Due to the non-specific nature of malaria symptoms, diagnosis is typically suspected based on symptoms and travel history, then confirmed with a parasitological test. In areas where malaria is common, the World Health Organization (WHO) recommends clinicians suspect malaria in any person who reports having fevers, or who has a current temperature above 37.5 °C without any other obvious cause. Malaria should similarly be suspected in children with signs of anemia: pale palms or a laboratory test showing hemoglobin levels below 8 grams per deciliter of blood. In areas with little to no malaria, the WHO recommends only testing people with possible exposure to malaria (typically travel to a malaria-endemic area) and unexplained fever. Malaria is usually confirmed by the microscopic examination of blood films or by antigen-based rapid diagnostic tests (RDT). Microscopy – i.e. examining Giemsa-stained blood with a light microscope – is the gold standard for malaria diagnosis. Microscopists typically examine both a "thick film" of blood, allowing them to scan many blood cells in a short time, and a "thin film" of blood, allowing them to clearly see individual parasites and identify the infecting Plasmodium species. Under typical field laboratory conditions, a microscopist can detect parasites when there are at least 100 parasites per microliter of blood, which is around the lower range of symptomatic infection. Microscopic diagnosis is relatively resource intensive, requiring trained personnel, specific equipment, electricity, and a consistent supply of microscopy slides and stains. In places where microscopy is unavailable, malaria is diagnosed with RDTs, rapid antigen tests that detect parasite proteins in a fingerstick blood sample. A variety of RDTs are commercially available, targeting the parasite proteins histidine rich protein 2 (HRP2, detects P. falciparum only), lactate dehydrogenase, or aldolase. The HRP2 test is widely used in Africa, where P.
falciparum predominates. However, since HRP2 persists in the blood for up to five weeks after an infection is treated, an HRP2 test sometimes cannot distinguish whether someone currently has malaria or previously had it. Additionally, some P. falciparum parasites in the Amazon region lack the HRP2 gene, complicating detection. RDTs are fast and easily deployed to places without full diagnostic laboratories. However, they give considerably less information than microscopy, and sometimes vary in quality from producer to producer and lot to lot. Serological tests to detect antibodies against Plasmodium from the blood have been developed, but are not used for malaria diagnosis due to their relatively poor sensitivity and specificity. Highly sensitive nucleic acid amplification tests have been developed, but are not used clinically due to their relatively high cost, and poor specificity for active infections.
Classification
Malaria is classified into either "severe" or "uncomplicated" by the World Health Organization (WHO). It is deemed severe when any of the following criteria are present, otherwise it is considered uncomplicated.
Decreased consciousness
Significant weakness such that the person is unable to walk
Inability to feed
Two or more convulsions
Low blood pressure (less than 70 mmHg in adults and 50 mmHg in children)
Breathing problems
Circulatory shock
Kidney failure or haemoglobin in the urine
Bleeding problems, or hemoglobin less than 50 g/L (5 g/dL)
Pulmonary oedema
Blood glucose less than 2.2 mmol/L (40 mg/dL)
Acidosis or lactate levels of greater than 5 mmol/L
A parasite level in the blood of greater than 100,000 per microlitre (μL) in low-intensity transmission areas, or 250,000 per μL in high-intensity transmission areas
Cerebral malaria is defined as a severe P. falciparum-malaria presenting with neurological symptoms, including coma (with a Glasgow coma scale less than 11, or a Blantyre coma scale less than 3), or with a coma that lasts longer than 30 minutes after a seizure.
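The criteria above form a simple any-of decision rule: a confirmed case is classified as severe if at least one criterion is met, and uncomplicated otherwise. A minimal sketch of that rule follows; the dictionary field names and the function itself are hypothetical, with only the thresholds taken from the WHO list above.

```python
# Hedged sketch of the WHO severe-vs-uncomplicated classification.
# Field names are hypothetical; thresholds come from the criteria above.
# Missing fields default to unremarkable values.

def classify_malaria(findings: dict) -> str:
    """Return 'severe' if any WHO criterion is met, else 'uncomplicated'."""
    parasite_limit = (100_000 if findings.get("low_transmission_area", False)
                      else 250_000)  # parasites per microlitre
    severe = any([
        findings.get("decreased_consciousness", False),
        findings.get("unable_to_walk", False),
        findings.get("unable_to_feed", False),
        findings.get("convulsions", 0) >= 2,
        findings.get("systolic_bp", 120) < (70 if findings.get("adult", True) else 50),
        findings.get("breathing_problems", False),
        findings.get("circulatory_shock", False),
        findings.get("kidney_failure_or_haemoglobinuria", False),
        findings.get("bleeding", False) or findings.get("haemoglobin_g_per_l", 120) < 50,
        findings.get("pulmonary_oedema", False),
        findings.get("glucose_mmol_per_l", 5.0) < 2.2,
        findings.get("acidosis", False) or findings.get("lactate_mmol_per_l", 1.0) > 5,
        findings.get("parasites_per_ul", 0) > parasite_limit,
    ])
    return "severe" if severe else "uncomplicated"

print(classify_malaria({"glucose_mmol_per_l": 1.9}))   # prints: severe
print(classify_malaria({"parasites_per_ul": 50_000}))  # prints: uncomplicated
```

Note how the parasite-density threshold depends on transmission intensity, exactly as in the last list item: 100,000/μL in low-intensity areas but 250,000/μL in high-intensity areas.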
Prevention
Methods used to prevent malaria include medications, mosquito elimination and the prevention of bites. As of 2020, there is one vaccine for malaria (known as RTS,S) which is licensed for use. The presence of malaria in an area requires a combination of high human population density, high anopheles mosquito population density and high rates of transmission from humans to mosquitoes and from mosquitoes to humans. If any of these is lowered sufficiently, the parasite eventually disappears from that area, as happened in North America, Europe, and parts of the Middle East. However, unless the parasite is eliminated from the whole world, it could re-establish if conditions revert to a combination that favors the parasite's reproduction. Furthermore, the cost per person of eliminating anopheles mosquitoes rises with decreasing population density, making it economically unfeasible in some areas. Prevention of malaria may be more cost-effective than treatment of the disease in the long run, but the initial costs required are out of reach of many of the world's poorest people. There is a wide difference in the costs of control (i.e. maintenance of low endemicity) and elimination programs between countries. For example, in China—whose government in 2010 announced a strategy to pursue malaria elimination in the Chinese provinces—the required investment is a small proportion of public expenditure on health. In contrast, a similar programme in Tanzania would cost an estimated one-fifth of the public health budget. In 2021, the World Health Organization confirmed that China had eliminated malaria. In areas where malaria is common, children under five years old often have anaemia, which is sometimes due to malaria. Giving children with anaemia in these areas preventive antimalarial medication improves red blood cell levels slightly but does not affect the risk of death or need for hospitalisation.
Mosquito control
Vector control refers to methods used to decrease malaria by reducing the levels of transmission by mosquitoes. For individual protection, the most effective insect repellents are based on DEET or picaridin. However, there is insufficient evidence that mosquito repellents can prevent malaria infection. Insecticide-treated nets (ITNs) and indoor residual spraying (IRS) are effective, have been commonly used to prevent malaria, and their use has contributed significantly to the decrease in malaria in the 21st century. ITNs and IRS may not be sufficient to eliminate the disease, as these interventions depend on how many people use nets, on gaps in insecticide coverage (low-coverage areas), on whether people are protected when outside the home, and on increases in mosquitoes that are resistant to insecticides. Modifications to people's houses to prevent mosquito exposure may be an important long-term prevention measure.
Insecticide-treated nets
Mosquito nets help keep mosquitoes away from people and reduce infection rates and transmission of malaria. Nets are not a perfect barrier and are often treated with an insecticide designed to kill the mosquito before it has time to find a way past the net. Insecticide-treated nets (ITNs) are estimated to be twice as effective as untreated nets and offer greater than 70% protection compared with no net. Between 2000 and 2008, the use of ITNs saved the lives of an estimated 250,000 infants in Sub-Saharan Africa. About 13% of households in Sub-Saharan countries owned ITNs in 2007 and 31% of African households were estimated to own at least one ITN in 2008. In 2000, 1.7 million (1.8%) African children living in areas of the world where malaria is common were protected by an ITN. That number increased to 20.3 million (18.5%) African children using ITNs in 2007, leaving 89.6 million children unprotected, and to 68% of African children using mosquito nets in 2015. Most nets are impregnated with pyrethroids, a class of insecticides with low toxicity. They are most effective when used from dusk to dawn. It is recommended to hang a large "bed net" above the center of a bed and either tuck the edges under the mattress or make sure it is large enough such that it touches the ground. ITNs are beneficial towards pregnancy outcomes in malaria-endemic regions in Africa, but more data is needed in Asia and Latin America. In areas of high malaria resistance, piperonyl butoxide (PBO) combined with pyrethroids in mosquito netting is effective in reducing malaria infection rates. Questions remain concerning the durability of PBO on nets, as the impact on mosquito mortality was not sustained after twenty washes in experimental trials.
Indoor residual spraying
Indoor residual spraying is the spraying of insecticides on the walls inside a home. After feeding, many mosquitoes rest on a nearby surface while digesting the bloodmeal, so if the walls of houses have been coated with insecticides, the resting mosquitoes can be killed before they can bite another person and transfer the malaria parasite. As of 2006, the World Health Organization recommends 12 insecticides in IRS operations, including DDT and the pyrethroids cyfluthrin and deltamethrin. This public health use of small amounts of DDT is permitted under the Stockholm Convention, which prohibits its agricultural use. One problem with all forms of IRS is insecticide resistance. Mosquitoes affected by IRS tend to rest and live indoors, and due to the irritation caused by spraying, their descendants tend to rest and live outdoors, meaning that they are less affected by the IRS. Communities using insecticide-treated nets in addition to indoor residual spraying with non-pyrethroid-like insecticides saw associated reductions in malaria. Additionally, the use of pyrethroid-like insecticides in addition to indoor residual spraying did not result in a detectable additional benefit in communities using insecticide-treated nets.
Housing modifications
Housing is a risk factor for malaria, and modifying the house as a prevention measure may be a sustainable strategy that does not rely on the effectiveness of insecticides such as pyrethroids. Features of the physical environment inside and outside the home that may affect mosquito density are considerations. Examples of potential modifications include how close the home is to mosquito breeding sites, drainage and water supply near the home, availability of mosquito resting sites (vegetation around the home), proximity to livestock and domestic animals, and physical improvements or modifications to the design of the home to prevent mosquitoes from entering.
Other mosquito control methods
People have tried a number of other methods to reduce mosquito bites and slow the spread of malaria. Efforts to decrease mosquito larvae by decreasing the availability of open water where they develop, or by adding substances to decrease their development, are effective in some locations. Electronic mosquito repellent devices, which make very high-frequency sounds that are supposed to keep female mosquitoes away, have no supporting evidence of effectiveness. There is low-certainty evidence that fogging may have an effect on malaria transmission. Larviciding by hand delivery of chemical or microbial insecticides into water bodies with low larval distribution may reduce malaria transmission. There is insufficient evidence to determine whether larvivorous fish can decrease mosquito density and transmission in the area.
Medications
There are a number of medications that can help prevent or interrupt malaria in travellers to places where infection is common. Many of these medications are also used in treatment. In places where Plasmodium is resistant to one or more medications, three medications—mefloquine, doxycycline, or the combination of atovaquone/proguanil (Malarone)—are frequently used for prevention. Doxycycline and atovaquone/proguanil are better tolerated, while mefloquine is taken only once a week. Areas of the world with chloroquine-sensitive malaria are uncommon. Antimalarial mass drug administration to an entire population at the same time may reduce the risk of contracting malaria in the population; however, the effectiveness of mass drug administration may vary depending on the prevalence of malaria in the area. Other factors such as
drug administration plus other protective measures such as mosquito control, the proportion of people treated in the area, and the risk of reinfection with malaria may play a role in the effectiveness of mass drug treatment approaches. The protective effect does not begin immediately, and people visiting areas where malaria exists usually start taking the drugs one to two weeks before they arrive, and continue taking them for four weeks after leaving (except for atovaquone/proguanil, which only needs to be started two days before and continued for seven days afterward). The use of preventive drugs is often not practical for those who live in areas where malaria exists, and their use is usually given only to pregnant women and short-term visitors. This is due to the cost of the drugs, side effects from long-term use, and the difficulty in obtaining antimalarial drugs outside of wealthy nations. During pregnancy, medication to prevent malaria has been found to improve the weight of the baby at birth and decrease the risk of anaemia in the mother. The use of preventive drugs where malaria-bearing mosquitoes are present may encourage the development of partial resistance. Giving antimalarial drugs to infants through intermittent preventive therapy can reduce the risk of malaria infection, hospital admission, and anaemia. Mefloquine is more effective than sulfadoxine-pyrimethamine in preventing malaria for HIV-negative pregnant women. Cotrimoxazole is effective in preventing malaria infection and reducing the risk of anaemia in HIV-positive women. Giving sulfadoxine-pyrimethamine in three or more doses as intermittent preventive therapy is superior to two doses for HIV-positive women living in malaria-endemic areas. Prompt treatment of confirmed cases with artemisinin-based combination therapies (ACTs) may also reduce transmission.
Others
Community participation and health education strategies promoting awareness of malaria and the importance of control measures have been used successfully to reduce the incidence of malaria in some areas of the developing world. Recognising the disease in its early stages can prevent it from becoming fatal. Education can also inform people to cover over areas of stagnant, still water, such as water tanks, which are ideal breeding grounds for the mosquitoes that transmit the parasite, thus cutting down the risk of transmission between people. This approach is generally used in urban areas, where large populations concentrated in a confined space make transmission most likely. Intermittent preventive therapy is another intervention that has been used successfully to control malaria in pregnant women and infants, and in preschool children where transmission is seasonal.
Treatment
Malaria is treated with antimalarial medications; the drugs used depend on the type and severity of the disease. While medications against fever are commonly used, their effects on outcomes are not clear. Providing free antimalarial drugs to households may reduce childhood deaths when used appropriately. Programmes which presumptively treat all causes of fever with antimalarial drugs may lead to overuse of antimalarials and undertreatment of other causes of fever. Nevertheless, the use of malaria rapid-diagnostic kits can help to reduce over-usage of antimalarials.
Uncomplicated malaria
Simple or uncomplicated malaria may be treated with oral medications. Artemisinin drugs are effective and safe in treating uncomplicated malaria. Artemisinin in combination with other antimalarials (known as artemisinin-combination therapy, or ACT) is about 90% effective when used to treat uncomplicated malaria. The most effective treatment for P. falciparum infection is the use of ACT, which decreases resistance to any single drug component. The six-dose regimen of artemether-lumefantrine is more effective than the four-dose regimen or than regimens not containing artemisinin derivatives in treating falciparum malaria. Another recommended combination is dihydroartemisinin and piperaquine. Artemisinin-naphthoquine combination therapy has shown promising results in treating falciparum malaria; however, more research is needed to establish its efficacy as a reliable treatment. Artesunate plus mefloquine performs better than mefloquine alone in treating uncomplicated falciparum malaria in low-transmission settings. Atovaquone-proguanil is effective against uncomplicated falciparum malaria with a possible failure rate of 5% to 10%; the addition of artesunate may reduce the failure rate. Azithromycin monotherapy or combination therapy has not shown effectiveness in treating falciparum or vivax malaria. Amodiaquine plus sulfadoxine-pyrimethamine may achieve fewer treatment failures than sulfadoxine-pyrimethamine alone in uncomplicated falciparum malaria. There is insufficient data on chlorproguanil-dapsone in treating uncomplicated falciparum malaria. The addition of primaquine to artemisinin-based combination therapy for falciparum malaria reduces transmission at day 3-4 and day 8 of infection. Sulfadoxine-pyrimethamine plus artesunate is better than sulfadoxine-pyrimethamine plus amodiaquine in controlling treatment failure at day 28; however, the latter is better than the former in reducing gametocytes in blood at day 7. Infection with P.
vivax, P. ovale or P. malariae usually does not require hospitalisation. Treatment of P. vivax requires both treatment of blood stages (with chloroquine or artemisinin-based combination therapy) and clearance of liver forms with an 8-aminoquinoline agent such as primaquine or tafenoquine.To treat malaria during pregnancy, the WHO recommends the use of quinine plus clindamycin early in the pregnancy (1st trimester), and ACT in later stages (2nd and 3rd trimesters). There is limited safety data on the antimalarial drugs in pregnancy.
Severe and complicated malaria
Cases of severe and complicated malaria are almost always caused by infection with P. falciparum; the other species usually cause only febrile disease. Severe and complicated malaria cases are medical emergencies, since mortality rates are high (10% to 50%). The recommended treatment for severe malaria is the intravenous use of antimalarial drugs. For severe malaria, parenteral artesunate was superior to quinine in both children and adults. In another systematic review, artemisinin derivatives (artemether and arteether) were as efficacious as quinine in the treatment of cerebral malaria in children. Treatment of severe malaria involves supportive measures that are best done in a critical care unit. This includes the management of high fevers and the seizures that may result from them. It also includes monitoring for poor breathing effort, low blood sugar, and low blood potassium. Artemisinin derivatives have the same or better efficacy than quinolones in preventing deaths in severe or complicated malaria. A quinine loading dose helps to shorten the duration of fever and increases parasite clearance from the body. There is no difference in effectiveness when using intrarectal quinine compared to intravenous or intramuscular quinine in treating uncomplicated/complicated falciparum malaria. There is insufficient evidence for intramuscular arteether to treat severe malaria. The provision of rectal artesunate before transfer to hospital may reduce the rate of death for children with severe malaria. Cerebral malaria is the form of severe and complicated malaria with the worst neurological symptoms. There is insufficient data on whether osmotic agents such as mannitol or urea are effective in treating cerebral malaria. Routine phenobarbitone in cerebral malaria is associated with fewer convulsions but possibly more deaths. There is no evidence that steroids would bring treatment benefits for cerebral malaria.
Managing cerebral malaria
Cerebral malaria usually leaves the patient comatose; if the cause of the coma is in doubt, tests for other locally prevalent causes of encephalopathy (bacterial, viral or fungal infection) should be carried out. In areas with a high prevalence of malaria (e.g. tropical regions), treatment can start without testing first. Once cerebral malaria is confirmed, management can include the following:
Patients in coma should be given meticulous nursing care (monitor vital signs, turn the patient every 2 hours, avoid leaving the patient in a wet bed, etc.)
A sterile urethral catheter should be inserted to assist with urination.
A sterile nasogastric tube should be inserted to aspirate stomach contents.
In the event of convulsions, a slow intravenous injection of benzodiazepine is administered.
There is insufficient evidence to show that blood transfusion is useful either in reducing deaths for children with severe anaemia or in improving their haematocrit at one month. There is insufficient evidence that iron-chelating agents such as deferoxamine and deferiprone improve outcomes in those with falciparum malaria infection.
Resistance
Drug resistance poses a growing problem in 21st-century malaria treatment. In the 2000s, malaria with partial resistance to artemisinins emerged in Southeast Asia. Resistance is now common against all classes of antimalarial drugs apart from the artemisinins, so treatment of resistant strains has become increasingly dependent on this class of drugs. The cost of artemisinins limits their use in the developing world. Malaria strains found on the Cambodia–Thailand border are resistant to combination therapies that include artemisinins, and may therefore be untreatable. Exposure of the parasite population to artemisinin monotherapies in subtherapeutic doses for over 30 years and the availability of substandard artemisinins likely drove the selection of the resistant phenotype. Resistance to artemisinin has been detected in Cambodia, Myanmar, Thailand, and Vietnam, and there has been emerging resistance in Laos. Resistance to the combination of artemisinin and piperaquine was first detected in 2013 in Cambodia, and by 2019 had spread across Cambodia and into Laos, Thailand and Vietnam (with up to 80 percent of malaria parasites resistant in some regions). There is insufficient evidence that unit-packaged antimalarial drugs prevent treatment failures of malaria infection; however, when supported by training of healthcare providers and patient information, they improve compliance among those receiving treatment.
Prognosis
When properly treated, people with malaria can usually expect a complete recovery. However, severe malaria can progress extremely rapidly and cause death within hours or days. In the most severe cases of the disease, fatality rates can reach 20%, even with intensive care and treatment. Over the longer term, developmental impairments have been documented in children who have had episodes of severe malaria. Chronic infection without severe disease can occur in an immune-deficiency syndrome associated with a decreased responsiveness to Salmonella bacteria and the Epstein–Barr virus. During childhood, malaria causes anaemia during a period of rapid brain development, and can also cause direct brain damage resulting from cerebral malaria. Some survivors of cerebral malaria have an increased risk of neurological and cognitive deficits, behavioural disorders, and epilepsy. Malaria prophylaxis was shown to improve cognitive function and school performance in clinical trials when compared to placebo groups.
Epidemiology
The WHO estimates that in 2019 there were 229 million new cases of malaria resulting in 409,000 deaths. Children under 5 years old are the most affected, accounting for 67% of malaria deaths worldwide in 2019. About 125 million pregnant women are at risk of infection each year; in Sub-Saharan Africa, maternal malaria is associated with up to 200,000 estimated infant deaths yearly. There are about 10,000 malaria cases per year in Western Europe, and 1300–1500 in the United States. The United States eradicated malaria as a major public health concern in 1951, though small outbreaks persist. About 900 people died from the disease in Europe between 1993 and 2003. Both the global incidence of disease and resulting mortality have declined in recent years. According to the WHO and UNICEF, deaths attributable to malaria in 2015 were reduced by 60% from a 2000 estimate of 985,000, largely due to the widespread use of insecticide-treated nets and artemisinin-based combination therapies. In 2012, there were 207 million cases of malaria. That year, the disease is estimated to have killed between 473,000 and 789,000 people, many of whom were children in Africa. Efforts at decreasing the disease in Africa since 2000 have been partially effective, with rates of the disease dropping by an estimated forty percent on the continent. Malaria is presently endemic in a broad band around the equator, in areas of the Americas, many parts of Asia, and much of Africa; 85–90% of malaria fatalities occur in Sub-Saharan Africa. An estimate for 2009 reported that the countries with the highest death rate per 100,000 of population were Ivory Coast (86.15), Angola (56.93) and Burkina Faso (50.66). A 2010 estimate indicated the deadliest countries per population were Burkina Faso, Mozambique and Mali. The Malaria Atlas Project aims to map global levels of malaria, providing a way to determine the global spatial limits of the disease and to assess disease burden.
This effort led to the publication of a map of P. falciparum endemicity in 2010 and an update in 2019. As of 2010, about 100 countries have endemic malaria. Every year, 125 million international travellers visit these countries, and more than 30,000 contract the disease. The geographic distribution of malaria within large regions is complex, and malaria-afflicted and malaria-free areas are often found close to each other. Malaria is prevalent in tropical and subtropical regions because of rainfall, consistent high temperatures and high humidity, along with stagnant waters where mosquito larvae readily mature, providing mosquitoes with the environment they need for continuous breeding. In drier areas, outbreaks of malaria have been predicted with reasonable accuracy by mapping rainfall. Malaria is more common in rural areas than in cities. For example, several cities in the Greater Mekong Subregion of Southeast Asia are essentially malaria-free, but the disease is prevalent in many rural regions, including along international borders and forest fringes. In contrast, malaria in Africa is present in both rural and urban areas, though the risk is lower in the larger cities. Since 1900 there has been substantial change in temperature and rainfall over Africa. However, the factors that determine how rainfall results in water for mosquito breeding are complex, incorporating, for example, the extent to which it is absorbed into soil and vegetation, or rates of runoff and evaporation. Recent research has provided a more in-depth picture of conditions across Africa, combining a malaria climatic suitability model with a continental-scale model representing real-world hydrological processes.
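The headline mortality figures quoted above are easy to sanity-check: a 60% reduction from the 2000 estimate of 985,000 deaths implies roughly 394,000 malaria deaths in 2015. The arithmetic below is purely illustrative:

```python
# Quick check of the WHO/UNICEF figures quoted above.
deaths_2000 = 985_000
reduction = 0.60                      # 60% fall between 2000 and 2015
deaths_2015 = deaths_2000 * (1 - reduction)
print(f"{deaths_2015:,.0f}")          # 394,000
```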
History
Although the parasite responsible for P. falciparum malaria has been in existence for 50,000–100,000 years, the population size of the parasite did not increase until about 10,000 years ago, concurrently with advances in agriculture and the development of human settlements. Close relatives of the human malaria parasites remain common in chimpanzees. Some evidence suggests that P. falciparum malaria may have originated in gorillas. References to the unique periodic fevers of malaria are found throughout history. Hippocrates described periodic fevers, labelling them tertian, quartan, subtertian and quotidian. The Roman Columella associated the disease with insects from swamps. Malaria may have contributed to the decline of the Roman Empire, and was so pervasive in Rome that it was known as the "Roman fever". Several regions in ancient Rome were considered at risk for the disease because of the favourable conditions present for malaria vectors. These included southern Italy, the island of Sardinia, the Pontine Marshes, the lower regions of coastal Etruria and the city of Rome along the Tiber. Stagnant water in these places was preferred by mosquitoes as breeding grounds. Irrigated gardens, swamp-like grounds, run-off from agriculture, and drainage problems from road construction led to an increase in standing water.
The term malaria originates from Mediaeval Italian: mala aria—"bad air"; the disease was formerly called ague or marsh fever due to its association with swamps and marshland. The term appeared in English at least as early as 1768. Malaria was once common in most of Europe and North America, where it is no longer endemic, though imported cases do occur. Malaria is not referenced in the medical books of the Mayans or Aztecs. European settlers and the West Africans they enslaved likely brought malaria to the Americas starting in the 16th century. Scientific studies on malaria made their first significant advance in 1880, when Charles Louis Alphonse Laveran—a French army doctor working in the military hospital of Constantine in Algeria—observed parasites inside the red blood cells of infected people for the first time. He therefore proposed that malaria is caused by this organism, the first time a protist was identified as causing disease. For this and later discoveries, he was awarded the 1907 Nobel Prize for Physiology or Medicine. A year later, Carlos Finlay, a Cuban doctor treating people with yellow fever in Havana, provided strong evidence that mosquitoes were transmitting disease to and from humans. This work followed earlier suggestions by Josiah C. Nott, and work by Sir Patrick Manson, the "father of tropical medicine", on the transmission of filariasis.
In April 1894, a Scottish physician, Sir Ronald Ross, visited Sir Patrick Manson at his house on Queen Anne Street, London. This visit was the start of four years of collaboration and fervent research that culminated in 1897 when Ross, who was working in the Presidency General Hospital in Calcutta, proved the complete life-cycle of the malaria parasite in mosquitoes. He thus proved that the mosquito was the vector for malaria in humans by showing that certain mosquito species transmit malaria to birds. He isolated malaria parasites from the salivary glands of mosquitoes that had fed on infected birds. For this work, Ross received the 1902 Nobel Prize in Medicine. After resigning from the Indian Medical Service, Ross worked at the newly established Liverpool School of Tropical Medicine and directed malaria-control efforts in Egypt, Panama, Greece and Mauritius. The findings of Finlay and Ross were later confirmed by a medical board headed by Walter Reed in 1900. Its recommendations were implemented by William C. Gorgas in the health measures undertaken during construction of the Panama Canal. This public-health work saved the lives of thousands of workers and helped develop the methods used in future public-health campaigns against the disease. In 1896, Amico Bignami discussed the role of mosquitoes in malaria. In 1898, Bignami, Giovanni Battista Grassi and Giuseppe Bastianelli succeeded in demonstrating experimentally the transmission of malaria in humans, using infected mosquitoes to contract malaria themselves, results which they presented in November 1898 to the Accademia dei Lincei.
The first effective treatment for malaria came from the bark of the cinchona tree, which contains quinine. This tree grows on the slopes of the Andes, mainly in Peru. The indigenous peoples of Peru made a tincture of cinchona to control fever. Its effectiveness against malaria was discovered, and the Jesuits introduced the treatment to Europe around 1640; by 1677, it was included in the London Pharmacopoeia as an antimalarial treatment. It was not until 1820 that the active ingredient, quinine, was extracted from the bark, isolated and named by the French chemists Pierre Joseph Pelletier and Joseph Bienaimé Caventou. Quinine was the predominant malarial medication until the 1920s, when other medications began to appear. In the 1940s, chloroquine replaced quinine as the treatment of both uncomplicated and severe malaria until resistance supervened, first in Southeast Asia and South America in the 1950s and then globally in the 1980s. Artemisia annua has been used by Chinese herbalists in traditional Chinese medicine for 2,000 years. In 1596, Li Shizhen recommended tea made from qinghao specifically to treat malaria symptoms in his "Compendium of Materia Medica". Artemisinins, discovered by Chinese scientist Tu Youyou and colleagues in the 1970s from the plant Artemisia annua, became the recommended treatment for P. falciparum malaria, administered in severe cases in combination with other antimalarials. Tu says she was influenced by a traditional Chinese herbal medicine source, The Handbook of Prescriptions for Emergency Treatments, written in 340 by Ge Hong. For her work on malaria, Tu Youyou received the 2015 Nobel Prize in Physiology or Medicine. Plasmodium vivax was used between 1917 and the 1940s for malariotherapy—deliberate injection of malaria parasites to induce a fever to combat certain diseases such as tertiary syphilis.
In 1927, the inventor of this technique, Julius Wagner-Jauregg, received the Nobel Prize in Physiology or Medicine for his discoveries. The technique was dangerous, killing about 15% of patients, so it is no longer in use.
The first pesticide used for indoor residual spraying was DDT. Although it was initially used exclusively to combat malaria, its use quickly spread to agriculture. In time, pest control, rather than disease control, came to dominate DDT use, and this large-scale agricultural use led to the evolution of pesticide-resistant mosquitoes in many regions. The DDT resistance shown by Anopheles mosquitoes can be compared to antibiotic resistance shown by bacteria. During the 1960s, awareness of the negative consequences of its indiscriminate use increased, ultimately leading to bans on agricultural applications of DDT in many countries in the 1970s. Before DDT, malaria was successfully eliminated or controlled in tropical areas like Brazil and Egypt by removing or poisoning the breeding grounds of the mosquitoes or the aquatic habitats of the larval stages, for example by applying the highly toxic arsenic compound Paris Green to places with standing water. Malaria vaccines have been an elusive goal of research. The first promising studies demonstrating the potential for a malaria vaccine were performed in 1967 by immunising mice with live, radiation-attenuated sporozoites, which provided significant protection to the mice upon subsequent injection with normal, viable sporozoites. Since the 1970s, there has been a considerable effort to develop similar vaccination strategies for humans. The first vaccine, called RTS,S, was approved by European regulators in 2015.
Eradication efforts
Malaria has been successfully eliminated or significantly reduced in certain areas, but not globally. Malaria was once common in the United States, but the US eliminated malaria from most parts of the country in the early 20th century using vector control programs, which combined the monitoring and treatment of infected humans, draining of wetland breeding grounds for agriculture and other changes in water management practices, and advances in sanitation, including greater use of glass windows and screens in dwellings. The use of the pesticide DDT and other means eliminated malaria from the remaining pockets in the southern states of the US in the 1950s, as part of the National Malaria Eradication Program. Most of Europe, North America, Australia, North Africa and the Caribbean, and parts of South America, Asia and Southern Africa have also eliminated malaria. The WHO defines "elimination" (or "malaria-free") as having no domestic transmission (indigenous cases) for the past three years. It also defines "pre-elimination" and "elimination" stages when a country has fewer than 5 and 1 cases per 1000 people at risk per year, respectively.
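The staging thresholds described above can be sketched as a simple classifier. This is a deliberate simplification for illustration; actual WHO certification involves a detailed country review, not just case counts:

```python
def who_malaria_stage(cases_per_1000_at_risk, years_without_indigenous_cases=0):
    """Classify a country against the thresholds described above
    (simplified for illustration; real certification is far more involved)."""
    if years_without_indigenous_cases >= 3:
        return "malaria-free"     # no indigenous cases for the past three years
    if cases_per_1000_at_risk < 1:
        return "elimination"      # fewer than 1 case per 1000 at risk per year
    if cases_per_1000_at_risk < 5:
        return "pre-elimination"  # fewer than 5 cases per 1000 at risk per year
    return "control"

print(who_malaria_stage(3.2))  # pre-elimination
print(who_malaria_stage(0.4))  # elimination
```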
In 1955 the WHO launched the Global Malaria Eradication Program (GMEP), which supported substantial reductions in malaria cases in some countries, including India. However, vector and parasite resistance and other factors cast doubt on the feasibility of eradicating malaria with the strategy and resources available at the time, and support for the program waned. The WHO suspended the program in 1969.
Target 6C of the Millennium Development Goals included reversal of the global increase in malaria incidence by 2015, with specific targets for children under 5 years old. Since 2000, support for malaria eradication has increased, although some actors in the global health community (including voices within the WHO) view malaria eradication as a premature goal and suggest that the establishment of strict deadlines for malaria eradication may be counterproductive, as they are likely to be missed. In 2006, the organization Malaria No More set a public goal of eliminating malaria from Africa by 2015, and claimed it planned to dissolve if that goal was accomplished; as of 2018, it was still functioning. In 2007, World Malaria Day was established by the 60th session of the World Health Assembly.
As of 2012, the Global Fund to Fight AIDS, Tuberculosis, and Malaria has distributed 230 million insecticide-treated nets intended to stop mosquito-borne transmission of malaria. The U.S.-based Clinton Foundation has worked to manage demand and stabilize prices in the artemisinin market. Other efforts, such as the Malaria Atlas Project, focus on analysing climate and weather information required to accurately predict malaria spread based on the availability of habitat of malaria-carrying parasites. The Malaria Policy Advisory Committee (MPAC) of the World Health Organization (WHO) was formed in 2012, "to provide strategic advice and technical input to WHO on all aspects of malaria control and elimination". In November 2013, WHO and the malaria vaccine funders group set a goal to develop vaccines designed to interrupt malaria transmission, with malaria eradication as the long-term goal. In 2015 the WHO targeted a 90% reduction in malaria deaths by 2030, and Bill Gates said in 2016 that he thought global eradication would be possible by 2040. According to the WHO's World Malaria Report 2015, the global mortality rate for malaria fell by 60% between 2000 and 2015. The WHO targeted a further 90% reduction between 2015 and 2030, with a 40% reduction and eradication in 10 countries by 2020. However, the 2020 goal was missed, with a slight increase in cases compared to 2015. Before 2016, the Global Fund to Fight AIDS, Tuberculosis and Malaria had provided 659 million insecticide-treated nets (ITNs) and organised support and education to prevent malaria. The challenges are high due to the lack of funds, fragile health structures and remote indigenous populations that can be hard to reach and educate. Much of the indigenous population relies on self-diagnosis, self-treatment, healers, and traditional medicine. The WHO applied for funding to the Gates Foundation, which came out in favour of malaria eradication in 2007.
Six countries (the United Arab Emirates, Morocco, Armenia, Turkmenistan, Kyrgyzstan, and Sri Lanka) managed to have no endemic cases of malaria for three consecutive years and were certified malaria-free by the WHO, despite the stagnation of funding in 2010. Funding is essential, as the cost of medication and hospitalisation cannot be borne by the poor countries where the disease is widespread. Although the goal of eradication has not been met, the decrease in the rate of the disease is considerable.
While 31 out of 92 endemic countries were estimated to be on track with the WHO goals for 2020, 15 countries reported an increase of 40% or more between 2015 and 2020. Between 2000 and 30 June 2021, twelve countries were certified by the WHO as being malaria-free. Argentina and Algeria were declared free of malaria in 2019. El Salvador and China were declared malaria-free in the first half of 2021. Regional disparities were evident: Southeast Asia was on track to meet the WHO's 2020 goals, while the Africa, Americas, Eastern Mediterranean and Western Pacific regions were off track. The six Greater Mekong Subregion countries aim for elimination of P. falciparum-transmitted malaria by 2025 and elimination of all malaria by 2030, having achieved a 97% and 90% reduction of cases respectively since 2000. Ahead of World Malaria Day, 25 April 2021, the WHO named 25 countries in which it is working to eliminate malaria by 2025 as part of its E-2025 initiative. A major challenge to malaria elimination is the persistence of malaria in border regions, making international cooperation crucial. One of the targets of Goal 3 of the UN's Sustainable Development Goals is to end the malaria epidemic in all countries by 2030.
In 2018, WHO announced that Paraguay was free of malaria, after a national malaria eradication effort that began in 1950. As of 2019, the eradication process is ongoing, but it will be difficult to achieve a world free of malaria with the current approaches and tools. Only one malaria vaccine is licensed for use, and it shows relatively low effectiveness, while several other vaccine candidates in clinical trials aim to provide protection for children in endemic areas and to reduce the speed of malaria transmission. Approaches may require investing more in research and greater primary health care. Continuing surveillance will also be important to prevent the return of malaria in countries where the disease has been eliminated.
Society and culture
Economic impact
Malaria is not just a disease commonly associated with poverty: some evidence suggests that it is also a cause of poverty and a major hindrance to economic development. Although tropical regions are most affected, malaria's furthest influence reaches into some temperate zones that have extreme seasonal changes. The disease has been associated with major negative economic effects on regions where it is widespread. During the late 19th and early 20th centuries, it was a major factor in the slow economic development of the American southern states. A comparison of average per capita GDP in 1995, adjusted for purchasing power parity, between countries with malaria and countries without malaria gives a fivefold difference (US$1,526 versus US$8,268). In the period 1965 to 1990, countries where malaria was common had an average per capita GDP that increased only 0.4% per year, compared to 2.4% per year in other countries. Poverty can increase the risk of malaria, since those in poverty do not have the financial capacity to prevent or treat the disease. In its entirety, the economic impact of malaria has been estimated to cost Africa US$12 billion every year. The economic impact includes costs of health care, working days lost due to sickness, days lost in education, decreased productivity due to brain damage from cerebral malaria, and loss of investment and tourism. The disease has a heavy burden in some countries, where it may be responsible for 30–50% of hospital admissions, up to 50% of outpatient visits, and up to 40% of public health spending.
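Compounding the annual growth rates quoted above over the 1965 to 1990 period (25 years) shows how sharply such small rate differences diverge; the arithmetic below is purely illustrative:

```python
# Compound the quoted annual per capita GDP growth rates over 25 years.
years = 25
growth_malarial = 1.004 ** years   # 0.4% per year in malaria-endemic countries
growth_other = 1.024 ** years      # 2.4% per year elsewhere
print(f"{growth_malarial:.2f}x vs {growth_other:.2f}x")  # 1.10x vs 1.81x

# The fivefold 1995 GDP gap quoted above:
print(f"{8268 / 1526:.1f}x")  # 5.4x
```

Over a single generation, the non-malarial growth rate alone accounts for much of the observed gap between the two groups of countries.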
Cerebral malaria is one of the leading causes of neurological disabilities in African children. Studies comparing cognitive functions before and after treatment for severe malarial illness continued to show significantly impaired school performance and cognitive abilities even after recovery. Consequently, severe and cerebral malaria have far-reaching socioeconomic consequences that extend beyond the immediate effects of the disease.
Counterfeit and substandard drugs
Sophisticated counterfeits have been found in several Asian countries such as Cambodia, China, Indonesia, Laos, Thailand, and Vietnam, and are a major cause of avoidable death in those countries. The WHO has said that studies indicate that up to 40% of artesunate-based malaria medications are counterfeit, especially in the Greater Mekong region, and has established a rapid alert system to quickly report information about counterfeit drugs to relevant authorities in participating countries. There is no reliable way for doctors or lay people to detect counterfeit drugs without help from a laboratory. Companies are attempting to combat the persistence of counterfeit drugs by using new technology to provide security from source to distribution. Another clinical and public health concern is the proliferation of substandard antimalarial medicines resulting from inappropriate concentrations of ingredients, contamination with other drugs or toxic impurities, poor-quality ingredients, poor stability and inadequate packaging. A 2012 study demonstrated that roughly one-third of antimalarial medications in Southeast Asia and Sub-Saharan Africa failed chemical analysis or packaging analysis, or were falsified.
War
Throughout history, the contraction of malaria has played a prominent role in the fates of government rulers, nation-states, military personnel, and military actions. In 1910, Nobel Prize in Medicine winner Ronald Ross (himself a malaria survivor) published a book titled The Prevention of Malaria that included a chapter titled "The Prevention of Malaria in War". The chapter's author, Colonel C. H. Melville, Professor of Hygiene at the Royal Army Medical College in London, addressed the prominent role that malaria has historically played during wars: "The history of malaria in war might almost be taken to be the history of war itself, certainly the history of war in the Christian era.... It is probably the case that many of the so-called camp fevers, and probably also a considerable proportion of the camp dysentery, of the wars of the sixteenth, seventeenth and eighteenth centuries were malarial in origin." In British-occupied India the cocktail gin and tonic may have come about as a way of taking quinine, known for its antimalarial properties. Malaria was the most significant health hazard encountered by U.S. troops in the South Pacific during World War II, where about 500,000 men were infected. According to Joseph Patrick Byrne, "Sixty thousand American soldiers died of malaria during the African and South Pacific campaigns." Significant financial investments have been made to procure existing and create new antimalarial agents. During World War I and World War II, inconsistent supplies of the natural antimalarial drugs cinchona bark and quinine prompted substantial funding into research and development of other drugs and vaccines. American military organisations conducting such research initiatives include the Navy Medical Research Center, the Walter Reed Army Institute of Research, and the U.S.
Army Medical Research Institute of Infectious Diseases of the US Armed Forces. Additionally, initiatives have been founded such as Malaria Control in War Areas (MCWA), established in 1942, and its successor, the Communicable Disease Center (now known as the Centers for Disease Control and Prevention, or CDC), established in 1946. According to the CDC, MCWA "was established to control malaria around military training bases in the southern United States and its territories, where malaria was still problematic".
Research
The Malaria Eradication Research Agenda (malERA) initiative was a consultative process to identify which areas of research and development (R&D) must be addressed for worldwide eradication of malaria.
Vaccine
A vaccine against malaria called RTS,S/AS01 (RTS,S) was approved by European regulators in 2015. As of 2019 it is undergoing pilot trials in three sub-Saharan African countries – Ghana, Kenya and Malawi – as part of the WHO's Malaria Vaccine Implementation Programme (MVIP). Immunity (or, more accurately, tolerance) to P. falciparum malaria does occur naturally, but only in response to years of repeated infection. An individual can be protected from a P. falciparum infection if they receive about a thousand bites from mosquitoes that carry a version of the parasite rendered non-infective by a dose of X-ray irradiation. The highly polymorphic nature of many P. falciparum proteins results in significant challenges to vaccine design. Vaccine candidates that target antigens on gametes, zygotes, or ookinetes in the mosquito midgut aim to block the transmission of malaria. These transmission-blocking vaccines induce antibodies in the human blood; when a mosquito takes a blood meal from a protected individual, these antibodies prevent the parasite from completing its development in the mosquito. Other vaccine candidates, targeting the blood stage of the parasite's life cycle, have been inadequate on their own. For example, SPf66 was tested extensively in areas where the disease was common in the 1990s, but trials showed it to be insufficiently effective. In 2021, researchers from the University of Oxford reported findings from a Phase IIb trial of a candidate malaria vaccine, R21/Matrix-M, which demonstrated efficacy of 77% over 12 months of follow-up. This vaccine is the first to meet the World Health Organization's Malaria Vaccine Technology Roadmap goal of a vaccine with at least 75% efficacy.
Medications
Malaria parasites contain apicoplasts, organelles related to the plastids found in plants, complete with their own genomes. These apicoplasts are thought to have originated through the endosymbiosis of algae, and play a crucial role in various aspects of parasite metabolism, such as fatty acid biosynthesis. Over 400 proteins have been found to be produced by apicoplasts, and these are now being investigated as possible targets for novel antimalarial drugs. With the onset of drug-resistant Plasmodium parasites, new strategies are being developed to combat the widespread disease. One such approach lies in the introduction of synthetic pyridoxal-amino acid adducts, which are taken up by the parasite and ultimately interfere with its ability to create several essential B vitamins. Antimalarial drugs using synthetic metal-based complexes are also attracting research interest.
(+)-SJ733: Part of a wider class of experimental drugs called spiroindolones. It inhibits the ATP4 protein of infected red blood cells, causing the cells to shrink and become rigid like aging cells. This triggers the immune system to eliminate the infected cells from the circulation, as demonstrated in a mouse model. As of 2014, a Phase 1 clinical trial to assess the safety profile in humans was planned by the Howard Hughes Medical Institute.
NITD246 and NITD609: These also belong to the spiroindolone class and target the ATP4 protein. On the basis of molecular docking outcomes, compounds 3j, 4b, 4h and 4m exhibited selectivity towards PfLDH. Post-docking analysis displayed stable dynamic behaviour of all the selected compounds compared to chloroquine, and end-state thermodynamics analysis identified compound 3j as a selective and potent PfLDH inhibitor.
New targets
Targeting Plasmodium liver-stage parasites selectively is emerging as an alternative strategy in the face of resistance to the latest frontline combination therapies against blood stages of the parasite. In a 2019 study using experimental analysis with knockout (KO) mutants of Plasmodium berghei, the authors identified genes that are potentially essential in the liver stage. They also generated a computational model to analyse pre-erythrocytic development and liver-stage metabolism. Combining both methods, they identified seven metabolic subsystems that become essential in the liver stage compared to the blood stage, including fatty acid synthesis and elongation, the tricarboxylic acid cycle, and amino acid and heme metabolism, among others. They studied three subsystems in detail: fatty acid synthesis, fatty acid elongation, and amino sugar biosynthesis. For the first two pathways they demonstrated a clear dependence of the liver stage on the parasite's own fatty acid metabolism. They also showed for the first time the critical role of amino sugar biosynthesis in the liver stage of P. berghei: uptake of N-acetyl-glucosamine appears to be limited in the liver stage, so its synthesis is needed for parasite development. These findings and the computational model provide a basis for the design of antimalarial therapies targeting metabolic proteins.
Other
A non-chemical vector control strategy involves genetic manipulation of malaria mosquitoes. Advances in genetic engineering technologies make it possible to introduce foreign DNA into the mosquito genome and either decrease the lifespan of the mosquito or make it more resistant to the malaria parasite. Sterile insect technique is a genetic control method whereby large numbers of sterile male mosquitoes are reared and released. Mating with wild females reduces the wild population in the subsequent generation; repeated releases eventually eliminate the target population. Genomics is central to malaria research. With the sequencing of P. falciparum, one of its vectors Anopheles gambiae, and the human genome, the genetics of all three organisms in the malaria life cycle can be studied. Another new application of genetic technology is the ability to produce genetically modified mosquitoes that do not transmit malaria, potentially allowing biological control of malaria transmission. In one study, a genetically modified strain of Anopheles stephensi was created that no longer supported malaria transmission, and this resistance was passed down to mosquito offspring. Gene drive is a technique for changing wild populations, for instance to combat or eliminate insects so they cannot transmit diseases (in particular mosquitoes in the cases of malaria, Zika, dengue and yellow fever). In December 2020, a review article found that malaria-endemic regions had lower reported COVID-19 case fatality rates on average than regions where malaria was not known to be endemic.
Other animals
While there are no animal reservoirs for the strains of malaria that cause human infections, nearly 200 parasitic Plasmodium species have been identified that infect birds, reptiles, and other mammals, and about 30 species naturally infect non-human primates. Some malaria parasites that affect non-human primates (NHP) serve as model organisms for human malarial parasites, such as P. coatneyi (a model for P. falciparum) and P. cynomolgi (P. vivax). Diagnostic techniques used to detect parasites in NHP are similar to those employed for humans. Malaria parasites that infect rodents are widely used as models in research, such as P. berghei. Avian malaria primarily affects species of the order Passeriformes, and poses a substantial threat to birds of Hawaii, the Galapagos, and other archipelagoes. The parasite P. relictum is known to play a role in limiting the distribution and abundance of endemic Hawaiian birds. Global warming is expected to increase the prevalence and global distribution of avian malaria, as elevated temperatures provide optimal conditions for parasite reproduction.
External links
WHO site on malaria
CDC site on malaria
PAHO site on malaria |
Malaria prophylaxis | Malaria prophylaxis is the preventive treatment of malaria. Several malaria vaccines are under development.
For pregnant women living in malaria-endemic areas, routine malaria chemoprevention is recommended. It improves anemia and reduces parasite levels in the blood of pregnant women, and improves birthweight in their infants.
Strategies
Risk management
Bite prevention—clothes that cover as much skin as possible, insect repellent, insecticide-impregnated bed nets and indoor residual spraying
Chemoprophylaxis
Rapid diagnosis and treatment
Recent improvements in malaria prevention strategies have further enhanced effectiveness in combating areas highly infected with the malaria parasite. Additional bite prevention measures include mosquito and insect repellents that can be applied directly to the skin. This form of mosquito repellent is slowly replacing indoor residual spraying, which the WHO (World Health Organization) considers to have high levels of toxicity. Further additions to preventive care are restrictions on blood transfusions: once the malaria parasite enters the erythrocytic stage, it can adversely affect blood cells, making it possible to contract the parasite through infected blood.
Chloroquine may be used where the parasite is still sensitive; however, many malaria parasite strains are now resistant. Mefloquine (Lariam), doxycycline (available generically), or the combination of atovaquone and proguanil hydrochloride (Malarone) are frequently recommended instead.
Medications
In choosing the agent, it is important to weigh the risk of infection against the risks and side effects associated with the medications.
Disruptive prophylaxis
An experimental approach involves preventing the parasite from binding with red blood cells by blocking calcium signalling between the parasite and the host cell. Erythrocyte-binding-like proteins (EBLs) and reticulocyte-binding protein homologues (RHs) are both used by specialized P. falciparum organelles known as rhoptries and micronemes to bind with the host cell. Disrupting the binding process can stop the parasite. In one study, monoclonal antibodies were used to interrupt calcium signalling between PfRH1 (an RH protein), the EBL protein EBA175, and the host cell; this disruption completely stopped the binding process.
Suppressive prophylaxis
Chloroquine, proguanil, mefloquine, and doxycycline are suppressive prophylactics. This means that they are only effective at killing the malaria parasite once it has entered the erythrocytic stage (blood stage) of its life cycle, and therefore have no effect until the liver stage is complete. That is why these prophylactics must continue to be taken for four weeks after leaving the area of risk.
Mefloquine, doxycycline, and atovaquone-proguanil appear to be equally effective at reducing the risk of malaria for short-term travelers and are similar with regard to their risk of serious side effects. Mefloquine is sometimes preferred due to its once-a-week dose; however, mefloquine is not always as well tolerated as atovaquone-proguanil. There is low-quality evidence suggesting that mefloquine and doxycycline are similar with regard to the number of people who discontinue treatment due to minor side effects. People who take mefloquine may be more likely to experience minor side effects such as sleep disturbances, depressed mood, and an increase in abnormal dreams. There is very low-quality evidence indicating that doxycycline use may be associated with an increased risk of indigestion, photosensitivity, vomiting, and yeast infections, when compared with mefloquine and atovaquone-proguanil.
Causal prophylaxis
Causal prophylactics target not only the blood stages of malaria, but the initial liver stage as well. This means that the user can stop taking the drug seven days after leaving the area of risk. Malarone and primaquine are the only causal prophylactics in current use.
Regimens
Specific regimens are recommended by the WHO, UK HPA and CDC for prevention of P. falciparum infection. HPA and WHO advice are broadly in line with each other (although there are some differences). CDC guidance frequently contradicts HPA and WHO guidance.
These regimens include:
doxycycline 100 mg once daily (started one day before travel, and continued for four weeks after returning);
mefloquine 250 mg once weekly (started two-and-a-half weeks before travel, and continued for four weeks after returning);
atovaquone/proguanil (Malarone) 1 tablet daily (started one day before travel, and continued for one week after returning); it can also be used for therapy in some cases.
In areas where chloroquine remains effective:
chloroquine 300 mg once weekly, and proguanil 200 mg once daily (started one week before travel, and continued for four weeks after returning);
hydroxychloroquine 400 mg once weekly (started one to two weeks before travel, and continued for four weeks after returning).
Which regimen is appropriate depends on the person who is to take the medication as well as the country or region travelled to. This information is available from the UK HPA, WHO or CDC (links are given below). Doses also depend on what is available (e.g., in the US, mefloquine tablets contain 228 mg base, but 250 mg base in the UK). The data are constantly changing and no general advice is possible.
Doses given are appropriate for adults and children aged 12 and over.
Other chemoprophylactic regimens that have been used on occasion:
Dapsone 100 mg and pyrimethamine 12.5 mg once weekly (available as a combination tablet called Maloprim or Deltaprim): this combination is not routinely recommended because of the risk of agranulocytosis;
Primaquine 30 mg once daily (started the day before travel, and continuing for seven days after returning): this regimen is not routinely recommended because of the need for G-6-PD testing prior to starting primaquine (see the article on primaquine for more information).
Quinine sulfate 300 to 325 mg once daily: this regimen is effective but not routinely used because of the unpleasant side effects of quinine.
Prophylaxis against Plasmodium vivax requires a different approach, given the long liver stage of this parasite; this is a highly specialist area.
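The regimens above share a common shape: each drug is started some days before entering the risk area and continued for a fixed period after leaving it (four weeks for the suppressive agents, one week for causal atovaquone/proguanil). As a minimal sketch of that timing arithmetic only, assuming the lead-in and continuation periods quoted above (the `REGIMENS` table and `dosing_window` function are illustrative, not a clinical tool; doses and current advice must come from WHO/HPA/CDC guidance):

```python
from datetime import date, timedelta

# Illustrative lead-in/continuation periods taken from the regimens above.
# Not medical advice; confirm against current WHO/HPA/CDC guidance.
REGIMENS = {
    "doxycycline":          {"start_before_days": 1,  "continue_after_days": 28},
    "mefloquine":           {"start_before_days": 18, "continue_after_days": 28},  # ~2.5 weeks lead-in
    "atovaquone-proguanil": {"start_before_days": 1,  "continue_after_days": 7},
}

def dosing_window(regimen: str, travel_start: date, travel_end: date) -> tuple[date, date]:
    """Return (first_dose_date, last_dose_date) for the named regimen."""
    r = REGIMENS[regimen]
    first = travel_start - timedelta(days=r["start_before_days"])
    last = travel_end + timedelta(days=r["continue_after_days"])
    return first, last

first, last = dosing_window("atovaquone-proguanil", date(2024, 6, 1), date(2024, 6, 14))
print(first, last)  # 2024-05-31 2024-06-21
```

For the same two-week trip, a suppressive agent such as doxycycline would extend the last dose to four weeks after return, illustrating why the choice of regimen changes the total duration of medication considerably.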
Vaccines
In November 2012, findings from a Phase III trial of an experimental malaria vaccine known as RTS,S reported that it provided modest protection against both clinical and severe malaria in young infants. The efficacy was about 30% in infants 6 to 12 weeks of age and about 50% in infants 5 to 17 months of age in the first year of the trial. The RTS,S vaccine was engineered using a fusion hepatitis B surface protein containing epitopes of the outer protein of the Plasmodium falciparum malaria sporozoite, produced in yeast cells. It also contains a chemical adjuvant to boost the immune system response. The vaccine is being developed by PATH and GlaxoSmithKline (GSK), which has spent about $300 million on the project, plus about $200 million more from the Bill and Melinda Gates Foundation.
Risk factors
Most adults from endemic areas have a degree of long-term infection, which tends to recur, and also possess partial immunity (resistance); the resistance reduces with time, and such adults may become susceptible to severe malaria if they have spent a significant amount of time in non-endemic areas. They are strongly recommended to take full precautions if they return to an endemic area.
History
Malaria is one of the oldest known pathogens, and began having a major impact on human survival about 10,000 years ago with the birth of agriculture. The development of virulence in the parasite has been demonstrated using genomic mapping of samples from this period, confirming the emergence of genes conferring a reduced risk of developing malaria infection. References to the disease can be found in manuscripts from ancient Egypt, India and China, illustrating its wide geographical distribution. The first treatment identified is thought to be quinine, one of four alkaloids from the bark of the cinchona tree. Originally it was used by the tribes of Ecuador and Peru for treating fevers. Its role in treating malaria was first recognised and recorded by an Augustinian monk in Lima, Peru, in 1633. Seven years later the drug had reached Europe and was being used widely under the name "Jesuits' bark". From this point onwards the use of quinine and public interest in malaria increased, although the compound was not isolated and identified as the active ingredient until 1820. By the mid-1880s the Dutch had grown vast plantations of cinchona trees and monopolised the world market.
Quinine remained the only available treatment for malaria until the early 1920s. During the First World War, German scientists developed the first synthetic antimalarial compound, Atabrin; this was followed by Resochin and Sontochin, derived from 4-aminoquinoline compounds. American troops, on capturing Tunisia during the Second World War, acquired these drugs and altered them to produce chloroquine.
The development of new antimalarial drugs spurred the World Health Organization in 1955 to attempt a global malaria eradication program. This was successful in much of Brazil, the US and Egypt but ultimately failed elsewhere. Efforts to control malaria are still continuing, with the development of drug-resistant parasites presenting increasingly difficult problems.
The CDC publishes recommendations for travelers advising about the risk of contracting malaria in various countries. Some of the factors in deciding whether to use chemoprophylaxis as malaria pre-exposure prophylaxis include the specific itinerary, length of trip, cost of the drug, previous adverse reactions to antimalarials, drug allergies, and current medical history.
See also
Malaria prevention
Mosquito control
Pattern hair loss | Pattern hair loss (also known as androgenetic alopecia (AGA)) is a hair loss condition that primarily affects the top and front of the scalp. In male-pattern hair loss (MPHL), the hair loss typically presents as either a receding front hairline, loss of hair on the crown (vertex) of the scalp, or a combination of both. Female-pattern hair loss (FPHL) typically presents as a diffuse thinning of the hair across the entire scalp. Male pattern hair loss seems to be due to a combination of oxidative stress, the microbiome of the scalp, genetics, and circulating androgens, particularly dihydrotestosterone (DHT). Early-onset androgenic alopecia in men (before the age of 35) has been deemed the male phenotypic equivalent of polycystic ovary syndrome (PCOS). As an early clinical expression of insulin resistance and metabolic syndrome, AGA is an increased risk factor for cardiovascular diseases, glucose metabolism disorders, type 2 diabetes, and enlargement of the prostate. The cause of female pattern hair loss remains unclear, but androgenetic alopecia in women is associated with an increased risk of polycystic ovary syndrome (PCOS). Management may include simply accepting the condition or shaving one's head to improve its aesthetic aspect. Otherwise, common medical treatments include minoxidil, finasteride, dutasteride, or hair transplant surgery. Use of finasteride and dutasteride in women is not well studied and may result in birth defects if taken during pregnancy. Pattern hair loss by the age of 50 affects about half of males and a quarter of females. It is the most common cause of hair loss. Both males aged 40–91 and younger patients with early-onset AGA (before the age of 35) had a higher likelihood of metabolic syndrome (MetS) and insulin resistance. In younger males, studies found metabolic syndrome at approximately four times the expected frequency, which is deemed clinically significant.
Abdominal obesity, hypertension, and lowered high-density lipoprotein were also significantly more frequent in younger groups.
Signs and symptoms
Pattern hair loss is classified as a form of non-scarring hair loss.
Male-pattern hair loss begins above the temples and at the vertex (calvaria) of the scalp. As it progresses, a rim of hair at the sides and rear of the head remains. This has been referred to as a "Hippocratic wreath", and rarely progresses to complete baldness. Female-pattern hair loss more often causes diffuse thinning without hairline recession; similar to its male counterpart, female androgenic alopecia rarely leads to total hair loss. The Ludwig scale grades the severity of female-pattern hair loss into Grades 1, 2 and 3, based on how much scalp shows at the front due to thinning of hair. In most cases, a receding hairline is the first sign: the hairline starts moving backwards from the front and the sides of the head.
Causes
Hormones and genes
KRT37 is the only keratin that is regulated by androgens. This sensitivity to androgens was acquired by Homo sapiens and is not shared with their great ape cousins. Although Winter et al. found that KRT37 is expressed in all the hair follicles of chimpanzees, it was not detected in the head hair of modern humans. As androgens are known to grow hair on the body but decrease it on the scalp, this lack of scalp KRT37 may help explain the paradoxical nature of androgenic alopecia, as well as the fact that head hair anagen cycles are extremely long.
The initial programming of pilosebaceous units of hair follicles begins in utero. The physiology is primarily androgenic, with dihydrotestosterone (DHT) being the major contributor at the dermal papillae. Men with premature androgenic alopecia tend to have lower than normal values of sex hormone-binding globulin (SHBG), follicle-stimulating hormone (FSH), testosterone, and epitestosterone when compared to men without pattern hair loss. Although hair follicles were previously thought to be permanently gone in areas of complete hair loss, they are more likely dormant, as recent studies have shown the scalp contains the stem cell progenitor cells from which the follicles arose. Transgenic studies have shown that growth and dormancy of hair follicles are related to the activity of insulin-like growth factor (IGF) at the dermal papillae, which is affected by DHT. Androgens are important in male sexual development around birth and at puberty. They regulate sebaceous glands, apocrine hair growth, and libido. With increasing age, androgens stimulate hair growth on the face, but can suppress it at the temples and scalp vertex, a condition that has been referred to as the "androgen paradox". Men with androgenic alopecia typically have higher 5α-reductase, higher total testosterone, higher unbound/free testosterone, and higher free androgens, including DHT. 5α-reductase converts free testosterone into DHT, and is highest in the scalp and prostate gland. DHT is most commonly formed at the tissue level by 5α-reduction of testosterone, and the gene that codes for this enzyme has been identified. Prolactin has also been suggested to have different effects on the hair follicle across gender. Also, crosstalk occurs between androgens and the Wnt-beta-catenin signaling pathway that leads to hair loss. At the level of the somatic stem cell, androgens promote differentiation of facial hair dermal papillae, but inhibit it at the scalp. 
Other research suggests the enzyme prostaglandin D2 synthase and its product prostaglandin D2 (PGD2) in hair follicles as contributory. These observations have led to study at the level of the mesenchymal dermal papillae. Type 1 and type 2 5α-reductase enzymes are present at pilosebaceous units in the papillae of individual hair follicles. They catalyze the formation of the androgens testosterone and DHT, which in turn regulate hair growth. Androgens have different effects at different follicles: they stimulate IGF-1 at facial hair, leading to growth, but can also stimulate TGF β1, TGF β2, dickkopf1, and IL-6 at the scalp, leading to catagenic miniaturization. Hair follicles in anagen express four different caspases. Significant levels of inflammatory infiltrate have been found in transitional hair follicles. Interleukin 1 is suspected to be a cytokine mediator that promotes hair loss. The fact that hair loss is cumulative with age while androgen levels fall, and the fact that finasteride does not reverse advanced stages of androgenetic alopecia, remain a mystery; possible explanations are higher local conversion of testosterone to DHT with age (as higher levels of 5-alpha reductase are noted in balding scalp), higher levels of DNA damage in the dermal papilla, and senescence of the dermal papilla due to androgen receptor activation and environmental stress. The mechanism by which the androgen receptor triggers permanent senescence of the dermal papilla is not known, but may involve IL-6, TGFB-1 and oxidative stress. Senescence of the dermal papilla is measured by lack of mobility, different size and shape, lower replication, altered output of molecules, and different expression of markers. The dermal papilla is the primary location of androgen action, and its migration towards the hair bulge and subsequent signaling and size increase are required to maintain the hair follicle, so senescence via the androgen receptor explains much of the physiology.
Inheritance
Male pattern baldness is an X-linked recessive condition, because of its "particularly strong signals on the X chromosome".
Metabolic syndrome
Multiple cross-sectional studies have found associations between early androgenic alopecia, insulin resistance, and metabolic syndrome, with low HDL being the component of metabolic syndrome with the highest association. Linolenic and linoleic acids, two major dietary sources of HDL, are 5-alpha reductase inhibitors. Premature androgenic alopecia and insulin resistance may be a clinical constellation that represents the male homologue, or phenotype, of polycystic ovary syndrome. Others have found a higher rate of hyperinsulinemia in family members of women with polycystic ovarian syndrome. With early-onset AGA carrying an increased risk of metabolic syndrome, poorer metabolic profiles are noticed in those with AGA, including metrics for body mass index, waist circumference, fasting glucose, blood lipids, and blood pressure. In support of the association, finasteride improves glucose metabolism and decreases glycosylated hemoglobin HbA1c, a surrogate marker for diabetes mellitus. The low SHBG seen with premature androgenic alopecia is also associated with, and likely contributory to, insulin resistance, for which it is still used as an assay in pediatric diabetes mellitus. Obesity leads to upregulation of insulin production and a decrease in SHBG. Further reinforcing the relationship, SHBG is downregulated by insulin in vitro, although SHBG levels do not appear to affect insulin production. In vivo, insulin stimulates both testosterone production and SHBG inhibition in normal and obese men. The relationship between SHBG and insulin resistance has been known for some time; decades prior, ratios of SHBG and adiponectin were used before glucose to predict insulin resistance. 
Patients with Laron syndrome, with resultant deficient IGF, demonstrate varying degrees of alopecia and structural defects in hair follicles when examined microscopically. Because of its association with metabolic syndrome and altered glucose metabolism, both men and women with early androgenic hair loss should be screened for impaired glucose tolerance and type 2 diabetes mellitus. Measurement of subcutaneous and visceral adipose stores by MRI demonstrated an inverse association between visceral adipose tissue and testosterone/DHT, while subcutaneous adipose correlated negatively with SHBG and positively with estrogen. SHBG's association with fasting blood glucose is most dependent on intrahepatic fat, which can be measured by MRI with in- and out-of-phase imaging sequences. Serum indices of hepatic function and surrogate markers for diabetes, previously used, show less correlation with SHBG by comparison. Female patients with mineralocorticoid resistance present with androgenic alopecia. IGF levels have been found to be lower in those with metabolic syndrome. Circulating serum levels of IGF-1 are increased with vertex balding, although this study did not look at mRNA expression at the follicle itself. Locally, IGF is mitogenic at the dermal papillae and promotes elongation of hair follicles. The major site of IGF production is the liver, although local mRNA expression at hair follicles correlates with increased hair growth. IGF release is stimulated by growth hormone (GH). Methods of increasing IGF include exercise, hypoglycemia, low fatty acids, deep sleep (stage IV, non-REM), estrogens, and consumption of amino acids such as arginine and leucine. Obesity and hyperglycemia inhibit its release. IGF also circulates in the blood bound to a large protein whose production is also dependent on GH. GH release is dependent on normal thyroid hormone. During the sixth decade of life, GH production decreases. 
Because growth hormone is pulsatile and peaks during sleep, serum IGF is used as an index of overall growth hormone secretion. The surge of androgens at puberty drives an accompanying surge in growth hormone.
Age
A number of hormonal changes occur with aging:
Decrease in testosterone
Decrease in serum DHT and 5-alpha reductase
Decrease 3AAG, a peripheral marker of DHT metabolism
Increase in SHBG
Decrease in androgen receptors, 5-alpha reductase type I and II activity, and aromatase in the scalp
This decrease in androgens and androgen receptors, and the increase in SHBG, run opposite to the increase in androgenic alopecia with aging. This is not intuitive, as testosterone and its peripheral metabolite DHT accelerate hair loss, and SHBG is thought to be protective. The ratios T/SHBG and DHT/SHBG decrease by as much as 80% by age 80, in numeric parallel to hair loss, and approximate the pharmacology of antiandrogens such as finasteride. Free testosterone decreases in men by age 80 to levels double that of a woman at age 20. About 30% of the normal male testosterone level, the approximate level in females, is not enough to induce alopecia; 60%, closer to the amount found in elderly men, is sufficient. The testicular secretion of testosterone perhaps "sets the stage" for androgenic alopecia as a multifactorial diathesis-stress model, related to hormonal predisposition, environment, and age. Supplementing eunuchs with testosterone during their second decade, for example, causes slow progression of androgenic alopecia over many years, while testosterone late in life causes rapid hair loss within a month. An example of a premature age effect is Werner's syndrome, a genetic condition of accelerated aging; affected children display premature androgenic alopecia. Permanent hair loss is a result of a reduction in the number of living hair matrices, and long-term insufficiency of nutrition is an important cause of the death of hair matrices. The misrepair-accumulation aging theory suggests that dermal fibrosis is associated with progressive hair loss and hair whitening in old people. With age, the dermal layer of the skin shows progressive deposition of collagen fibers as a result of the accumulation of misrepairs of the derma. Fibrosis makes the derma stiff and increases the resistance the tissue exerts against the walls of blood vessels. 
This tissue resistance on the arteries leads to a reduction of the blood supply to the local tissue, including the papillae. Because dermal fibrosis is progressive, the insufficiency of nutrition to the papillae is permanent. Senile hair loss and hair whitening are partially a consequence of this fibrosis of the skin.
Diagnosis
The diagnosis of androgenic alopecia can usually be established based on clinical presentation in men. In women, the diagnosis usually requires more complex diagnostic evaluation. Further evaluation of the differential requires exclusion of other causes of hair loss and assessment for the typical progressive hair-loss pattern of androgenic alopecia. Trichoscopy can be used for further evaluation. Biopsy may be needed to exclude other causes of hair loss, and histology would demonstrate perifollicular fibrosis. The Hamilton–Norwood scale has been developed to grade androgenic alopecia in males by severity.
Treatment
Androgen-dependent
Finasteride is a medication of the 5α-reductase inhibitor (5-ARI) class. By inhibiting type II 5-AR, finasteride prevents the conversion of testosterone to dihydrotestosterone in various tissues, including the scalp. Increased hair on the scalp can be seen within three months of starting finasteride treatment, and longer-term studies have demonstrated increased hair on the scalp at 24 and 48 months with continued use. Treatment with finasteride more effectively treats male-pattern hair loss at the crown than at the front of the head and temples.

Dutasteride is a medication in the same class as finasteride but inhibits both type I and type II 5-alpha reductase. Dutasteride is approved for the treatment of male-pattern hair loss in Korea and Japan, but not in the United States. However, it is commonly used off-label to treat male-pattern hair loss.
Androgen-independent
Minoxidil dilates small blood vessels; it is not clear how this causes hair to grow. Other treatments include tretinoin combined with minoxidil, ketoconazole shampoo, dermarolling (collagen induction therapy), spironolactone, alfatradiol, topilutamide (fluridil), topical melatonin, and intradermal and intramuscular botulinum toxin injections to the scalp.
Female pattern
There is evidence supporting the use of minoxidil as a safe and effective treatment for female pattern hair loss, and there is no significant difference in efficacy between 2% and 5% formulations. Finasteride was shown to be no more effective than placebo based on low-quality studies. The effectiveness of laser-based therapies is unclear. Bicalutamide, an antiandrogen, is another option for the treatment of female pattern hair loss.
Procedures
More advanced cases may be resistant or unresponsive to medical therapy and require hair transplantation. Naturally occurring units of one to four hairs, called follicular units, are excised and moved to areas of hair restoration. These follicular units are surgically implanted in the scalp in close proximity and in large numbers. The grafts are obtained from either follicular unit transplantation (FUT) or follicular unit extraction (FUE). In the former, a strip of skin with follicular units is extracted and dissected into individual follicular unit grafts, and in the latter individual hairs are extracted manually or robotically. The surgeon then implants the grafts into small incisions, called recipient sites. Cosmetic scalp tattoos can also mimic the appearance of a short, buzzed haircut.
Alternative therapies
Many people use unproven treatments. Regarding female pattern alopecia, there is no evidence for vitamins, minerals, or other dietary supplements. As of 2008, there is little evidence to support the use of lasers to treat male-pattern hair loss. The same applies to special lights. Dietary supplements are not typically recommended. A 2015 review found a growing number of papers in which plant extracts were studied but only one randomized controlled clinical trial, namely a study in 10 people of saw palmetto extract.
Prognosis
Androgenic alopecia is typically experienced as a "moderately stressful condition that diminishes body image satisfaction". However, although most men regard baldness as an unwanted and distressing experience, they usually are able to cope and retain integrity of personality.

Although baldness is not as common in women as in men, the psychological effects of hair loss tend to be much greater. Typically, the frontal hairline is preserved, but the density of hair is decreased on all areas of the scalp. Previously, female hair loss was believed to be caused by testosterone, just as in male baldness, but most women who lose hair have normal testosterone levels.
Epidemiology
Female androgenic alopecia has become a growing problem that, according to the American Academy of Dermatology, affects around 30 million women in the United States. Although hair loss in females normally occurs after the age of 50, or even later when it does not follow events such as pregnancy, chronic illness, crash diets, and stress, it is now occurring at earlier ages, with reported cases in women as young as 15 or 16.

For male androgenic alopecia, 30–50% of men are affected by the age of 50, and the hereditary predisposition is about 80%. Notably, the link between androgenetic alopecia and metabolic syndrome is strongest in non-obese men.
Society and culture
Studies have been inconsistent across cultures regarding how balding men rate on the attraction scale. While a 2001 South Korean study showed that most people rated balding men as less attractive, a 2002 survey of Welsh women found that they rated bald and gray-haired men quite desirable. One of the proposed social theories for male pattern hair loss is that men who embraced complete baldness by shaving their heads subsequently signaled dominance, high social status, and/or longevity.

Biologists have hypothesized that the larger sunlight-exposed area would allow more vitamin D to be synthesized, which might have been a "finely tuned mechanism to prevent prostate cancer", as the malignancy itself is also associated with higher levels of DHT.
Myths
Many myths exist regarding the possible causes of baldness and its relationship with one's virility, intelligence, ethnicity, job, social class, wealth, and many other characteristics.
Weight training and other types of physical activity cause baldness
Because exercise increases testosterone levels, many Internet forums have put forward the idea that weight training and other forms of exercise increase hair loss in predisposed individuals. Although scientific studies do support a correlation between exercise and testosterone, no direct study has found a link between exercise and baldness. However, a few have found a relationship between a sedentary life and baldness, suggesting exercise may be causally relevant. The type or quantity of exercise may influence hair loss.
Testosterone levels are not a good marker of baldness, and many studies actually show paradoxical low testosterone in balding persons, although research on the implications is limited.
Baldness can be caused by emotional stress, sleep deprivation, etc.
Emotional stress has been shown to accelerate baldness in genetically susceptible individuals.
Stress due to sleep deprivation in military recruits lowered testosterone levels, but is not noted to have affected SHBG. Thus, stress due to sleep deprivation in fit males is unlikely to elevate DHT, which is one cause of male pattern baldness. Whether sleep deprivation can cause hair loss by some other mechanism is not clear.
Bald men are more virile or sexually active than others
Levels of free testosterone are strongly linked to libido and DHT levels, but unless free testosterone is virtually nonexistent, levels have not been shown to affect virility. Men with androgenic alopecia are more likely to have a higher baseline of free androgens. However, sexual activity is multifactorial, and the androgenic profile is not the only determining factor in baldness. Additionally, because hair loss is progressive and free testosterone declines with age, a male's hairline may be more indicative of his past than his present disposition.
Frequent ejaculation causes baldness
Many misconceptions exist about what can help prevent hair loss, one of these being that lack of sexual activity will automatically prevent hair loss. While a proven direct correlation exists between increased frequency of ejaculation and increased levels of DHT, as shown in a recent study by Harvard Medical School, the study suggests that ejaculation frequency may be a sign, rather than a cause, of higher DHT levels. Another study shows that although sexual arousal and masturbation-induced orgasm increase testosterone concentration around orgasm, they reduce testosterone concentration on average, and because about 5% of testosterone is converted to DHT, ejaculation does not elevate DHT levels.

The only published study to test the correlation between ejaculation frequency and baldness was probably large enough to detect an association (1,390 subjects) and found no correlation, although persons with only vertex androgenetic alopecia had fewer female sexual partners than those of other androgenetic alopecia categories (such as frontal or both frontal and vertex). One study may not be enough, especially for baldness, where the relationship with age is complex.
Other animals
Animal models of androgenic alopecia occur naturally and have been developed in transgenic mice; chimpanzees (Pan troglodytes); bald uakaris (Cacajao rubicundus); and stump-tailed macaques (Macaca speciosa and M. arctoides). Of these, macaques have demonstrated the greatest incidence and most prominent degrees of hair loss.

Baldness is not a trait unique to human beings. One possible case study concerns a maneless male lion in the Tsavo area. The Tsavo lion prides are unique in that they frequently have only a single male lion with usually seven or eight adult females, as opposed to four females in other lion prides. Male lions may have heightened levels of testosterone, which could explain their reputation for aggression and dominance, suggesting that lack of a mane may at one time have correlated with alpha status.

Although most primates do not go bald, their hairlines do undergo recession. In infancy the hairline starts at the top of the supraorbital ridge, but slowly recedes after puberty to create the appearance of a small forehead.
References
External links
NLM- Genetics Home Reference
Scow DT, Nolte RS, Shaughnessy AF (April 1999). "Medical treatments for balding in men". American Family Physician. 59 (8): 2189–94, 2196. PMID 10221304. |
Malignant hyperthermia | Malignant hyperthermia (MH) is a type of severe reaction that occurs in response to particular medications used during general anesthesia, among those who are susceptible. Symptoms include muscle rigidity, high fever, and a fast heart rate. Complications can include muscle breakdown and high blood potassium. Most people who are susceptible are generally otherwise unaffected when not exposed.

The cause of MH is the use of certain volatile anesthetic agents or succinylcholine in those who are susceptible. Susceptibility can occur due to at least six genetic mutations, the most common being of the RYR1 gene. These genetic variations are often inherited from a person's parents in an autosomal dominant manner. The condition may also occur as a new mutation or be associated with a number of inherited muscle diseases, such as central core disease.

In susceptible individuals, the medications induce the release of stored calcium ions within muscle cells. The resulting increase in calcium concentration within the cells causes the muscle fibers to contract. This generates excessive heat and results in metabolic acidosis. Diagnosis is based on symptoms in the appropriate situation. Family members may be tested to see if they are susceptible by muscle biopsy or genetic testing.

Treatment is with dantrolene and rapid cooling, along with other supportive measures. The avoidance of potential triggers is recommended in susceptible people. The condition affects one in 5,000 to 50,000 cases where people are given anesthetic gases. Males are more often affected than females. The risk of death with proper treatment is about 5%, while without treatment it is around 75%. While cases that appear similar to MH have been documented since the early 20th century, the condition was only formally recognized in 1960.
Signs and symptoms
The typical signs of malignant hyperthermia are due to a hypercatabolic state, which presents as a very high temperature, an increased heart rate and abnormally rapid breathing, increased carbon dioxide production, increased oxygen consumption, mixed acidosis, rigid muscles, and rhabdomyolysis. These signs can develop any time during the administration of the anesthetic triggering agents. Rarely, signs may develop up to 40 minutes after the end of anaesthesia.
Causes
Malignant hyperthermia is a disorder that can be considered a gene–environment interaction. Most people with malignant hyperthermia susceptibility have few or no symptoms unless they are exposed to a triggering agent. The most common triggering agents are volatile anesthetic gases, such as halothane, sevoflurane, desflurane, isoflurane, and enflurane, or the depolarizing muscle relaxants suxamethonium and decamethonium, used primarily in general anesthesia. In rare cases, the biological stresses of physical exercise or heat may be the trigger. In fact, malignant hyperthermia susceptibility (MHS), predisposed by mutations in the skeletal muscle calcium release channel (RYR1), is one of the most severe heat-related illnesses. The MHS-associated heat susceptibilities predominantly affect children and metabolically active young adults, often leading to life-threatening hypermetabolic responses to heat.

Other anesthetic drugs do not trigger malignant hyperthermia. Some examples of drugs that don't cause MH include local anesthetics (lidocaine, bupivacaine, mepivacaine), opiates (morphine, fentanyl), ketamine, barbiturates, nitrous oxide, propofol, etomidate, and benzodiazepines. The nondepolarizing muscle relaxants pancuronium, cisatracurium, atracurium, mivacurium, vecuronium, and rocuronium also do not cause MH.

There is mounting evidence that some individuals with malignant hyperthermia susceptibility may develop MH with exercise and/or on exposure to hot environments.
Genetics
The inheritance of malignant hyperthermia is autosomal dominant with variable penetrance. The defect is typically located on the long arm of chromosome 19 (19q13.2) and involves the ryanodine receptor. More than 25 different mutations in this gene are linked with malignant hyperthermia. These mutations tend to cluster in one of three domains within the protein, designated MH1–3. MH1 and MH2 are located in the N-terminus of the protein, which interacts with L-type calcium channels and Ca2+. MH3 is located in the transmembrane-forming C-terminus. This region is important for allowing Ca2+ passage through the protein following opening.

Chromosome 7q and chromosome 17 have also been implicated. It has also been postulated that MH and central core disease may be allelic and thus can be co-inherited.
Pathophysiology
Disease mechanism
In a large proportion (50–70%) of cases, the propensity for malignant hyperthermia is due to a mutation of the ryanodine receptor (type 1), located on the sarcoplasmic reticulum (SR), the organelle within skeletal muscle cells that stores calcium. RYR1 opens in response to conformational changes in the L-type calcium channels following membrane depolarisation, thereby resulting in a drastic increase in intracellular calcium levels and muscle contraction. RYR1 has two sites believed to be important for reacting to changing Ca2+ concentrations: the A-site and the I-site. The A-site is a high-affinity Ca2+ binding site that mediates RYR1 opening. The I-site is a lower-affinity site that mediates the protein's closing. Caffeine, halothane, and other triggering agents act by drastically increasing the affinity of the A-site for Ca2+ and concomitantly decreasing the affinity of the I-site in mutant proteins. Mg2+ also affects RYR1 activity, causing the protein to close by acting at either the A- or I-sites. In MH mutant proteins, the affinity for Mg2+ at either of these sites is greatly reduced. The result of these alterations is greatly increased Ca2+ release due to a lowered activation and heightened deactivation threshold. The process of sequestering this excess Ca2+ consumes large amounts of adenosine triphosphate (ATP), the main cellular energy carrier, and generates the excessive heat (hyperthermia) that is the hallmark of the disease. The muscle cell is damaged by the depletion of ATP and possibly the high temperatures, and cellular constituents "leak" into the circulation, including potassium, myoglobin, creatine, phosphate, and creatine kinase.

The other known causative gene for MH is CACNA1S, which encodes an L-type voltage-gated calcium channel α-subunit. There are two known mutations in this protein, both affecting the same residue, R1086.
This residue is located in the large intracellular loop connecting domains 3 and 4, a domain possibly involved in negatively regulating RYR1 activity. When these mutant channels are expressed in human embryonic kidney (HEK 293) cells, the resulting channels are five times more sensitive to activation by caffeine (and presumably halothane) and activate at potentials 5–10 mV more hyperpolarized. Furthermore, cells expressing these channels have an increased basal cytosolic Ca2+ concentration. As these channels interact with and activate RYR1, these alterations result in a drastic increase of intracellular Ca2+ and, thereby, muscle excitability.

Other mutations causing MH have been identified, although in most cases the relevant gene remains to be identified.
Animal model
Research into malignant hyperthermia was limited until the discovery of "porcine stress syndrome" (PSS) in Danish Landrace and other pig breeds selected for muscling, a condition in which stressed pigs develop "pale, soft, exudative" flesh (a manifestation of the effects of malignant hyperthermia), rendering their meat less marketable at slaughter. This "awake triggering" was not observed in humans and initially cast doubts on the value of the animal model, but subsequently, susceptible humans were discovered to "awake trigger" (develop malignant hyperthermia) in stressful situations. This supported the use of the pig model for research. Pig farmers used halothane cones in swine yards to expose piglets to halothane. Those that died were MH-susceptible, thus saving the farmer the expense of raising a pig whose meat he would not be able to market. This also reduced the use of breeding stock carrying the genes for PSS. The condition in swine is also due to a defect in ryanodine receptors. Gillard et al. discovered the causative mutation in humans only after similar mutations had first been described in pigs.

Horses also develop malignant hyperthermia. A causative mutated allele of the ryanodine receptor 1 gene (RyR1), at nucleotide C7360G and generating an R2454G amino acid substitution, has been identified in the American Quarter Horse and breeds with Quarter Horse ancestry, inherited as an autosomal dominant. It can be caused by overwork, anesthesia, or stress. In dogs, its inheritance is autosomal recessive.

An MH mouse has been constructed, bearing the R163C mutation prevalent in humans. These mice display signs similar to human MH patients, including sensitivity to halothane (increased respiration, body temperature, and death). Blockade of RYR1 by dantrolene prevents adverse reaction to halothane in these mice, as with humans. Muscle from these mice also shows increased K+-induced depolarization and an increased caffeine sensitivity.
Diagnosis
During an attack
The earliest signs may include: masseter muscle contracture following administration of succinylcholine, a rise in end-tidal carbon dioxide concentration (despite increased minute ventilation), unexplained tachycardia, and muscle rigidity. Despite the name, elevation of body temperature is often a late sign, but may appear early in severe cases. Respiratory acidosis is universally present and many patients have developed metabolic acidosis at the time of diagnosis. A fast rate of breathing (in a spontaneously breathing patient), cyanosis, hypertension, abnormal heart rhythms, and high blood potassium may also be seen. Core body temperatures should be measured in any patient undergoing general anesthesia longer than 30 minutes.
Malignant hyperthermia is diagnosed on clinical grounds, but various laboratory investigations may prove confirmatory. These include a raised creatine kinase level, elevated potassium, increased phosphate (leading to decreased calcium) and—if determined—raised myoglobin; this is the result of damage to muscle cells. Severe rhabdomyolysis may lead to acute kidney failure, so kidney function is generally measured on a frequent basis. Patients may also experience premature ventricular contractions due to the increased levels of potassium released from the muscles during episodes.
Susceptibility testing
Muscle testing
The main candidates for testing are those with a close relative who has had an episode of MH or who have been shown to be susceptible. The standard procedure is the "caffeine-halothane contracture test" (CHCT). A muscle biopsy is carried out at an approved research center under local anesthesia. The fresh biopsy is bathed in solutions containing caffeine or halothane and observed for contraction; under good conditions, the sensitivity is 97% and the specificity 78%. Negative biopsies are not definitive, so any patient who is suspected of MH by their medical history, or that of blood relatives, is generally treated with non-triggering anesthetics, even if the biopsy was negative. Some researchers advocate the use of the "calcium-induced calcium release" test in addition to the CHCT to make testing more specific.

Less invasive diagnostic techniques have been proposed. Intramuscular injection of halothane 6 vol% has been shown to result in higher than normal increases in local pCO2 among patients with known malignant hyperthermia susceptibility. The sensitivity was 100% and the specificity was 75%. For patients at similar risk to those in this study, this leads to a positive predictive value of 80% and a negative predictive value of 100%. This method may provide a suitable alternative to more invasive techniques.
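The predictive values quoted above follow from sensitivity, specificity, and the pretest probability of susceptibility via Bayes' rule. A minimal sketch, assuming a hypothetical pretest prevalence of 50% (chosen here because it reproduces the reported 80%/100% figures; real referral populations vary):

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Compute PPV and NPV from test characteristics and pretest probability."""
    tp = sensitivity * prevalence              # true positives (per unit population)
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# Intramuscular halothane test: sensitivity 100%, specificity 75%.
# With an assumed prevalence of 0.5, this yields PPV = 0.8 and NPV = 1.0,
# matching the values reported above.
ppv, npv = predictive_values(1.0, 0.75, 0.5)
print(round(ppv, 2), round(npv, 2))  # 0.8 1.0
```

Note that with 100% sensitivity the NPV is 100% at any prevalence, whereas the PPV is sensitive to the assumed pretest probability.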
A 2002 study examined another possible metabolic test. In this test, intramuscular injection of caffeine was followed by local measurement of the pCO2; those with known MH susceptibility had a significantly higher pCO2 (63 versus 44 mmHg). The authors propose larger studies to assess the test's suitability for determining MH risk.
Genetic testing
Genetic testing is being performed in a limited fashion to determine susceptibility to MH. In people with a family history of MH, analysis for RYR1 mutations may be useful.
Criteria
A 1994 consensus conference led to the formulation of a set of diagnostic criteria. The higher the score (above 6), the more likely a reaction constituted MH:
Respiratory acidosis (end-tidal CO2 above 55 mmHg/7.32 kPa or arterial pCO2 above 60 mmHg/7.98 kPa)
Heart involvement (unexplained sinus tachycardia, ventricular tachycardia or ventricular fibrillation)
Metabolic acidosis (base excess lower than -8, pH <7.25)
Muscle rigidity (generalized rigidity including severe masseter muscle rigidity)
Muscle breakdown (CK > 20,000 units/L, cola-colored urine or excess myoglobin in urine or serum, potassium above 6 mmol/L)
Temperature increase (rapidly increasing temperature, T >38.8 °C)
Other (rapid reversal of MH signs with dantrolene, elevated resting serum CK levels)
Family history (autosomal dominant pattern)
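The laboratory thresholds in the list above lend themselves to a simple checklist. The sketch below only tallies which criterion categories are documented; it is illustrative only, since the actual 1994 grading scale assigns weighted points per indicator, which are not reproduced here:

```python
def mh_criteria_met(etco2_mmHg=None, base_excess=None, ph=None,
                    ck_units_per_L=None, potassium_mmol_L=None, temp_C=None):
    """Return the listed criterion categories whose thresholds are exceeded.

    Illustrative checklist only: the real grading scale assigns weighted
    points per indicator rather than a simple tally.
    """
    met = []
    if etco2_mmHg is not None and etco2_mmHg > 55:   # end-tidal CO2 criterion
        met.append("respiratory acidosis")
    if (base_excess is not None and base_excess < -8) or \
       (ph is not None and ph < 7.25):
        met.append("metabolic acidosis")
    if (ck_units_per_L is not None and ck_units_per_L > 20000) or \
       (potassium_mmol_L is not None and potassium_mmol_L > 6):
        met.append("muscle breakdown")
    if temp_C is not None and temp_C > 38.8:         # rapidly rising temperature
        met.append("temperature increase")
    return met

print(mh_criteria_met(etco2_mmHg=60, ph=7.1, temp_C=39.5))
# ['respiratory acidosis', 'metabolic acidosis', 'temperature increase']
```

Criteria that are purely clinical (rigidity, cardiac involvement, dantrolene response, family history) are omitted because they are not numeric thresholds.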
Prevention
In the past, the prophylactic use of dantrolene was recommended for MH-susceptible patients undergoing general anesthesia. However, multiple retrospective studies have demonstrated the safety of trigger-free general anesthesia in these patients in the absence of prophylactic dantrolene administration. The largest of these studies looked at the charts of 2214 patients who underwent general or regional anesthesia for an elective muscle biopsy. About half (1082) of the patients were muscle biopsy positive for MH. Only five of these patients exhibited signs consistent with MH, four of which were treated successfully with parenteral dantrolene, and the remaining one recovered with only symptomatic therapy. After weighing its questionable benefits against its possible adverse effects (including nausea, vomiting, muscle weakness and prolonged duration of action of nondepolarizing neuromuscular blocking agents), experts no longer recommend the use of prophylactic dantrolene prior to trigger-free general anesthesia in MH-susceptible patients.
Anesthesia machine preparation
Anesthesia for people with known MH susceptibility requires avoidance of triggering agent concentrations above 5 parts per million (all volatile anesthetic agents and succinylcholine). Most other drugs are safe (including nitrous oxide), as are regional anesthetic techniques. Where general anesthesia is planned, it can be provided safely by either flushing the machine or using charcoal filters.

To flush the machine, first remove or disable the vaporizers, then flush the machine with a fresh gas flow rate of 10 L/min or greater for at least 20 minutes. While flushing the machine, the ventilator should be set to periodically ventilate a new breathing circuit. The soda lime should also be replaced. After machine preparation, anesthesia should be induced and maintained with non-triggering agents. The time required to flush a machine varies for different machines and volatile anesthetics. This prevention technique was optimized for older-generation anesthesia machines; modern anesthetic machines have more rubber and plastic components, which provide a reservoir for volatile anesthetics, and should be flushed for 60 minutes.

Charcoal filters can be used to prepare an anesthesia machine in less than 60 seconds for people at risk of malignant hyperthermia. These filters prevent residual anesthetic from triggering malignant hyperthermia for up to 12 hours, even at low fresh gas flows. Prior to placing the charcoal filters, the machine should be flushed with fresh gas flows greater than 10 L/min for 90 seconds.
Treatment
The current treatment of choice is the intravenous administration of dantrolene, the only known antidote, discontinuation of triggering agents, and supportive therapy directed at correcting hyperthermia, acidosis, and organ dysfunction. Treatment must be instituted rapidly on clinical suspicion of the onset of malignant hyperthermia.
Dantrolene
Dantrolene is a muscle relaxant that appears to work directly on the ryanodine receptor to prevent the release of calcium. After the widespread introduction of treatment with dantrolene, the mortality of malignant hyperthermia fell from 80% in the 1960s to less than 5%. Dantrolene remains the only drug known to be effective in the treatment of MH. The recommended dose of dantrolene is 2.5 mg/kg, repeated as necessary. It is recommended that each hospital keep a minimum stock of 36 dantrolene vials (720 mg), sufficient for four doses in a 70-kg person.
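The stocking arithmetic above can be checked directly; a sketch assuming the conventional 20 mg dantrolene vial size:

```python
import math

DOSE_MG_PER_KG = 2.5   # recommended dantrolene dose per administration
VIAL_MG = 20           # conventional dantrolene vial size (assumed here)

def vials_per_dose(weight_kg):
    """Whole vials needed for one 2.5 mg/kg dose."""
    return math.ceil(weight_kg * DOSE_MG_PER_KG / VIAL_MG)

# For a 70-kg person: 2.5 mg/kg x 70 kg = 175 mg per dose -> 9 vials.
# Four doses require 36 vials (720 mg), matching the minimum stock
# recommendation above.
print(vials_per_dose(70), 4 * vials_per_dose(70) * VIAL_MG)  # 9 720
```

Rounding up to whole vials is deliberate: a partial vial cannot be stocked, so the ceiling gives the minimum number that covers the dose.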
Training
Fast recognition and treatment of MH rely on skills and procedures that are used with low frequency but carry high risk. Conducting MH crisis training for perioperative teams can identify system failures and improve response to these events. Simulation techniques, including the use of cognitive aids, have also been shown to improve communication in the clinical treatment of MH.
Prognosis
Prognosis is poor if this condition is not aggressively treated. In the 1970s, mortality was greater than 80%; however, with the current management mortality is now less than 5%.
Epidemiology
Malignant hyperthermia occurs in between 1 in 5,000 and 1 in 100,000 procedures involving general anaesthesia. The disorder occurs worldwide and affects all racial groups.
In the Manawatu region of New Zealand, up to 1 in 200 people are at high risk of the condition.
History
The syndrome was first recognized at Royal Melbourne Hospital, Australia, in an affected family by Denborough et al. in 1962. Denborough did much of his subsequent work on the condition at the Royal Canberra Hospital. Similar reactions were found in pigs. The efficacy of dantrolene as a treatment was discovered by South African anesthesiologist Gaisford Harrison and reported in a 1975 article published in the British Journal of Anaesthesia. After further animal studies corroborated the possible benefit from dantrolene, a 1982 study confirmed its usefulness in humans.

In 1981, the Malignant Hyperthermia Association of the United States (MHAUS) hotline was established to provide telephone support to clinical teams treating patients with suspected malignant hyperthermia. The hotline became active in 1982, and since that time MHAUS has provided continuous access to board-certified anesthesiologists to assist teams in treatment.
Other animals
Other animals, including certain pig breeds, dogs, and horses, are susceptible to malignant hyperthermia.

In dogs, its inheritance is autosomal dominant. The syndrome has been reported in Pointers, Greyhounds, Labrador Retrievers, Saint Bernards, Springer Spaniels, Bichon Frises, Golden Retrievers, and Border Collies. In pigs, its inheritance is autosomal recessive. In horses, its inheritance is autosomal dominant and is most associated with the American Quarter Horse, although it can occur in other breeds.
Research
Azumolene is a 30-fold more water-soluble analog of dantrolene that also works to decrease the release of intracellular calcium by its action on the ryanodine receptor. In MH-susceptible swine, azumolene was as potent as dantrolene. It has yet to be studied in vivo in humans, but may present a suitable alternative to dantrolene in the treatment of MH.
References
External links
GeneReview/NIH/UW entry on Malignant Hyperthermia Susceptibility |
Breast pain | Breast pain is the symptom of discomfort in the breast. Pain that involves both breasts and which occurs repeatedly before the menstrual period is generally not serious. Pain that involves only one part of a breast is more concerning. It is particularly concerning if a hard mass or nipple discharge is also present.

Causes may be related to the menstrual cycle, birth control pills, hormone therapy, or psychiatric medication. Pain may also occur in those with large breasts, during menopause, and in early pregnancy. In about 2% of cases breast pain is related to breast cancer. Diagnosis involves examination, with medical imaging if only a specific part of the breast hurts.

In more than 75% of people the pain resolves without any specific treatment. Otherwise, treatments may include paracetamol or NSAIDs. A well-fitting bra may also help. In those with severe pain, tamoxifen or danazol may be used. About 70% of women have breast pain at some point in time. Breast pain is one of the most common breast symptoms, along with breast masses and nipple discharge.
Causes
Cyclical breast pain is often associated with fibrocystic breast changes or duct ectasia and is thought to be caused by changes in the prolactin response to thyrotropin. Some degree of cyclical breast tenderness is normal in the menstrual cycle and is usually associated with menstruation and/or premenstrual syndrome (PMS).

Noncyclical breast pain has various causes and is harder to diagnose; frequently the root cause is outside the breast. Some degree of non-cyclical breast tenderness can normally be present due to hormonal changes in puberty (both in girls and boys), in menopause, and during pregnancy. After pregnancy, breast pain can be caused by breastfeeding. Other causes of non-cyclical breast pain include alcoholism with liver damage (likely due to abnormal steroid metabolism), mastitis, and medications such as digitalis, methyldopa (an antihypertensive), spironolactone, certain diuretics, oxymetholone (an anabolic steroid), and chlorpromazine (a typical antipsychotic). Also, shingles can cause a painful blistering rash on the skin of the breasts.
Breast cancer
Some women who have pain in one or both breasts may fear breast cancer. However, breast pain is not a common symptom of cancer. The great majority of breast cancer cases do not present with symptoms of pain, though breast pain in older women is more likely to be associated with cancer.
Diagnosis
Diagnosis involves breast examination, with medical imaging if only a specific part of the breast hurts. Medical imaging by ultrasound is recommended for all ages, while in those over 30 it is recommended together with mammography. Ruling out other possible causes of the pain is one way to identify its source.
Certain medications can also be associated with breast pain.
Diagnostic testing can be useful. Typical tests used are mammogram, excisional biopsy for solid lumps, fine-needle aspiration and biopsy, pregnancy test, ultrasonography, and magnetic resonance imaging (MRI).
Treatment
In more than 75% of people the pain resolves without any specific treatment. Otherwise treatments may include paracetamol or NSAIDs. A well-fitting bra may also help. In those with severe pain, tamoxifen or danazol may be used. Bromocriptine may be used as well. Spironolactone, low-dose oral contraceptives, and low-dose estrogen have helped to relieve pain. Topical anti-inflammatory medications can be used for localized pain. Vitamin E is not effective in relieving pain, nor is evening primrose oil. Vitamin B6 and vitamin A have not been consistently found to be beneficial. Flaxseed has shown some activity in the treatment of cyclic mastalgia. Pain may be relieved by the use of nonsteroidal anti-inflammatory drugs or, for more severe localized pain, by local anaesthetic. Pain may be relieved by reassurance that it does not signal a serious underlying problem, and an active lifestyle can also effect an improvement. Explaining that the pain is real but not necessarily caused by disease can help patients understand the problem. Counseling can also help to describe how the pain varies during the monthly cycle. Women on hormone replacement therapy may benefit from a dose adjustment. Another non-pharmacological measure that may help relieve symptoms is good bra support. Breasts change during adolescence and menopause, and refitting may be beneficial. Applying heat and/or ice can bring relief. Dietary changes may also help with the pain. Methylxanthines can be eliminated from the diet to see if a sensitivity is present. Some clinicians recommend a reduction in salt, though no evidence supports this practice.
See also
Galactagogue
Mammoplasia
Pain management
References
External links |
Megaloblastic anemia | Megaloblastic anemia is a type of macrocytic anemia. An anemia is a red blood cell defect that can lead to an undersupply of oxygen. Megaloblastic anemia results from inhibition of DNA synthesis during red blood cell production. When DNA synthesis is impaired, the cell cycle cannot progress from the G2 growth stage to the mitosis (M) stage. This leads to continuing cell growth without division, which presents as macrocytosis.
Megaloblastic anemia has a rather slow onset, especially when compared to that of other anemias.
The defect in red cell DNA synthesis is most often due to hypovitaminosis, specifically vitamin B12 deficiency or folate deficiency. Loss of micronutrients may also be a cause.
Megaloblastic anemia not due to hypovitaminosis may be caused by antimetabolites that poison DNA production directly, such as some chemotherapeutic or antimicrobial agents (for example azathioprine or trimethoprim).
The pathological state of megaloblastosis is characterized by many large immature and dysfunctional red blood cells (megaloblasts) in the bone marrow and also by hypersegmented neutrophils (defined as the presence of neutrophils with six or more lobes or the presence of more than 3% of neutrophils with at least five lobes). These hypersegmented neutrophils can be detected in the peripheral blood (using a diagnostic smear of a blood sample).
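The numeric criterion above can be expressed as a small helper. This is an illustrative sketch only; the function name and input format are assumptions, not part of any clinical software:

```python
def is_hypersegmented(lobe_counts):
    """Apply the criterion quoted above: any neutrophil with six or
    more lobes, or more than 3% of neutrophils with at least five
    lobes. `lobe_counts` is a list of per-neutrophil lobe counts
    from a smear. Illustrative sketch only, not a clinical tool."""
    if not lobe_counts:
        return False
    # Any single neutrophil with >= 6 lobes satisfies the criterion.
    if any(n >= 6 for n in lobe_counts):
        return True
    # Otherwise, more than 3% of neutrophils with >= 5 lobes.
    five_plus = sum(1 for n in lobe_counts if n >= 5)
    return five_plus / len(lobe_counts) > 0.03
```

For example, a count of `[3, 3, 4, 6]` meets the criterion via the six-lobed cell, while a single five-lobed cell among 100 neutrophils (1%) does not.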
Causes
Vitamin B12 deficiency:
Achlorhydria-induced malabsorption
Deficient intake
Deficient intrinsic factor, a molecule produced by cells in the stomach that is required for B12 absorption (pernicious anemia or gastrectomy)
Coeliac disease
Biological competition for vitamin B12 by diverticulosis, fistula, intestinal anastomosis, or infection by the marine parasite Diphyllobothrium latum (fish tapeworm)
Selective vitamin B12 malabsorption (congenital—juvenile megaloblastic anemia 1—and drug-induced)
Chronic pancreatitis
Ileal resection and bypass
Nitrous oxide anesthesia (usually requires repeated instances).
Folate deficiency:
Alcoholism
Deficient intake
Increased needs: pregnancy, infant, rapid cellular proliferation, and cirrhosis
Malabsorption (congenital and drug-induced)
Intestinal and jejunal resection
(Indirect) Deficiency of thiamine and of factors (e.g., enzymes) involved in folate metabolism.
Combined Deficiency: vitamin B12 & folate.
Inherited Pyrimidine Synthesis Disorders: Orotic aciduria
Inherited DNA Synthesis Disorders
Toxins and Drugs:
Folic acid antagonists (methotrexate)
Purine synthesis antagonists (6-mercaptopurine, azathioprine)
Pyrimidine antagonists (cytarabine)
Phenytoin
Nitrous Oxide
Erythroleukemia
Inborn genetic mutations of the Methionine synthase gene
Di Guglielmo's syndrome
Congenital dyserythropoietic anemia
Copper deficiency resulting from an excess of zinc from unusually high oral consumption of zinc-containing denture-fixation creams has been found to be a cause.
Pathophysiology
There is a defect in DNA synthesis in the rapidly dividing cells and to a lesser extent, RNA and protein synthesis are also impaired. Therefore, unbalanced cell proliferation and impaired cell division occur as a result of arrested nuclear maturation so the cells show nuclear-cytoplasmic asynchrony.
In the bone marrow, most megaloblasts are destroyed prior to entering the peripheral blood (intramedullary hemolysis). Some can escape the bone marrow (macrocytes) to peripheral blood but they are destroyed by the reticulo-endothelial system (extramedullary hemolysis).
Diagnosis
The gold standard for the diagnosis of Vitamin B12 deficiency is a low blood level of Vitamin B12. A low level of blood Vitamin B12 is a finding that normally can and should be treated by injections, supplementation, or dietary or lifestyle advice, but it is not a diagnosis. Hypovitaminosis B12 can result from a number of mechanisms, including those listed above. For determination of cause, further patient history, testing, and empirical therapy may be clinically indicated.
A measurement of methylmalonic acid (methylmalonate) can provide an indirect method for partially differentiating Vitamin B12 and folate deficiencies. The level of methylmalonic acid is not elevated in folic acid deficiency. Direct measurement of blood cobalamin remains the gold standard because the test for elevated methylmalonic acid is not specific enough. Vitamin B12 is one necessary prosthetic group to the enzyme methylmalonyl-coenzyme A mutase. Vitamin B12 deficiency is but one among the conditions that can lead to dysfunction of this enzyme and a buildup of its substrate, methylmalonic acid, the elevated level of which can be detected in the urine and blood.
Due to the lack of available radioactive Vitamin B12, the Schilling test is now largely a historical artifact. The Schilling test was performed in the past to help determine the nature of the vitamin B12 deficiency. An advantage of the Schilling test was that it often included Vitamin B12 with intrinsic factor.
Blood findings
The blood film can point towards vitamin deficiency:
Decreased red blood cell (RBC) count and hemoglobin levels
Increased mean corpuscular volume (MCV, >100 fL) and mean corpuscular hemoglobin (MCH)
Normal mean corpuscular hemoglobin concentration (MCHC, 32–36 g/dL)
Decreased reticulocyte count due to destruction of fragile and abnormal megaloblastic erythroid precursors.
The platelet count may be reduced.
Neutrophil granulocytes may show multisegmented nuclei ("senile neutrophil"). This is thought to be due to decreased production and a compensatory prolonged lifespan for circulating neutrophils, which increase numbers of nuclear segments with age.
Anisocytosis (increased variation in RBC size) and poikilocytosis (abnormally shaped RBCs).
Macrocytes (larger than normal RBCs) are present.
Ovalocytes (oval-shaped RBCs) are present.
Howell-Jolly bodies (chromosomal remnants) may also be present. Blood chemistries will also show:
An increased lactic acid dehydrogenase (LDH) level. The isozyme is LDH-2 which is typical of the serum and hematopoietic cells.
Increased homocysteine and methylmalonic acid in Vitamin B12 deficiency
Increased homocysteine in folate deficiency.
Normal levels of both methylmalonic acid and total homocysteine rule out clinically significant cobalamin deficiency with virtual certainty. Bone marrow (not normally checked in a patient suspected of megaloblastic anemia) shows megaloblastic hyperplasia.
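The differential pattern described above (methylmalonic acid elevated only in B12 deficiency, homocysteine elevated in both, both normal ruling out cobalamin deficiency) can be sketched as a simple decision rule. This is illustrative only, not a clinical tool; the function name and boolean inputs are assumptions:

```python
def differentiate_deficiency(mma_elevated, homocysteine_elevated):
    """Decision rule from the text: methylmalonic acid (MMA) rises in
    vitamin B12 deficiency but not in folate deficiency; homocysteine
    rises in both; normal levels of both rule out clinically
    significant cobalamin deficiency. Illustrative sketch only."""
    if not mma_elevated and not homocysteine_elevated:
        return "clinically significant cobalamin deficiency ruled out"
    if mma_elevated:
        return "consistent with vitamin B12 deficiency"
    return "consistent with folate deficiency"
```

So isolated homocysteine elevation points toward folate deficiency, while elevation of both markers points toward B12 deficiency.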
See also
List of circulatory system conditions
List of hematologic conditions
References
External links
GeneReview/NCBI/NIH/UW entry on Thiamine-Responsive Megaloblastic Anemia Syndrome
Rare Anemias Foundation |
Mercury poisoning | Mercury poisoning is a type of metal poisoning due to exposure to mercury. Symptoms depend upon the type, dose, method, and duration of exposure. They may include muscle weakness, poor coordination, numbness in the hands and feet, skin rashes, anxiety, memory problems, trouble speaking, trouble hearing, or trouble seeing. High-level exposure to methylmercury is known as Minamata disease. Methylmercury exposure in children may result in acrodynia (pink disease) in which the skin becomes pink and peels. Long-term complications may include kidney problems and decreased intelligence. The effects of long-term low-dose exposure to methylmercury are unclear. Forms of mercury exposure include metal, vapor, salt, and organic compound. Most exposure is from eating fish, amalgam-based dental fillings, or exposure at a workplace. In fish, those higher up in the food chain generally have higher levels of mercury, a process known as biomagnification. Less commonly, poisoning may occur as a method of attempted suicide. Human activities that release mercury into the environment include the burning of coal and mining of gold. Tests of the blood, urine, and hair for mercury are available but do not relate well to the amount in the body. Prevention includes eating a diet low in mercury, removing mercury from medical and other devices, proper disposal of mercury, and not mining further mercury. In those with acute poisoning from inorganic mercury salts, chelation with either dimercaptosuccinic acid (DMSA) or dimercaptopropane sulfonate (DMPS) appears to improve outcomes if given within a few hours of exposure. Chelation for those with long-term exposure is of unclear benefit. In certain communities that survive on fishing, rates of mercury poisoning among children have been as high as 1.7 per 100.
Signs and symptoms
Common symptoms of mercury poisoning include peripheral neuropathy, presenting as paresthesia or itching, burning, pain, or even a sensation that resembles small insects crawling on or under the skin (formication); skin discoloration (pink cheeks, fingertips and toes); swelling; and desquamation (shedding or peeling of skin). Mercury irreversibly inhibits selenium-dependent enzymes (see below) and may also inactivate S-adenosyl-methionine, which is necessary for catecholamine catabolism by catechol-O-methyl transferase. Due to the body's inability to degrade catecholamines (e.g. adrenaline), a person with mercury poisoning may experience profuse sweating, tachycardia (persistently faster-than-normal heart beat), increased salivation, and hypertension (high blood pressure). Affected children may show red cheeks, nose and lips, loss of hair, teeth, and nails, transient rashes, hypotonia (muscle weakness), and increased sensitivity to light. Other symptoms may include kidney dysfunction (e.g. Fanconi syndrome) or neuropsychiatric symptoms such as emotional lability, memory impairment, or insomnia. Thus, the clinical presentation may resemble pheochromocytoma or Kawasaki disease. Desquamation (skin peeling) can occur with severe mercury poisoning acquired by handling elemental mercury.
Causes
Consumption of fish containing mercury is by far the most significant source of ingestion-related mercury exposure in humans, although plants and livestock also contain mercury due to bioconcentration of organic mercury from seawater, freshwater, marine and lacustrine sediments, soils, and atmosphere, and due to biomagnification by ingesting other mercury-containing organisms. Exposure to mercury can occur from breathing contaminated air, from eating foods that have acquired mercury residues during processing, from exposure to mercury vapor in mercury amalgam dental restorations, and from improper use or disposal of mercury and mercury-containing objects, for example, after spills of elemental mercury or improper disposal of fluorescent lamps. All of these, except elemental liquid mercury, produce toxicity or death with less than a gram. Mercury's zero oxidation state (Hg0) exists as vapor or as liquid metal, its mercurous state (Hg+) exists as inorganic salts, and its mercuric state (Hg2+) may form either inorganic salts or organomercury compounds. Consumption of whale and dolphin meat, as is the practice in Japan, is a source of high levels of mercury poisoning. Tetsuya Endo, a professor at the Health Sciences University of Hokkaido, has tested whale meat purchased in the whaling town of Taiji and found mercury levels more than 20 times the acceptable Japanese standard. Human-generated sources, such as coal-burning power plants, emit about half of atmospheric mercury, with natural sources such as volcanoes responsible for the remainder. A 2021 publication investigating the mercury distribution in European soils found that high mercury concentrations occur close to abandoned mines [Almadén (Castilla-La Mancha, Spain), Mt. Amiata (Italy), Idrija (Slovenia) and Rudnany (Slovakia)] and coal-fired power plants. An estimated two-thirds of human-generated mercury comes from stationary combustion, mostly of coal.
Other important human-generated sources include gold production, nonferrous metal production, cement production, waste disposal, human crematoria, caustic soda production, pig iron and steel production, mercury production (mostly for batteries), and biomass burning. Workers in small independent gold-mining operations are at higher risk of mercury poisoning because of crude processing methods. Such is the danger for the galamsey in Ghana and similar workers known as orpailleurs in neighboring francophone countries. While no official government estimates of the labor force have been made, observers believe 20,000–50,000 work as galamseys in Ghana, a figure including many women, who work as porters. Similar problems have been reported amongst the gold miners of Indonesia. Some mercury compounds, especially organomercury compounds, can also be readily absorbed through direct skin contact. Mercury and its compounds are commonly used in chemical laboratories, hospitals, dental clinics, and facilities involved in the production of items such as fluorescent light bulbs, batteries, and explosives. Many traditional medicines, including ones used in Ayurvedic medicine and Traditional Chinese medicine, contain mercury and other heavy metals.
Sources
Compounds of mercury tend to be much more toxic than either the elemental form or the salts. These compounds have been implicated in causing brain and liver damage. The most dangerous mercury compound, dimethylmercury, is so toxic that even a few microliters spilled on the skin, or even on a latex glove, can cause death.
Methylmercury and related organomercury compounds
Methylmercury is the major source of organic mercury for all individuals. Due to bioaccumulation it works its way up through the food web and thus biomagnifies, resulting in high concentrations among populations of some species. Top predatory fish, such as tuna or swordfish, are usually of greater concern than smaller species. The US FDA and the EPA advise women of child-bearing age, nursing mothers, and young children to completely avoid swordfish, shark, king mackerel and tilefish from the Gulf of Mexico, and to limit consumption of albacore ("white") tuna to no more than 170 g (6 oz) per week, and of all other fish and shellfish to no more than 340 g (12 oz) per week. A 2006 review of the risks and benefits of fish consumption found, for adults, the benefits of one to two servings of fish per week outweigh the risks, even (except for a few fish species) for women of childbearing age, and that avoidance of fish consumption could result in significant excess coronary heart disease deaths and suboptimal neural development in children. Because the process of mercury-dependent sequestration of selenium is slow, the period between exposure to methylmercury and the appearance of symptoms in adult poisoning cases tends to be extended. The longest recorded latent period is five months after a single exposure, in the Dartmouth case (see History); other latent periods in the range of weeks to months have also been reported. When the first symptom appears, typically paresthesia (a tingling or numbness in the skin), it is followed rapidly by more severe effects, sometimes ending in coma and death. The toxic damage appears to be determined by the peak value of mercury, not the length of the exposure. Methylmercury exposure during rodent gestation, a developmental period that approximately models human neural development during the first two trimesters of gestation, has long-lasting behavioral consequences that appear in adulthood and, in some cases, may not appear until aging.
Prefrontal cortex and dopamine neurotransmission could be especially sensitive to even subtle gestational methylmercury exposure, suggesting that public health assessments of methylmercury based on intellectual performance may underestimate its impact on public health.
Ethylmercury is a breakdown product of the antibacterial agent ethylmercurithiosalicylate, which has been used as a topical antiseptic and a vaccine preservative (further discussed under Thiomersal below). Its characteristics have not been studied as extensively as those of methylmercury. It is cleared from the blood much more rapidly, with a half-life of seven to ten days, and it is metabolized much more quickly than methylmercury. It is presumed not to have methylmercury's ability to cross the blood–brain barrier via a transporter, but instead relies on simple diffusion to enter the brain. Other exposure sources of organic mercury include phenylmercuric acetate and phenylmercuric nitrate. These compounds were used in indoor latex paints for their antimildew properties, but were removed in 1990 because of cases of toxicity.
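The FDA/EPA weekly limits quoted earlier in this section (avoid swordfish, shark, king mackerel, and tilefish; albacore tuna at most 170 g/week; all other fish and shellfish at most 340 g/week, for the at-risk groups named) can be expressed as a simple check. This is an illustrative sketch, not dietary advice; the function name and input format are assumptions:

```python
# Species the advisory says at-risk groups should avoid entirely.
AVOID = {"swordfish", "shark", "king mackerel", "tilefish"}

def within_advisory(weekly_grams):
    """Check a week's planned fish intake (dict: species -> grams)
    against the FDA/EPA limits quoted in the text for at-risk groups.
    Returns (ok, reasons). Illustrative sketch only."""
    reasons = []
    for species in weekly_grams:
        if species in AVOID:
            reasons.append(f"avoid {species} entirely")
    # Albacore ("white") tuna: at most 170 g (6 oz) per week.
    if weekly_grams.get("albacore tuna", 0) > 170:
        reasons.append("albacore tuna over 170 g/week")
    # All other fish and shellfish: at most 340 g (12 oz) per week.
    other = sum(g for s, g in weekly_grams.items()
                if s not in AVOID and s != "albacore tuna")
    if other > 340:
        reasons.append("other fish/shellfish over 340 g/week")
    return (not reasons, reasons)
```

For example, 150 g of albacore tuna plus 300 g of salmon passes, while any amount of shark fails.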
Inorganic mercury compounds
Mercury occurs as salts such as mercuric chloride (HgCl2) and mercurous chloride (Hg2Cl2), the latter also known as calomel. Because they are more soluble in water, mercuric salts are usually more acutely toxic than mercurous salts. Their higher solubility lets them be more readily absorbed from the gastrointestinal tract. Mercury salts affect primarily the gastrointestinal tract and the kidneys, and can cause severe kidney damage; however, as they cannot cross the blood–brain barrier easily, these salts inflict little neurological damage without continuous or heavy exposure. Mercuric cyanide (Hg(CN)2) is a particularly toxic mercury compound that has been used in murders, as it contains not only mercury but also cyanide, leading to simultaneous cyanide poisoning. The drug n-acetyl penicillamine has been used to treat mercury poisoning with limited success.
Elemental mercury
Quicksilver (liquid metallic mercury) is poorly absorbed by ingestion and skin contact. Its vapor is the most hazardous form. Animal data indicate less than 0.01% of ingested mercury is absorbed through the intact gastrointestinal tract, though this may not be true for individuals with ileus. Cases of systemic toxicity from accidental swallowing are rare, and attempted suicide via intravenous injection does not appear to result in systemic toxicity, though it still causes damage by physically blocking blood vessels both at the site of injection and the lungs. Though not studied quantitatively, the physical properties of liquid elemental mercury limit its absorption through intact skin, and in light of its very low absorption rate from the gastrointestinal tract, skin absorption would not be high. Some mercury vapor is absorbed dermally, but uptake by this route is only about 1% of that by inhalation. In humans, approximately 80% of inhaled mercury vapor is absorbed via the respiratory tract, where it enters the circulatory system and is distributed throughout the body. Chronic exposure by inhalation, even at low concentrations in the range 0.7–42 μg/m3, has been shown in case–control studies to cause effects such as tremors, impaired cognitive skills, and sleep disturbance in workers. Acute inhalation of high concentrations causes a wide variety of cognitive, personality, sensory, and motor disturbances. The most prominent symptoms include tremors (initially affecting the hands and sometimes spreading to other parts of the body), emotional lability (characterized by irritability, excessive shyness, confidence loss, and nervousness), insomnia, memory loss, neuromuscular changes (weakness, muscle atrophy, muscle twitching), headaches, polyneuropathy (paresthesia, stocking-glove sensory loss, hyperactive tendon reflexes, slowed sensory and motor nerve conduction velocities), and performance deficits in tests of cognitive function.
Mechanism
The toxicity of mercury sources can be expected to depend on its nature, i.e., salts vs. organomercury compounds vs. elemental mercury.
The primary mechanism of mercury toxicity involves its irreversible inhibition of selenoenzymes, such as thioredoxin reductase (IC50 = 9 nM). Although it has many functions, thioredoxin reductase restores vitamins C and E, as well as a number of other important antioxidant molecules, back into their reduced forms, enabling them to counteract oxidative damage. Since the rate of oxygen consumption is particularly high in brain tissues, production of reactive oxygen species (ROS) is accentuated in these vital cells, making them particularly vulnerable to oxidative damage and especially dependent upon the antioxidant protection provided by selenoenzymes. High mercury exposures deplete the amount of cellular selenium available for the biosynthesis of thioredoxin reductase and other selenoenzymes that prevent and reverse oxidative damage, which, if the depletion is severe and long lasting, results in brain cell dysfunctions that can ultimately cause death.
Mercury in its various forms is particularly harmful to fetuses as an environmental toxin in pregnancy, as well as to infants. Women who have been exposed to mercury in substantial excess of dietary selenium intakes during pregnancy are at risk of giving birth to children with serious birth defects. Mercury exposures in excess of dietary selenium intakes in young children can have severe neurological consequences, preventing nerve sheaths from forming properly.
Exposure to methylmercury causes increased levels of antibodies sent to myelin basic protein (MBP), which is involved in the myelination of neurons, and glial fibrillary acidic protein (GFAP), which is essential to many functions in the central nervous system (CNS). This causes an autoimmmune response against MBP and GFAP and results in the degradation of neural myelin and general decline in function of the CNS.
Diagnosis
Diagnosis of elemental or inorganic mercury poisoning involves determining the history of exposure, physical findings, and an elevated body burden of mercury. Although whole-blood mercury concentrations are typically less than 6 μg/L, diets rich in fish can result in blood mercury concentrations higher than 200 μg/L; it is not that useful to measure these levels for suspected cases of elemental or inorganic poisoning because of mercury's short half-life in the blood. If the exposure is chronic, urine levels can be obtained; 24-hour collections are more reliable than spot collections. It is difficult or impossible to interpret urine samples of people undergoing chelation therapy, as the therapy itself increases mercury levels in the samples. Diagnosis of organic mercury poisoning differs in that whole-blood or hair analysis is more reliable than urinary mercury levels.
Prevention
Mercury poisoning can be prevented or minimized by eliminating or reducing exposure to mercury and mercury compounds. To that end, many governments and private groups have made efforts to heavily regulate the use of mercury, or to issue advisories about the use of mercury. Most countries have signed the Minamata Convention on Mercury.
The export from the European Union of mercury and some mercury compounds has been prohibited since 15 March 2010. The European Union has banned most uses of mercury. Mercury is allowed for fluorescent light bulbs because of pressure from countries such as Germany, the Netherlands and Hungary, which are connected to the main producers of fluorescent light bulbs: General Electric, Philips and Osram.
The United States Environmental Protection Agency (EPA) issued recommendations in 2004 regarding exposure to mercury in fish and shellfish. The EPA also developed the "Fish Kids" awareness campaign for children and young adults on account of the greater impact of mercury exposure to that population.
Cleaning spilled mercury
Mercury thermometers and mercury light bulbs are not as common as they used to be, and the amount of mercury they contain is unlikely to be a health concern if handled carefully. However, broken items still require careful cleanup, as mercury can be hard to collect and it is easy to accidentally create a much larger exposure problem. If available, powdered sulfur may be applied to the spill, in order to create a solid compound that is more easily removed from surfaces than liquid mercury.
Treatment
Identifying and removing the source of the mercury is crucial. Decontamination requires removal of clothes, washing skin with soap and water, and flushing the eyes with saline solution as needed.
Chelation therapy
Chelation therapy for acute inorganic mercury poisoning, a formerly common method, was done with DMSA, 2,3-dimercapto-1-propanesulfonic acid (DMPS), D-penicillamine (DPCN), or dimercaprol (BAL). Only DMSA is FDA-approved for use in children for treating mercury poisoning. However, several studies found no clear clinical benefit from DMSA treatment for poisoning due to mercury vapor. No chelator for methylmercury or ethylmercury is approved by the FDA; DMSA is the most frequently used for severe methylmercury poisoning, as it is given orally, has fewer side-effects, and has been found to be superior to BAL, DPCN, and DMPS. α-Lipoic acid (ALA) has been shown to be protective against acute mercury poisoning in several mammalian species when it is given soon after exposure; correct dosage is required, as inappropriate dosages increase toxicity. Although it has been hypothesized that frequent low dosages of ALA may have potential as a mercury chelator, studies in rats have been contradictory. Glutathione and N-acetylcysteine (NAC) are recommended by some physicians, but have been shown to increase mercury concentrations in the kidneys and the brain. Chelation therapy can be hazardous if administered incorrectly. In August 2005, an incorrect form of EDTA (edetate disodium) used for chelation therapy resulted in hypocalcemia, causing cardiac arrest that killed a five-year-old autistic boy.
Other
Experimental animal and epidemiological study findings have confirmed the interaction between selenium and methylmercury. Instead of causing a decline in neurodevelopmental outcomes, epidemiological studies have found that improved nutrient (i.e., omega-3 fatty acids, selenium, iodine, vitamin D) intakes as a result of ocean fish consumption during pregnancy improves maternal and fetal outcomes. For example, increased ocean fish consumption during pregnancy was associated with 4-6 point increases in child IQs.
Prognosis
Some of the toxic effects of mercury are partially or wholly reversible, provided specific therapy is able to restore selenium availability to normal before tissue damage from oxidation becomes too extensive. Autopsy findings point to a half-life of inorganic mercury in human brains of 27.4 years. Heavy or prolonged exposure can do irreversible damage, in particular in fetuses, infants, and young children. Young's syndrome is believed to be a long-term consequence of early childhood mercury poisoning. Mercuric chloride may cause cancer, as it has caused increases in several types of tumors in rats and mice, while methyl mercury has caused kidney tumors in male rats. The EPA has classified mercuric chloride and methyl mercury as possible human carcinogens (ATSDR, EPA).
Detection in biological fluids
Mercury may be measured in blood or urine to confirm a diagnosis of poisoning in hospitalized people or to assist in the forensic investigation in a case of fatal over dosage. Some analytical techniques are capable of distinguishing organic from inorganic forms of the metal. The concentrations in both fluids tend to reach high levels early after exposure to inorganic forms, while lower but very persistent levels are observed following exposure to elemental or organic mercury. Chelation therapy can cause a transient elevation of urine mercury levels.
History
Neolithic artists using cinnabar show signs of mercury poisoning.
Several Chinese emperors and other Chinese nobles are known or suspected to have died or been sickened by mercury poisoning after alchemists administered them "elixirs" to promote health, longevity, or immortality that contained either elemental mercury or (more commonly) cinnabar. Among the most prominent examples:
The first emperor of unified China, Qin Shi Huang, is reported to have died in 210 BC after ingesting mercury pills that were intended to give him eternal life.
Emperor Xuānzong of Tang, one of the emperors of the late Tang dynasty of China, was prescribed "cinnabar that had been treated and subdued by fire" to achieve immortality. Concerns that the prescription was having ill effects on the emperor's health and sanity were waved off by the imperial alchemists, who cited medical texts listing a number of the emperor's conditions (including itching, formication, swelling, and muscle weakness), today recognized as signs and symptoms of mercury poisoning, as evidence that the elixir was effectively treating the emperor's latent ailments. Xuānzong became irritable and paranoid, and he seems to have ultimately died in 859 from the poisoning.
The phrase mad as a hatter is likely a reference to mercury poisoning among milliners (so-called "mad hatter disease"), as mercury-based compounds were once used in the manufacture of felt hats in the 18th and 19th century. (The Mad Hatter character of Alice in Wonderland was, it is presumed, inspired by an eccentric furniture dealer named Theophilus Carter. Carter was not a victim of mad hatter disease although Lewis Carroll would have been familiar with the phenomenon of dementia that occurred among hatters.)
In 1810, two British ships, HMS Triumph and HMS Phipps, salvaged a large load of elemental mercury from a wrecked Spanish vessel near Cadiz, Spain. The bladders containing the mercury soon ruptured. The element spread about the ships in liquid and vapor forms. The sailors presented with neurologic compromises: tremor, paralysis, and excessive salivation, as well as tooth loss, skin problems, and pulmonary complaints. In 1823 William Burnet, MD, published a report on the effects of mercurial vapor. Triumph's surgeon, Henry Plowman, had concluded that the ailments had arisen from inhaling the mercurialized atmosphere. His treatment was to order the lower deck gun ports to be opened, when it was safe to do so; sleeping on the orlop was forbidden; and no men slept in the lower deck if they were at all symptomatic. Windsails were set to channel fresh air into the lower decks day and night.
Historically, gold-mercury amalgam was widely used in gilding, applied to the object and then heated to vaporize the mercury and deposit the gold, leading to numerous casualties among the workers. It is estimated that during the construction of Saint Isaac's Cathedral alone, 60 men died from the gilding of the main dome.
For years, including the early part of his presidency, Abraham Lincoln took a common medicine of his time called "blue mass", which contained significant amounts of mercury.
On September 5, 1920, silent movie actress Olive Thomas ingested mercury capsules dissolved in an alcoholic solution at the Hotel Ritz in Paris. There is still controversy over whether it was suicide, or whether she consumed the external preparation by mistake. Her husband, Jack Pickford (the brother of Mary Pickford), had syphilis, and mercury was used as a treatment for the venereal disease at the time. She died a few days later at the American Hospital in Neuilly.
An early scientific study of mercury poisoning was in 1923–1926 by the German inorganic chemist, Alfred Stock, who himself became poisoned, together with his colleagues, by breathing mercury vapor that was being released by his laboratory equipment—diffusion pumps, float valves, and manometers—all of which contained mercury, and also from mercury that had been accidentally spilt and remained in cracks in the linoleum floor covering. He published a number of papers on mercury poisoning, founded a committee in Berlin to study cases of possible mercury poisoning, and introduced the term micromercurialism.
The term Hunter-Russell syndrome derives from a study of mercury poisoning among workers in a seed-packaging factory in Norwich, England in the late 1930s who breathed methylmercury that was being used as a seed disinfectant and pesticide.
Outbreaks of methylmercury poisoning occurred in several places in Japan during the 1950s due to industrial discharges of mercury into rivers and coastal waters. The best-known instances were in Minamata and Niigata. In Minamata alone, more than 600 people died due to what became known as Minamata disease. More than 21,000 people filed claims with the Japanese government, of which almost 3000 became certified as having the disease. In 22 documented cases, pregnant women who consumed contaminated fish showed mild or no symptoms but gave birth to infants with severe developmental disabilities.
Generations of Grassy Narrows and Whitedog native people in Ontario, Canada were poisoned by consuming mercury-contaminated fish after Dryden Chemical Company discharged over 9,000 kilograms (20,000 lb) of mercury directly into the Wabigoon–English River system; the company continued to release mercury air pollution until 1975.
Widespread mercury poisoning occurred in rural Iraq in 1971–1972, when grain treated with a methylmercury-based fungicide that was intended for planting only was used by the rural population to make bread, causing at least 6530 cases of mercury poisoning and at least 459 deaths (see Basra poison grain disaster).
On August 14, 1996, Karen Wetterhahn, a chemistry professor working at Dartmouth College, spilled a small amount of dimethylmercury on her latex glove. She began experiencing the symptoms of mercury poisoning five months later and, despite aggressive chelation therapy, died a few months later from a mercury-induced neurodegenerative disease.
In April 2000, Alan Chmurny attempted to kill a former employee, Marta Bradley, by pouring mercury into the ventilation system of her car.
On March 19, 2008, Tony Winnett, 55, inhaled mercury vapors while trying to extract gold from computer parts (by using liquid mercury to separate gold from the rest of the alloy), and died ten days later. His Oklahoma residence became so contaminated that it had to be gutted.
In December 2008, actor Jeremy Piven was diagnosed with mercury poisoning possibly resulting from eating sushi twice a day for twenty years or from taking herbal remedies.
In India, a study by the Centre for Science and Environment and the Indian Institute of Toxicology Research has found that in the country's energy capital, Singrauli, mercury is slowly entering people's homes, food, water, and even blood.
The Minamata Convention on Mercury, an "international treaty designed to protect human health and the environment from anthropogenic releases and emission of mercury and mercury compounds", was signed on April 22, 2016 (Earth Day), the sixtieth anniversary of the discovery of Minamata disease.
Infantile acrodynia
Infantile acrodynia (also known as "calomel disease", "erythredemic polyneuropathy", and "pink disease") is a type of mercury poisoning in children characterized by pain and pink discoloration of the hands and feet. The word is derived from the Greek, where άκρο means end or extremity, and οδυνη means pain. Acrodynia resulted primarily from calomel in teething powders and decreased greatly after calomel was excluded from most teething powders in 1954. Acrodynia is difficult to diagnose; "it is most often postulated that the etiology of this syndrome is an idiosyncratic hypersensitivity reaction to mercury because of the lack of correlation with mercury levels, many of the symptoms resemble recognized mercury poisoning."
Medicine
Mercury was once prescribed as a purgative.
Many mercury-containing compounds were once used in medicines. These include calomel (mercurous chloride), and mercuric chloride.
Thiomersal
In 1999, the Centers for Disease Control and Prevention (CDC) and the American Academy of Pediatrics (AAP) asked vaccine makers to remove the organomercury compound thiomersal (spelled "thimerosal" in the US) from vaccines as quickly as possible, and thiomersal has been phased out of US and European vaccines, except for some preparations of influenza vaccine. The CDC and the AAP followed the precautionary principle, which assumes that there is no harm in exercising caution even if it later turns out to be unwarranted, but their 1999 action sparked confusion and controversy over whether thiomersal was a cause of autism. Since 2000, the thiomersal in child vaccines has been alleged to contribute to autism, and thousands of parents in the United States have pursued legal compensation from a federal fund. A 2004 Institute of Medicine (IOM) committee favored rejecting any causal relationship between thiomersal-containing vaccines and autism. Autism incidence rates increased steadily even after thiomersal was removed from childhood vaccines. Currently there is no accepted scientific evidence that exposure to thiomersal is a factor in causing autism.
Dental amalgam toxicity
Dental amalgam is a possible cause of low-level mercury poisoning due to its use in dental fillings. Discussion on the topic includes debates on whether amalgam should be used, with critics arguing that its toxic effects make it unsafe.
Cosmetics
Some skin whitening products contain the toxic mercury(II) chloride as the active ingredient. When applied, the chemical is readily absorbed through the skin into the bloodstream. The use of mercury in cosmetics is illegal in the United States. However, cosmetics containing mercury are often illegally imported. Following a certified case of mercury poisoning resulting from the use of an imported skin whitening product, the United States Food and Drug Administration warned against the use of such products. Symptoms of mercury poisoning have resulted from the use of various mercury-containing cosmetic products. The use of skin whitening products is especially popular amongst Asian women. In Hong Kong in 2002, two products were discovered to contain between 9,000 and 60,000 times the recommended dose.
Fluorescent lamps
Fluorescent lamps contain mercury, which is released when bulbs break. Mercury in bulbs is typically present as elemental mercury liquid, vapor, or both, since the liquid evaporates at ambient temperature. When broken indoors, bulbs may emit sufficient mercury vapor to present health concerns, and the U.S. Environmental Protection Agency recommends evacuating and airing out a room for at least 15 minutes after breaking a fluorescent light bulb. Breakage of multiple bulbs presents a greater concern. A 1987 report described a 23-month-old toddler who had anorexia, weight loss, irritability, profuse sweating, and peeling and redness of the fingers and toes. This case of acrodynia was traced to mercury exposure from a carton of 8-foot fluorescent light bulbs that had broken in a potting shed adjacent to the main nursery. The glass was cleaned up and discarded, but the child often played in the area.
Assassination attempts
Mercury has, allegedly, been used at various times to assassinate people. In 2008, Russian lawyer Karinna Moskalenko claimed to have been poisoned by mercury left in her car, while in 2010 journalists Viktor Kalashnikov and Marina Kalashnikova accused Russia's FSB of trying to poison them.
See also
References
External links
Hazardous Substances: Mercury at Curlie
Toxic Substances: Mercury at Curlie |
Mesothelioma | Mesothelioma is a type of cancer that develops from the thin layer of tissue that covers many of the internal organs (known as the mesothelium). The most common area affected is the lining of the lungs and chest wall. Less commonly, the lining of the abdomen and, rarely, the sac surrounding the heart or the sac surrounding the testis may be affected. Signs and symptoms of mesothelioma may include shortness of breath due to fluid around the lung, a swollen abdomen, chest wall pain, cough, feeling tired, and weight loss. These symptoms typically come on slowly. More than 80% of mesothelioma cases are caused by exposure to asbestos. The greater the exposure, the greater the risk. As of 2013, about 125 million people worldwide had been exposed to asbestos at work. High rates of disease occur in people who mine asbestos, produce products from asbestos, work with asbestos products, live with asbestos workers, or work in buildings containing asbestos. Asbestos exposure and the onset of cancer are generally separated by about 40 years. Washing the clothing of someone who worked with asbestos also increases the risk. Other risk factors include genetics and infection with the simian virus 40. The diagnosis may be suspected based on chest X-ray and CT scan findings, and is confirmed by either examining fluid produced by the cancer or by a tissue biopsy of the cancer. Prevention focuses on reducing exposure to asbestos. Treatment often includes surgery, radiation therapy, and chemotherapy. A procedure known as pleurodesis, which involves using substances such as talc to scar together the pleura, may be used to prevent more fluid from building up around the lungs. Chemotherapy often includes the medications cisplatin and pemetrexed. The percentage of people who survive five years following diagnosis is on average 8% in the United States. In 2015, about 60,800 people had mesothelioma, and 32,000 died from the disease.
Rates of mesothelioma vary in different areas of the world. Rates are higher in Australia and the United Kingdom, and lower in Japan. It occurs in about 3,000 people per year in the United States. It occurs more often in males than females. Rates of disease have increased since the 1950s. Diagnosis typically occurs after the age of 65, and most deaths occur around 70 years old. The disease was rare before the commercial use of asbestos.
Signs and symptoms
Lungs
Symptoms or signs of mesothelioma may not appear until 20 to 50 years (or more) after exposure to asbestos. Shortness of breath, cough, and pain in the chest due to an accumulation of fluid in the pleural space (pleural effusion) are often symptoms of pleural mesothelioma. Mesothelioma that affects the pleura can cause these signs and symptoms:
Chest wall pain
Pleural effusion, or fluid surrounding the lung
Shortness of breath – which could be due to a collapsed lung
Fatigue or anemia
Wheezing, hoarseness, or a cough
Blood in the sputum (fluid) coughed up (hemoptysis)
In severe cases, the person may have many tumor masses. The individual may develop a pneumothorax, or collapse of the lung. The disease may metastasize, or spread to other parts of the body.
Abdomen
The most common symptoms of peritoneal mesothelioma are abdominal swelling and pain due to ascites (a buildup of fluid in the abdominal cavity). Other features may include weight loss, fever, night sweats, poor appetite, vomiting, constipation, and umbilical hernia. If the cancer has spread beyond the mesothelium to other parts of the body, symptoms may include pain, trouble swallowing, or swelling of the neck or face. These symptoms may be caused by mesothelioma or by other, less serious conditions. Tumors that affect the abdominal cavity often do not cause symptoms until they are at a late stage. Symptoms include:
Abdominal pain
Ascites, or an abnormal buildup of fluid in the abdomen
A mass in the abdomen
Problems with bowel function
Weight loss
Heart
Pericardial mesothelioma is not well characterized, but observed cases have included cardiac symptoms, specifically constrictive pericarditis, heart failure, pulmonary embolism, and cardiac tamponade. They have also included nonspecific symptoms, including substernal chest pain, orthopnea (shortness of breath when lying flat), and cough. These symptoms are caused by the tumor encasing or infiltrating the heart.
End stage
In severe cases of the disease, the following signs and symptoms may be present:
Blood clots in the veins, which may cause thrombophlebitis
Disseminated intravascular coagulation, a disorder causing severe bleeding in many body organs
Jaundice, or yellowing of the eyes and skin
Low blood sugar
Pleural effusion
Pulmonary embolism, or blood clots in the arteries of the lungs
Severe ascites
If a mesothelioma forms metastases, these most commonly involve the liver, adrenal gland, kidney, or other lung.
Causes
Working with asbestos is the most common risk factor for mesothelioma. However, mesothelioma has been reported in some individuals without any known exposure to asbestos. Tentative evidence also raises concern about carbon nanotubes.
Asbestos
The incidence of mesothelioma has been found to be higher in populations living near naturally occurring asbestos. People can be exposed to naturally occurring asbestos in areas where mining or road construction is occurring, or when the asbestos-containing rock is naturally weathered. Another common route of exposure is through asbestos-containing soil, which is used to whitewash, plaster, and roof houses in Greece. In central Cappadocia, Turkey, mesothelioma was causing 50% of all deaths in three small villages: Tuzköy, Karain, and Sarıhıdır. Initially, this was attributed to erionite. Environmental exposure to asbestos has caused mesothelioma in places other than Turkey, including Corsica, Greece, Cyprus, China, and California. In the northern Greek mountain town of Metsovo, this exposure resulted in a mesothelioma incidence around 300 times higher than expected in asbestos-free populations, and was associated with very frequent pleural calcification known as Metsovo lung. The documented presence of asbestos fibers in water supplies and food products has fostered concerns about the possible impact of long-term and, as yet, unknown exposure of the general population to these fibers. Exposure to talc is also a risk factor for mesothelioma; exposure can affect those who live near talc mines, work in talc mines, or work in talc mills. In the United States, asbestos is considered the major cause of malignant mesothelioma and has been considered "indisputably" associated with the development of mesothelioma. Indeed, the relationship between asbestos and mesothelioma is so strong that many consider mesothelioma a "signal" or "sentinel" tumor. A history of asbestos exposure exists in most cases.
Pericardial mesothelioma may not be associated with asbestos exposure. Asbestos was known in antiquity, but it was not mined and widely used commercially until the late 19th century; its dangers were not unknown even then. Pliny the Elder, a Roman author and naturalist, observed that quarry slaves from asbestos mines tended to die young. Its use greatly increased during World War II. Since the early 1940s, millions of American workers have been exposed to asbestos dust. Initially, the risks associated with asbestos exposure were not publicly known. However, an increased risk of developing mesothelioma was later found among naval personnel (e.g., Navy, Marine Corps, and Coast Guard), shipyard workers, people who work in asbestos mines and mills, producers of asbestos products, workers in the heating and construction industries, and other tradespeople. Today, the official position of the U.S. Occupational Safety and Health Administration (OSHA) and the U.S. EPA is that the protections and "permissible exposure limits" required by U.S. regulations, while adequate to prevent most asbestos-related non-malignant disease, are not adequate to prevent or protect against asbestos-related cancers such as mesothelioma. Likewise, the British Government's Health and Safety Executive (HSE) states formally that any threshold for exposure to asbestos must be at a very low level, and it is widely agreed that if any such threshold does exist at all, it cannot currently be quantified. For practical purposes, therefore, the HSE assumes that no such "safe" threshold exists. Others have noted as well that there is no evidence of a threshold level below which there is no risk of mesothelioma. There appears to be a linear, dose–response relationship, with increasing dose producing increasing risk of disease. Nevertheless, mesothelioma may be related to brief, low-level, or indirect exposures to asbestos.
The dose necessary for effect appears to be lower for asbestos-induced mesothelioma than for pulmonary asbestosis or lung cancer. Again, there is no known safe level of exposure to asbestos as it relates to increased risk of mesothelioma.
The time from first exposure to onset of the disease is between 25 and 70 years. It is virtually never less than fifteen years and peaks at 30–40 years. The duration of asbestos exposure causing mesothelioma can be short; for example, cases of mesothelioma have been documented with only 1–3 months of exposure.
Occupational
Exposure to asbestos fibers has been recognized as an occupational health hazard since the early 20th century. Numerous epidemiological studies have associated occupational exposure to asbestos with the development of pleural plaques, diffuse pleural thickening, asbestosis, carcinoma of the lung and larynx, gastrointestinal tumors, and diffuse malignant mesothelioma of the pleura and peritoneum. Asbestos has been widely used in many industrial products, including cement, brake linings, gaskets, roof shingles, flooring products, textiles, and insulation. Commercial asbestos mining at Wittenoom, Western Australia, took place from 1937 to 1966. The first case of mesothelioma in the town occurred in 1960. The second case was in 1969, and new cases began to appear more frequently thereafter. The lag time between initial exposure to asbestos and the development of mesothelioma varied from 12 years 9 months up to 58 years. A cohort study of miners employed at the mine reported that 85 deaths attributable to mesothelioma had occurred by 1985. By 1994, 539 deaths due to mesothelioma had been reported in Western Australia. Occupational exposure to asbestos in the United States mainly occurs when people are maintaining buildings that already contain asbestos. Approximately 1.3 million US workers are exposed to asbestos annually; in 2002, an estimated 44,000 miners were potentially exposed to asbestos.
Paraoccupational secondary exposure
Family members and others living with asbestos workers have an increased risk of developing mesothelioma, and possibly other asbestos-related diseases. This risk may be the result of exposure to asbestos dust brought home on the clothing and hair of asbestos workers via washing a worker's clothes or coming into contact with asbestos-contaminated work clothing. To reduce the chance of exposing family members to asbestos fibres, asbestos workers are usually required to shower and change their clothing before leaving the workplace.
Asbestos in buildings
Many building materials used in both public and domestic premises prior to the banning of asbestos may contain asbestos. Those performing renovation works or DIY activities may expose themselves to asbestos dust. In the UK, use of chrysotile asbestos was banned at the end of 1999. Brown and blue asbestos were banned in the UK around 1985. Buildings built or renovated prior to these dates may contain asbestos materials. Therefore, it is a legal requirement that all who may come across asbestos in their day-to-day work receive relevant asbestos training.
Genetic disposition
In research carried out on a white American population in 2012, it was found that people with a germline mutation in their BAP1 gene are at higher risk of developing mesothelioma and uveal melanoma.
Erionite
Erionite is a zeolite mineral with similar properties to asbestos and is known to cause mesothelioma. Detailed epidemiological investigation has shown that erionite causes mesothelioma mostly in families with a genetic predisposition. Erionite is found in deposits in the Western United States, where it is used in gravel for road surfacing, and in Turkey, where it is used to construct homes. In Turkey, the United States, and Mexico, erionite has been associated with mesothelioma and has thus been designated a "known human carcinogen" by the US National Toxicology Program.
Other
In rare cases, mesothelioma has also been associated with irradiation of the chest or abdomen, intrapleural thorium dioxide (thorotrast) as a contrast medium, and inhalation of other fibrous silicates, such as erionite or talc. Some studies suggest that simian virus 40 (SV40) may act as a cofactor in the development of mesothelioma. This has been confirmed in animal studies, but studies in humans are inconclusive.
Pathophysiology
Systemic
The mesothelium consists of a single layer of flattened to cuboidal cells forming the epithelial lining of the serous cavities of the body, including the peritoneal, pericardial and pleural cavities. Deposition of asbestos fibers in the parenchyma of the lung may result in the penetration of the visceral pleura, from where the fiber can then be carried to the pleural surface, thus leading to the development of malignant mesothelial plaques. The processes leading to the development of peritoneal mesothelioma remain unresolved, although it has been proposed that asbestos fibers from the lung are transported to the abdomen and associated organs via the lymphatic system. Additionally, asbestos fibers may be deposited in the gut after ingestion of sputum contaminated with asbestos fibers. Pleural contamination with asbestos or other mineral fibers has been shown to cause cancer. Long, thin asbestos fibers (blue asbestos, amphibole fibers) are more potent carcinogens than "feathery fibers" (chrysotile or white asbestos fibers). However, there is now evidence that smaller particles may be more dangerous than the larger fibers: they remain suspended in the air, where they can be inhaled, and may penetrate more easily and deeper into the lungs. "We probably will find out a lot more about the health aspects of asbestos from [the World Trade Center attack], unfortunately," said Dr. Alan Fein, chief of pulmonary and critical-care medicine at North Shore-Long Island Jewish Health System. Mesothelioma development in rats has been demonstrated following intra-pleural inoculation of phosphorylated chrysotile fibers. It has been suggested that in humans, transport of fibers to the pleura is critical to the pathogenesis of mesothelioma. This is supported by the observed recruitment of significant numbers of macrophages and other cells of the immune system to localized lesions of accumulated asbestos fibers in the pleural and peritoneal cavities of rats.
These lesions continued to attract and accumulate macrophages as the disease progressed, and cellular changes within the lesion culminated in a morphologically malignant tumor. Experimental evidence suggests that asbestos acts as a complete carcinogen, with the development of mesothelioma occurring in sequential stages of initiation and promotion. The molecular mechanisms underlying the malignant transformation of normal mesothelial cells by asbestos fibers remain unclear despite the demonstration of its oncogenic capabilities. However, complete in vitro transformation of normal human mesothelial cells to a malignant phenotype following exposure to asbestos fibers has not yet been achieved. In general, asbestos fibers are thought to act through direct physical interactions with the cells of the mesothelium in conjunction with indirect effects following interaction with inflammatory cells such as macrophages.
Intracellular
Analysis of the interactions between asbestos fibers and DNA has shown that phagocytosed fibers are able to make contact with chromosomes, often adhering to the chromatin fibers or becoming entangled within the chromosome. This contact between the asbestos fiber and the chromosomes or structural proteins of the spindle apparatus can induce complex abnormalities. The most common abnormality is monosomy of chromosome 22. Other frequent abnormalities include structural rearrangement of the 1p, 3p, 9p and 6q chromosome arms. Common gene abnormalities in mesothelioma cell lines include deletion of the tumor suppressor genes:
Neurofibromatosis type 2 at 22q12
P16INK4A
P14ARF
Asbestos has also been shown to mediate the entry of foreign DNA into target cells. Incorporation of this foreign DNA may lead to mutations and oncogenesis by several possible mechanisms:
Inactivation of tumor suppressor genes
Activation of oncogenes (in tumor cells, these genes are often mutated or expressed at high levels)
Activation of proto-oncogenes due to incorporation of foreign DNA containing a promoter region
Activation of DNA repair enzymes, which may be prone to error
Activation of telomerase
Prevention of apoptosis
Several genes are commonly mutated in mesothelioma, and may be prognostic factors. These include epidermal growth factor receptor (EGFR) and C-Met, receptor tyrosine kinases which are overexpressed in many mesotheliomas. Some association has been found between EGFR and epithelioid histology, but no clear association has been found between EGFR overexpression and overall survival. Expression of AXL receptor tyrosine kinase is a negative prognostic factor. Expression of PDGFRB is a positive prognostic factor. In general, mesothelioma is characterized by loss of function in tumor suppressor genes, rather than by overexpression or gain of function in oncogenes. As an environmentally triggered malignancy, mesothelioma tumors have been found to be polyclonal in origin, as shown by an X-inactivation based assay performed on epithelioid and biphasic tumors obtained from female patients. These results suggest that an environmental factor, most likely asbestos exposure, may damage and transform a group of cells in the tissue, resulting in a population of tumor cells that are, albeit only slightly, genetically different.
Immune system
Asbestos fibers have been shown to alter the function and secretory properties of macrophages, ultimately creating conditions which favour the development of mesothelioma. Following asbestos phagocytosis, macrophages generate increased amounts of hydroxyl radicals, which are normal by-products of cellular anaerobic metabolism. However, these free radicals are also known clastogenic (chromosome-breaking)
and membrane-active agents thought to promote asbestos carcinogenicity. These oxidants can participate in the oncogenic process by directly and indirectly interacting with DNA, modifying membrane-associated cellular events, including oncogene activation and perturbation of cellular antioxidant defences. Asbestos may also possess immunosuppressive properties. For example, chrysotile fibres have been shown to depress the in vitro proliferation of phytohemagglutinin-stimulated peripheral blood lymphocytes, suppress natural killer cell lysis, and significantly reduce lymphokine-activated killer cell viability and recovery. Furthermore, genetic alterations in asbestos-activated macrophages may result in the release of potent mesothelial cell mitogens such as platelet-derived growth factor (PDGF) and transforming growth factor-β (TGF-β), which, in turn, may induce the chronic stimulation and proliferation of mesothelial cells after injury by asbestos fibres.
Diagnosis
Diagnosis of mesothelioma can be suspected with imaging but is confirmed with biopsy. It must be clinically and histologically differentiated from other pleural and pulmonary malignancies, including reactive pleural disease, primary lung carcinoma, pleural metastases of other cancers, and other primary pleural cancers.
Primary pericardial mesothelioma is often diagnosed after it has metastasized to lymph nodes or the lungs.
Imaging
Diagnosing mesothelioma is often difficult because the symptoms are similar to those of a number of other conditions. Diagnosis begins with a review of the patient's medical history. A history of exposure to asbestos may increase clinical suspicion for mesothelioma. A physical examination is performed, followed by chest X-ray and often lung function tests. The X-ray may reveal pleural thickening commonly seen after asbestos exposure, which increases suspicion of mesothelioma. A CT (or CAT) scan or an MRI is usually performed. If a large amount of fluid is present, abnormal cells may be detected by cytopathology if this fluid is aspirated with a syringe. For pleural fluid, this is done by thoracentesis or tube thoracostomy (chest tube); for ascites, with paracentesis or an ascitic drain; and for pericardial effusion, with pericardiocentesis. While the absence of malignant cells on cytology does not completely exclude mesothelioma, it makes it much less likely, especially if an alternative diagnosis can be made (e.g., tuberculosis, heart failure). However, with primary pericardial mesothelioma, pericardial fluid may not contain malignant cells, and a tissue biopsy is more useful in diagnosis. Diagnosis of malignant mesothelioma using conventional cytology is difficult, but immunohistochemistry has greatly enhanced its accuracy.
Biopsy
Generally, a biopsy is needed to confirm a diagnosis of malignant mesothelioma. A doctor removes a sample of tissue for examination under a microscope by a pathologist. A biopsy may be done in different ways, depending on where the abnormal area is located. If the cancer is in the chest, the doctor may perform a thoracoscopy. In this procedure, the doctor makes a small cut through the chest wall and puts a thin, lighted tube called a thoracoscope into the chest between two ribs. Thoracoscopy allows the doctor to look inside the chest and obtain tissue samples. Alternatively, the cardiothoracic surgeon might directly open the chest (thoracotomy). If the cancer is in the abdomen, the doctor may perform a laparoscopy. To obtain tissue for examination, the doctor makes a small incision in the abdomen and inserts a special instrument into the abdominal cavity. If these procedures do not yield enough tissue, an open surgical procedure may be necessary.
Immunochemistry
Immunohistochemical studies play an important role for the pathologist in differentiating malignant mesothelioma from neoplastic mimics, such as breast or lung cancer that has metastasized to the pleura. There are numerous tests and panels available, but no single test is perfect for distinguishing mesothelioma from carcinoma, or even benign from malignant disease. Positive markers indicate that mesothelioma is present; if other markers are positive, it may indicate another type of cancer, such as breast or lung adenocarcinoma. Calretinin is a particularly important marker in distinguishing mesothelioma from metastatic breast or lung cancer.
Subtypes
There are three main histological subtypes of malignant mesothelioma: epithelioid, sarcomatous, and biphasic. Epithelioid and biphasic mesothelioma make up approximately 75–95% of mesotheliomas and have been well characterized histologically, whereas sarcomatous mesothelioma has not been studied extensively. Most mesotheliomas express high levels of cytokeratin 5 regardless of subtype. Epithelioid mesothelioma is characterized by high levels of calretinin. Sarcomatous mesothelioma does not express high levels of calretinin. Other morphological subtypes have been described:
Desmoplastic
Clear cell
Deciduoid
Adenomatoid
Glandular
Mucohyaline
Cartilaginous and osseous metaplasia
Lymphohistiocytic
Differential diagnosis
Metastatic adenocarcinoma
Pleural sarcoma
Synovial sarcoma
Thymoma
Metastatic clear cell renal cell carcinoma
Metastatic osteosarcoma
Staging
Staging of mesothelioma is based on the recommendation by the International Mesothelioma Interest Group. TNM classification of the primary tumor, lymph node involvement, and distant metastasis is performed. Mesothelioma is staged Ia–IV (one-A to four) based on the TNM status.
Prevention
Mesothelioma can be prevented in most cases by avoiding exposure to asbestos. The US National Institute for Occupational Safety and Health maintains a recommended exposure limit of 0.1 asbestos fibers per cubic centimeter of air.
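The exposure limit above is stated per cubic centimeter; a quick unit conversion shows what it implies at larger scales. This is a rough sketch: the per-shift breathing volume is an illustrative assumption, not a NIOSH figure.

```python
# Illustrative unit conversion for the NIOSH recommended exposure limit (REL).
# The 10 m^3 breathed per 8-hour shift is a common rough assumption, not a NIOSH figure.

REL_FIBERS_PER_CC = 0.1          # NIOSH REL: 0.1 asbestos fibers per cubic centimeter
CC_PER_CUBIC_METER = 1_000_000   # 1 m^3 = 10^6 cm^3

fibers_per_m3 = REL_FIBERS_PER_CC * CC_PER_CUBIC_METER
print(fibers_per_m3)             # 100000.0 fibers per cubic meter

AIR_BREATHED_PER_SHIFT_M3 = 10   # rough assumption for an 8-hour working day
fibers_inhaled = fibers_per_m3 * AIR_BREATHED_PER_SHIFT_M3
print(fibers_inhaled)            # 1000000.0 fibers inhaled per shift at the limit
```

Even at the recommended limit, a worker breathing roughly 10 m³ of air per shift would inhale on the order of a million fibers per day, which is why the REL is a ceiling rather than a "safe" level.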
Screening
There is no universally agreed protocol for screening people who have been exposed to asbestos. Screening tests might diagnose mesothelioma earlier than conventional methods, thus improving the survival prospects for patients. The serum osteopontin level might be useful in screening asbestos-exposed people for mesothelioma. The level of soluble mesothelin-related protein is elevated in the serum of about 75% of patients at diagnosis, and it has been suggested that it may be useful for screening. Doctors have begun testing the Mesomark assay, which measures levels of soluble mesothelin-related proteins (SMRPs) released by mesothelioma cells.
Treatment
Mesothelioma is generally resistant to radiation and chemotherapy treatment. Long-term survival and cures are exceedingly rare. Treatment of malignant mesothelioma at earlier stages has a better prognosis. Clinical behavior of the malignancy is affected by several factors, including the continuous mesothelial surface of the pleural cavity, which favors local metastasis via exfoliated cells; invasion of underlying tissue and other organs within the pleural cavity; and the extremely long latency period between asbestos exposure and development of the disease. The histological subtype and the patient's age and health status also help predict prognosis. The epithelioid histology responds better to treatment and has a survival advantage over sarcomatoid histology. The effectiveness of radiotherapy compared to chemotherapy or surgery for malignant pleural mesothelioma is not known.
Surgery
Surgery by itself has proved disappointing. In one large series, the median survival with surgery (including extrapleural pneumonectomy) was only 11.7 months. However, research indicates varied success when surgery is used in combination with radiation and chemotherapy (Duke, 2008), or with one of these alone. A pleurectomy/decortication is the most common surgery, in which the lining of the chest is removed. Less common is an extrapleural pneumonectomy (EPP), in which the lung, the lining of the inside of the chest, the hemi-diaphragm and the pericardium are removed. In localized pericardial mesothelioma, pericardectomy can be curative; when the tumor has metastasized, pericardectomy is a palliative option. It is often not possible to remove the entire tumor.
Radiation
For patients with localized disease who can tolerate radical surgery, radiation can be given post-operatively as a consolidative treatment. The entire hemithorax is treated with radiation therapy, often given simultaneously with chemotherapy. Delivering radiation and chemotherapy after a radical surgery has led to extended life expectancy in selected patient populations, but it can also induce severe side-effects, including fatal pneumonitis. As part of a curative approach to mesothelioma, radiotherapy is commonly applied to the sites of chest drain insertion, in order to prevent growth of the tumor along the track in the chest wall. Although mesothelioma is generally resistant to curative treatment with radiotherapy alone, palliative treatment regimens are sometimes used to relieve symptoms arising from tumor growth, such as obstruction of a major blood vessel. Radiation therapy, when given alone with curative intent, has never been shown to improve survival from mesothelioma. The necessary radiation dose to treat mesothelioma that has not been surgically removed would be beyond human tolerance. Radiotherapy is of some use in pericardial mesothelioma.
Chemotherapy
Chemotherapy is the only treatment for mesothelioma that has been proven to improve survival in randomised controlled trials. The landmark study published in 2003 by Vogelzang and colleagues compared cisplatin chemotherapy alone with a combination of cisplatin and pemetrexed (brand name Alimta) in patients who had not previously received chemotherapy for malignant pleural mesothelioma and were not candidates for more aggressive
"curative" surgery. This trial was the first to report a survival advantage from chemotherapy in malignant pleural mesothelioma, showing a statistically significant improvement in median survival from 10 months in the patients treated with cisplatin alone to 13.3 months in patients treated with cisplatin in combination with pemetrexed who also received supplementation with folate and vitamin B12. Vitamin supplementation was given to most patients in the trial, and pemetrexed-related side effects were significantly less frequent in patients receiving pemetrexed when they also received daily oral folate 500 mcg and intramuscular vitamin B12 1000 mcg every 9 weeks, compared with patients receiving pemetrexed without vitamin supplementation. The objective response rate increased from 20% in the cisplatin group to 46% in the combination pemetrexed group. Some side effects, such as nausea and vomiting, stomatitis, and diarrhoea, were more common in the combination pemetrexed group but only affected a minority of patients, and overall the combination of pemetrexed and cisplatin was well tolerated when patients received vitamin supplementation; both quality of life and lung function tests improved in the combination pemetrexed group. In February 2004, the United States Food and Drug Administration (FDA) approved pemetrexed for treatment of malignant pleural mesothelioma. However, there are still unanswered questions about the optimal use of chemotherapy, including when to start treatment and the optimal number of cycles to give. Cisplatin and pemetrexed together give patients a median survival of 12.1 months. Cisplatin in combination with raltitrexed has shown an improvement in survival similar to that reported for pemetrexed in combination with cisplatin, but raltitrexed is no longer commercially available for this indication.
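As a quick sanity check, the trial's headline survival and response figures quoted above can be recomputed with plain arithmetic; this uses only the values stated in this section, not additional trial data.

```python
# Recomputing the headline figures from the Vogelzang trial as reported above.
median_cisplatin = 10.0       # months, cisplatin alone
median_combination = 13.3     # months, cisplatin + pemetrexed (+ vitamins)

absolute_gain = median_combination - median_cisplatin
relative_gain = absolute_gain / median_cisplatin
print(round(absolute_gain, 1))        # 3.3 months of additional median survival
print(round(relative_gain * 100))     # 33 percent relative improvement

response_cisplatin = 0.20             # objective response rate, cisplatin alone
response_combination = 0.46           # objective response rate, combination arm
print(round((response_combination - response_cisplatin) * 100))  # 26 percentage points
```

The roughly one-third relative gain in median survival, modest in absolute terms, is what made this the first regimen to show a statistically significant benefit in this disease.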
For patients unable to tolerate pemetrexed, cisplatin in combination with gemcitabine or vinorelbine is an alternative, or vinorelbine on its own, although a survival benefit has not been shown for these drugs. For patients in whom cisplatin cannot be used, carboplatin can be substituted, but non-randomised data have shown lower response rates and high rates of haematological toxicity for carboplatin-based combinations, albeit with similar survival figures to patients receiving cisplatin. Cisplatin in combination with pemetrexed disodium, folic acid, and vitamin B12 may also improve survival for people who are responding to chemotherapy. In January 2009, the United States FDA approved using conventional therapies such as surgery in combination with radiation and/or chemotherapy on stage I or II mesothelioma, after a nationwide study by Duke University found an almost 50-point increase in remission rates. In pericardial mesothelioma, chemotherapy – typically adriamycin or cisplatin – is primarily used to shrink the tumor and is not curative.
Immunotherapy
Treatment regimens involving immunotherapy have yielded variable results. For example, intrapleural inoculation of Bacillus Calmette-Guérin (BCG), in an attempt to boost the immune response, was found to be of no benefit to the patient (while it may benefit patients with bladder cancer). Mesothelioma cells proved susceptible to in vitro lysis by LAK cells following activation by interleukin-2 (IL-2), but patients undergoing this particular therapy experienced major side effects. Indeed, this trial was suspended in view of the unacceptably high levels of IL-2 toxicity and the severity of side effects such as fever and cachexia. Nonetheless, other trials involving interferon alpha have proved more encouraging, with 20% of patients experiencing a greater than 50% reduction in tumor mass combined with minimal side effects. In October 2020, the FDA approved the combination of nivolumab (Opdivo) with ipilimumab (Yervoy) for the first-line treatment of adults with malignant pleural mesothelioma (MPM) that cannot be removed by surgery. Nivolumab and ipilimumab are both monoclonal antibodies that, when combined, decrease tumor growth by enhancing T-cell function. The combination therapy was evaluated in a randomized, open-label trial in which participants who received nivolumab in combination with ipilimumab survived a median of 18.1 months, while participants who underwent chemotherapy survived a median of 14.1 months.
Hyperthermic intrathoracic chemotherapy
Hyperthermic intrathoracic chemotherapy is used in conjunction with surgery, including in patients with malignant pleural mesothelioma. The surgeon removes as much of the tumor as possible, followed by the direct administration of a chemotherapy agent, heated to between 40 and 48 °C, into the chest cavity. The fluid is perfused for 60 to 120 minutes and then drained. This allows high concentrations of selected drugs to reach the pleural cavity directly. Heating the chemotherapy treatment increases the penetration of the drugs into tissues. Also, heating itself damages the malignant cells more than the normal cells.
Multimodality therapy
Multimodal therapy, which includes a combined approach of surgery, radiation or photodynamic therapy, and chemotherapy, is not suggested for routine practice in treating malignant pleural mesothelioma. The effectiveness and safety of multimodal therapy are not clear (not enough research has been performed), and one clinical trial has suggested a possible increased risk of adverse effects. Large series examining multimodality treatment have only demonstrated modest improvement in survival (median survival 14.5 months and only 29.6% surviving 2 years). Reducing the bulk of the tumor with cytoreductive surgery is key to extending survival. Two surgeries have been developed: extrapleural pneumonectomy and pleurectomy/decortication. The indications for performing these operations are distinct. The choice of operation depends mainly on the size of the patient's tumor. This is an important consideration because tumor volume has been identified as a prognostic factor in mesothelioma. Pleurectomy/decortication spares the underlying lung and is performed in patients with early-stage disease when the intention is to remove all gross visible tumor (macroscopic complete resection), not simply palliation. Extrapleural pneumonectomy is a more extensive operation that involves resection of the parietal and visceral pleurae, underlying lung, ipsilateral (same side) diaphragm, and ipsilateral pericardium. This operation is indicated for a subset of patients with more advanced tumors who can tolerate a pneumonectomy.
Prognosis
Mesothelioma usually has a poor prognosis. Typical survival despite surgery is between 12 and 21 months, depending on the stage of disease at diagnosis, with about 7.5% of people surviving for 5 years. Women, young people, people with low-stage cancers, and people with epithelioid cancers have better prognoses. Negative prognostic factors include sarcomatoid or biphasic histology, high platelet counts (above 400,000), age over 50 years, white blood cell counts above 15.5, low glucose levels in the pleural fluid, low albumin levels, and high fibrinogen levels. Several markers are under investigation as prognostic factors, including nuclear grade and serum C-reactive protein. Long-term survival is rare. Pericardial mesothelioma has a 10-month median survival time. In peritoneal mesothelioma, high expression of WT-1 protein indicates a worse prognosis.
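For illustration only, the quantifiable negative prognostic factors listed above can be expressed as a simple checklist. This is a hypothetical sketch of the factors named in this section, not a validated clinical score; the function name and the units assumed for the thresholds are my own.

```python
# Purely illustrative tally of negative prognostic factors named in the text above;
# not a validated clinical score. Thresholds are taken directly from the section.

def negative_prognostic_factors(histology, platelets, age, wbc):
    """Return which of the text's negative factors apply.
    histology: string subtype; platelets and wbc assumed in thousands per
    microliter; age in years."""
    factors = []
    if histology in ("sarcomatoid", "biphasic"):
        factors.append("sarcomatoid or biphasic histology")
    if platelets > 400:
        factors.append("platelet count above 400,000")
    if age > 50:
        factors.append("age over 50 years")
    if wbc > 15.5:
        factors.append("white blood cell count above 15.5")
    return factors

print(negative_prognostic_factors("epithelioid", 350, 45, 12.0))  # []
print(len(negative_prognostic_factors("sarcomatoid", 450, 62, 16.0)))  # 4
```

The second call triggers all four checkable factors, matching the text's description of the worst-prognosis profile.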
Epidemiology
Although reported incidence rates have increased in the past 20 years, mesothelioma is still a relatively rare cancer. The incidence rate varies from one country to another, from a low rate of less than 1 per 1,000,000 in Tunisia and Morocco, to the highest rates in Britain, Australia and Belgium: 30 per 1,000,000 per year. For comparison, populations with high levels of smoking can have a lung cancer incidence of over 1,000 per 1,000,000. Incidence of malignant mesothelioma currently ranges from about 7 to 40 per 1,000,000 in industrialized Western nations, depending on the amount of asbestos exposure of the populations during the past several decades. Worldwide incidence is estimated at 1–6 per 1,000,000. Incidence of mesothelioma lags behind that of asbestosis due to the longer time it takes to develop; because of the cessation of asbestos use in developed countries, mesothelioma incidence there is expected to decrease. Incidence is expected to continue increasing in developing countries due to continuing use of asbestos. Mesothelioma occurs more often in men than in women, and risk increases with age, but the disease can appear in either men or women at any age. Approximately one fifth to one third of all mesotheliomas are peritoneal. Less than 5% of mesotheliomas are pericardial. The prevalence of pericardial mesothelioma is less than 0.002%; it is more common in men than women, and typically occurs in a person's 50s–70s. Between 1940 and 1979, approximately 27.5 million people were occupationally exposed to asbestos in the United States. Between 1973 and 1984, the incidence of pleural mesothelioma among Caucasian males increased 300%. From 1980 to the late 1990s, the death rate from mesothelioma in the USA increased from 2,000 per year to 3,000, with men four times more likely to acquire it than women.
More than 80% of mesotheliomas are caused by asbestos exposure.The incidence of peritoneal mesothelioma is 0.5–3.0 per million per year in men, and 0.2–2.0 per million per year in women.
UK
Mesothelioma accounts for less than 1% of all cancers diagnosed in the UK (around 2,600 people were diagnosed with the disease in 2011), and it is the seventeenth most common cause of cancer death (around 2,400 people died in 2012).
History
The connection between asbestos exposure and mesothelioma was discovered in the 1970s. In the United States, asbestos manufacture stopped in 2002. Asbestos exposure thus shifted from workers in asbestos textile mills, friction product manufacturing, cement pipe fabrication, and insulation manufacture and installation to maintenance workers in asbestos-containing buildings.
Society and culture
Notable cases
Mesothelioma, though rare, has had a number of notable patients:
Malcolm McLaren, musician and manager of the punk rock band the Sex Pistols, was diagnosed with peritoneal mesothelioma in October 2009 and died on 8 April 2010 in Switzerland.
Steve McQueen, American actor, was diagnosed with peritoneal mesothelioma on December 22, 1979. He was not offered surgery or chemotherapy because doctors felt the cancer was too advanced. McQueen subsequently sought alternative treatments at clinics in Mexico. He died of a heart attack on November 7, 1980, in Juárez, Mexico, following cancer surgery. He may have been exposed to asbestos while serving with the U.S. Marines as a young adult—asbestos was then commonly used to insulate ships' piping—or from its use as an insulating material in automobile racing suits (McQueen was an avid racing driver and fan).
Mickie Most, record producer, died of peritoneal mesothelioma in May 2003; however, it has been questioned whether this was due to asbestos exposure.
Warren Zevon, American musician, was diagnosed with pleural mesothelioma in 2002, and died on September 7, 2003. It is believed that this was caused through childhood exposure to asbestos insulation in the attic of his father's shop.
David Martin, Australian sailor and politician, died on 10 August 1990 of pleural mesothelioma. It is believed that this was caused by his exposure to asbestos on military ships during his career in the Royal Australian Navy.
Paul Kraus, diagnosed in 1997, is considered the longest-surviving mesothelioma patient in the world (as of 2017).
F. W. De Klerk, South African retired politician, was diagnosed with mesothelioma on March 19, 2021, and died in November 2021.
Paul Gleason, American actor, died on May 27, 2006, just a few months after diagnosis. Although life expectancy with this disease is typically limited, there are notable survivors. In July 1982, Stephen Jay Gould, a well-regarded paleontologist, was diagnosed with peritoneal mesothelioma. After his diagnosis, Gould wrote "The Median Isn't the Message", in which he argued that statistics such as median survival are useful abstractions, not destiny. Gould lived for another 20 years, eventually succumbing to a cancer not linked to his mesothelioma.
Legal issues
Some people who were exposed to asbestos have collected damages for an asbestos-related disease, including mesothelioma. Compensation via asbestos funds or class action lawsuits is an important issue in law practices regarding mesothelioma. The first lawsuits against asbestos manufacturers were filed in 1929. Since then, many lawsuits have been filed against asbestos manufacturers and employers for neglecting to implement safety measures after the links between asbestos, asbestosis, and mesothelioma became known (some reports seem to place this as early as 1898). The liability resulting from the sheer number of lawsuits and people affected has reached billions of dollars. The amounts and method of allocating compensation have been the source of many court cases, reaching up to the United States Supreme Court, and of government attempts at resolution of existing and future cases. However, to date, the US Congress has not stepped in, and there are no federal laws governing asbestos compensation.
In 2013, the "Furthering Asbestos Claim Transparency (FACT) Act of 2013" passed the US House of Representatives and was sent to the US Senate, where it was referred to the Senate Judiciary Committee. As the Senate did not vote on it before the end of the 113th Congress, it died in committee. It was revived in the 114th Congress, where it has not yet been brought before the House for a vote.
History
The first lawsuit against asbestos manufacturers was brought in 1929. The parties settled that lawsuit, and as part of the agreement, the attorneys agreed not to pursue further cases. In 1960, an article published by Wagner et al. was seminal in establishing mesothelioma as a disease arising from exposure to asbestos. The article referred to over 30 case studies of people who had had mesothelioma in South Africa. Some of those affected had only transient exposure, and some were mine workers. Before the use of advanced microscopy techniques, malignant mesothelioma was often diagnosed as a variant form of lung cancer. In 1962, McNulty reported the first diagnosed case of malignant mesothelioma in an Australian asbestos worker. The worker had worked in the mill at the asbestos mine in Wittenoom from 1948 to 1950. In the town of Wittenoom, asbestos-containing mine waste was used to cover schoolyards and playgrounds. In 1965, an article in the British Journal of Industrial Medicine established that people who lived in the neighbourhoods of asbestos factories and mines, but did not work in them, had contracted mesothelioma. Despite proof that the dust associated with asbestos mining and milling causes asbestos-related disease, mining began at Wittenoom in 1943 and continued until 1966. In 1974, the first public warnings of the dangers of blue asbestos were published in a cover story called "Is this Killer in Your Home?" in Australia's Bulletin magazine. In 1978, the Western Australian Government decided to phase out the town of Wittenoom, following the publication of a Health Dept.
booklet, "The Health Hazard at Wittenoom", containing the results of air sampling and an appraisal of worldwide medical information. By 1979, the first writs for negligence related to Wittenoom were issued against CSR and its subsidiary ABA, and the Asbestos Diseases Society was formed to represent the Wittenoom victims. In Leeds, England, the Armley asbestos disaster involved several court cases against Turner & Newall, where local residents who contracted mesothelioma demanded compensation because of the asbestos pollution from the company's factory. One notable case was that of June Hancock, who contracted the disease in 1993 and died in 1997.
Research
The WT-1 protein is overexpressed in mesothelioma and is being researched as a potential target for drugs. There are two high-confidence miRNAs that can potentially serve as biomarkers of asbestos exposure and malignant mesothelioma. Validation studies are needed to assess their relevance.
References
This article includes information from a public domain U.S. National Cancer Institute fact sheet.
External links
Mesothelioma at Curlie |
Methanol toxicity | Methanol toxicity (also methanol poisoning) is poisoning from methanol, characteristically via ingestion. Symptoms may include a decreased level of consciousness, poor or no coordination, hypothermia, vomiting, abdominal pain, and a specific smell on the breath. Decreased vision may start as early as twelve hours after exposure. Long-term outcomes may include blindness and kidney failure. Toxicity and death may occur even after drinking a small amount. Methanol poisoning most commonly occurs following the drinking of windshield washer fluid. This may be accidental or part of an attempted suicide. Toxicity may also rarely occur through extensive skin exposure or breathing in fumes. When methanol is broken down by the body it results in formaldehyde, formic acid, and formate, which cause much of the toxicity. The diagnosis may be suspected when there is acidosis or an increased osmol gap, and confirmed by directly measuring blood levels. Other conditions that can produce similar symptoms include infections, exposure to other toxic alcohols, serotonin syndrome, and diabetic ketoacidosis. Early treatment increases the chance of a good outcome. Treatment consists of stabilizing the person, followed by the use of an antidote. The preferred antidote is fomepizole, with ethanol used if this is not available. Hemodialysis may also be used where there is organ damage or a high degree of acidosis. Other treatments may include sodium bicarbonate, folate, and thiamine. Outbreaks of methanol ingestion have occurred due to contamination of drinking alcohol, which is more common in the developing world. In 2013, more than 1700 cases occurred in the United States. Those affected are usually adult and male. Toxicity to methanol has been described as early as 1856.
Signs and symptoms
The initial symptoms of methanol intoxication include central nervous system depression, headache, dizziness, nausea, lack of coordination, and confusion. Sufficiently large doses cause unconsciousness and death. The initial symptoms of methanol exposure are usually less severe than the symptoms from the ingestion of a similar quantity of ethanol. Once the initial symptoms have passed, a second set of symptoms arises, from 10 to as many as 30 hours after the initial exposure, that may include blurring or complete loss of vision, acidosis, and putaminal hemorrhages, an uncommon but serious complication. These symptoms result from the accumulation of toxic levels of formate in the blood, and may progress to death by respiratory failure. Physical examination may show tachypnea, and eye examination may show dilated pupils with hyperemia of the optic disc and retinal edema.
Cause
Methanol has a moderate to high toxicity in humans. As little as 10 mL of pure methanol, when drunk, is metabolized into formic acid, which can cause permanent blindness by destruction of the optic nerve. 15 mL is potentially fatal, although the median lethal dose is typically 100 mL (3.4 fl oz) (i.e. 1–2 mL/kg body weight of pure methanol). The reference dose for methanol is 0.5 mg/kg/day. Ethanol is sometimes denatured (adulterated), and made poisonous, by the addition of methanol. The result is known as methylated spirit, "meths" (British use) or "metho" (Australian slang). This is not to be confused with "meth", a common abbreviation for methamphetamine and for methadone in Britain and the United States.
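As a rough illustration of how the 1–2 mL/kg figure above relates to the ~100 mL median lethal dose, the range can be computed per body weight. This is a hypothetical arithmetic helper, not medical guidance.

```python
# Back-of-envelope arithmetic from the doses quoted above
# (1-2 mL/kg median lethal dose of pure methanol); not medical guidance.

def estimated_lethal_range_ml(body_weight_kg):
    """Range of pure-methanol volumes implied by the 1-2 mL/kg figure."""
    return (1.0 * body_weight_kg, 2.0 * body_weight_kg)

low, high = estimated_lethal_range_ml(70)   # a 70 kg adult
print(low, high)                            # 70.0 140.0 mL
```

For a 70 kg adult the range brackets the ~100 mL (3.4 fl oz) median lethal dose stated above, while the 10 mL blindness threshold is far below it.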
Mechanism
Methanol is toxic by two mechanisms. First, methanol (whether it enters the body by ingestion, inhalation, or absorption through the skin) can be fatal due to its CNS depressant properties in the same manner as ethanol poisoning. Second, in a process of toxication, it is metabolized to formic acid (which is present as the formate ion) via formaldehyde in a process initiated by the enzyme alcohol dehydrogenase in the liver. Methanol is converted to formaldehyde via alcohol dehydrogenase and formaldehyde is converted to formic acid (formate) via aldehyde dehydrogenase. The conversion to formate via ALDH proceeds completely, with no detectable formaldehyde remaining. Formate is toxic because it inhibits mitochondrial cytochrome c oxidase, causing hypoxia at the cellular level, and metabolic acidosis, among a variety of other metabolic disturbances.
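The two-step enzymatic oxidation described above can be summarized as:

```latex
\mathrm{CH_3OH}
\;\xrightarrow{\text{alcohol dehydrogenase}}\;
\mathrm{HCHO}
\;\xrightarrow{\text{aldehyde dehydrogenase}}\;
\mathrm{HCOO^-} + \mathrm{H^+}
```

i.e. methanol to formaldehyde to formate, with the second step proceeding to completion so that no detectable formaldehyde remains.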
Treatment
Methanol poisoning can be treated with fomepizole, or if unavailable, ethanol may be used. Both drugs act to reduce the action of alcohol dehydrogenase on methanol by means of competitive inhibition. Ethanol, the active ingredient in alcoholic beverages, acts as a competitive inhibitor by more effectively binding and saturating the alcohol dehydrogenase enzyme in the liver, thus blocking the binding of methanol. Methanol is excreted by the kidneys without being converted into the very toxic metabolites formaldehyde and formic acid. Alcohol dehydrogenase instead enzymatically converts ethanol to acetaldehyde, a much less toxic organic molecule. Additional treatment may include sodium bicarbonate for metabolic acidosis, and hemodialysis or hemodiafiltration to remove methanol and formate from the blood. Folinic acid or folic acid is also administered to enhance the metabolism of formate.
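The competitive inhibition described above can be sketched with the standard Michaelis–Menten rate law for a competitive inhibitor. The parameter values below are arbitrary illustrations, not measured kinetic constants for alcohol dehydrogenase.

```python
# Sketch of competitive inhibition in Michaelis-Menten kinetics: ethanol (the
# inhibitor) raises the apparent Km for methanol at alcohol dehydrogenase,
# slowing formation of the toxic metabolites. All parameter values are
# arbitrary illustrations, not measured constants.

def rate(substrate, vmax, km, inhibitor=0.0, ki=1.0):
    """v = Vmax*[S] / (Km*(1 + [I]/Ki) + [S])  (competitive inhibition)."""
    return vmax * substrate / (km * (1 + inhibitor / ki) + substrate)

v_uninhibited = rate(substrate=5.0, vmax=1.0, km=1.0)
v_inhibited = rate(substrate=5.0, vmax=1.0, km=1.0, inhibitor=9.0, ki=1.0)
print(v_uninhibited > v_inhibited)  # True: the inhibitor slows methanol turnover
```

Note that only the apparent Km changes, not Vmax: with enough substrate the uninhibited rate is recovered, which is why ethanol therapy must maintain a sustained blood concentration while methanol is excreted or dialyzed.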
History
There are cases of methanol resistance, such as that of Mike Malloy, whom someone tried and failed to poison with methanol in the early 1930s. In December 2016, 78 people died in Irkutsk, Russia from methanol poisoning after ingesting a counterfeit body lotion that was primarily methanol rather than ethanol as labeled. The body lotion had previously been used as a cheap substitute for vodka by impoverished people in the region, despite warnings on the lotion's bottles that it was not safe for drinking and long-standing problems with alcohol poisoning across the country. During the COVID-19 pandemic, Iranian media reported that nearly 300 people had died and over a thousand became ill due to methanol poisoning, in the belief that drinking the alcohol could help with the disease. In the United States, the Food and Drug Administration discovered that a number of brands of hand sanitizer manufactured in Mexico during the pandemic contained methanol, and urged the public to avoid using the affected products.
See also
Ethylene glycol poisoning
References
== External links == |
Mevalonate kinase deficiency | Mevalonate kinase deficiency (MKD) is an autosomal recessive metabolic disorder that disrupts the biosynthesis of cholesterol and isoprenoids. It is a very rare genetic disease.
It is characterized by an elevated level of immunoglobulin D in the blood.
Mevalonate kinase (MVK) is an enzyme involved in the biosynthesis of cholesterol and isoprenoids and is necessary for the conversion of mevalonate to mevalonate-5-phosphate in the presence of Mg2+. MKD is due to a mutation in the gene that encodes mevalonate kinase, which results in reduced or deficient activity of this enzyme. Because of this deficiency, mevalonic acid can build up in the body, with high levels found in the urine.
The severity of MKD depends on the level of this deficiency, with hyperimmunoglobulinemia D syndrome (first described as HIDS in 1984) being the less severe but more common form, and mevalonic aciduria (MVA) the more severe but rarer form.
Genetics
Mevalonate kinase deficiency is inherited in an autosomal recessive manner, meaning that a child must inherit a defective copy of the gene from both parents to be affected. It is an example of a loss-of-function mutation. The gene which codes for mevalonate kinase consists of 10 exons at locus 12q14. About 63 pathological sequence variations in the gene have been characterized. The most common of these are V377I, I268T, H20P/N and P167L, present in 70% of affected individuals.
Immunoglobulin D
Immunoglobulin D (IgD) is a protein produced by a certain type of white blood cell. There are five classes of immunoglobulin: IgG, IgA, IgM, IgE and IgD. They each play an important role in the immune system. The function of IgD is still unclear, although one of its many effects is to activate the immune system.
Biochemistry
There is an increased secretion of the fever-promoting cytokine interleukin 1 beta (IL-1β) in MKD, most likely mediated by defective protein prenylation. Prenylation refers to the addition of hydrophobic isoprenoids, such as farnesyl pyrophosphate (FPP) or geranylgeranyl pyrophosphate (GGPP), to proteins. When isoprenoids such as these are coupled to a target protein, this affects the protein's cellular location and function. In a human monocytic MKD model, it was found that the deficiency of GGPP leads to overproduction of IL-1β and defective prenylation of RhoA. This causes an increased level of Rac1 and PKB, which in turn affects GTPases and B7 glycoproteins. The Rac1/PI3K/PKB pathway had earlier been linked to the pathogenesis of MKD. The inactivation of RhoA acts as an inducer of IL-1β mRNA transcription independent of NLRP3 or caspase-1 activity. Due to defective RhoA, defective (elongated and unstable) mitochondria form in the cell. Normally, defective mitochondria are cleared from the cell by autophagy, but in MKD this clearance from the cytosol is disrupted. As a result, mitochondrial DNA accumulates in the cytosol, binding and activating NLRP3, which is responsible for the production of IL-1β. The activation can be direct or indirect, and can also occur through reactive oxygen species (ROS).
Monocytes and macrophages in affected individuals are also known to produce higher levels of tumor necrosis factor alpha (TNF-α) and interleukin 6 (IL-6), in addition to IL-1β. During febrile (fever) attacks, C-reactive protein (CRP) also increases. CRP is released by the liver and promotes inflammation.
Hyper-IgD syndrome
Hyperimmunoglobulinemia D with recurrent fever is a periodic fever syndrome originally described in 1984 by the internist Jos van der Meer, then at Leiden University Medical Centre. No more than 300 cases have been described worldwide. It is now recognised as an allelic variant of MKD.
Signs and symptoms
HIDS is one of a number of periodic fever syndromes. It is characterised by attacks of fever, arthralgia, skin lesions including cyclical mouth ulcers, and diarrhea. Laboratory features include an acute phase response (elevated CRP and ESR) and markedly elevated IgD (and often IgA), although cases with normal IgD have been described. It has mainly been described in the Netherlands and France, although the international registry includes a number of cases from other countries. The differential diagnosis includes fever of unknown origin, familial Mediterranean fever (FMF) and familial Hibernian fever (or TNFα receptor-associated periodic syndrome/TRAPS).
Cause
Virtually all people with the syndrome have mutations in the gene for mevalonate kinase, which is part of the HMG-CoA reductase pathway, an important cellular metabolic pathway. Indeed, similar fever attacks (but normal IgD) have been described in patients with mevalonic aciduria – an inborn error of metabolism now seen as a severe form of HIDS.
Pathophysiology
It is not known how mevalonate kinase mutations cause the febrile episodes, although it is presumed that other products of the cholesterol biosynthesis pathway, the prenylation chains (geranylgeraniol and farnesol) might play a role.
Diagnosis
Mevalonate kinase deficiency causes an accumulation of mevalonic acid in the urine, resulting from insufficient activity of the enzyme mevalonate kinase (ATP:mevalonate 5-phosphotransferase; EC 2.7.1.36).
The disorder was first described in 1985. Classified as an inborn error of metabolism, mevalonate kinase deficiency usually results in developmental delay, hypotonia, anemia, hepatosplenomegaly, various dysmorphic features, intellectual disability, an overall failure to thrive, and several other features.
Treatment
There is no cure for MKD, but the inflammation and other effects can be reduced to a certain extent.
IL-1-targeting drugs can be used to reduce the effects of the disorder. Anakinra is an IL-1 receptor antagonist: it binds the IL-1 receptor, preventing the actions of both IL-1α and IL-1β, and it has been shown to reduce clinical and biochemical inflammation in MKD. Used on a daily basis, it can effectively decrease both the frequency and the severity of inflammatory attacks. Disadvantages of this drug are painful injection-site reactions and the return of febrile attacks soon after the drug is discontinued (as examined in a 12-year-old patient).
Canakinumab, a long-acting monoclonal antibody directed against IL-1β, has been shown in case reports and observational case series to reduce both the frequency and the severity of attacks in patients with mild and severe MKD. It reduces the physiological effects, but the biochemical parameters remain elevated (Galeotti et al., considering six patients with MKD, demonstrated that it is more effective than anakinra).
Anti-TNF therapy might be effective in MKD, but the effect is mostly partial and therapy failure and clinical deterioration have been described frequently in patients on infliximab or etanercept. A beneficial effect of human monoclonal anti-TNFα antibody adalimumab was seen in a small number of MKD patients.
Most MKD patients benefit from anti-IL-1 therapy; however, anti-IL-1-resistant disease may also occur. In such cases tocilizumab (a humanized monoclonal antibody against the interleukin-6 (IL-6) receptor) may be used when patients are unresponsive to anakinra. Shendi et al. treated a young woman in whom anakinra was ineffective with tocilizumab and found it effective in reducing both biochemical and clinical inflammation. Stoffels et al. observed a reduction in the frequency and severity of the inflammatory attacks, although after several months of treatment one of these two patients persistently showed mild inflammatory symptoms in the absence of biochemical inflammatory markers.
Hematopoietic stem cell transplantation may be beneficial in severe mevalonate kinase deficiency (improvement of cerebral myelinisation on MRI after allogeneic stem cell transplantation was observed in one girl). Liver transplantation, however, did not influence febrile attacks in this patient.
Treatment for HIDS
Canakinumab has been approved for the treatment of HIDS and has been shown to be effective. The immunosuppressant drugs etanercept and anakinra have also been shown to be effective. Statin drugs might decrease the level of mevalonate and are presently being investigated. A recent single case report highlighted bisphosphonates as a potential therapeutic option.
Epidemiology
Globally, fewer than 1 in 100,000 people have HIDS, and of these, around 200 individuals have MKD. This categorises the condition as a rare genetic disease.
Additional images
References
External links
Mevalonic aciduria at NIH's Office of Rare Diseases |
Microscopic polyangiitis | Microscopic polyangiitis is an ill-defined autoimmune disease characterized by a systemic, pauci-immune, necrotizing, small-vessel vasculitis without clinical or pathological evidence of necrotizing granulomatous inflammation.
Signs and symptoms
Clinical features may include constitutional symptoms like fever, loss of appetite, weight loss, fatigue, and kidney failure. A majority of patients may have blood in the urine and protein in the urine. Rapidly progressive glomerulonephritis may occur. Because many different organ systems may be involved, a wide range of symptoms are possible in MPA. Purpura and livedo racemosa may be present.
Cause
While the mechanism of disease has yet to be fully elucidated, the leading hypothesis is that the process begins with an autoimmune process of unknown cause that triggers production of p-ANCA. These antibodies circulate at low levels until a pro-inflammatory trigger, such as infection, malignancy, or drug therapy, upregulates production of p-ANCA. The large number of antibodies then makes it more likely that they will bind a neutrophil. Once bound, the neutrophil degranulates, releasing toxins that cause endothelial injury. Most recently, two different groups of investigators have demonstrated that anti-MPO antibodies alone can cause necrotizing and crescentic glomerulonephritis.
Diagnosis
Laboratory tests may reveal an increased sedimentation rate, elevated CRP, anemia and elevated creatinine due to kidney impairment. An important diagnostic test is the presence of perinuclear antineutrophil cytoplasmic antibodies (p-ANCA) with myeloperoxidase specificity (a constituent of neutrophil granules), and protein and red blood cells in the urine.
In patients with neuropathy, electromyography may reveal a sensorimotor peripheral neuropathy.
Differential diagnosis
The signs and symptoms of microscopic polyangiitis may resemble those of granulomatosis with polyangiitis (GPA) (another form of small-vessel vasculitis) but typically lacks the significant upper respiratory tract involvement (e.g., sinusitis) frequently seen in people affected by GPA.
Treatment
The customary treatment involves long-term dosage of prednisone, alternated or combined with cytotoxic drugs such as cyclophosphamide or azathioprine. Plasmapheresis may also be indicated in the acute setting to remove ANCA antibodies. Rituximab has been investigated and, in April 2011, was approved by the FDA for use in combination with glucocorticoids in adult patients.
See also
ANCA-associated vasculitides
Polyarteritis nodosa
List of cutaneous conditions
Granulomatosis with polyangiitis
References
== External links == |
Microsporidiosis | Microsporidiosis is an opportunistic intestinal infection that causes diarrhea and wasting in immunocompromised individuals (those with HIV, for example). It results from different species of microsporidia, a group of microbial (unicellular) fungi. In HIV-infected individuals, microsporidiosis generally occurs when CD4+ T cell counts fall below 150.
Microsporidia have emerged as a significant mortality risk in immunocompromised individuals. These are small, single-celled, obligately intracellular parasites linked to water sources as well as wild and domestic animals. They were once considered protozoans or protists, but are now known to be fungi, or a sister group to fungi. The most common causes of microsporidiosis are Enterocytozoon bieneusi and Encephalitozoon intestinalis.
Cause
At least 15 microsporidian species have been recognized as human pathogens, spread across nine genera:
Anncaliia
A. algerae, A. connori, A. vesicularum
Encephalitozoon
E. cuniculi, E. hellem, E. intestinalis
Enterocytozoon
E. bieneusi
Microsporidium
M. ceylonensis, M. africanum
Nosema
N. ocularum
Pleistophora sp.
Trachipleistophora
T. hominis, T. anthropophthera
Vittaforma
V. corneae.
Tubulinosema
T. acridophagus
The primary causes are Enterocytozoon bieneusi and Encephalitozoon intestinalis.
Life cycle
(Coded to image at right).
The infective form of microsporidia is the resistant spore and it can survive for an extended period of time in the environment.
The spore extrudes its polar tubule and infects the host cell.
The spore injects the infective sporoplasm into the eukaryotic host cell through the polar tubule.
Inside the cell, the sporoplasm undergoes extensive multiplication either by merogony (binary fission) or schizogony (multiple fission).
This development can occur either in direct contact with the host cell cytoplasm (E. bieneusi) or inside a vacuole called a parasitophorous vacuole (E. intestinalis). Either free in the cytoplasm or inside a parasitophorous vacuole, microsporidia develop by sporogony to mature spores.
During sporogony, a thick wall is formed around the spore, which provides resistance to adverse environmental conditions. When the spores increase in number and completely fill the host cell cytoplasm, the cell membrane is disrupted and releases the spores to the surroundings.
These free mature spores can infect new cells thus continuing the cycle.
Diagnosis
The best option for diagnosis is PCR. Diagnosis of microsporidiosis can also be made through gram-positive, acid-fast spores in stool and biopsy material, with morphologic demonstration of the organism. Initial detection is through light microscopic examination of tissue sections, stools, duodenal aspirates, nasal discharges, bronchoalveolar lavage fluids, and conjunctival smears. Definitive diagnosis can also be achieved through fluorescein-tagged antibody immunofluorescence or electron microscopy, and species identification can be done through PCR.
Classification
Although it is classified as a protozoal disease in ICD-10, the microsporidia's phylogenetic placement has been resolved to be within the Fungi, and some sources classify microsporidiosis as a mycosis; however, the organisms are highly divergent and rapidly evolving.
Treatment
Because of the severe mortality risk in immunocompromised individuals, two main agents are used in treatment: albendazole, which inhibits tubulin, and fumagillin, which inhibits methionine aminopeptidase type 2.
References
External links
CDC's microsporidiosis info page. |
Miscarriage | Miscarriage, also known in medical terms as a spontaneous abortion and pregnancy loss, is the death of an embryo or fetus before it is able to survive independently. Some use the cutoff of 20 weeks of gestation, after which fetal death is known as a stillbirth. The most common symptom of a miscarriage is vaginal bleeding with or without pain. Sadness, anxiety, and guilt may occur afterwards. Tissue and clot-like material may leave the uterus and pass through and out of the vagina. Recurrent miscarriage may also be considered a form of infertility. Risk factors for miscarriage include being an older parent, previous miscarriage, exposure to tobacco smoke, obesity, diabetes, thyroid problems, and drug or alcohol use. About 80% of miscarriages occur in the first 12 weeks of pregnancy (the first trimester). The underlying cause in about half of cases involves chromosomal abnormalities. Diagnosis of a miscarriage may involve checking to see if the cervix is open or closed, testing blood levels of human chorionic gonadotropin (hCG), and an ultrasound. Other conditions that can produce similar symptoms include an ectopic pregnancy and implantation bleeding. Prevention is occasionally possible with good prenatal care. Avoiding drugs, alcohol, infectious diseases, and radiation may decrease the risk of miscarriage. No specific treatment is usually needed during the first 7 to 14 days. Most miscarriages will complete without additional interventions. Occasionally the medication misoprostol or a procedure such as vacuum aspiration is used to remove the remaining tissue. Women who have a blood type of rhesus negative (Rh negative) may require Rho(D) immune globulin. Pain medication may be beneficial. Emotional support may help with processing the loss. Miscarriage is the most common complication of early pregnancy. Among women who know they are pregnant, the miscarriage rate is roughly 10% to 20%, while the rate among all fertilised eggs is around 30% to 50%.
In those under the age of 35 the risk is about 10% while it is about 45% in those over the age of 40. Risk begins to increase around the age of 30. About 5% of women have two miscarriages in a row. Some recommend not using the term "abortion" in discussions with those experiencing a miscarriage in an effort to decrease distress. In Britain, the term "miscarriage" has replaced any use of the term "spontaneous abortion" in relation to pregnancy loss and in response to complaints of insensitivity towards women who had suffered such loss. An additional benefit of this change is reducing confusion among medical laymen, who may not realize that the term "spontaneous abortion" refers to a naturally-occurring medical phenomenon, and not the intentional termination of pregnancy.
Signs and symptoms
Signs of a miscarriage include vaginal spotting, abdominal pain, cramping, and fluid, blood clots, and tissue passing from the vagina. Bleeding can be a symptom of miscarriage, but many women also have bleeding in early pregnancy and do not miscarry. Bleeding during the first half of pregnancy may be referred to as a threatened miscarriage. Of those who seek treatment for bleeding during pregnancy, about half will miscarry. Miscarriage may be detected during an ultrasound exam, or through serial human chorionic gonadotropin (HCG) testing.
Risk factors
Miscarriage may occur for many reasons, not all of which can be identified. Risk factors are those things that increase the likelihood of having a miscarriage but do not necessarily cause a miscarriage. Up to 70 conditions, infections, medical procedures, lifestyle factors, occupational exposures, chemical exposure, and shift work are associated with increased risk for miscarriage. Some of these risks include endocrine, genetic, uterine, or hormonal abnormalities, reproductive tract infections, and tissue rejection caused by an autoimmune disorder.
Trimesters
First trimester
Most clinically apparent miscarriages (two-thirds to three-quarters in various studies) occur during the first trimester. About 30% to 40% of all fertilized eggs miscarry, often before the pregnancy is known. The embryo typically dies before the pregnancy is expelled; bleeding into the decidua basalis and tissue necrosis causes uterine contractions to expel the pregnancy. Early miscarriages can be due to a developmental abnormality of the placenta or other embryonic tissues. In some instances an embryo does not form but other tissues do. This has been called a "blighted ovum". Successful implantation of the zygote into the uterus is most likely eight to ten days after fertilization. If the zygote has not implanted by day ten, implantation becomes increasingly unlikely in subsequent days. A chemical pregnancy is a pregnancy that was detected by testing but ends in miscarriage before or around the time of the next expected period. Chromosomal abnormalities are found in more than half of embryos miscarried in the first 13 weeks. Half of embryonic miscarriages (25% of all miscarriages) have an aneuploidy (abnormal number of chromosomes). Common chromosome abnormalities found in miscarriages include an autosomal trisomy (22–32%), monosomy X (5–20%), triploidy (6–8%), tetraploidy (2–4%), or other structural chromosomal abnormalities (2%). Genetic problems are more likely to occur with older parents; this may account for the higher rates observed in older women. Luteal phase progesterone deficiency may or may not be a contributing factor to miscarriage.
Second and third trimesters
Second trimester losses may be due to maternal factors such as uterine malformation, growths in the uterus (fibroids), or cervical problems. These conditions also may contribute to premature birth. Unlike first-trimester miscarriages, second-trimester miscarriages are less likely to be caused by a genetic abnormality; chromosomal aberrations are found in a third of cases. Infection during the third trimester can cause a miscarriage.
Age
The age of the pregnant woman is a significant risk factor. Miscarriage rates increase steadily with age, with more substantial increases after age 35. In those under the age of 35 the risk is about 10% while it is about 45% in those over the age of 40. Risk begins to increase around the age of 30. Paternal age is associated with increased risk.
Obesity, eating disorders and caffeine
Not only is obesity associated with miscarriage; it can result in sub-fertility and other adverse pregnancy outcomes. Recurrent miscarriage is also related to obesity. Women with bulimia nervosa and anorexia nervosa may have a greater risk for miscarriage. Nutrient deficiencies have not been found to impact miscarriage rates, but hyperemesis gravidarum sometimes precedes a miscarriage. Caffeine consumption has also been correlated with miscarriage rates, at least at higher levels of intake. However, such higher rates are statistically significant only in certain circumstances.
Vitamin supplementation has generally not shown to be effective in preventing miscarriage. Chinese traditional medicine has not been found to prevent miscarriage.
Endocrine disorders
Disorders of the thyroid may affect pregnancy outcomes. Related to this, iodine deficiency is strongly associated with an increased risk of miscarriage. The risk of miscarriage is increased in those with poorly controlled insulin-dependent diabetes mellitus. Women with well-controlled diabetes have the same risk of miscarriage as those without diabetes.
Food poisoning
Ingesting food that has been contaminated with listeriosis, toxoplasmosis, and salmonella is associated with an increased risk of miscarriage.
Amniocentesis and chorionic villus sampling
Amniocentesis and chorionic villus sampling (CVS) are procedures conducted to assess the fetus. A sample of amniotic fluid is obtained by the insertion of a needle through the abdomen and into the uterus. Chorionic villus sampling is a similar procedure with a sample of tissue removed rather than fluid. These procedures are not associated with pregnancy loss during the second trimester but they are associated with miscarriages and birth defects in the first trimester. Miscarriage caused by invasive prenatal diagnosis (chorionic villus sampling (CVS) and amniocentesis) is rare (about 1%).
Surgery
The effects of surgery on pregnancy are not well known, including the effects of bariatric surgery. Abdominal and pelvic surgery are not risk factors for miscarriage. Ovarian tumours and cysts that are removed have not been found to increase the risk of miscarriage. The exception is removal of the corpus luteum from the ovary, which can cause fluctuations in the hormones necessary to maintain the pregnancy.
Medications
There is no significant association between antidepressant medication exposure and spontaneous abortion. The risk of miscarriage is not likely decreased by discontinuing SSRIs prior to pregnancy. Some available data suggest that there is a small increased risk of miscarriage for women taking any antidepressant, though this risk becomes less statistically significant when studies of poor quality are excluded. Medicines that increase the risk of miscarriage include:
retinoids
nonsteroidal anti-inflammatory drugs (NSAIDs), such as ibuprofen
misoprostol
methotrexate
statins
Immunizations
Immunizations have not been found to cause miscarriage. Live vaccinations, like the MMR vaccine, can theoretically cause damage to the fetus, as the live virus can cross the placenta and potentially increase the risk for miscarriage. Therefore, the Centers for Disease Control and Prevention (CDC) recommends against pregnant women receiving live vaccinations. However, there is no clear evidence that live vaccinations increase the risk for miscarriage or fetal abnormalities. Some live vaccinations include MMR, varicella, certain types of the influenza vaccine, and rotavirus.
Treatments for cancer
Ionizing radiation levels given to a woman during cancer treatment cause miscarriage. Exposure can also impact fertility. The use of chemotherapeutic drugs used to treat childhood cancer increases the risk of future miscarriage.
Pre-existing diseases
Several pre-existing diseases in pregnancy can potentially increase the risk of miscarriage, including diabetes, polycystic ovary syndrome (PCOS), hypothyroidism, certain infectious diseases, and autoimmune diseases. PCOS may increase the risk of miscarriage. Two studies suggested treatment with the drug metformin significantly lowers the rate of miscarriage in women with PCOS, but the quality of these studies has been questioned. Metformin treatment in pregnancy has not been shown to be safe. In 2007 the Royal College of Obstetricians and Gynaecologists also recommended against use of the drug to prevent miscarriage. Thrombophilias, or defects in coagulation and bleeding, were once thought to be a risk in miscarriage but have subsequently been questioned. Severe cases of hypothyroidism increase the risk of miscarriage; the effect of milder cases of hypothyroidism on miscarriage rates has not been established. A condition called luteal phase defect (LPD) is a failure of the uterine lining to be fully prepared for pregnancy. This can keep a fertilized egg from implanting or result in miscarriage. Mycoplasma genitalium infection is associated with increased risk of preterm birth and miscarriage. Infections that can increase the risk of a miscarriage include rubella (German measles), cytomegalovirus, bacterial vaginosis, HIV, chlamydia, gonorrhoea, syphilis, and malaria.
Immune status
Autoimmunity is a possible cause of recurrent or late-term miscarriages. In the case of an autoimmune-induced miscarriage, the woman's body attacks the growing fetus or prevents normal pregnancy progression. Autoimmune disease may cause abnormalities in embryos, which in turn may lead to miscarriage. As an example, celiac disease increases the risk of miscarriage by an odds ratio of approximately 1.4. A disruption in normal immune function can lead to the formation of antiphospholipid antibody syndrome. This will affect the ability to continue the pregnancy, and if a woman has repeated miscarriages, she can be tested for it. Approximately 15% of recurrent miscarriages are related to immunologic factors. The presence of anti-thyroid autoantibodies is associated with an increased risk, with an odds ratio of 3.73 (95% confidence interval 1.8–7.6). Having lupus also increases the risk for miscarriage. Immunohistochemical studies on decidua basalis and chorionic villi have found that an imbalance in the immunological environment could be associated with recurrent pregnancy loss.
Anatomical defects and trauma
Fifteen per cent of women who have experienced three or more recurring miscarriages have some anatomical defect that prevents the pregnancy from being carried for the entire term. The structure of the uterus affects the ability to carry a child to term. Anatomical differences are common and can be congenital.
In some women, cervical incompetence or cervical insufficiency occurs with the inability of the cervix to stay closed during the entire pregnancy. It does not cause first trimester miscarriages. In the second trimester, it is associated with an increased risk of miscarriage. It is identified after a premature birth has occurred at about 16–18 weeks into the pregnancy. During the second trimester, major trauma can result in a miscarriage.
Smoking
Tobacco (cigarette) smokers have an increased risk of miscarriage. There is an increased risk regardless of which parent smokes, though the risk is higher when the gestational mother smokes.
Morning sickness
Nausea and vomiting of pregnancy (NVP, or morning sickness) is associated with a decreased risk. Several possible causes have been suggested for morning sickness, but there is still no agreement. NVP may represent a defense mechanism which discourages the mother's ingestion of foods that are harmful to the fetus; according to this model, a lower frequency of miscarriage would be an expected consequence of the different food choices made by women experiencing NVP.
Chemicals and occupational exposure
Chemical and occupational exposures may have some effect on pregnancy outcomes, although a cause-and-effect relationship can almost never be established. Chemicals implicated in increasing the risk for miscarriage are DDT, lead, formaldehyde, arsenic, benzene and ethylene oxide. Video display terminals and ultrasound have not been found to affect the rates of miscarriage. In dental offices where nitrous oxide is used in the absence of anesthetic gas scavenging equipment, there is a greater risk of miscarriage. For women who work with cytotoxic antineoplastic chemotherapeutic agents there is a small increased risk of miscarriage. No increased risk for cosmetologists has been found.
Other
Alcohol increases the risk of miscarriage, and cocaine use increases the rate of miscarriage. Some infections have been associated with miscarriage. These include Ureaplasma urealyticum, Mycoplasma hominis, group B streptococci, HIV-1, and syphilis. Infections of Chlamydia trachomatis, Campylobacter fetus, and Toxoplasma gondii have not been found to be linked to miscarriage. Subclinical infections of the lining of the womb, commonly known as chronic endometritis, are also associated with poor pregnancy outcomes, compared with women with treated chronic endometritis or no chronic endometritis.
Diagnosis
In the case of blood loss, pain, or both, transvaginal ultrasound is performed. If a viable intrauterine pregnancy is not found with ultrasound, blood tests (serial βHCG tests) can be performed to rule out ectopic pregnancy, which is a life-threatening situation. If hypotension, tachycardia, and anemia are discovered, exclusion of an ectopic pregnancy is important. A miscarriage may be confirmed by an obstetric ultrasound and by the examination of the passed tissue. When looking for microscopic pathologic symptoms, one looks for the products of conception. Microscopically, these include villi, trophoblast, fetal parts, and background gestational changes in the endometrium. When chromosomal abnormalities are found in more than one miscarriage, genetic testing of both parents may be done.
Ultrasound criteria
A review article in The New England Journal of Medicine based on a consensus meeting of the Society of Radiologists in Ultrasound in America (SRU) has suggested that miscarriage should be diagnosed only if any of the following criteria are met upon ultrasonography visualization:
Classification
A threatened miscarriage is any bleeding during the first half of pregnancy. At investigation it may be found that the fetus remains viable and the pregnancy continues without further problems. An anembryonic pregnancy (also called an "empty sac" or "blighted ovum") is a condition where the gestational sac develops normally, while the embryonic part of the pregnancy is either absent or stops growing very early. This accounts for approximately half of miscarriages. All other miscarriages are classified as embryonic miscarriages, meaning that there is an embryo present in the gestational sac. Half of embryonic miscarriages have aneuploidy (an abnormal number of chromosomes). An inevitable miscarriage occurs when the cervix has already dilated, but the fetus has yet to be expelled. This usually will progress to a complete miscarriage. The fetus may or may not have cardiac activity.
A complete miscarriage is when all products of conception have been expelled; these may include the trophoblast, chorionic villi, gestational sac, yolk sac, and fetal pole (embryo); or later in pregnancy the fetus, umbilical cord, placenta, amniotic fluid, and amniotic membrane. The presence of a pregnancy test that is still positive, as well as an empty uterus upon transvaginal ultrasonography, does, however, fulfil the definition of pregnancy of unknown location. Therefore, there may be a need for follow-up pregnancy tests to ensure that there is no remaining pregnancy, including ectopic pregnancy.
An incomplete miscarriage occurs when some products of conception have been passed, but some remains inside the uterus. However, an increased distance between the uterine walls on transvaginal ultrasonography may also simply be an increased endometrial thickness and/or a polyp. The use of a Doppler ultrasound may be better in confirming the presence of significant retained products of conception in the uterine cavity. In cases of uncertainty, ectopic pregnancy must be excluded using techniques like serial beta-hCG measurements.
A missed miscarriage is when the embryo or fetus has died, but a miscarriage has not yet occurred. It is also referred to as delayed miscarriage, silent miscarriage, or missed abortion. A septic miscarriage occurs when the tissue from a missed or incomplete miscarriage becomes infected, which carries the risk of spreading infection (septicaemia) and can be fatal. Recurrent miscarriage ("recurrent pregnancy loss" (RPL) or "habitual abortion") is the occurrence of multiple consecutive miscarriages; the exact number used to diagnose recurrent miscarriage varies. If the proportion of pregnancies ending in miscarriage is 15% and assuming that miscarriages are independent events, then the probability of two consecutive miscarriages is 2.25% and the probability of three consecutive miscarriages is 0.34%. The occurrence of recurrent pregnancy loss is 1%. A large majority (85%) of those who have had two miscarriages will conceive and carry normally afterward. The physical symptoms of a miscarriage vary according to the length of pregnancy, though most miscarriages cause pain or cramping. The size of blood clots and pregnancy tissue that are passed becomes larger with longer gestations. After 13 weeks' gestation, there is a higher risk of placenta retention.
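The independence calculation above can be checked directly: with a per-pregnancy miscarriage probability p, the chance of n consecutive losses is simply p raised to the power n. A minimal sketch (the 15% figure comes from the text; independence is the stated assumption, and the function name is illustrative):

```python
# Probability of n consecutive miscarriages, assuming each pregnancy
# is an independent event with the same per-pregnancy loss rate.
def consecutive_loss_probability(p: float, n: int) -> float:
    """p: per-pregnancy miscarriage probability; n: number of losses in a row."""
    return p ** n

p = 0.15  # 15% per-pregnancy rate, as stated in the text
print(f"two in a row:   {consecutive_loss_probability(p, 2):.2%}")  # 2.25%
print(f"three in a row: {consecutive_loss_probability(p, 3):.2%}")  # 0.34%
```

Note that the observed 1% occurrence of recurrent pregnancy loss quoted above is higher than the 0.34% predicted under independence, consistent with consecutive miscarriages not being fully independent events.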
Prevention
Prevention of a miscarriage can sometimes be accomplished by decreasing risk factors. This may include good prenatal care, avoiding drugs and alcohol, preventing infectious diseases, and avoiding x-rays. Identifying the cause of the miscarriage may help prevent future pregnancy loss, especially in cases of recurrent miscarriage. Often there is little a person can do to prevent a miscarriage. Vitamin supplementation before or during pregnancy has not been found to affect the risk of miscarriage. Progesterone has been shown to prevent miscarriage in women with 1) vaginal bleeding early in their current pregnancy and 2) a previous history of miscarriage.
Non-modifiable risk factors
Preventing a miscarriage in subsequent pregnancies may be enhanced with assessments of:
Modifiable risk factors
Maintaining a healthy weight and good prenatal care can reduce the risk of miscarriage. Some risk factors can be minimized by avoiding the following:
Smoking
Cocaine use
Alcohol
Poor nutrition
Occupational exposure to agents that can cause miscarriage
Medications associated with miscarriage
Drug abuse
Management
Women who miscarry early in their pregnancy usually do not require any subsequent medical treatment, but they can benefit from support and counseling. Most early miscarriages will complete on their own; in other cases, medication treatment or aspiration of the products of conception can be used to remove remaining tissue. While bed rest has been advocated to prevent miscarriage, this has not been found to be of benefit. Those who are experiencing or who have experienced a miscarriage benefit from the use of careful medical language. Significant distress can often be managed by the ability of the clinician to clearly explain terms without suggesting that the woman or couple are somehow to blame. Evidence to support Rho(D) immune globulin after a spontaneous miscarriage is unclear. In the UK, Rho(D) immune globulin is recommended in Rh-negative women after 12 weeks' gestational age and before 12 weeks' gestational age in those who need surgery or medication to complete the miscarriage.
Methods
No treatment is necessary for a diagnosis of complete miscarriage (so long as ectopic pregnancy is ruled out). In cases of an incomplete miscarriage, empty sac, or missed abortion there are three treatment options: watchful waiting, medical management, and surgical treatment. With no treatment (watchful waiting), most miscarriages (65–80%) will pass naturally within two to six weeks. This treatment avoids the possible side effects and complications of medications and surgery, but increases the risk of mild bleeding, need for unplanned surgical treatment, and incomplete miscarriage. Medical treatment usually consists of using misoprostol (a prostaglandin) alone or in combination with mifepristone pre-treatment. These medications help the uterus to contract and expel the remaining tissue out of the body. This works within a few days in 95% of cases. Vacuum aspiration or sharp curettage can be used, with vacuum aspiration being lower-risk and more common.
Delayed and incomplete miscarriage
In delayed or incomplete miscarriage, treatment depends on the amount of tissue remaining in the uterus. Treatment can include surgical removal of the tissue with vacuum aspiration or misoprostol. Studies looking at the methods of anaesthesia for surgical management of incomplete miscarriage have not shown that any adaptation from normal practice is beneficial.
Induced miscarriage
An induced abortion may be performed by a qualified healthcare provider for women who cannot continue the pregnancy. Self-induced abortion performed by a woman or non-medical personnel can be dangerous and is still a cause of maternal mortality in some countries. In some locales it is illegal or carries heavy social stigma. However, in the United States, many choose to self-induce or self-manage their abortion and have done so safely.
Sex
Some organizations recommend delaying sex after a miscarriage until the bleeding has stopped to decrease the risk of infection. However, there is not sufficient evidence for the routine use of antibiotics to try to avoid infection in incomplete abortion. Others recommend delaying attempts at pregnancy until one period has occurred to make it easier to determine the dates of a subsequent pregnancy. There is no evidence that getting pregnant in that first cycle affects outcomes, and an early subsequent pregnancy may actually improve outcomes.
Support
Organizations exist that provide information and counselling to help those who have had a miscarriage. Family and friends often conduct a memorial or burial service. Hospitals also can provide support and help memorialize the event. Depending on the locale, others prefer a private ceremony. Providing appropriate support with frequent discussions and sympathetic counselling is part of evaluation and treatment. Those who experience unexplained miscarriage can be treated with emotional support.
Miscarriage leave
Miscarriage leave is leave of absence in relation to miscarriage. The following countries offer paid or unpaid leave to women who have had a miscarriage.
The Philippines – 60 days fully paid leave for miscarriages (before 20 weeks of gestation) or emergency termination of the pregnancy (at 20 weeks or after). The husband of the mother gets seven days of fully paid leave, up to the fourth pregnancy.
India – six weeks leave
New Zealand – three days bereavement leave for both parents
Mauritius – two weeks leave
Indonesia – six weeks leave
Taiwan – five days, one week or four weeks, depending on how advanced the pregnancy was
Outcomes
Psychological and emotional effects
Every woman's personal experience of miscarriage is different, and women who have more than one miscarriage may react differently to each event.
In Western cultures since the 1980s, medical providers assume that experiencing a miscarriage "is a major loss for all pregnant women". A miscarriage can result in anxiety, depression or stress for those involved. It can have an effect on the whole family. Many of those experiencing a miscarriage go through a grieving process. A "prenatal attachment" often exists that can be seen as parental sensitivity, love and preoccupation directed toward the unborn child. Serious emotional impact is usually experienced immediately after the miscarriage. Some may go through the same loss when an ectopic pregnancy is terminated. In some, the realization of the loss can take weeks. Providing family support to those experiencing the loss can be challenging because some find comfort in talking about the miscarriage while others may find the event painful to discuss. The father can have the same sense of loss. Expressing feelings of grief and loss can sometimes be harder for men. Some women are able to begin planning their next pregnancy within a few weeks of the miscarriage. For others, planning another pregnancy can be difficult. Some facilities acknowledge the loss. Parents can name and hold their infant. They may be given mementos such as photos and footprints. Some conduct a funeral or memorial service. They may express the loss by planting a tree.
Some health organizations recommend that sexual activity be delayed after the miscarriage. The menstrual cycle should resume after about three to four months. Some women report dissatisfaction with the care they received from physicians and nurses.
Subsequent pregnancies
Some parents want to try to have a baby very soon after the miscarriage. The decision to try to become pregnant again can be difficult. Reasons exist that may prompt parents to consider another pregnancy. For older mothers, there may be some sense of urgency. Other parents are optimistic that future pregnancies are likely to be successful. Many are hesitant and want to know about the risk of having one or more further miscarriages. Some clinicians recommend that women have one menstrual cycle before attempting another pregnancy. This is because the date of conception may be hard to determine. Also, the first menstrual cycle after a miscarriage can be much longer or shorter than expected. Parents may be advised to wait even longer if they have experienced late miscarriage or molar pregnancy, or are undergoing tests. Some parents wait for six months based upon recommendations from their health care provider.
The risks of having another miscarriage vary according to the cause. The risk of having another miscarriage after a molar pregnancy is very low. The risk of another miscarriage is highest after the third miscarriage. Pre-conception care is available in some locales.
Later cardiovascular disease
There is a significant association between miscarriage and later development of coronary artery disease, but not of cerebrovascular disease.
Epidemiology
Among women who know they are pregnant, the miscarriage rate is roughly 10% to 20%, while rates among all fertilized zygotes are around 30% to 50%. A 2012 review found the risk of miscarriage between 5 and 20 weeks to be between 11% and 22%. Up to the 13th week of pregnancy, the risk of miscarriage each week was around 2%, dropping to 1% in week 14 and reducing slowly between 14 and 20 weeks.
The precise rate is not known because a large number of miscarriages occur before pregnancies become established and before the woman is aware she is pregnant. Additionally, those with bleeding in early pregnancy may seek medical care more often than those not experiencing bleeding. Although some studies attempt to account for this by recruiting women who are planning pregnancies and testing for very early pregnancy, they still are not representative of the wider population.
The prevalence of miscarriage increases with the age of both parents. In a Danish register-based study where the prevalence of miscarriage was 11%, the prevalence rose from 9% at 22 years of age to 84% by 48 years of age. Another, later study in 2013 found that when either parent was over the age of 40, the rate of known miscarriages doubled.
In 2010, 50,000 inpatient admissions for miscarriage occurred in the UK.
Terminology
Most affected women and family members refer to miscarriage as the loss of a baby, rather than an embryo or fetus, and healthcare providers are expected to respect and use the language that the person chooses. Clinical terms can suggest blame, increase distress, and even cause anger. Terms that are known to cause distress in those experiencing miscarriage include:
abortion (including spontaneous abortion) rather than miscarriage,
habitual aborter rather than a woman experiencing recurrent pregnancy loss,
products of conception rather than baby,
blighted ovum rather than early pregnancy loss or delayed miscarriage,
cervical incompetence rather than cervical weakness, and
evacuation of retained products of conception (ERPC) rather than surgical management of miscarriage.
Pregnancy loss is a broad term that is used for miscarriage, ectopic and molar pregnancies. The term fetal death applies variably in different countries and contexts, sometimes incorporating weight and gestational age, from 16 weeks in Norway, 20 weeks in the US and Australia, and 24 weeks in the UK to 26 weeks in Italy and Spain. A fetus that died before birth after this gestational age may be referred to as a stillbirth. Under UK law, all stillbirths should be registered, although this does not apply to miscarriages.
History
The medical terminology applied to experiences during early pregnancy has changed over time. Before the 1980s, health professionals used the phrase spontaneous abortion for a miscarriage and induced abortion for a termination of the pregnancy. In the late 1980s and 1990s, doctors became more conscious of their language in relation to early pregnancy loss. Some medical authors advocated change to use of miscarriage instead of spontaneous abortion because they argued this would be more respectful and help ease a distressing experience. The change was being recommended by some in the profession in Britain in the late 1990s. In 2005 the European Society for Human Reproduction and Embryology (ESHRE) published a paper aiming to facilitate a revision of nomenclature used to describe early pregnancy events.
Society and culture
Society's reactions to miscarriage have changed over time. In the early 20th century, the focus was on the mother's physical health and the difficulties and disabilities that miscarriage could produce. Other reactions, such as the expense of medical treatments and relief at ending an unwanted pregnancy, were also heard. In the 1940s and 1950s, people were more likely to express relief, not because the miscarriage ended an unwanted or mistimed pregnancy, but because people believed that miscarriages were primarily caused by birth defects, and miscarrying meant that the family would not raise a child with disabilities. The dominant attitude in the mid-century was that a miscarriage, although temporarily distressing, was a blessing in disguise for the family, and that another pregnancy and a healthier baby would soon follow, especially if women trusted physicians and reduced their anxieties. Media articles were illustrated with pictures of babies, and magazine articles about miscarriage ended by introducing the healthy baby (usually a boy) that had shortly followed it.
Beginning in the 1980s, miscarriage in the US was primarily framed in terms of the individual woman's personal emotional reaction, and especially her grief over a tragic outcome. The subject was portrayed in the media with images of an empty crib or an isolated, grieving woman, and stories about miscarriage were published in general-interest media outlets, not just women's magazines or health magazines. Family members were encouraged to grieve, to memorialize their losses through funerals and other rituals, and to think of themselves as being parents. This shift to recognizing these emotional responses was partly due to medical and political successes, which created an expectation that pregnancies are typically planned and safe, and to women's demands that their emotional reactions no longer be dismissed by the medical establishment.
This framing also reinforces the anti-abortion movement's belief that human life begins at conception or early in pregnancy, and that motherhood is a desirable life goal. The modern one-size-fits-all model of grief does not fit every woman's experience, and an expectation to perform grief creates unnecessary burdens for some women. The reframing of miscarriage as a private emotional experience brought less awareness of miscarriage and a sense of silence around the subject, especially compared to the public discussion of miscarriage during campaigns for access to birth control during the early 20th century, or the public campaigns to prevent miscarriages, stillbirths, and infant deaths by reducing industrial pollution during the 1970s.
In places where induced abortion is illegal or carries social stigma, suspicion may surround miscarriage, complicating an already sensitive issue.
In the 1960s, use of the word miscarriage (instead of spontaneous abortion) in Britain followed changes in legislation.
Developments in ultrasound technology in the early 1980s allowed physicians to identify earlier miscarriages.
According to French statutes, an infant born before the age of viability, determined to be 28 weeks, is not registered as a child. If birth occurs after this, the infant is granted a certificate that allows women who have given birth to a stillborn child to have a symbolic record of that child. This certificate can include a registered and given name to allow a funeral and acknowledgement of the event.
Other animals
Miscarriage occurs in all animals that experience pregnancy, though in such contexts it is more commonly referred to as a spontaneous abortion (the two terms are synonymous). There are a variety of known risk factors in non-human animals. For example, in sheep, miscarriage may be caused by crowding through doors or being chased by dogs. In cows, spontaneous abortion may be caused by contagious disease, such as brucellosis or Campylobacter, but often can be controlled by vaccination. In many species of sharks and rays, stress-induced miscarriage occurs frequently on capture.
Other diseases are also known to make animals susceptible to miscarriage. Spontaneous abortion occurs in pregnant prairie voles when their mate is removed and they are exposed to a new male, an example of the Bruce effect, although this effect is seen less in wild populations than in the laboratory. Female mice who had spontaneous abortions spent sharply more time with unfamiliar males in the period preceding the abortion than those who did not.
See also
Pregnancy and Infant Loss Remembrance Day
Citations
General and cited references
Hoffman, Barbara; J. Whitridge Williams (2012). Williams Gynecology (2nd ed.). New York: McGraw-Hill Medical. ISBN 978-0071716727.
External links |
Hurler syndrome | Hurler syndrome, also known as mucopolysaccharidosis type IH (MPS-IH), Hurler's disease, and formerly gargoylism, is a genetic disorder that results in the buildup of large sugar molecules called glycosaminoglycans (GAGs) in lysosomes. The inability to break down these molecules results in a wide variety of symptoms caused by damage to several different organ systems, including but not limited to the nervous system, skeletal system, eyes, and heart.
The underlying mechanism is a deficiency of alpha-L iduronidase, an enzyme responsible for breaking down GAGs. Without this enzyme, a buildup of dermatan sulfate and heparan sulfate occurs in the body. Symptoms appear during childhood, and early death usually occurs. Other, less severe forms of MPS type I include Hurler-Scheie syndrome (MPS-IHS) and Scheie syndrome (MPS-IS).
Hurler syndrome is classified as a lysosomal storage disease. It is clinically related to Hunter syndrome (MPS II); however, Hunter syndrome is X-linked, while Hurler syndrome is autosomal recessive.
Signs and symptoms
Children with Hurler syndrome may appear normal at birth and develop symptoms over the first years of life. Symptoms vary between patients.
One of the first abnormalities that may be detected is coarsening of the facial features; these symptoms can begin at 3–6 months of age. The head can be large with prominent frontal bones. The skull can be elongated. The nose can have a flattened nasal bridge with continuous nasal discharge. The eye sockets may be widely spaced, and the eyes may protrude from the skull. The lips can be large, and affected children may hold their jaws open constantly. Skeletal abnormalities occur by about age 6 months, but may not be clinically obvious until 10–14 months. Patients may experience debilitating spine and hip deformities, carpal tunnel syndrome, and joint stiffness. Patients may be of normal height in infancy, but stop growing by the age of 2 years. They may not reach a height of greater than 4 feet.
Other early symptoms may include inguinal and umbilical hernias. These may be present at birth, or they may develop within the first months of life. Clouding of the cornea and retinal degeneration may occur within the first year of life, leading to blindness. Enlarged liver and spleen are common. There is no organ dysfunction, but GAG deposition in these organs may lead to a massive increase in size. Patients may also have diarrhea. Aortic valve disease may occur.
Airway obstruction is frequent, usually secondary to abnormal cervical vertebrae. Upper and lower respiratory tract infections can be frequent.
Developmental delay may become apparent by age 1–2 years, with a maximum functional age of 2–4 years. Progressive deterioration follows. Most children develop limited language capabilities. Death usually occurs by age 10.
Genetics
Children with Hurler syndrome carry two defective copies of the IDUA gene, which has been mapped to the 4p16.3 site on chromosome 4. This is the gene which encodes the protein iduronidase. As of 2018, more than 201 different mutations in the IDUA gene have been shown to cause MPS I.
Because Hurler syndrome is an autosomal recessive disorder, affected persons have two nonworking copies of the gene. A person born with one normal copy and one defective copy is called a carrier. They will produce less α-L-iduronidase than an individual with two normal copies of the gene. The reduced production of the enzyme in carriers, however, remains sufficient for normal function; the person should not show any symptoms of the disease.
Mechanisms
The IDUA gene is responsible for encoding an enzyme called alpha-L-iduronidase. Through hydrolysis, alpha-L-iduronidase is responsible for breaking down a molecule called unsulfated alpha-L-iduronic acid. This is a uronic acid found in the GAGs dermatan sulfate and heparan sulfate. The alpha-L-iduronidase enzyme is located in lysosomes. Without sufficient enzymatic function, these GAGs cannot be digested properly.
Diagnosis
Diagnosis often can be made through clinical examination and urine tests (excess mucopolysaccharides are excreted in the urine). Enzyme assays (testing a variety of cells or body fluids in culture for enzyme deficiency) are also used to provide definitive diagnosis of one of the mucopolysaccharidoses. Prenatal diagnosis using amniocentesis and chorionic villus sampling can verify if a fetus either carries a copy of the defective gene or is affected with the disorder. Genetic counseling can help parents who have a family history of the mucopolysaccharidoses determine if they are carrying the mutated gene that causes the disorders.
Classification
All members of the mucopolysaccharidosis family are also lysosomal storage diseases. Mucopolysaccharidosis type I (MPS I) is divided into three subtypes based on severity of symptoms. All three types result from the absence or decreased functioning of the same enzyme. MPS-IH (Hurler syndrome) is the most severe of the MPS I subtypes. The other two types are MPS-IS (Scheie syndrome) and MPS-IHS (Hurler-Scheie syndrome).
Because of the substantial overlap between Hurler syndrome, Hurler-Scheie syndrome, and Scheie syndrome, some sources consider these terms to be outdated. Instead, MPS I may be divided into "severe" and "attenuated" forms.
Treatment
There is currently no cure for Hurler syndrome. Enzyme replacement therapy with iduronidase (Aldurazyme) may improve pulmonary function and mobility. It can reduce the amount of carbohydrates being improperly stored in organs. Surgical correction of hand and foot deformities may be necessary. Corneal surgery may help alleviate vision problems.
Bone marrow transplantation (BMT) and umbilical cord blood transplantation (UCBT) can be used as treatments for MPS I. BMT from siblings with identical HLA genes and from relatives with similar HLA genes can significantly improve survival, cognitive function, and physical symptoms. Patients can develop graft-versus-host disease; this is more likely with non-sibling donors. In a 1998 study, children with HLA-identical sibling donors had a 5-year survival of 75%; children with non-sibling donors had a 5-year survival of 53%.
Children often lack access to a suitable bone marrow donor. In these cases, UCBT from unrelated donors can increase survival, decrease physical signs of the disease, and improve cognition. Complications from this treatment may include graft-versus-host disease.
Prognosis
A British study from 2008 found a median estimated life expectancy of 8.7 years for patients with Hurler syndrome. In comparison, the median life expectancy for all forms of MPS type I was 11.6 years. Patients who received successful bone marrow transplants had a 2-year survival rate of 68% and a 10-year survival rate of 64%. Patients who did not receive bone marrow transplants had a significantly reduced lifespan, with a median age of 6.8 years.
Epidemiology
Hurler syndrome has an overall frequency of one per 100,000. Combined, all of the mucopolysaccharidoses have a frequency of approximately one in every 25,000 births in the United States.
Research
Gene therapy
A great deal of interest exists in treating MPS I with gene therapy. In animal models, delivery of the iduronidase gene has been accomplished with retrovirus, adenovirus, adeno-associated virus, and plasmid vectors. Mice and dogs with MPS I have been successfully treated with gene therapy. Most vectors can correct the disease in the liver and spleen, and can correct brain effects with a high dosage. Gene therapy has improved survival, neurological, and physical symptoms; however, some animals have developed unexplained liver tumors. If safety issues can be resolved, gene therapy may provide an alternative human treatment for MPS disorders in the future.
Sangamo Therapeutics, headquartered in Richmond, California, is currently conducting a clinical trial involving gene editing using zinc finger nuclease (ZFN) for the treatment of MPS I.
History
In 1919, Gertrud Hurler, a German pediatrician, described a syndrome involving corneal clouding, skeletal abnormalities, and mental retardation. A similar disease of "gargoylism" had been described in 1917 by Charles A. Hunter. Hurler did not mention Hunter's paper. Because of the communications interruptions caused by World War I, it is likely that she was unaware of his study. Hurler syndrome now refers to MPS IH, while Hunter syndrome refers to MPS II. In 1962, a milder form of MPS I was identified by Scheie, leading to the designation of Scheie syndrome.
See also
Hunter syndrome (MPS II)
Sanfilippo syndrome (MPS III)
Morquio syndrome (MPS IV)
Maroteaux-Lamy syndrome (MPS VI)
References
External links
GeneReview/NIH/UW entry on Mucopolysaccharidosis Type I |
Hunter syndrome | Hunter syndrome, or mucopolysaccharidosis type II (MPS II), is a rare genetic disorder in which large sugar molecules called glycosaminoglycans (or GAGs or mucopolysaccharides) build up in body tissues. It is a form of lysosomal storage disease. Hunter syndrome is caused by a deficiency of the lysosomal enzyme iduronate-2-sulfatase (I2S). The lack of this enzyme causes heparan sulfate and dermatan sulfate to accumulate in all body tissues. Hunter syndrome is the only MPS syndrome to exhibit X-linked recessive inheritance.
The symptoms of Hunter syndrome are comparable to those of MPS I. It causes abnormalities in many organs, including the skeleton, heart, and respiratory system. In severe cases, this leads to death during the teenage years. Unlike MPS I, corneal clouding is not associated with this disease.
Signs and symptoms
Hunter syndrome may present with a wide variety of phenotypes. It has traditionally been categorized as either "mild" or "severe" depending on the presence of central nervous system symptoms, but this is an oversimplification. Patients with "attenuated" or "mild" forms of the disease may still have significant health issues. For severely affected patients, the clinical course is relatively predictable; patients will normally die at an early age. For those with milder forms of the disease, a wider variety of outcomes exists. Many live into their 20s and 30s, but some may have near-normal life expectancies. Cardiac and respiratory abnormalities are the usual cause of death for patients with milder forms of the disease.
The symptoms of Hunter syndrome (MPS II) are generally not apparent at birth. Often, the first symptoms may include abdominal hernias, ear infections, runny noses, and colds. As the buildup of GAGs continues throughout the cells of the body, signs of MPS II become more visible. The physical appearance of many children with the syndrome includes a distinctive coarseness in their facial features, including a prominent forehead, a nose with a flattened bridge, and an enlarged tongue. They may also have a large head, as well as an enlarged abdomen. In severe cases of MPS II, a diagnosis is often made between the ages of 18 and 36 months. In milder cases, patients present similarly to children with Hurler-Scheie syndrome, and a diagnosis is usually made between the ages of 4 and 8 years.
The continued storage of GAGs leads to abnormalities in multiple organ systems. After 18 months, children with severe MPS II may experience developmental decline and progressive loss of skills. The thickening of the heart valves and walls of the heart can result in progressive decline in cardiac function. The walls of the airway may become thickened as well, leading to obstructive airway disease.
As the liver and spleen grow larger with time, the abdomen may become distended, making hernias more noticeable. All major joints may be affected by MPS II, leading to joint stiffness and limited motion. Progressive involvement of the finger and thumb joints results in decreased ability to pick up small objects. The effects on other joints, such as hips and knees, can make walking normally increasingly difficult. If carpal tunnel syndrome develops, a further decrease in hand function can occur. The bones themselves may be affected, resulting in short stature. In addition, pebbly, ivory-colored skin lesions may be found on the upper arms, legs, and upper back of some affected people. These skin lesions are considered pathognomonic for the disease. Finally, the storage of GAGs in the brain can lead to delayed development with subsequent intellectual disability and progressive loss of function.
The age at onset of symptoms and the presence or absence of behavioral disturbances are predictive factors of ultimate disease severity in very young patients. Behavioral disturbances can often mimic combinations of symptoms of attention deficit hyperactivity disorder, autism, obsessive-compulsive disorder, and/or sensory processing disorder, although the existence and level of symptoms differ in each affected child. They often also include a lack of an appropriate sense of danger, and aggression. The behavioral symptoms of MPS II generally precede neurodegeneration and often increase in severity until the mental handicaps become more pronounced. By the time of death, most children with severe MPS II have severe mental disabilities and are completely dependent on their caretakers.
Genetics
Since Hunter syndrome is an X-linked recessive disorder, it preferentially affects male patients. The IDS gene is located on the X chromosome. The IDS gene encodes an enzyme called iduronate-2-sulfatase (I2S). A lack of this enzyme leads to a buildup of GAGs, which causes the symptoms of MPS II. Females generally have two X chromosomes, whereas males generally have one X chromosome that they inherit from their mother and one Y chromosome that they inherit from their father.
If a female inherits one copy of the mutant allele for MPS II, she will usually have a normal copy of the IDS gene which can compensate for the mutant allele. This is known as being a genetic carrier. A male who inherits a defective X chromosome, though, usually does not have another X chromosome to compensate for the mutant gene. Thus, a female would need to inherit two mutant genes to develop MPS II, while a male patient only needs to inherit one mutant gene. A female carrier can be affected due to X-inactivation, which is a random process.
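As a rough illustration (not part of the source article), the X-linked inheritance logic described above can be sketched by enumerating the four equally likely allele combinations for a hypothetical carrier mother and unaffected father; the allele labels are informal notation, not standard genetic nomenclature:

```python
from itertools import product

# Hypothetical sketch of X-linked recessive inheritance: a carrier
# mother (one mutant IDS allele, written "Xm") and an unaffected
# father ("X" and "Y").
mother = ["X", "Xm"]   # the mother passes on one of her two X chromosomes
father = ["X", "Y"]    # the father passes on either his X or his Y

# Each of the four combinations is equally likely.
offspring = [tuple(sorted(pair)) for pair in product(mother, father)]

for child in offspring:
    if "Y" in child:   # male child: a single mutant X is enough
        status = "affected" if "Xm" in child else "unaffected"
    else:              # female child: one mutant X makes her a carrier
        status = "carrier" if "Xm" in child else "unaffected"
    print(child, status)
```

Enumerating the combinations shows why the disorder preferentially affects males: on average half the sons of a carrier mother are affected, while half the daughters are carriers rather than affected.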
Pathophysiology
The human body depends on a vast array of biochemical reactions to support critical functions. One of these functions is the breakdown of large biomolecules. The failure of this process is the underlying problem in Hunter syndrome and related storage disorders.
The biochemistry of Hunter syndrome is related to a problem in a part of the connective tissue known as the extracellular matrix, which is made up of a variety of sugars and proteins. It helps to form the architectural framework of the body. The matrix surrounds the cells of the body in an organized meshwork and functions as the glue that holds the cells of the body together. One of the components of the extracellular matrix is a molecule called a proteoglycan. Like many components of the body, proteoglycans need to be broken down and replaced. When the body breaks down proteoglycans, among the resulting products are mucopolysaccharides (GAGs).
In MPS II, the problem concerns the breakdown of two GAGs: dermatan sulfate and heparan sulfate. The first step in the breakdown of dermatan sulfate and heparan sulfate requires the lysosomal enzyme iduronate-2-sulfatase, or I2S. In people with MPS II, this enzyme is either partially or completely inactive. As a result, GAGs build up in cells throughout the body, particularly in tissues that contain large amounts of dermatan sulfate and heparan sulfate. The rate of GAG buildup is not the same for all people with MPS II, resulting in a wide spectrum of medical problems.
Diagnosis
The first laboratory screening test for an MPS disorder is a urine test for GAGs. Abnormal values indicate that an MPS disorder is likely. The urine test can occasionally be normal even if the child actually has an MPS disorder. A definitive diagnosis of MPS II is made by measuring I2S activity in serum, white blood cells, or fibroblasts from a skin biopsy. In some people with MPS II, analysis of the I2S gene can determine clinical severity.
Prenatal diagnosis is routinely available by measuring I2S enzymatic activity in amniotic fluid or in chorionic villus tissue. If a specific mutation is known to run in the family, prenatal molecular genetic testing can be performed. DNA sequencing can reveal if someone is a carrier for the disease.
Treatment
Because of the wide variety of phenotypes, the treatment for this disorder is specifically determined for each patient. Until recently, no effective therapy for MPS II was available, so palliative care was used. Recent advances, though, have led to medications that can improve survival and well-being in people with MPS II.
Enzyme replacement therapy
Idursulfase, a purified form of the missing lysosomal enzyme, underwent clinical trial in 2006 and was subsequently approved by the United States Food and Drug Administration as an enzyme replacement treatment for MPS II. Idursulfase beta, another enzyme replacement treatment, was approved in Korea by the Ministry of Food and Drug Safety.
Enzyme replacement therapy (ERT) with idursulfase has been shown to improve many signs and symptoms of MPS II, especially if started early in the disease. After administration, the enzyme is transported into cells to break down GAGs, but because the medication cannot cross the blood–brain barrier, it is not expected to lead to cognitive improvement in patients with severe central nervous system symptoms. Even with ERT, management of the various organ problems by a wide variety of medical specialists is necessary.
Bone-marrow and stem-cell transplantation
Bone-marrow transplantation and hematopoietic stem-cell transplantation (HSCT) have been used as treatments in some studies. While transplantation has provided benefits for many organ systems, it has not been shown to improve the neurological symptoms of the disease. Although HSCT has shown promise in the treatment of other MPS disorders, its results have been unsatisfactory so far in the treatment of MPS II. ERT has been shown to lead to better outcomes in MPS II patients.
Gene editing therapy
In February 2019, medical scientists working with Sangamo Therapeutics, headquartered in Richmond, California, announced the first "in body" human gene editing therapy to permanently alter DNA, administered to a patient with MPS II. Clinical trials by Sangamo involving gene editing using zinc finger nucleases were ongoing as of February 2019.
Prognosis
Earlier onset of symptoms is linked to a worse prognosis. For children who exhibit symptoms between the ages of 2 and 4, death usually occurs by the age of 15 to 20 years. The cause of death is usually due to neurological complications, obstructive airway disease, and cardiac failure. If patients have minimal neurologic involvement, they may survive into their 50s or beyond.
Epidemiology
An estimated 2,000 people have MPS II worldwide, 500 of whom live in the United States. A study in the United Kingdom indicated an incidence of around one in 130,000 male live births.
History
The syndrome is named after physician Charles A. Hunter (1873–1955), who first described it in 1917.
Research
Beginning in 2010, a phase I/II clinical trial evaluated intrathecal injections of a more concentrated dose of idursulfase than the intravenous formulation used in enzyme replacement therapy infusions, in the hope of preventing the cognitive decline associated with the severe form of the condition. Results were reported in October 2013, and a phase II/III clinical trial began in 2014. In 2017, a 44-year-old patient with MPS II was treated with gene therapy in an attempt to prevent further damage by the disease. This was the first case of gene therapy being used in vivo in humans. The study was extended to six patients in 2018.
Society
On 24 July 2004, Andrew Wragg, 38, of Worthing, West Sussex, England, suffocated his 10-year-old son Jacob with a pillow because of the boy's disabilities related to MPS II. A military security specialist, Wragg also claimed that he was under stress after returning from the war in Iraq. He denied murdering Jacob, but pleaded guilty to manslaughter by reason of diminished capacity. Mrs Justice Anne Rafferty called the case "exceptional", gave Wragg a two-year prison sentence for manslaughter, then suspended his sentence for two years. Rafferty said "nothing [was] to be gained" from sending Wragg to prison for the crime.
See also
Hurler syndrome (MPS I)
Sanfilippo syndrome (MPS III)
Morquio syndrome (MPS IV)
Prenatal testing
Genetic counseling
References
External links
Media related to Hunter syndrome at Wikimedia Commons
GeneReview/NIH/UW entry on Mucopolysaccharidosis Type II |
Maroteaux–Lamy syndrome | Maroteaux–Lamy syndrome, or Mucopolysaccharidosis Type VI (MPS-VI), is an inherited disease caused by a deficiency in the enzyme arylsulfatase B (ARSB). ARSB is responsible for the breakdown of large sugar molecules called glycosaminoglycans (GAGs, also known as mucopolysaccharides). In particular, ARSB breaks down dermatan sulfate and chondroitin sulfate. Because people with MPS-VI lack the ability to break down these GAGs, these chemicals build up in the lysosomes of cells. MPS-VI is therefore a type of lysosomal storage disease.
Signs and symptoms
Unlike children with many other MPS diseases, children with Maroteaux–Lamy syndrome usually have normal intelligence. They share many of the physical symptoms found in Hurler syndrome. Maroteaux–Lamy syndrome has a variable spectrum of severe symptoms. Neurological complications include clouded corneas, deafness, thickening of the dura (the membrane that surrounds and protects the brain and spinal cord), and pain caused by compressed or traumatized nerves and nerve roots. Signs appear early in the affected child's life, with one of the first symptoms often being a significantly delayed start to walking. Growth begins normally, but children usually stop growing by age 8. By age 10, children often develop a shortened trunk, crouched stance, and restricted joint movement. In more severe cases, children also develop a protruding abdomen and forward-curving spine. Skeletal changes, particularly in the pelvis, are progressive and limit movement. Many children also have umbilical or inguinal hernias. Nearly all children have some form of heart disease, usually involving the heart valves.
Genetics
This disorder is inherited in an autosomal recessive pattern. People with two working copies of the gene are unaffected. People with one working copy are genetic carriers of Maroteaux–Lamy syndrome; they have no symptoms but may pass the defective gene to their children. People with two defective copies will have MPS-VI.
Diagnosis
A urinalysis will show elevated levels of dermatan sulfate in the urine. A blood sample may be taken to assess the level of ARSB activity, and dermal fibroblast cells may also be examined for ARSB activity. Molecular genetic testing can identify the specific mutation causing MPS-VI, but it is only available at specialized laboratories.
Treatment
The treatment of Maroteaux–Lamy syndrome is symptomatic and individually tailored, and a variety of specialists may be needed. In 2005, the FDA approved the orphan drug galsulfase (Naglazyme) for the treatment of Maroteaux–Lamy syndrome. Galsulfase is an enzyme replacement therapy (ERT) in which the missing ARSB enzyme is replaced with a recombinant version. In addition to ERT, various procedures can alleviate the symptoms of MPS-VI. Surgery may be necessary to treat abnormalities such as carpal tunnel syndrome, skeletal malformations, spinal cord compression, hip degeneration, and hernias. Some patients may need heart valve replacement. It may be necessary to remove the tonsils and/or adenoids, and severe tracheomalacia may require surgery. Physical therapy and exercise may improve joint stiffness. Hydrocephalus may be treated by the insertion of a shunt to drain excess cerebrospinal fluid. A corneal transplantation can be performed for individuals with severe corneal clouding. A myringotomy, in which a small incision is made in the eardrum, may be helpful for patients with fluid accumulation in the ears. Hearing aids may be useful, and speech therapy may help children with hearing loss communicate more effectively. Certain medications can be used to treat heart abnormalities, asthma-like episodes, and chronic infections associated with MPS-VI. Anti-inflammatory medications may be of benefit. Respiratory insufficiency may require treatment with supplemental oxygen, and aggressive management of airway secretions is necessary as well. Sleep apnea may be treated with a CPAP or BPAP device.
Prognosis
The life expectancy of individuals with MPS VI varies depending on the severity of symptoms. Without treatment, some individuals may survive through late childhood or early adolescence. People with milder forms of the disorder usually live into adulthood, although they may have reduced life expectancy. Heart disease and airway obstruction are major causes of death in people with Maroteaux–Lamy syndrome.
Epidemiology
Males and females are affected equally. Studies have shown a birth prevalence between 1 in 43,261 and 1 in 1,505,160 live births. These numbers are likely an underestimate of the true number of cases, because newborn screening for MPS-VI is not widely available. Although studies have not revealed an ethnic predisposition, certain groups with a high degree of consanguinity have a higher prevalence of MPS-VI. For example, one study of a population of Turkish immigrants in Germany revealed that this group had a rate of 1 in 43,261; this was approximately ten times higher than the rate of MPS-VI in non-Turkish Germans. In different populations worldwide, MPS-VI made up between 2 and 18.5% of all MPS disorders.
History
It is named after Pierre Maroteaux (1926–2019) and his mentor Maurice Emil Joseph Lamy (1895–1975), both French physicians.
Society and culture
Keenan Cahill is a YouTuber with Maroteaux–Lamy syndrome. Isabel Bueso, a Guatemalan woman with Maroteaux–Lamy syndrome who has been receiving treatment at UCSF Benioff Children's Hospital, was at risk of deportation from the United States after the Trump administration ended the deferred action program in August 2019. In December 2019, she was granted another deferral of two years.
See also
Hurler syndrome (MPS I)
Hunter syndrome (MPS II)
Sanfilippo syndrome (MPS III)
Morquio syndrome (MPS IV)
References
== External links == |
Sly syndrome | Sly syndrome, also called mucopolysaccharidosis type VII (MPS-VII), is an autosomal recessive lysosomal storage disease caused by a deficiency of the enzyme β-glucuronidase. This enzyme is responsible for breaking down large sugar molecules called glycosaminoglycans (GAGs, also known as mucopolysaccharides). The inability to break down GAGs leads to their buildup in many tissues and organs of the body. The severity of the disease can vary widely.
Signs and symptoms
The most severe cases of Sly syndrome can result in hydrops fetalis, which leads to fetal death or death soon after birth. Some people with Sly syndrome begin to have symptoms in early childhood. Symptoms can include an enlarged head, fluid buildup in the brain, coarse facial features, an enlarged tongue, an enlarged liver, an enlarged spleen, problems with the heart valves, and abdominal hernias. People with Sly syndrome may also suffer from sleep apnea, frequent lung infections, and problems with vision secondary to cloudy corneas. Sly syndrome causes various musculoskeletal abnormalities that worsen with age. These can include short stature, joint deformities, dysostosis multiplex, spinal stenosis, and carpal tunnel syndrome. While some individuals have developmental delay, others may have normal intelligence. However, the accumulation of GAGs in the brain usually leads to a slowing of development from ages 1–3, and then a loss of previously learned skills until death.
Genetics
The defective gene responsible for Sly syndrome is located on chromosome 7.
Diagnosis
Most people with Sly syndrome will have elevated levels of GAGs in the urine, but a confirmatory test is necessary for diagnosis. Skin cells and red blood cells of affected people will have low levels of β-glucuronidase activity. Sly syndrome can also be diagnosed through prenatal testing.
Treatment
Vestronidase alfa-vjbk (trade name Mepsevii), an enzyme replacement therapy based on a recombinant form of human β-glucuronidase, is approved by the U.S. Food and Drug Administration for the treatment of Sly syndrome. Hematopoietic stem cell transplantation (HSCT) has been used to treat other types of MPS diseases, but is not yet available for MPS-VII. Animal experiments suggest that HSCT may be an effective treatment for MPS-VII in humans.
Prognosis
The life expectancy of individuals with MPS VII varies depending on the symptoms. Some individuals are stillborn, while some may survive into adulthood.
Epidemiology
MPS-VII is one of the rarest forms of MPS. It occurs in less than 1 in 250,000 births. As a family, MPS diseases occur in 1 in 25,000 births, and the larger family of lysosomal storage diseases occur in 1 out of 7,000 to 8,000 births.
History
Sly syndrome was originally discovered in 1972. It was named after its discoverer William S. Sly, an American biochemist who has spent nearly his entire academic career at Saint Louis University.
References
== External links == |
Mucormycosis | Mucormycosis, also known as black fungus, is a serious fungal infection, classed in its sinus form as a fulminant fungal sinusitis, that usually occurs in people who are immunocompromised. It is generally curable only when diagnosed early. Symptoms depend on where in the body the infection occurs. It most commonly infects the nose, sinuses, eye, and brain, resulting in a runny nose, one-sided facial swelling and pain, headache, fever, blurred vision, bulging or displacement of the eye (proptosis), and tissue death. Other forms of the disease may infect the lungs, stomach and intestines, and skin. It is spread by spores of molds of the order Mucorales, most often through inhalation, contaminated food, or contamination of open wounds. These fungi are common in soils, decomposing organic matter (such as rotting fruit and vegetables), and animal manure, but usually do not affect people. It is not transmitted between people. Risk factors include diabetes with persistently high blood sugar levels or diabetic ketoacidosis, low white cell counts, cancer, organ transplant, iron overload, kidney problems, long-term use of steroids or immunosuppressants, and, to a lesser extent, HIV/AIDS. Diagnosis is by biopsy and culture, with medical imaging to help determine the extent of disease. It may appear similar to aspergillosis. Treatment is generally with amphotericin B and surgical debridement. Preventive measures include wearing a face mask in dusty areas, avoiding contact with water-damaged buildings, and protecting the skin from exposure to soil, such as when gardening or doing certain outdoor work. Mucormycosis tends to progress rapidly and is fatal in about half of sinus cases and almost all cases of the widespread type. Mucormycosis is usually rare, affecting fewer than 2 people per million each year in San Francisco, but is now around 80 times more common in India. People of any age may be affected, including premature infants.
The first known case of mucormycosis was possibly the one described by Friedrich Küchenmeister in 1855. The disease has been reported in natural disasters, including the 2004 Indian Ocean tsunami and the 2011 Missouri tornado. During the COVID-19 pandemic, an association between mucormycosis and COVID-19 was reported. This association is thought to relate to reduced immune function during the course of the illness, and may also be related to glucocorticoid therapy for COVID-19. A rise in cases was particularly noted in India.
Classification
Generally, mucormycosis is classified into five main types according to the part of the body affected. A sixth type has been described as mucormycosis of the kidney, or miscellaneous, i.e., mucormycosis at other sites, although less commonly affected.
Sinuses and brain (rhinocerebral); most common in people with poorly controlled diabetes and in people who have had a kidney transplant.
Lungs (pulmonary); the most common type of mucormycosis in people with cancer and in people who have had an organ transplant or a stem cell transplant.
Stomach and intestine (gastrointestinal); more common among young, premature, and low-birth-weight infants who have had antibiotics, surgery, or medications that lower the body's ability to fight infection.
Skin (cutaneous); after a burn, or other skin injury, in people with leukaemia, poorly controlled diabetes, graft-versus-host disease, HIV and intravenous drug use.
Widespread (disseminated); when the infection spreads to other organs via the blood.
Signs and symptoms
Signs and symptoms of mucormycosis depend on the location of the infection in the body. Infection usually begins in the mouth or nose and enters the central nervous system via the eyes. If the fungal infection begins in the nose or sinuses and extends to the brain, symptoms and signs may include one-sided eye pain or headache, and may be accompanied by pain in the face, numbness, fever, loss of smell, and a blocked or runny nose. The person may appear to have sinusitis. The face may look swollen on one side, with rapidly progressing "black lesions" across the nose or upper inside of the mouth. One eye may look swollen and bulging, and vision may be blurred. Fever, cough, chest pain, and difficulty breathing, or coughing up blood, can occur when the lungs are involved. Stomach ache, nausea, vomiting, and bleeding can occur when the gastrointestinal tract is involved. Affected skin may appear as a dusky reddish tender patch with a darkening centre due to tissue death; there may be an ulcer, and it can be very painful. Invasion of the blood vessels can result in thrombosis and subsequent death of surrounding tissue due to loss of blood supply. Widespread (disseminated) mucormycosis typically occurs in people who are already sick from other medical conditions, so it can be difficult to know which symptoms are related to mucormycosis. People with disseminated infection in the brain can develop changes in mental status or lapse into a coma.
Cause
Mucormycosis is a fungal infection caused by fungi in the order Mucorales. In most cases it is due to an invasion of the genera Rhizopus and Mucor, common bread molds. Most fatal infections are caused by Rhizopus oryzae. It is less often due to Lichtheimia, and rarely due to Apophysomyces. Others include Cunninghamella, Mortierella, and Saksenaea. The fungal spores are present in the environment, can be found on, for instance, moldy bread and fruit, and are breathed in frequently, but cause disease only in some people. In addition to being breathed in and deposited in the nose, sinuses, and lungs, the spores can also enter the skin via blood or directly through a cut or open wound, or grow in the intestine if eaten. Once deposited, the fungus grows branch-like filaments which invade blood vessels, causing clots to form and surrounding tissues to die. Other reported causes include contaminated wound dressings; mucormycosis has been reported following the use of elastoplast and the use of tongue depressors for holding intravenous catheters in place. Outbreaks have also been linked to hospital bed sheets, negative-pressure rooms, water leaks, poor ventilation, contaminated medical equipment, and building works.
Risk factors
Predisposing factors for mucormycosis include conditions where people are less able to fight infection, have a low neutrophil count, or have metabolic acidosis. Risk factors include poorly controlled diabetes mellitus (particularly diabetic ketoacidosis), organ transplant, iron overload, cancers such as lymphomas, kidney failure, long-term corticosteroid and immunosuppressive therapy, liver disease, and severe malnutrition. Other risk factors include tuberculosis (TB), deferoxamine, and, to a lesser extent, HIV/AIDS. Cases of mucormycosis in fit and healthy people are rare. Corticosteroids are commonly used in the treatment of COVID-19 and reduce the damage caused by the body's own immune response to the virus. They are immunosuppressant and increase blood sugar levels in both diabetic and non-diabetic patients. Both of these effects may contribute to cases of mucormycosis.
Mechanism
Most people are frequently exposed to Mucorales without developing the disease. Mucormycosis is generally spread by breathing in spores of Mucorales molds, eating food contaminated by them, or getting spores into an open wound; it is not transmitted between people. The precise mechanism by which diabetics become susceptible is unclear. In vivo, high sugar alone does not permit the growth of the fungus, but acidosis alone does. People with high sugar levels frequently have high iron levels, also known to be a risk factor for developing mucormycosis. In people on deferoxamine, the iron removed from the body is captured by siderophores on Rhizopus species, which use the iron to grow.
Diagnosis
There is no blood test that can confirm the diagnosis. Diagnosis requires identifying the mold in the affected tissue by biopsy and confirming it with a fungal culture. Because the causative fungi occur all around, a culture alone is not decisive. Tests may also include culture and direct detection of the fungus in lung fluid, blood, serum, plasma and urine. Blood tests include a complete blood count to look specifically for neutropenia. Other blood tests include iron levels, blood glucose, bicarbonate, and electrolytes. Endoscopic examination of the nasal passages may be needed.
Imaging
Imaging is often performed, such as CT scans of the lungs and sinuses. Signs on chest CT scans, such as nodules, cavities, halo signs, pleural effusion, and wedge-shaped shadows showing invasion of blood vessels, may suggest a fungal infection but do not confirm mucormycosis. A reverse halo sign in a person with a blood cancer and a low neutrophil count is highly suggestive of mucormycosis. CT images can be useful to distinguish mucormycosis of the orbit from cellulitis of the orbit, but images may appear identical to those of aspergillosis. MRI may also be useful; MRI with gadolinium contrast is the investigation of choice in rhino-orbito-cerebral mucormycosis.
Culture and biopsy
To confirm the diagnosis, biopsy samples can be cultured. Culture from biopsy samples does not always give a result as the organism is very fragile. To precisely identify the species requires an expert. The appearance of the fungus under the microscope will determine the genus and species. The appearances can vary but generally show wide, ribbon-like filaments that generally do not have septa and that—unlike in aspergillosis—branch at right angles, resembling antlers of a moose, which may be seen to be invading blood vessels.
Other
Matrix-assisted laser desorption/ionization may be used to identify the species. A blood sample from an artery may be useful to assess for metabolic acidosis.
Differential diagnosis
Other filamentous fungi may however look similar. It may be difficult to differentiate from aspergillosis. Other possible diagnoses include anthrax, cellulitis, bowel obstruction, ecthyma gangrenosum, lung cancer, clot in lungs, sinusitis, tuberculosis and fusariosis.
Prevention
Preventive measures include wearing a face mask in dusty areas, washing hands, avoiding direct contact with water-damaged buildings, and protecting skin, feet, and hands where there is exposure to soil or manure, such as gardening or certain outdoor work. In high risk groups, such as organ transplant patients, antifungal drugs may be given as a preventative.
Treatment
Treatment involves a combination of antifungal drugs, surgically removing infecting tissue and correcting underlying medical problems, such as diabetic ketoacidosis.
Medication
Once mucormycosis is suspected, amphotericin B at an initial dose of 1 mg is given slowly over 10–15 minutes into a vein, then given as a once-daily dose according to body weight for the next 14 days; it may need to be continued for longer. Isavuconazole and posaconazole are alternatives.
Surgery
Surgery can be very drastic, and, in some cases of disease involving the nasal cavity and the brain, removal of infected brain tissue may be required. Removal of the palate, nasal cavity, or eye structures can be very disfiguring. Sometimes more than one operation is required.
Other considerations
The disease must be monitored carefully for any signs of reemergence. Treatment also requires correcting sugar levels and improving neutrophil counts. Hyperbaric oxygen may be considered as an adjunctive therapy, because higher oxygen pressure increases the ability of neutrophils to kill the fungus. The efficacy of this therapy is uncertain.
Prognosis
Mucormycosis tends to progress rapidly and is fatal in about half of sinus cases, two thirds of lung cases, and almost all cases of the widespread type. Skin involvement carries the lowest mortality rate, of around 15%. Possible complications of mucormycosis include partial loss of neurological function, blindness, and clotting of blood vessels in the brain or lung. As treatment usually requires extensive and often disfiguring facial surgery, the effect on life after surviving, particularly with sinus and brain involvement, is significant.
Epidemiology
The true incidence and prevalence of mucormycosis may be higher than they appear. Mucormycosis is rare, affecting fewer than 1.7 people per million population each year in San Francisco. It is around 80 times more prevalent in India, where it is estimated that there are around 0.14 cases per 1,000 population and where its incidence has been rising. The causative fungi are highly dependent on location: Apophysomyces variabilis has its highest prevalence in Asia, and Lichtheimia spp. in Europe. Mucormycosis is the third most common serious fungal infection in people, after aspergillosis and candidiasis. Diabetes is the main underlying disease in low- and middle-income countries, whereas blood cancers and organ transplantation are the more common underlying problems in developed countries. As new immunomodulating drugs and diagnostic tests are developed, the statistics for mucormycosis have been changing. In addition, the figures change as new genera and species are identified and new risk factors, such as tuberculosis and kidney problems, are reported.
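As a rough sanity check on the figures quoted in this section (fewer than 1.7 cases per million per year in San Francisco, and about 0.14 cases per 1,000 population in India), the "around 80 times" ratio can be recomputed directly; the numbers below are taken from the text, not from independent data:

```python
# Incidence figures as quoted in this section (assumptions, not new data)
sf_per_million = 1.7           # San Francisco: <1.7 cases per million per year
india_per_1000 = 0.14          # India: ~0.14 cases per 1,000 population

# Convert India's rate to cases per million for a like-for-like comparison
india_per_million = india_per_1000 * 1000
ratio = india_per_million / sf_per_million

print(round(india_per_million), round(ratio))  # 140 82
```

The recomputed factor of roughly 82 is consistent with the "around 80 times" figure stated above.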
COVID-19–associated mucormycosis
During the COVID-19 pandemic in India, the Indian government reported that more than 11,700 people were receiving care for mucormycosis as of 25 May 2021. Many Indian media outlets called it "black fungus" because of the black discoloration of dead and dying tissue the fungus causes. Even before the COVID-19 pandemic, rates of mucormycosis in India were estimated to be about 70 times higher than in the rest of the world. Due to the rapidly growing number of cases, some Indian state governments declared it an epidemic. One treatment was a daily intravenous injection of the antifungal amphotericin B for eight weeks, which was in short supply. The injection could be standard amphotericin B deoxycholate or the liposomal form; the liposomal form cost more but was considered "safer, more effective and [with] lesser side effects". A major obstacle to the use of antifungal drugs against black fungus is the lack of clinical trials.
Recurrence of mucormycosis during COVID-19 second wave in India
Before COVID-19, mucormycosis was a very rare infection, even in India; it was rare enough that an ENT (ear, nose, and throat) doctor might not see a single case during training, and documentation on its treatment is correspondingly limited. Pre-pandemic, only a small number of ENT surgeons in India had substantial experience with mucormycosis. The sudden rise in cases left most ENT doctors with little choice but to take on mucormycosis patients, as the few experts were fully occupied and untreated patients would die. Many of these doctors had to manage with minimal or no prior experience of the disease, which contributed to recurrence in the patients they treated. Even a highly experienced surgeon cannot guarantee that a patient is completely cured and will not relapse, and less experienced surgeons saw higher rates of recurrence. As a result, there were many recurrent cases of mucormycosis, although this received little attention from the media or the Indian government.
History
The first case of mucormycosis was possibly one described by Friedrich Küchenmeister in 1855. Fürbringer first described the disease in the lungs in 1876. In 1884, Lichtheim established the development of the disease in rabbits and described two species, Mucor corymbifera and Mucor rhizopodiformis, later known as Lichtheimia and Rhizopus, respectively. In 1943, its association with poorly controlled diabetes was reported in three cases with severe sinus, brain, and eye involvement. In 1953, Saksenaea vasiformis, found to cause several cases, was isolated from Indian forest soil, and in 1979, P. C. Misra examined soil from an Indian mango orchard, from which Apophysomyces, later found to be a major cause of mucormycosis, was isolated. Several species of Mucorales have since been described. When cases were reported in the United States in the mid-1950s, the author thought it to be a new disease resulting from the use of antibiotics, ACTH, and steroids. Until the latter half of the 20th century, the only available treatment was potassium iodide. In a review of cases involving the lungs diagnosed by flexible bronchoscopy between 1970 and 2000, survival was found to be better in those who received combined surgical and medical treatment, mostly with amphotericin B.
Naming
Arnold Paltauf coined the term "Mycosis Mucorina" in 1885, after describing a case with systemic symptoms involving the sinus, brain and gastrointestinal tract, following which the term "mucormycosis" became popular. "Mucormycosis" is often used interchangeably with "zygomycosis", a term made obsolete following changes in classification of the kingdom Fungi. The former phylum Zygomycota included Mucorales, Entomophthorales, and others. Mucormycosis describes infections caused by fungi of the order Mucorales.
COVID-19–associated mucormycosis
COVID-19–associated mucormycosis cases were reported during the first and second (Delta) waves of the pandemic in India, with the greatest number of cases during the Delta wave; no cases were reported during the Omicron wave. A number of cases of mucormycosis, aspergillosis, and candidiasis linked to immunosuppressive treatment for COVID-19 were reported during the COVID-19 pandemic in India in 2020 and 2021. One review in early 2021 relating to the association of mucormycosis and COVID-19 reported eight cases of mucormycosis: three from the U.S., two from India, and one case each from Brazil, Italy, and the UK. The most common underlying medical condition was diabetes. Most had been in hospital with severe breathing problems due to COVID-19, had recovered, and developed mucormycosis 10–14 days following treatment for COVID-19. Five had abnormal kidney function tests; three had involvement of the sinus, eye, and brain; three the lungs; one the gastrointestinal tract; and in one the disease was widespread. In two of the seven deaths, the diagnosis of mucormycosis was made at postmortem. That three had no traditional risk factors led the authors to question the use of steroids and immunosuppressive drugs, although there were also cases without diabetes or use of immunosuppressive drugs, and cases were reported even in children. In May 2021, the BBC reported increased cases in India. In a review of COVID-19-related eye problems, mucormycosis affecting the eyes was reported to occur up to several weeks following recovery from COVID-19. It was observed that people with COVID-19 recovered from mucormycosis somewhat more easily than non-COVID-19 patients, because, unlike in non-COVID-19 patients with severe diabetes, cancer, or HIV, the main cause of their immune suppression was temporary. Other countries affected included Pakistan, Nepal, Bangladesh, Russia, Uruguay, Paraguay, Chile, Egypt, Iran, Brazil, Iraq, Mexico, Honduras, Argentina, Oman, and Afghanistan.
One explanation for why the association has surfaced so remarkably in India is the combination of high rates of COVID-19 infection and high rates of diabetes. In May 2021, the Indian Council of Medical Research issued guidelines for recognising and treating COVID-19–associated mucormycosis. In India, as of 28 June 2021, 40,845 people had been confirmed to have mucormycosis, and 3,129 had died. Of these cases, 85.5% (34,940) had a history of infection with SARS-CoV-2, 52.69% (21,523) were on steroids, and 64.11% (26,187) had diabetes.
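The percentages in the Indian case counts just quoted can be checked directly against the raw numbers; the quick arithmetic below uses only the figures from this paragraph:

```python
total = 40845  # confirmed mucormycosis cases in India as of 28 June 2021

# Subgroup counts as reported, paired with the percentages quoted in the text
subgroups = {
    "history of SARS-CoV-2": (34940, 85.5),
    "on steroids":           (21523, 52.69),
    "diabetes":              (26187, 64.11),
}

for name, (count, quoted_pct) in subgroups.items():
    pct = 100 * count / total
    # The recomputed shares (85.54%, 52.69%, 64.11%) agree with the quoted
    # figures after rounding to the precision given in the text
    print(f"{name}: {pct:.2f}% (quoted {quoted_pct}%)")
```

All three recomputed shares match the quoted figures, so the counts and percentages in the paragraph are internally consistent.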
Society and culture
The disease has been reported in natural disasters and catastrophes, including the 2004 Indian Ocean tsunami and the 2011 Missouri tornado. The first international congress on mucormycosis was held in Chicago in 2010, set up by the Hank Schueuler 41 & 9 Foundation, which was established in 2008 for the research of children with leukaemia and fungal infections. A cluster of infections occurred in the wake of the 2011 Joplin tornado. By July 19, 2011, a total of 18 suspected cases of mucormycosis of the skin had been identified, of which 13 were confirmed. A confirmed case was defined as (1) a necrotizing soft-tissue infection requiring antifungal treatment or surgical debridement in a person injured in the tornado, (2) with illness onset on or after May 22, and (3) a positive fungal culture or histopathology and genetic sequencing consistent with a mucormycete. No additional cases related to that outbreak were reported after June 17. Ten people required admission to an intensive-care unit, and five died. In 2014, details of a lethal mucormycosis outbreak that occurred in 2008 emerged after television and newspaper reports responded to an article in a pediatric medical journal. Contaminated hospital linen was found to be spreading the infection. A 2018 study found that many freshly laundered hospital linens delivered to U.S. transplant hospitals were contaminated with Mucorales. Another study attributed an outbreak of hospital-acquired mucormycosis to a laundry facility supplying linens contaminated with Mucorales. The outbreak stopped when major changes were made at the laundry facility. The authors raised concerns about the regulation of healthcare linens.
Other animals
Mucormycosis in other animals is similar, in terms of frequency and types, to that in people. Cases have been described in cats, dogs, cows, horses, dolphins, bison, and seals.
References
Further reading
== External links == |
Multifocal motor neuropathy | Multifocal motor neuropathy (MMN) is a progressively worsening condition in which muscles in the extremities gradually weaken. The disorder, a pure motor neuropathy syndrome, is sometimes mistaken for amyotrophic lateral sclerosis (ALS) because of the similarity in the clinical picture, especially if muscle fasciculations are present. MMN is thought to be autoimmune. It was first described in the mid-1980s. Unlike ALS, which affects both upper and lower motor neuron pathways, MMN involves only the lower motor neuron pathway, specifically the peripheral nerves emanating from the lower motor neurons. Definitive diagnosis is often difficult, and many MMN patients labor for months or years under an ALS diagnosis before finally receiving a determination of MMN.
MMN usually involves very little pain; however, muscle cramps, spasms and twitches can cause pain for some people. MMN is not fatal and does not diminish life expectancy. Many patients, once undergoing treatment, experience only mild symptoms over prolonged periods, though the condition remains slowly progressive. MMN can, however, lead to significant disability, with loss of function in the hands affecting the ability to work and perform everyday tasks, and "foot drop" leading to an inability to stand and walk; some patients end up using aids such as canes, splints and walkers.
Symptoms
Usually beginning in one or both hands, MMN is characterized by weakness, muscle atrophy, cramping, and often profuse fasciculations (muscle twitching). The symptoms are progressive over long periods, often in a stepwise fashion, but unlike ALS are often treatable. Sensory nerves are usually unaffected. Wrist drop and foot drop (leading to trips and falls) are common symptoms. Other effects can include gradual loss of finger extension, leading to a clawlike appearance. Cold and hot temperatures exacerbate MMN symptoms to such an extent, unlike other neuropathies, that this temperature response is being investigated as a diagnostic tool.
Cause
MMN is thought to be caused by alterations in the immune system, such that certain proteins (antibodies) that would normally protect one from viruses and bacteria begin to attack constituents of peripheral nerves. Antibodies may be directed against "GM-1", a ganglioside found at the Nodes of Ranvier. These antibodies have been detected in at least one-third of MMN patients. More recent studies also suggest that newer tests for antibodies directed against GM-1, as well as a number of related gangliosides, are positive in over 80% of MMN patients. There are increasing reasons to believe these antibodies are the cause of MMN.
Diagnosis
The diagnosis of MMN depends on demonstrating that a patient has a purely motor disorder affecting individual nerves, that there are no upper motor neuron (UMN) signs, that there are no sensory deficits, and that there is evidence of conduction block. These criteria are designed to differentiate the disorder from ALS (purely motor but with UMN signs), the Lewis–Sumner syndrome variant of chronic inflammatory demyelinating polyneuropathy (CIDP) (similar to MMN but usually with significant sensory loss), and vasculitis (a type of multiple mononeuropathy syndrome caused by inflammatory damage to the blood vessels in nerves that also causes sensory and motor symptoms). A neurologist is usually needed to determine the diagnosis, which is based on the history and physical examination along with an electrodiagnostic study, which includes nerve conduction studies (NCS) and needle electromyography (EMG). The NCS usually demonstrate conduction block by showing that the nerve signal cannot conduct past a "lesion" at some point along the nerve. For example, if the nerve is blocked in the forearm, an electrical impulse can easily get from the wrist to the hand if the stimulus is placed at the wrist; however, the signal will be blocked from reaching the hand if the stimulus is applied at the elbow. In MMN, sensory conduction along the same path should be normal. The EMG portion of the test looks at the way muscles fire. In MMN it will most likely reveal abnormalities suggesting that some percentage of the motor axons have been damaged. Laboratory testing for GM1 antibodies is frequently done, and can be very helpful if the results are abnormal. However, since only a third of patients with MMN have these antibodies, a negative test does not rule out the disorder. Spinal fluid examination is not usually helpful.
Treatment
Multifocal motor neuropathy is normally treated with intravenous immunoglobulin (IVIG), which can in many cases be highly effective, or with immunosuppressive therapy such as cyclophosphamide or rituximab. Steroid treatment (prednisone) and plasmapheresis are no longer considered useful treatments; prednisone can exacerbate symptoms. IVIG is the primary treatment, with about 80% of patients responding, usually requiring regular infusions at intervals of 1 week to several months. Other treatments are considered in case of lack of response to IVIG, or sometimes because of the high cost of immunoglobulin. Subcutaneous immunoglobulin is under study as a less invasive, more convenient alternative to IV delivery.
References
External links
Overview of MMN at National Institute of Neurological Disorders and Stroke |
Necrobiosis lipoidica | Necrobiosis lipoidica is a necrotising skin condition that usually occurs in patients with diabetes mellitus but can also be associated with rheumatoid arthritis. In the former case it may be called necrobiosis lipoidica diabeticorum (NLD). NLD occurs in approximately 0.3% of the diabetic population, with the majority of those affected being women (approximately 3:1 females to males).
The severity or control of diabetes in an individual does not affect who will or will not get NLD. Better maintenance of diabetes after being diagnosed with NLD will not change how quickly the NLD will resolve.
Signs and symptoms
NL/NLD most frequently appears on the patient's shins, often on both legs, although it may also occur on the forearms, hands, trunk, and, rarely, the nipple, penis, and surgical sites. The lesions are often asymptomatic but may become tender and ulcerate when injured. The first symptom of NL is often a "bruised" appearance (erythema) that is not necessarily associated with a known injury. The extent to which NL is inherited is unknown.
NLD appears as a hardened, raised area of the skin. The center of the affected area usually has a yellowish tint while the area surrounding it is a dark pink. It is possible for the affected area to spread or turn into an open sore. When this happens the patient is at greater risk of developing ulcers. If an injury to the skin occurs on the affected area, it may not heal properly or it will leave a dark scar.
Pathophysiology
Although the exact cause of this condition is not known, it is an inflammatory disorder characterised by collagen degeneration, combined with a granulomatous response. It always involves the dermis diffusely, and sometimes also involves the deeper fat layer. Commonly, dermal blood vessels are thickened (microangiopathy). It can be precipitated by local trauma, though it often occurs without any injury.
Diagnosis
NL is diagnosed by a skin biopsy, demonstrating superficial and deep perivascular and interstitial mixed inflammatory cell infiltrate (including lymphocytes, plasma cells, mononucleated and multinucleated histiocytes, and eosinophils) in the dermis and subcutis, as well as necrotising vasculitis with adjacent necrobiosis and necrosis of adnexal structures. Areas of necrobiosis are often more extensive and less well defined than in granuloma annulare. Presence of lipid in necrobiotic areas may be demonstrated by Sudan stains. Cholesterol clefts, fibrin, and mucin may also be present in areas of necrobiosis. Depending on the severity of the necrobiosis, certain cell types may be more predominant. When a lesion is in its early stages, neutrophils may be present, whereas in later stages of development lymphocytes and histiocytes may be more predominant.
Treatment
There is no clearly defined cure for necrobiosis. NLD may be treated with PUVA therapy, photodynamic therapy, and improved therapeutic control. Although some techniques can be used to diminish the signs of necrobiosis, such as low-dose oral aspirin, or a steroid cream or injection into the affected area, these may be effective for only a small percentage of those treated.
See also
Diabetic dermadromes
List of cutaneous conditions
References
External links
Information and image at NIH |
Neonatal withdrawal | Neonatal withdrawal or neonatal abstinence syndrome (NAS) or neonatal opioid withdrawal syndrome (NOWS) is a withdrawal syndrome of infants after birth caused by in utero exposure to drugs of dependence, most commonly opioids. Common signs and symptoms include tremors, irritability, vomiting, diarrhea, and fever. NAS is primarily diagnosed with a detailed medication history and scoring systems. First-line treatment should begin with non-medication interventions to support neonate growth, though medication interventions may be used in certain situations. In 2017, approximately 7.3 per 1,000 hospitalized infants in the United States were diagnosed with NOWS. Not all opioid-exposed infants will show clinical signs of withdrawal after birth; clinical signs range from mild to severe, depending on the quantity and type of substance exposure. The most common form of neonatal withdrawal occurs after in utero exposure; however, iatrogenic withdrawal can also occur after medications are used to treat critically ill infants after they are born.
Signs and symptoms
Drug and alcohol use during pregnancy can lead to many health problems in the fetus and infant, including neonatal abstinence syndrome (NAS). The onset of clinical presentation typically appears within 48 to 72 hours of birth but may take up to 8 days. The signs and symptoms of NAS may differ depending on which substance the mother used. Common signs and symptoms in infants with NAS may include:
Signs due to hyperactivity of the central nervous system:
Tremors (trembling)
Irritability (excessive crying)
Sleep problems
High-pitched crying
Muscle tightness
Hyperactive reflexes
Seizures (2% to 11%); notably, this clinical sign is controversial, as seizures do not occur in other populations experiencing opioid withdrawal.
Signs due to hyperactivity of stomach and intestines:
Poor feeding and sucking reflex
Vomiting
Diarrhea
Signs due to hyperactivity of autonomous nervous system:
Fever
Sweating
Yawning, stuffy nose, and sneezing
Fast breathing
Causes
The drugs involved can include opioids, selective serotonin reuptake inhibitors (SSRIs), serotonin–norepinephrine reuptake inhibitors (SNRIs), tricyclic antidepressants (TCAs), ethanol, and benzodiazepines. Opioids may be more likely to cause NAS than other substances due to an increase in their usage. Exposure to heroin and methadone has been claimed to be correlated with a 60 to 80% occurrence of neonatal withdrawal, whereas buprenorphine has been associated with a lower risk. Neonatal abstinence syndrome does not happen in prenatal cocaine exposure; prematurity and exposure to other drugs may instead be the cause of symptoms. The main mechanistic pathway of prescribed and illicit substance-induced NAS is hyperactivity of the central and autonomic nervous systems and the gastrointestinal tract. Several potential mechanisms and pathways have been proposed, including the interaction between neurotransmitters and a lack of adequate expression of opioid receptors; however, the main pathophysiology of this syndrome remains unknown. Most opioid-induced NAS is due to opioid exposure during pregnancy for pain relief, misuse or abuse of prescribed opioids, or other medication-assisted treatment of opioid use disorder.
Diagnosis
The presence of withdrawal in the neonate can be confirmed by taking a detailed medical history from the mother. The medical history should include physical and mental health problems, prescription and non-prescription medication use, nutritional supplement use, history of alcohol and substance use, childhood adversities, cultural and social beliefs, past traumatic experiences, and infectious diseases such as HIV. Since the mother's medical history may not be available immediately after delivery, some testing needs to be done in the infant to confirm possible exposure. The infant's urine, meconium, umbilical cord tissue, or hair can be used for testing. The timing of urine sample collection is critical because some drugs become undetectable after they are metabolized and eliminated from the body; urine test results can only confirm whether the fetus was exposed to drugs a few days before birth. Meconium testing can be used to confirm drug exposure in an earlier stage of pregnancy, but the collection process is more difficult. Umbilical cord tissue testing is a relatively new method, and its accuracy is still controversial. The mother's blood and urine samples should also be collected for drug screening. Chest X-rays can confirm or rule out the presence of heart defects.
Assessment
Depending on the hospital setting, different scoring systems are used for assessing the severity of neonatal withdrawal syndrome and the need for medication treatment. One challenge with existing clinical prediction tools is that they were designed to assess opiate withdrawal only. The Finnegan Neonatal Abstinence Scoring System (FNASS), or its modified version, is currently the most widely used prediction tool in the United States. The FNASS tool focuses on 21 signs of neonatal opioid withdrawal, and a score from 0 to 5 is assigned based on the severity of each symptom. The measurement needs to be repeated every two to four hours. The cutoff for initiation, escalation or de-escalation of medication treatment may vary. A 2019 review shows that "most institutions using the FNASS have protocols that call for starting or increasing pharmacologic treatment after an infant has received three FNASS scores ≥8 or two scores ≥12." However, there are limitations to the FNASS tool. The repeated measurements may delay treatment and result in increased treatment need. In order to assess some of the signs, infants must be stimulated, as opposed to the minimized stimulation recommended in non-medication treatment. A study also indicates that the FNASS tool "has not been validated to show utility in improving outcomes for infants with NAS".
Prevention
Neonatal withdrawal is prevented by the mother abstaining from illicit or prescribed substances. In some cases, a prescribed medication may need to be discontinued during the pregnancy to prevent dependence in the infant. Early prenatal care can identify addictive behaviors in the mother and family system, and referrals to treatment centers are appropriate. Some prescribed medicines should not be stopped without medical supervision, or harm may result; suddenly stopping a medication can result in a premature birth, fetal complications, and miscarriage. It is recommended that pregnant individuals discuss medication, alcohol, and tobacco use with their health-care provider and seek assistance to abstain when appropriate. They may need medical attention if they are using drugs non-medically, using drugs not prescribed to them, or using alcohol or tobacco. There are several strategies to prevent the incidence of NAS, including:
Primary prevention: follow the guidance of the 2016 CDC Guideline for Prescribing Opioids for Chronic Pain, which addresses the effectiveness of opioid dosing and treatment, the benefits and risks, and strategies to avoid opioid misuse
Utilize prescription drug monitoring programs (PDMPs) to avoid overuse of opioids
Provision of treatment for opioid use disorder among pregnant women
Non-medicine strategies, such as minimizing environmental stimuli. However, there are some barriers to prevention, which include lack of consensus on screening tools to identify substance use while pregnant, stigma, provider bias, and legal consequences.
Treatment
Treatment depends on the drug involved, the infant's overall health, abstinence scores (FNASS scoring system), and whether the infant was born full-term or premature. It is recommended to observe and provide supportive measures in the hospital for infants who are at risk of neonatal abstinence syndrome. Infants with severe symptoms may require both supportive measures and medicines. Treatment for NAS may require the infant to stay in the hospital for weeks or months after birth.
The goal of treatment is to minimize negative outcomes and promote normal development. Infants may be prescribed a drug similar to the one the mother used during pregnancy, and slowly decrease the dose over time. This helps wean the infant off the drug and relieves some withdrawal symptoms.
Non-medication treatment
First-line treatment should begin with non-medication interventions to support maturation of the neonate. It is not clear if one type of non-medication therapy is better than another. Common non-medication approaches include physical environment adjustments, swaddling, and breastfeeding.
Adjusting physical environments
Infants with NAS symptoms may have hypersensitivity to light and sounds. Techniques such as darkening the room and eliminating surrounding sounds work to lessen the neonate's visual and auditory stimuli.
Swaddling
Swaddling (wrapping an infant firmly in a blanket) can help improve sleep, develop nerves and muscles, decrease stress, and improve motor skills.
Breastfeeding
Infants with NAS may have problems with feeding or slow growth, which require higher-calorie feedings that provide greater nutrition. It is beneficial to give smaller portions more often throughout the day. Breastfeeding promotes infant attachment and bonding, is associated with a decreased need for medication, may lessen the severity of NAS, and may lead to shorter hospital stays. Most pregnant people who are taking buprenorphine or methadone can safely breastfeed their infant. Both buprenorphine and methadone remain in human milk at low concentrations, which will reduce signs and symptoms of NAS and likely decrease the treatment time. However, there are exceptions in which it is not safe to breastfeed, such as an HIV-positive mother or a mother with a history of street drug use or multiple illicit drug use.
Medication treatment
Although non-medication intervention remains first-line treatment, pharmacological intervention, when appropriate and indicated, can improve signs of neonatal withdrawal. Common medication approaches:
Opioids
Opioids have been shown to improve symptoms to a clinically safe level but may not affect the length of hospital stay. It is common to slowly taper the dose to wean the infant off.
Sedatives
Sedatives such as phenobarbital or diazepam are less effective at symptom control compared to opioids but can reduce length of hospital stay.
Clonidine
When compared to opioids, clonidine was just as effective at improving clinical symptoms. Additional medication is used to relieve fever, seizures, and weight loss or dehydration. A 2021 systematic review found low-certainty evidence that phenobarbital lengthened hospital stays but resulted in a more rapid return to birth weight. Low-certainty evidence also showed phenobarbital reduced treatment failure rates compared to diazepam and chlorpromazine. There was also low-certainty evidence of increased hospitalization days with clonidine and an opioid compared to phenobarbital and an opioid.
Outcomes
A 2018 meta-analysis reported that newborns diagnosed with NAS are likely to recover with non-medication intervention when roomed with family during their hospital stay, compared to newborns with NAS treated in the newborn intensive care unit. Data is limited, and more research needs to be conducted to properly evaluate long-term outcomes in children with a prior diagnosis of NAS. However, long-term monitoring into adolescence may be necessary, as a 2019 meta-analysis gave evidence of some long-term cognitive and physical side effects associated with prenatal opioid exposure.
Epidemiology
United States
A 2012 study analyzed information on 7.4 million discharges from 4,121 hospitals in 44 states to measure trends and costs associated with NAS over the preceding decade. The study indicated that between 2000 and 2009, the number of pregnant people using opiates increased from 1.19 to 5.63 per 1,000 hospital births per year. In 2017 the Centers for Disease Control and Prevention (CDC) reported an increase in NAS diagnoses to 7 cases per 1,000 births, without regard to state or demographic group. Additionally, the CDC reported in 2019 that 7% of pregnant individuals self-reported use of opioids at some point in their pregnancy. A 2018 review of NAS reports that the epidemiology of NAS continues to change and evolve. Though opioids are still the most common drug reported in diagnoses of NAS, there are instances where opioids are not the only class of drug the infant is exposed to during pregnancy. Diagnosis of NAS remains substantially more common in rural areas than in urban areas. The changing epidemiology calls for more research and for standardization of treatment.
Other
A 2020 literature review published by the Saskatchewan Prevention Institute reports that NAS has significantly increased in England, Western Australia, and Canada within the last decade, noting that current statistics may be underestimated, as reluctance to report can be attributed to stigma associated with the diagnosis or to differing protocols among institutions. From 2016 to 2017, Canada reported 1,850 diagnoses of NAS overall.
See also
Prenatal cocaine exposure
Neonatal opioid withdrawal
References
== External links == |
Cystinosis | Cystinosis is a lysosomal storage disease characterized by the abnormal accumulation of cystine, the oxidized dimer of the amino acid cysteine. It is a rare genetic disorder that follows an autosomal recessive inheritance pattern, resulting from the accumulation of free cystine in lysosomes and eventually leading to intracellular crystal formation throughout the body. Cystinosis is the most common cause of Fanconi syndrome in the pediatric age group. Fanconi syndrome occurs when the function of cells in the renal tubules is impaired, leading to abnormal amounts of carbohydrates and amino acids in the urine, excessive urination, and low blood levels of potassium and phosphates.
Cystinosis was the first documented genetic disease belonging to the group of lysosomal storage disorders. Cystinosis is caused by mutations in the CTNS gene, which codes for cystinosin, the lysosomal membrane-specific transporter for cystine. Intracellular metabolism of cystine, as with all amino acids, requires its transport across the cell membrane. After degradation of endocytosed protein to cystine within lysosomes, it is normally transported to the cytosol. But if there is a defect in the carrier protein, cystine accumulates in lysosomes. As cystine is highly insoluble, when its concentration in tissue lysosomes increases, its solubility is immediately exceeded and crystalline precipitates form in almost all organs and tissues. However, the progression of the disease is not related to the presence of crystals in target tissues. Although tissue damage might depend on cystine accumulation, the mechanisms of tissue damage are not fully understood. Increased intracellular cystine profoundly disturbs cellular oxidative metabolism and glutathione status, leading to altered mitochondrial energy metabolism, autophagy, and apoptosis. Cystinosis is usually treated with cysteamine, which is prescribed to decrease intralysosomal cystine accumulation. However, the discovery of new pathogenic mechanisms and the development of an animal model of the disease may open possibilities for the development of new treatment modalities to improve long-term prognosis.
Symptoms
There are three distinct types of cystinosis each with slightly different symptoms: nephropathic cystinosis, intermediate cystinosis, and non-nephropathic or ocular cystinosis. Infants affected by nephropathic cystinosis initially exhibit poor growth and particular kidney problems (sometimes called renal Fanconi syndrome). The kidney problems lead to the loss of important minerals, salts, fluids, and other nutrients. The loss of nutrients not only impairs growth, but may result in soft, bowed bones (hypophosphatemic rickets), especially in the legs. The nutrient imbalances in the body lead to increased urination, thirst, dehydration, and abnormally acidic blood (acidosis).
By about age two, cystine crystals may also be present in the cornea. The buildup of these crystals in the eye causes an increased sensitivity to light (photophobia). Without treatment, children with cystinosis are likely to experience complete kidney failure by about age ten; with treatment this may be delayed into the patient's teens or 20s. Other signs and symptoms that may occur in patients include muscle deterioration, blindness, inability to swallow, impaired sweating, decreased hair and skin pigmentation, diabetes, and thyroid and nervous system problems.
The signs and symptoms of intermediate cystinosis are the same as nephropathic cystinosis, but they occur at a later age. Intermediate cystinosis typically begins to affect individuals around age twelve to fifteen. Malfunctioning kidneys and corneal crystals are the main initial features of this disorder. If intermediate cystinosis is left untreated, complete kidney failure will occur, but usually not until the late teens to mid twenties.
People with non-nephropathic or ocular cystinosis do not usually experience growth impairment or kidney malfunction. The only symptom is photophobia due to cystine crystals in the cornea.
Crystal morphology and identification
Cystine crystals are hexagonal in shape and are colorless. They are not found often in alkaline urine due to their high solubility. The colorless crystals can be difficult to distinguish from uric acid crystals which are also hexagonal. Under polarized examination, the crystals are birefringent with a polarization color interference.
Genetics
Cystinosis occurs due to a mutation in the gene CTNS, located on chromosome 17, which codes for cystinosin, the lysosomal cystine transporter. Symptoms are first seen at about 3 to 18 months of age with profound polyuria (excessive urination), followed by poor growth, photophobia, and ultimately kidney failure by age 6 years in the nephropathic form.
All forms of cystinosis (nephropathic, juvenile and ocular) are autosomal recessive, which means that the trait is located on an autosomal chromosome, and only an individual who inherits two copies of the gene – one from both parents – will have the disorder. There is a 25% risk of having a child with the disorder, when both parents are carriers of an autosomal recessive trait.
Cystinosis affects approximately 1 in 100,000 to 200,000 newborns, and there are only around 2,000 known individuals with cystinosis in the world. The incidence is higher in the province of Brittany, France, where the disorder affects 1 in 26,000 individuals.
Diagnosis
Cystinosis is a rare genetic disorder that causes an accumulation of the amino acid cystine within cells, forming crystals that can build up and damage the cells. These crystals negatively affect many systems in the body, especially the kidneys and eyes. The accumulation is caused by abnormal transport of cystine out of lysosomes, resulting in massive intra-lysosomal cystine accumulation in tissues. Via an as-yet-unknown mechanism, lysosomal cystine appears to amplify and alter apoptosis in such a way that cells die inappropriately, leading to loss of renal epithelial cells. This results in renal Fanconi syndrome, and similar loss in other tissues can account for the short stature, retinopathy, and other features of the disease.
Definitive diagnosis and treatment monitoring are most often performed through measurement of white blood cell cystine level using tandem mass spectrometry.
Types
Online Mendelian Inheritance in Man (OMIM): 219800 – Infantile nephropathic
Online Mendelian Inheritance in Man (OMIM): 219900 – Adolescent nephropathic
Online Mendelian Inheritance in Man (OMIM): 219750 – Adult nonnephropathic
Treatment
Cystinosis is normally treated with cysteamine, which is available in capsules and in eye drops. People with cystinosis are also often given sodium citrate to treat the blood acidosis, as well as potassium and phosphorus supplements, among others. If the kidneys become significantly impaired or fail, treatment must begin to ensure continued survival, up to and including renal transplantation.
See also
Hartnup disease
Cystinuria
CTNS
References
External links
Cystinosis at NLM Genetics Home Reference
GeneReviews/NCBI/NIH/UW entry on Cystinosis |
Neurocysticercosis | Neurocysticercosis is a specific form of the infectious parasitic disease cysticercosis that is caused by the infection with Taenia solium, a tapeworm found in pigs. Neurocysticercosis occurs when cysts formed by the infection take hold within the brain, causing neurologic syndromes such as epileptic seizures. It is a common cause of seizures worldwide. It has been called a "hidden epidemic" and "arguably the most common parasitic disease of the human nervous system". Common symptoms of neurocysticercosis include seizures, headaches, blindness, meningitis and dementia.
Pathophysiology
Neurocysticercosis most commonly involves the cerebral cortex followed by the cerebellum. The pituitary gland is very rarely involved in neurocysticercosis. The cysts may rarely coalesce and form a tree-like pattern which is known as racemose neurocysticercosis, which when involving the pituitary gland may result in multiple pituitary hormone deficiency.
Diagnosis
Neurocysticercosis is diagnosed by computed tomography (CT) scan. The diagnosis may be confirmed by detection of antibodies against cysticerci in CSF or serum using ELISA or immunoblotting techniques.
Treatment
Treatment of neurocysticercosis includes antiepileptic therapy and a long course of praziquantel (PZQ) and/or albendazole. Steroid therapy may be necessary to minimize the inflammatory reaction to dying cysticerci. Surgical removal of brain cysts may be necessary, e.g. in cases of large parenchymal cysts, intraventricular cysts or hydrocephalus. Albendazole has been shown to reduce seizure recurrence in those with a single non-viable intraparenchymal cyst.
Further randomized controlled trials are needed to evaluate the efficacy of antiepileptic drugs (AEDs) for seizure prevention in patients presenting with symptoms other than seizures, and to determine the appropriate duration of AED treatment in these cases.
Epidemiology
The epidemiology of Taenia solium cysticercosis is associated with poor sanitation, and the disease is highly prevalent in Sub-Saharan Africa, Latin America and Asia. In the United States, cysticercosis, which commonly presents in the form of neurocysticercosis, has been classified as a "neglected tropical disease"; it mainly affects the poor and homeless, particularly those with inadequate access to hand-washing facilities and those in the habit of eating with their hands.
== References == |
Neurogenic bladder dysfunction, or neurogenic bladder, refers to urinary bladder problems due to disease or injury of the central nervous system or peripheral nerves involved in the control of urination. There are multiple types of neurogenic bladder depending on the underlying cause and the symptoms. Symptoms include overactive bladder, urinary urgency, frequency, incontinence or difficulty passing urine. A range of diseases or conditions can cause neurogenic bladder including spinal cord injury, multiple sclerosis, stroke, brain injury, spina bifida, peripheral nerve damage, Parkinson's disease, or other neurodegenerative diseases. Neurogenic bladder can be diagnosed through history and physical examination as well as imaging and more specialized testing. Treatment depends on the underlying disease as well as symptoms, and can include behavioral changes, medications, surgeries, or other procedures. The symptoms of neurogenic bladder, especially incontinence, can have a significant impact on quality of life.
Classification
There are different types of neurogenic bladder depending on the underlying cause. Many of these types may have similar symptoms.
Uninhibited
Uninhibited bladder is usually due to damage to the brain from a stroke or brain tumor. This can cause reduced sensation of bladder fullness, low capacity bladder and urinary incontinence. Unlike other forms of neurogenic bladder, it does not lead to high bladder pressures that can cause kidney damage.
Spastic
In spastic neurogenic bladder (also known as upper motor neuron or hyper-reflexive bladder), the muscle of the bladder (detrusor) and urethral sphincter do not work together and are usually tightly contracted at the same time. This phenomenon is also called detrusor external sphincter dyssynergia (DESD). This leads to urinary retention with high pressures in the bladder that can damage the kidneys. The bladder volume is usually smaller than normal due to increased muscle tone in the bladder. Spastic neurogenic bladder is usually caused by damage to the spinal cord above the level of the 10th thoracic vertebrae (T10).
Flaccid
In flaccid bladder (also known as lower motor neuron or hypotonic bladder), the muscles of the bladder lose ability to contract normally. This can cause the inability to void urine even if the bladder is full and cause a large bladder capacity. The internal urinary sphincter can contract normally, however urinary incontinence is common. This type of neurogenic bladder is caused by damage to the peripheral nerves that travel from the spinal cord to the bladder.
Mixed
Mixed type of neurogenic bladder can cause a combination of the above presentations. In mixed type A, the bladder muscle is flaccid but the sphincter is overactive. This creates a large, low pressure bladder and inability to void, but does not carry as much risk for kidney damage as a spastic bladder. Mixed type B is characterized by a flaccid external sphincter and a spastic bladder causing problems with incontinence.
Signs and symptoms
Neurogenic bladder can cause a range of urinary symptoms including urinary urgency, urinary incontinence or difficulty urinating (urinary retention). The first sign of bladder dysfunction may be recurrent urinary tract infections (UTIs).
Complications
Neurogenic bladder can cause hydronephrosis (swelling of a kidney due to a build-up of urine), recurrent urinary tract infections, and recurrent kidney stones which may compromise kidney function. This is especially significant in spastic neurogenic bladder that leads to high bladder pressures. Kidney failure was previously a leading cause of mortality in patients with spinal cord injury but is now dramatically less common due to improvements in bladder management.
Causes
Urine storage and elimination (urination) requires coordination between the bladder emptying muscle (detrusor) and the external sphincter of the bladder. This coordination can be disrupted by damage or diseases of the central nervous system, peripheral nerves or autonomic nervous system. This includes any condition that impairs bladder signaling at any point along the path from the urination center in the brain, spinal cord, peripheral nerves and the bladder.
Central nervous system
Damage to the brain or spinal cord is the most common cause of neurogenic bladder. Damage to the brain can be caused by stroke, brain tumors, multiple sclerosis, Parkinson's disease or other neurodegenerative conditions. Bladder involvement is more likely if the damage is in the area of the pons. Damage to the spinal cord can be caused by traumatic injury, demyelinating disease, syringomyelia, cauda equina syndrome, or spina bifida. Spinal cord compression from herniated disks, tumor, or spinal stenosis can also result in neurogenic bladder.
Peripheral nervous system
Damage to the nerves that travel from the spinal cord to the bladder (peripheral nerves) can cause neurogenic bladder, usually the flaccid type. Nerve damage can be caused by diabetes, alcoholism, and vitamin B12 deficiency. Peripheral nerves can also be damaged as a complication of major surgery of the pelvis, such as for removal of tumors.
Diagnosis
The diagnosis of neurogenic bladder is made based on a complete history and physical examination and may require imaging and specialized studies. The history should include information on the onset, duration, triggers, and severity of symptoms, other medical conditions, and medications (including anticholinergics, calcium channel blockers, diuretics, sedatives, alpha-adrenergic agonists, and alpha-1 antagonists). Urinary symptoms may include frequency, urgency, incontinence or recurrent urinary tract infections (UTIs). Questionnaires can be helpful in quantifying symptom burden. In children it is important to obtain a prenatal and developmental history.

Ultrasound imaging can give information on the shape of the bladder, post-void residual volume, and evidence of kidney damage such as kidney size, thickness or ureteral dilation. A trabeculated bladder on ultrasound indicates high risk of developing urinary tract abnormalities such as hydronephrosis and stones. A voiding cystourethrography study uses contrast dye to obtain images of the bladder both when it is full and after urination, which can show changes in bladder shape consistent with neurogenic bladder.

Urodynamic studies are an important component of the evaluation for neurogenic bladder. Urodynamics refers to the measurement of the pressure-volume relationship in the bladder. The bladder usually stores urine at low pressure, and urination can be completed without a dramatic pressure rise. Damage to the kidneys is probable if the pressure rises above 40 cm of water during filling. Bladder pressure can be measured by cystometry, during which the bladder is artificially filled with a catheter and bladder pressures and detrusor activity are monitored. Patterns of involuntary detrusor activity, as well as bladder flexibility (compliance), can be evaluated. The most valuable test for detrusor sphincter dyssynergia (DESD) is cystometry performed simultaneously with external sphincter electromyography (EMG).
Uroflowmetry is a less-invasive study that can measure urine flow rate and use it to estimate detrusor strength and sphincter resistance. Urethral pressure monitoring is another less-invasive approach to assessing detrusor sphincter dyssynergia. These studies can be repeated at regular intervals, especially if symptoms worsen or to measure response to therapies.

Evaluation of kidney function through blood tests such as serum creatinine should be obtained. Imaging of the pelvis with CT scan or magnetic resonance imaging may be necessary, especially if there is concern for an obstruction such as a tumor. The inside of the bladder can be visualized by cystoscopy.
Treatment
Treatment depends on the type of neurogenic bladder and other medical problems. Treatment strategies include catheterization, medications, surgeries or other procedures. The goal of treatment is to keep bladder pressures in a safe range and to eliminate residual urine left in the bladder after urination (post-void residual volume).
Catheterization
Emptying the bladder with the use of a catheter is the most common strategy for managing urinary retention from neurogenic bladder. For most patients, this can be accomplished with intermittent catheterization, which involves no surgery or permanently attached appliances. Intermittent catheterization involves using straight catheters (which are usually disposable or single-use products) several times a day to empty the bladder. This can be done independently or with assistance. For people who are unable to use disposable straight catheters, a Foley catheter allows continuous drainage of urine into a sterile drainage bag worn by the patient, but such catheters are associated with higher rates of complications.
Medications
Oxybutynin is a common anticholinergic medication used to reduce bladder contractions by blocking M3 muscarinic receptors in the detrusor. Its use is limited by side effects such as dry mouth, constipation and decreased sweating. Tolterodine is a longer-acting anticholinergic that may have fewer side effects.

For urinary retention, cholinergics (muscarinic agonists) like bethanechol can improve the squeezing ability of the bladder. Alpha blockers can also reduce outlet resistance and allow complete emptying if there is adequate bladder muscle function.
Botulinum Toxin
Botulinum toxin (Botox) can be used through two different approaches. For spastic neurogenic bladder, the bladder muscle (detrusor) can be injected, which will cause it to be flaccid for 6–9 months. This prevents high bladder pressures, and intermittent catheterization must be used during this time. Botox can also be injected into the external sphincter to paralyze a spastic sphincter in patients with detrusor sphincter dyssynergia.
Neuromodulation
There are various strategies to alter the interaction between the nerves and muscles of the bladder, including nonsurgical therapies (transurethral electrical bladder stimulation), minimally invasive procedures (sacral neuromodulation pacemaker), and operative (reconfiguration of sacral nerve root anatomy).
Surgery
Surgical interventions may be pursued if medical approaches have been maximized.
Surgical options depend on the type of dysfunction observed on urodynamic testing, and may include:
Urinary Diversion: Creation of a stoma (from the intestines, called a "conduit") that bypasses the urethra to empty the bladder directly through a skin opening. Several techniques may be used. One technique is the Mitrofanoff stoma, where the appendix or a portion of the ileum (the 'Yang-Monti' conduit) is used to create the diversion. The ileum and ascending colon can also be used to create a pouch accessible for catheterization (Indiana pouch).
Urethral stents or urethral sphincterotomy are other surgical approaches that can reduce bladder pressures but require use of an external urinary collection device.
Urethral slings may be used in both adults and children
Artificial Urinary Sphincters have shown good long-term outcomes in adults and pediatric patients. One study of 97 patients followed for a mean duration of 4 years found that 92% were continent during the day and night at follow-up. However, patients in this study who had intermediate-type bladders underwent adjuvant cystoplasty.
Bladder Neck Closure is a major surgical procedure that can be a last-resort treatment for incontinence; a Mitrofanoff stoma is then necessary to empty the bladder.
Epidemiology
Data on the overall prevalence of neurogenic bladder are limited due to the broad range of conditions that can lead to urinary dysfunction. Neurogenic bladder is common with spinal cord injury and multiple sclerosis. Rates of some type of urinary dysfunction surpass 80% one year after spinal cord injury. Among patients with multiple sclerosis, 20–25% will develop neurogenic bladder, although the type and severity of bladder dysfunction are variable.
See also
Bladder sphincter dyssynergia
Multiple sclerosis
Pseudodyssynergia
Spinal cord injury
Urinary retention
References
== External links == |
Neurosyphilis | Neurosyphilis refers to infection of the central nervous system in a patient with syphilis. In the era of modern antibiotics, the majority of neurosyphilis cases have been reported in HIV-infected patients. Meningitis is the most common neurological presentation in early syphilis. The symptoms of tertiary syphilis are exclusively those of neurosyphilis, though neurosyphilis may occur at any stage of infection.
To diagnose neurosyphilis, patients undergo a lumbar puncture to obtain cerebrospinal fluid (CSF) for analysis. The CSF is tested for antibodies against specific Treponema pallidum antigens. The preferred test is the VDRL test, which is sometimes supplemented by the fluorescent treponemal antibody absorption test (FTA-ABS).

Historically, the disease was studied under the Tuskegee study, a notable example of unethical human experimentation. The study was done on approximately 400 African-American men with untreated syphilis who were followed from 1932 to 1972 and compared to approximately 200 men without syphilis. The study began without the informed consent of the subjects and was continued by the United States Public Health Service until 1972. The researchers failed to notify patients and withheld treatment despite knowing that penicillin was an effective cure for neurosyphilis. After four years of follow-up, neurosyphilis was identified in 26.1% of patients vs. 2.5% of controls. After 20 years of follow-up, 14% showed signs of neurosyphilis and 40% had died from other causes.
Signs and symptoms
The signs and symptoms of neurosyphilis vary with the disease stage of syphilis. The stages of syphilis are categorized as primary, secondary, latent, and tertiary. It is important to note that neurosyphilis may occur at any stage of infection.

Meningitis is the most common neurological presentation in early syphilis. It typically occurs in the secondary stage, arising within one year of initial infection. The symptoms are similar to other forms of meningitis. The most common complication associated with neurosyphilitic meningitis is cranial nerve palsy, especially of the facial nerve.

Nearly any part of the eye may be involved. The most common form of ocular syphilis is uveitis. Other forms include episcleritis, vitritis, retinitis, papillitis, retinal detachment, and interstitial keratitis.

Meningovascular syphilis usually occurs in late syphilis but may affect those with early disease. It is due to inflammation of the vasculature supplying the central nervous system, which results in ischemia. It typically occurs about 6–7 years after initial infection. It may present as stroke or spinal cord infarct, and signs and symptoms vary with the vascular territory involved. The middle cerebral artery is most often affected.

Parenchymal syphilis occurs years to decades after initial infection. It presents with the constellation of symptoms known as tabes dorsalis, caused by a degenerative process of the posterior columns of the spinal cord. The constellation includes Argyll Robertson pupil, ataxic wide-based gait, paresthesias, bowel or bladder incontinence, loss of position and vibratory sense, loss of deep pain and temperature sensation, acute episodic gastrointestinal pain, Charcot joints, and general paresis.

Gummatous disease may also present with destructive inflammation and space-occupying lesions. It is caused by granulomatous destruction of visceral organs.
Gummas most often involve the frontal and parietal lobes of the brain. Movement disorders can be found in a small percentage of individuals with neurosyphilis; abnormal movements reported include tremor, chorea, parkinsonism, ataxia, myoclonus, dystonia, athetosis, and ballism.
Neuropsychiatric
Although neurosyphilis is a neurological disease, neuropsychiatric symptoms might appear due to overall damage to the brain. These symptoms can make the diagnosis more difficult and can include symptoms of dementia, mania, psychosis, depression, and delirium. These symptoms are not always present, and when they are, they usually appear in more advanced stages of the disease.
Complications
The Jarisch-Herxheimer reaction is an immune-mediated response to syphilis therapy occurring within 2–24 hours. The exact mechanism of the reaction is unclear, but it is most likely caused by proinflammatory treponemal lipoproteins released from dead and dying organisms following antibiotic treatment. It is typically characterized by fever, headache, myalgia and possibly intensification of skin rash. It most often occurs in early-stage syphilis (up to 50%–75% of patients with primary and secondary syphilis). It is usually self-limiting and managed with antipyretics and nonsteroidal anti-inflammatory medications.
Risk factors
There are several risk factors, including high-risk sexual behavior such as unprotected sex and multiple sexual partners, as well as HIV infection. Antiretroviral therapy (ART) suppresses HIV transmission, but not syphilis transmission. Neurosyphilis may also be associated with recreational drug use.
Pathophysiology
The pathogenesis is not fully known, in part because the organism is not easily cultured. Within days to weeks after initial infection, Treponema pallidum disseminates via the blood and lymphatics. The organism may accumulate in perivascular spaces of nearly any organ, including the central nervous system (CNS). It is unclear why some patients develop CNS infection and others do not. Rarely, organisms may invade structures of the eye (such as the cornea, anterior chamber, vitreous, choroid, and optic nerve) and cause local inflammation and edema. In primary or secondary syphilis, invasion of the meninges may result in lymphocytic and plasma cell infiltration of perivascular spaces (Virchow-Robin spaces). Extension of the cellular immune response to the brainstem and spinal cord causes inflammation and necrosis of small meningeal vessels.

In tertiary syphilis, reactivation of chronic latent infection may result in meningovascular syphilis, arising from endarteritis obliterans of small, medium, or large arteries supplying the CNS. Parenchymal syphilis presents as tabes dorsalis and general paresis. Tabes dorsalis is thought to be due to irreversible degeneration of nerve fibers in the posterior columns of the spinal cord, involving the lumbosacral and lower thoracic levels. General paresis is caused by meningeal vascular inflammation and ependymal granulomatous infiltration, which may lead to neuronal loss along with astrocytic and microglial proliferation; damage may preferentially occur in the cerebral cortex, striatum, hypothalamus, and meninges.

Concurrent infection of T. pallidum with human immunodeficiency virus (HIV) has been found to affect the course of syphilis. Syphilis can lie dormant for 10 to 20 years before progressing to neurosyphilis, but HIV may accelerate this progression. Infection with HIV has also been found to make penicillin therapy fail more often. As a result, neurosyphilis has once again become prevalent in societies with high HIV rates and limited access to penicillin.
Diagnosis
To diagnose neurosyphilis, cerebrospinal fluid (CSF) analysis is required; lumbar puncture ("spinal tap") is used to acquire the CSF. The Venereal Disease Research Laboratory (VDRL) test of the CSF is the preferred test for making a diagnosis of neurosyphilis. A positive test confirms neurosyphilis, but a negative result does not rule it out. Due to the low sensitivity of the CSF VDRL, the fluorescent treponemal antibody absorption test (FTA-ABS) can be used to supplement it; reported sensitivity is variable. A false-negative antibody test result can occur when the antibody concentration is so high that the agglutination reaction cannot take place; this is typically seen during the secondary stage and can be overcome by diluting the test sample 1:10. The CSF white blood cell count is often elevated in the early stages of neurosyphilis, ranging from about 50 to 100 white blood cells/mcL with a lymphocyte predominance. Cell counts are typically lower in late syphilis. Regardless of syphilis disease stage, the absence of CSF white blood cells rules out neurosyphilis.
Treatment
Penicillin is used to treat neurosyphilis. Two examples of penicillin therapies include:
Aqueous penicillin G 3–4 million units every four hours for 10 to 14 days.
Procaine penicillin given as one daily intramuscular injection, plus oral probenecid four times daily, both for 10 to 14 days.

Follow-up blood tests are generally performed at 3, 6, 12, 24, and 36 months to make sure the infection is gone. Lumbar punctures for CSF analysis are generally performed every 6 months until cell counts normalize. All patients with syphilis should be tested for HIV infection. All cases of syphilis should be reported to public health authorities, and public health departments can aid in partner notification, testing, and determining the need for treatment.

Treatment success is measured by a 4-fold drop in the nontreponemal antibody titer. In early-stage syphilis this drop should occur within 6–12 months; in late syphilis it can take 12–24 months. Titers may decline more slowly in persons who have previously had syphilis. In people who cannot take penicillin, it is uncertain whether other antibiotic therapy is effective for treating neurosyphilis.
== References == |
Neurotrophic keratitis | Neurotrophic keratitis (NK) is a degenerative disease of the cornea caused by damage to the trigeminal nerve, which results in impairment of corneal sensitivity, spontaneous corneal epithelium breakdown, poor corneal healing and the development of corneal ulceration, melting and perforation. This is because, in addition to its primary sensory role, the nerve also helps maintain the integrity of the cornea by supplying it with trophic factors and regulating tissue metabolism.

Neurotrophic keratitis is classified as a rare disease, with an estimated prevalence of less than 5 in 10,000 people in Europe. It has been recorded that, on average, 6% of herpetic keratitis cases may evolve to this disease, with a peak of 12.8% of cases of keratitis due to varicella zoster virus.

The diagnosis, and particularly the treatment, of neurotrophic keratitis are the most complex and challenging aspects of this disease, as a satisfactory therapeutic approach is not yet available.
Causes
The cornea lacks blood vessels and is among the most densely innervated structures of the human body. Corneal nerves are responsible for maintaining the anatomical and functional integrity of the cornea, conveying tactile, temperature and pain sensations, and playing a role in the blink reflex, in wound healing and in the production and secretion of tears.

Most corneal nerve fibres are sensory in origin and are derived from the ophthalmic branch of the trigeminal nerve. Congenital or acquired ocular and systemic diseases can cause a lesion at different levels of the trigeminal nerve, which can lead to a reduction (hypoesthesia) or loss (anesthesia) of corneal sensitivity.
The most common causes of loss of corneal sensitivity are viral infections (herpes simplex and herpes zoster ophthalmicus), chemical burns, physical injuries, corneal surgery, neurosurgery, chronic use of topical medications, and chronic use of contact lenses. Possible causes also include systemic diseases such as diabetes mellitus, multiple sclerosis and leprosy.
Other, albeit less frequent, potential causes of the disease are intracranial space-occupying lesions such as neuromas, meningiomas and aneurysms, which may compress the trigeminal nerve and reduce corneal sensitivity. By contrast, congenital conditions that may lead to this disorder are very rare.
Diagnosis
NK is diagnosed on the basis of the patient's medical history and a careful examination of the eye and surrounding area. With regard to the patient's medical history, special attention should be paid to any herpes virus infections and to possible surgeries on the cornea, trauma, abuse of anaesthetics, chronic topical treatments, chemical burns, or use of contact lenses. It is also necessary to investigate the possible presence of diabetes or other systemic diseases such as multiple sclerosis.
The clinical examination is usually performed through a series of assessments and tools:
General examination of cranial nerves, to determine the presence of nerve damage.
Eye examinations:
Complete eye examination: examination of the eyelids, blink rate, presence of inflammatory reactions and secretions, and corneal epithelial alterations.
Corneal sensitivity test: performed by placing a cotton wad or cotton thread in contact with the corneal surface, which only determines whether corneal sensitivity is normal, reduced or absent; alternatively, an esthesiometer can be used to measure corneal sensitivity.
Tear film function tests, such as Schirmer's test and tear film break-up time.
Fluorescein eye stain test, which shows any damage to the corneal and conjunctival epithelium.
Classification
According to Mackie's classification, neurotrophic keratitis can be divided into three stages based on severity:
Stage I: characterized by alterations of the corneal epithelium, which is dry and opaque, with superficial punctate keratopathy and corneal oedema. Long-lasting neurotrophic keratitis may also cause hyperplasia of the epithelium, stromal scarring and neovascularization of the cornea.
Stage II: characterized by development of epithelial defects, often in the area near the centre of the cornea.
Stage III: characterized by ulcers of the cornea accompanied by stromal oedema and/or melting that may result in corneal perforation.
Treatment
Early diagnosis, targeted treatment according to the severity of the disease, and regular monitoring of patients with neurotrophic keratitis are critical to prevent damage progression and the occurrence of corneal ulcers, especially considering that the deterioration of the condition is often poorly symptomatic.

The purpose of treatment is to prevent the progression of corneal damage and promote healing of the corneal epithelium. Treatment should always be personalized according to the severity of the disease, and conservative treatment is typically the best option.

In stage I, the least serious, treatment consists of the administration of preservative-free artificial tears several times a day in order to lubricate and protect the ocular surface, improving the quality of the epithelium and preventing a possible loss of corneal transparency.

In stage II, treatment should be aimed at preventing the development of corneal ulcers and promoting the healing of epithelial lesions. In addition to artificial tears, topical antibiotics may also be prescribed to prevent possible infections. Patients should be monitored very carefully since, as the disease is poorly symptomatic, the corneal damage may progress without the patient noticing any worsening of the symptoms. Corneal contact lenses can also be used in this stage of the disease for their protective action to improve corneal healing.

In the most severe forms (stage III), it is necessary to stop the progression towards corneal perforation: in these cases, a possible surgical treatment option is tarsorrhaphy, i.e. the temporary or permanent closure of the eyelids by means of sutures or botulinum toxin injection. This protects the cornea, although the aesthetic result of these procedures may be difficult for patients to accept.
Similarly, a procedure that entails the creation of a conjunctival flap has been shown to be effective in the treatment of chronic corneal ulcers with or without corneal perforation. In addition, another viable therapeutic option is amniotic membrane graft, which has recently been shown to play a role in stimulating corneal epithelium healing and in reducing vascularisation and inflammation of the ocular surface. Other approaches used in severe forms include the administration of autologous serum eye drops.

Research studies have focused on developing novel treatments for neurotrophic keratitis, and several polypeptides, growth factors and neuromediators have been proposed. Studies of topical treatment with Substance P and IGF-1 (insulin-like growth factor-1) demonstrated an effect on epithelial healing. Nerve growth factor (NGF) plays a role in epithelial proliferation and differentiation and in the survival of corneal sensory nerves, and topical treatment with murine NGF was shown to promote recovery of epithelial integrity and corneal sensitivity in NK patients. Recently, a recombinant human nerve growth factor eye drop formulation has been developed for clinical use. Cenegermin, a recombinant form of human NGF, has recently been approved in Europe in an eye drop formulation for neurotrophic keratitis.
Cenegermin eye drops were also approved by the FDA for the treatment of NK in August 2018.
See also
Keratitis
Cornea
Rare disease
References
== External links == |
Nightmare | A nightmare, also known as a bad dream, is an unpleasant dream that can cause a strong emotional response from the mind, typically fear but also despair, anxiety or great sadness. However, psychological nomenclature differentiates between nightmares and bad dreams; specifically, people remain asleep during bad dreams, whereas nightmares can awaken individuals. The dream may contain situations of discomfort, psychological or physical terror, or panic. After a nightmare, a person will often awaken in a state of distress and may be unable to return to sleep for a short period of time. Recurrent nightmares may require medical help, as they can interfere with sleeping patterns and cause insomnia.
Nightmares can have physical causes such as sleeping in an uncomfortable position or having a fever, or psychological causes such as stress or anxiety. Eating before going to sleep, which triggers an increase in the body's metabolism and brain activity, can be a potential stimulus for nightmares.

The prevalence of nightmares in children (5–12 years old) is between 20 and 30%, and in adults between 8 and 30%. In common language, the meaning of nightmare has extended as a metaphor to many bad things, such as a bad situation or a scary monster or person.
Etymology
The word nightmare is derived from the Old English mare, a mythological demon or goblin who torments others with frightening dreams. The term has no connection with the Modern English word for a female horse. The word nightmare is cognate with the Dutch term nachtmerrie and German Nachtmahr (dated).
The sorcerous demons of Iranian mythology known as Divs are likewise associated with the ability to afflict their victims with nightmares.
Signs and symptoms
Those with nightmares experience abnormal sleep architecture. The impact of having a nightmare during the night has been found to be very similar to that of insomnia. This is thought to be caused by frequent nocturnal awakenings and fear of falling asleep. Nightmare disorder symptoms include repeated awakenings from the major sleep period or naps with detailed recall of extended and extremely frightening dreams, usually involving threats to survival, security, or self-esteem. The awakenings generally occur during the second half of the sleep period.
Classification
According to the International Classification of Sleep Disorders-Third Edition (ICSD-3), nightmare disorder, together with REM sleep behaviour disorder (RBD) and recurrent isolated sleep paralysis, forms the REM-related parasomnias subcategory of the parasomnias cluster. Nightmares may be idiopathic, without any signs of psychopathology, or associated with disorders like stress, anxiety, substance abuse, psychiatric illness or PTSD (more than 80% of PTSD patients report nightmares). Dream content usually involves negative emotions such as sadness, fear or rage. According to clinical studies, the content can include being chased, injury or death of others, falling, natural disasters or accidents. Typical dreams or recurrent dreams may also have some of these topics.
Cause
Scientific research shows that nightmares may have many causes. In a study focusing on children, researchers concluded that nightmares correlate directly with the stress in children's lives. Children who experienced the death of a family member or a close friend, or who know someone with a chronic illness, have more frequent nightmares than those who face only stress from school or from the social aspects of daily life.
Another study researched the causes of nightmares by focusing on patients with sleep apnea. The study was conducted to determine whether nightmares may be caused by sleep apnea, that is, by being unable to breathe. In the nineteenth century, authors believed that nightmares were caused by a lack of oxygen, so it was assumed that those with sleep apnea had more frequent nightmares than those without it. The results actually showed that healthy people have more nightmares than sleep apnea patients.
Another study supports the hypothesis. In this study, 48 patients (aged 20–85 yrs) with obstructive airways disease (OAD), including 21 with and 27 without asthma, were compared with 149 sex- and age-matched controls without respiratory disease. OAD subjects with asthma reported approximately 3 times as many nightmares as controls or OAD subjects without asthma. The evolutionary purpose of nightmares then could be a mechanism to awaken a person who is in danger.
Lucid-dreaming advocate Stephen LaBerge has outlined a possible reason for how dreams are formulated and why nightmares occur. To LaBerge, a dream starts with an individual thought or scene, such as walking down a dimly lit street. Since dreams are not predetermined, the brain responds to the situation by either thinking a good thought or a bad thought, and the dream framework follows from there. If bad thoughts in a dream are more prominent than good thoughts, the dream may proceed to be a nightmare. There is a view, possibly featured in the story A Christmas Carol, that eating cheese before sleep can cause nightmares, but there is little scientific evidence for this. Severe nightmares are also likely to occur when a person has a fever; these nightmares are often referred to as fever dreams.
Treatment
Sigmund Freud and Carl Jung seemed to have shared a belief that people frequently distressed by nightmares could be re-experiencing some stressful event from the past. Both perspectives on dreams suggest that therapy can provide relief from the dilemma of the nightmarish experience.
Halliday (1987) grouped treatment techniques into four classes. Direct nightmare interventions that combine compatible techniques from one or more of these classes may enhance overall treatment effectiveness:
Analytic and cathartic techniques
Storyline alteration procedures
Face-and-conquer approaches
Desensitization and related behavioral techniques
Post-traumatic stress disorder
Recurring post-traumatic stress disorder (PTSD) nightmares in which traumas are re-experienced respond well to a technique called imagery rehearsal. This involves dreamers coming up with alternative, mastery outcomes to the nightmares, mentally rehearsing those outcomes while awake, and then reminding themselves at bedtime that they wish these alternate outcomes should the nightmares reoccur. Research has found that this technique not only reduces the occurrence of nightmares and insomnia, but also improves other daytime PTSD symptoms. The most common variations of imagery rehearsal therapy (IRT) "relate to the number of sessions, duration of treatment, and the degree to which exposure therapy is included in the protocol".
Medication
Prazosin (alpha-1 blocker) appears useful in decreasing the number of nightmares and the distress caused by them in people with PTSD.
Risperidone (an atypical antipsychotic), at a dosage of 2 mg per day, has been shown in case series to induce remission of nightmares on the first night.
Trazodone (an antidepressant) has been shown in a case report to treat nightmares in a depressed patient. Trials have included hydrocortisone, gabapentin, paroxetine, tetrahydrocannabinol, eszopiclone, xyrem, and carvedilol.
See also
Bogeyman
False awakening
Hag § In folklore
Horror and terror
Mare (folklore)
Night terror
Nightmare disorder
Nocnitsa
Sleep disorder
Sleep paralysis
Succubus
Incubus
A Nightmare on Elm Street, 1984 film
References
Further reading
Anch, A. M.; Browman, C. P.; Mitler, M. M.; Walsh, J. K. (1988). Sleep: A Scientific Perspective. New Jersey: Prentice-Hall. ISBN 9780138129187.
Harris, J. C. (2004). "The Nightmare". Archives of General Psychiatry. 61 (5): 439–40. doi:10.1001/archpsyc.61.5.439. PMID 15123487.
Husser, J.-M.; Mouton, A., eds. (2010). Le Cauchemar dans les sociétés antiques. Actes des journées d'étude de l'UMR 7044 (15–16 Novembre 2007, Strasbourg) (in French). Paris: De Boccard.
Jones, Ernest (1951). On the Nightmare. ISBN 978-0-87140-912-6.
Forbes, D.; et al. (2001). "Brief Report: Treatment of Combat-Related Nightmares Using Imagery Rehearsal: A Pilot Study". Journal of Traumatic Stress. 14 (2): 433–442. doi:10.1023/A:1011133422340. PMID 11469167. S2CID 44630028.
Siegel, A. (2003). "A mini-course for clinicians and trauma workers on posttraumatic nightmares".
Burns, Sarah (2004). Painting the Dark Side: Art and the Gothic Imagination in Nineteenth-Century America. Ahmanson-Murphy Fine Arts Imprint. University of California Press. ISBN 978-0-520-23821-3.
Davenport-Hines, Richard (1999). Gothic: Four Hundred Years of Excess, Horror, Evil and Ruin. North Point Press. pp. 160–61. ISBN 9780865475441.
Hill, Anne (2009). What To Do When Dreams Go Bad: A Practical Guide to Nightmares. Serpentine Media. ISBN 978-1-887590-04-4.
Simons, Ronald C.; Hughes, Charles C., eds. (1985). Culture-Bound Syndromes. Springer.
Sagan, Carl (1997). The Demon-Haunted World: Science as a Candle in the Dark.
Coalson, Bob (1995). "Nightmare help: Treatment of trauma survivors with PTSD". Psychotherapy: Theory, Research, Practice, Training. 32 (3): 381–388. doi:10.1037/0033-3204.32.3.381.
"Nightmares? Bad Dreams, or Recurring Dreams? Lucky You!". Archived from the original on 19 March 2012. Retrieved 8 December 2015.
Halliday, G. (1987). "Direct psychological therapies for nightmares: A review". Clinical Psychology Review. 7 (5): 501–523. doi:10.1016/0272-7358(87)90041-9.
Doctor, Ronald M.; Shiromoto, Frank N., eds. (2010). "Imagery Rehearsal Therapy (IRT)". The Encyclopedia of Trauma and Traumatic Stress Disorders. New York: Facts on File. p. 148. ISBN 9780816067640.
Mayer, Mercer (1976). There's a Nightmare in My Closet. [New York]: Puffin Pied Piper.
Moore, Bret A.; Kraków, Barry (2010). "Imagery rehearsal therapy: An emerging treatment for posttraumatic nightmares in veterans". Psychological Trauma: Theory, Research, Practice, and Policy. 2 (3): 232–238. doi:10.1037/a0019895.
External links
Night-Mares: Demons that Cause Nightmares |
Nocturnal enuresis | Nocturnal enuresis, also informally called bedwetting, is involuntary urination while asleep after the age at which bladder control usually begins. Bedwetting in children and adults can result in emotional stress. Complications can include urinary tract infections. Most bedwetting is a developmental delay, not an emotional problem or physical illness. Only a small percentage (5 to 10%) of bedwetting cases have a specific medical cause. Bedwetting is commonly associated with a family history of the condition. Nocturnal enuresis is considered primary when a child has not yet had a prolonged period of being dry. Secondary nocturnal enuresis is when a child or adult begins wetting again after having stayed dry.
Treatments range from behavioral therapy, such as bedwetting alarms, to medication, such as hormone replacement, and even surgery such as urethral dilatation. Since most bedwetting is simply a developmental delay, most treatment plans aim to protect or improve self-esteem. Treatment guidelines recommend that the physician counsel the parents, warning about psychological consequences caused by pressure, shaming, or punishment for a condition children cannot control. Bedwetting is the most common childhood urologic complaint.
Impact
A review of medical literature shows doctors consistently stressing that a bedwetting child is not at fault for the situation. Many medical studies state that the psychological impacts of bedwetting are more important than the physical considerations. "It is often the child's and family members' reaction to bedwetting that determines whether it is a problem or not."
Self-esteem
Whether bedwetting causes low self-esteem remains a subject of debate, but several studies have found that self-esteem improved with management of the condition. Children questioned in one study ranked bedwetting as the third most stressful life event, after the "parental war of words" of divorce and parental fighting. Adolescents in the same study ranked bedwetting as tied for second with parental fighting. Bedwetters face problems ranging from being teased by siblings, to being punished by parents, to the embarrassment of still having to wear diapers, and to the fear that friends will find out.
Psychologists report that the amount of psychological harm depends on whether the bedwetting harms self-esteem or development of social skills. Key factors are:
How much the bedwetting limits social activities like sleep-overs and campouts
The degree of the social ostracism by peers
(Perceived) Anger, punishment, refusal and rejection by caregivers along with subsequent guilt
The number of failed treatment attempts
How long the child has been wetting
Behavioral impact
Studies indicate that children with behavioral problems are more likely to wet their beds. For children who have developmental problems, the behavioral problems and the bedwetting are frequently part of, or caused by, the developmental issues. For bedwetting children without other developmental issues, these behavioral issues can result from self-esteem problems and stress caused by the wetting. As mentioned below, current studies show that it is very rare for a child to intentionally wet the bed as a method of acting out.
Punishment for bedwetting
Medical literature states, and studies show, that punishing or shaming a child for bedwetting frequently makes the situation worse. It is best described as a downward cycle, where a child punished for bedwetting feels shame and a loss of self-confidence. This can cause increased bedwetting incidents, leading to more punishment and shaming. In the United States, about 25% of enuretic children are punished for wetting the bed. In Hong Kong, 57% of enuretic children are punished for wetting. Parents with only a grade-school education punish bedwetting children at twice the rate of high-school- and college-educated parents.
Families
Parents and family members are frequently stressed by a child's bedwetting. Soiled linens and clothing cause additional laundry. Wetting episodes can cause lost sleep if the child wakes and/or cries, waking the parents. A European study estimated that a family with a child who wets nightly will pay about $1,000 a year for additional laundry, extra sheets, diapers, and mattress replacement. Despite these stressful effects, doctors emphasize that parents should react patiently and supportively.
Sociopathy
Bedwetting does not indicate a greater possibility of being a sociopath, as long as caregivers do not cause trauma by shaming or punishing a bedwetting child. Bedwetting was part of the Macdonald triad, a set of three behavioral characteristics described by John Macdonald in 1963. The other two characteristics were firestarting and animal abuse. Macdonald suggested an association between a person displaying all three characteristics and later displaying sociopathic criminal behavior. Up to 60% of multiple-murderers, according to some estimates, wet their beds post-adolescence. Enuresis is an "unconscious, involuntary [..] act". Bedwetting can be connected to past emotions and identity. Children under substantial stress, particularly in their home environment, frequently engage in bedwetting in order to alleviate the stress produced by their surroundings. Trauma can also trigger a return to bedwetting (secondary enuresis) in both children and adults.
It is not bedwetting that increases the chance of criminal behavior, but the associated trauma. For example, parental cruelty can result in "homicidal proneness".
Causes
The aetiology of NE is not fully understood, although there are three common causes: excessive urine volume, poor sleep arousal, and bladder contractions. Differentiation of cause is mainly based on patient history and fluid charts completed by the parent or carer to inform management options. Bedwetting has a strong genetic component. Children whose parents were not enuretic have only a 15% incidence of bedwetting. When one or both parents were bedwetters, the rates jump to 44% and 77%, respectively. These first two factors (developmental delay and genetics) are the most common in bedwetting, but current medical technology offers no easy testing for either cause. There is no test to prove that bedwetting is only a developmental delay, and genetic testing offers little or no benefit. As a result, other conditions should be ruled out. The following causes are less common, but are easier to prove and more clearly treated:
In some bed-wetting children there is no increase in ADH (antidiuretic hormone) production, while other children may produce an increased amount of ADH but their response is insufficient.
Individuals with reported bedwetting issues are 2.7 times more likely to be diagnosed with Attention deficit hyperactivity disorder.
Caffeine increases urine production.
Chronic constipation can cause bed wetting. When the bowels are full, it can put pressure on the bladder. Often such children defecate normally, yet they retain a significant mass of material in the bowel which causes bed wetting.
Infections and disease are more strongly connected with secondary nocturnal enuresis and with daytime wetting. Less than 5% of all bedwetting cases are caused by infection or disease, the most common of which is a urinary tract infection.
Patients with more severe neurological-developmental issues have a higher rate of bedwetting problems. One study of seven-year-olds showed that "handicapped and intellectually disabled children" had a bedwetting rate almost three times higher than "non-handicapped children" (26.6% vs. 9.5%, respectively).
Psychological issues (e.g., death in the family, sexual abuse, extreme bullying) are established as a cause of secondary nocturnal enuresis (a return to bedwetting), but are very rarely a cause of PNE-type bedwetting. Bedwetting can also be a symptom of a pediatric neuropsychological disorder called PANDAS.
Sleep apnea stemming from an upper airway obstruction has been associated with bedwetting. Snoring and enlarged tonsils or adenoids are a sign of potential sleep apnea problems.
Sleepwalking can lead to bedwetting. During sleepwalking, the sleepwalker may think they are in another room. When sleepwalkers urinate during a sleepwalking episode, they usually think they are in the bathroom, and therefore urinate where they think the toilet should be. Cases of this have included opening a closet and urinating in it, urinating on the sofa, and simply urinating in the middle of the room.
Stress can cause a return to wetting the bed. Researchers find that moving to a new town, parental conflict or divorce, the arrival of a new baby, or the loss of a loved one or pet can cause insecurity, contributing to a return of bedwetting.
Type 1 diabetes mellitus can first present as nocturnal enuresis. It is classically associated with polyuria, polydipsia, and polyphagia; weight loss, lethargy, and diaper candidiasis may also be present in those with new-onset disease.
Unconfirmed
Food allergies may be part of the cause for some patients. This link is not well established, requiring further research.
Improper toilet training is another disputed cause of bedwetting. This theory was more widely supported in the last century and is still cited by some authors today. Some say bedwetting can be caused by improper toilet training, either by starting the training when the child is too young or by being too forceful. Recent research has shown more mixed results and a connection to toilet training has not been proven or disproven. According to the American Academy of Pediatrics, more child abuse occurs during potty training than in any other developmental stage.
Dandelions are reputed to be a potent diuretic, and anecdotal reports and folk wisdom say children who handle them can end up wetting the bed. English folk names for the plant are "peebeds" and "pissabeds". In French the dandelion is called pissenlit, which means "piss in bed"; likewise "piscialletto", an Italian folkname, and "meacamas" in Spanish.
Mechanism
Two physical functions prevent bedwetting. The first is a hormone that reduces urine production at night. The second is the ability to wake up when the bladder is full. Children usually achieve nighttime dryness by developing one or both of these abilities. There appear to be some hereditary factors in how and when these develop. The first ability is a hormone cycle that reduces the body's urine production. At about sunset each day, the body releases a minute burst of antidiuretic hormone (also known as arginine vasopressin or AVP). This hormone burst reduces the kidneys' urine output well into the night, so that the bladder does not get full until morning. This hormone cycle is not present at birth. Many children develop it between the ages of two and six years old, others between six and the end of puberty, and some not at all. The second ability that helps people stay dry is waking when the bladder is full. This ability develops in the same age range as the vasopressin hormone cycle, but is separate from it.
The typical development process begins with one- and two-year-old children developing larger bladders and beginning to sense bladder fullness. Two- and three-year-old children begin to stay dry during the day. Four- and five-year-olds develop an adult pattern of urinary control and begin to stay dry at night.
Diagnosis
Thorough history regarding frequency of bedwetting, any period of dryness in between, associated daytime symptoms, constipation, and encopresis should be sought.
Voiding diary
People are asked to observe, record and measure when and how much their child voids and drinks, as well as associated symptoms. A voiding diary in the form of a frequency volume chart records voided volume along with the time of each micturition for at least 24 hours. The frequency volume chart is enough for patients with complaints of nocturia and frequency only. If other symptoms are also present then a detailed bladder diary must be maintained. In a bladder diary, times of micturition and voided volume, incontinence episodes, pad usage, and other information such as fluid intake, the degree of urgency, and the degree of incontinence are recorded.
Physical examination
Each child should be examined physically at least once at the beginning of treatment. A full pediatric and neurological exam is recommended. Measurement of blood pressure is important to rule out any renal pathology. External genitalia and the lumbosacral spine should be examined thoroughly. A spinal defect, such as a dimple, hair tuft, or skin discoloration, might be visible in approximately 50% of patients with an intraspinal lesion. A thorough neurologic examination of the lower extremities, including gait, muscle power, tone, sensation, reflexes, and plantar responses, should be done during the first visit.
Classification
Nocturnal urinary continence is dependent on 3 factors: 1) nocturnal urine production, 2) nocturnal bladder function and 3) sleep and arousal mechanisms. Any child will experience nocturnal enuresis if more urine is produced than can be contained in the bladder or if the detrusor is hyperactive, provided that he or she is not awakened by the imminent bladder contraction.
Primary nocturnal enuresis
Primary nocturnal enuresis is the most common form of bedwetting. Bedwetting becomes a disorder when it persists after the age at which bladder control usually occurs (4–7 years), and either results in an average of at least two wet nights a week with no long periods of dryness, or the child is not able to sleep dry without being taken to the toilet by another person.
New studies show that antipsychotic drugs can have a side effect of causing enuresis. It has also been shown that diet impacts enuresis in children: constipation from a poor diet can result in impacted stool in the colon, putting undue pressure on the bladder and creating loss of bladder control (overflow incontinence). Some researchers, however, recommend a different starting age range. This guidance says that bedwetting can be considered a clinical problem if the child regularly wets the bed after turning 7 years old.
Secondary nocturnal enuresis
Secondary enuresis occurs after a patient goes through an extended period of dryness at night (six months or more) and then reverts to night-time wetting. Secondary enuresis can be caused by emotional stress or a medical condition, such as a bladder infection.
Psychological definition
Psychologists are usually allowed to diagnose nocturnal enuresis, and to write a prescription for diapers, if it causes the patient significant distress. Psychiatrists may instead use a definition from the DSM-IV, defining nocturnal enuresis as repeated urination into bed or clothes, occurring twice per week or more for at least three consecutive months, in a child of at least 5 years of age, and not due to either a drug side effect or a medical condition.
Management
There are a number of management options for bedwetting. The following options apply when the bedwetting is not caused by a specifically identifiable medical condition such as a bladder abnormality or diabetes. Treatment is recommended when there is a specific medical condition such as bladder abnormalities, infection, or diabetes. It is also considered when bedwetting may harm the child's self-esteem or relationships with family and friends. Only a small percentage of bedwetting is caused by a specific medical condition, so most treatment is prompted by concern for the child's emotional welfare. Behavioral treatment of bedwetting overall tends to improve self-esteem in children. Parents become concerned much earlier than doctors do: a study in 1980 asked parents and physicians at what age children should stay dry at night; the average parent response was 2.75 years old, while the average physician response was 5.13 years old. Punishment is not effective and can interfere with treatment.
Treatment approaches
Simple behavioral methods are recommended as initial treatment. Other treatment methods include the following:
Motivational therapy in nocturnal enuresis mainly involves parent and child education. Guilt should be allayed by providing facts. Fluids should be restricted 2 hours prior to bed. The child should be encouraged to empty the bladder completely prior to going to bed. Positive reinforcement can be initiated by setting up a diary or chart to monitor progress and establishing a system to reward the child for each night that they are dry. The child should participate in morning cleanup as a natural, nonpunitive consequence of wetting. This method is particularly helpful in younger children (<8 years) and will achieve dryness in 15-20% of the patients.
Waiting: Almost all children will outgrow bedwetting. For this reason, urologists and pediatricians frequently recommend delaying treatment until the child is at least six or seven years old. Physicians may begin treatment earlier if they perceive the condition is damaging the child's self-esteem and/or relationships with family/friends.
Bedwetting alarms: Physicians also frequently suggest bedwetting alarms which sound a loud tone when they sense moisture. This can help condition the child to wake at the sensation of a full bladder. These alarms are considered more effective than no treatment and may have a lower risk of adverse events than some medical therapies but it is still uncertain if alarms are more effective than other treatments. There may be a 29% to 69% relapse rate, so the treatment may need to be repeated.
DDAVP (desmopressin) tablets are a synthetic replacement for antidiuretic hormone, the hormone that reduces urine production during sleep. Desmopressin is usually used in the form of desmopressin acetate, DDAVP. Patients taking DDAVP are 4.5 times more likely to stay dry than those taking a placebo. The drug replaces the hormone for that night with no cumulative effect. US drug regulators have banned using desmopressin nasal sprays for treating bedwetting since the oral form is considered safer.
DDAVP is most efficient in children with nocturnal polyuria (nocturnal urine production greater than 130% of expected bladder capacity for age) and normal bladder reservoir function (maximum voided volume greater than 70% of expected bladder capacity for age). Other children who are likely candidates for desmopressin treatment are those in whom alarm therapy has failed or those considered unlikely to comply with alarm therapy. It can be very useful for summer camp and sleepovers to prevent enuresis.
Tricyclic antidepressants: Tricyclic antidepressant prescription drugs with anti-muscarinic properties have been proven successful in treating bedwetting, but also have an increased risk of side effects, including death from overdose. These drugs include amitriptyline, imipramine and nortriptyline. Studies find that patients using these drugs are 4.2 times as likely to stay dry as those taking a placebo. The relapse rates after stopping the medicines are close to 50%.
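The desmopressin candidacy criteria above are both defined relative to expected bladder capacity (EBC) for age. A minimal sketch of how the two thresholds combine, using Koff's common rule-of-thumb estimate EBC = (age + 1) × 30 mL (an assumption not stated in this article; the function names are illustrative):

```python
def expected_bladder_capacity_ml(age_years: int) -> int:
    """Koff's rule-of-thumb estimate of expected bladder capacity (EBC) in mL.

    The formula (age + 1) * 30 is a common clinical approximation used here
    only for illustration; it is not part of the source text.
    """
    return (age_years + 1) * 30


def desmopressin_profile(age_years: int, nocturnal_urine_ml: float,
                         max_voided_ml: float) -> dict:
    """Apply the two criteria quoted above:

    - nocturnal polyuria: night-time urine production > 130% of EBC
    - normal reservoir function: maximum voided volume > 70% of EBC
    """
    ebc = expected_bladder_capacity_ml(age_years)
    return {
        "ebc_ml": ebc,
        "nocturnal_polyuria": nocturnal_urine_ml > 1.3 * ebc,
        "normal_reservoir": max_voided_ml > 0.7 * ebc,
    }


# Example: a 7-year-old (EBC = 240 mL) producing 350 mL overnight with a
# maximum voided volume of 200 mL meets both criteria (350 > 312, 200 > 168).
print(desmopressin_profile(7, 350, 200))
```

The thresholds (130% and 70%) come from the article; only the EBC formula is an external assumption, so any real assessment would use a clinically validated capacity estimate.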
Condition management
Diapers: Wearing a diaper can reduce embarrassment for bedwetters and make cleanup easier for caregivers. These products are known as training pants or diapers when used for younger children, and as absorbent underwear or incontinence briefs when marketed for older children and adults. Some diapers are marketed especially for people with bedwetting. A major benefit is the reduced stress on both the bedwetter and caregivers. Wearing diapers can be especially beneficial for bedwetting children wishing to attend sleepovers or campouts, reducing emotional problems caused by social isolation and/or embarrassment in front of peers. According to one study of an adult with severe disabilities, extended diaper usage may interfere with learning to stay dry.
Waterproof mattress pads are used in some cases to ease clean-up of bedwetting incidents; however, they protect only the mattress, and the sheets, bedding, or a sleeping partner may still be soiled.
Unproven
Acupuncture: While acupuncture is safe in most adolescents, studies done to assess its effectiveness for nocturnal enuresis are of low quality.
Dry bed training: Dry bed training involves frequently waking the child at night. Studies show this training is ineffective by itself and does not increase the success rate when used in conjunction with a bedwetting alarm.
Star chart: A star chart allows a child and parents to track dry nights, as a record and/or as part of a reward program. This can be done either alone or with other treatments. There is no research to show effectiveness, either in reducing bedwetting or in helping self-esteem. Some psychologists, however, recommend star charts as a way to celebrate successes and help a child's self-esteem.
Epidemiology
Doctors frequently consider bedwetting a self-limiting problem, since most children will outgrow it. Children 5 to 9 years old have a spontaneous cure rate of 14% per year. Adolescents 10 to 18 years old have a spontaneous cure rate of 16% per year. As these numbers show, a portion of bedwetting children will not outgrow the problem. Adult rates of bedwetting show little change due to spontaneous cure; persons who are still enuretic at age 18 are likely to deal with bedwetting throughout their lives. Studies of bedwetting in adults have found varying rates. The most quoted study in this area, done in the Netherlands, found a 0.5% rate for 20- to 79-year-olds. A Hong Kong study, however, found a much higher rate of 2.3% in 16- to 40-year-olds.
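The annual spontaneous cure rates compound year over year. A rough sketch of the implied persistence, under the simplifying assumption (not made explicit in the source) that the yearly rate applies independently each year:

```python
def fraction_still_wetting(start_fraction: float,
                           annual_cure_rate: float,
                           years: int) -> float:
    """Fraction of an enuretic cohort still wetting after `years`,
    assuming the annual spontaneous cure rate applies independently
    each year (a simplification for illustration only)."""
    return start_fraction * (1 - annual_cure_rate) ** years


# With the 14%/year rate for 5- to 9-year-olds, roughly (1 - 0.14)**5,
# i.e. about 47% of enuretic five-year-olds, would still wet at age ten.
print(round(fraction_still_wetting(1.0, 0.14, 5), 2))
```

This toy model is consistent with the article's observation that a meaningful portion of children do not outgrow the problem: even at 14–16% per year, the remaining fraction shrinks gradually rather than vanishing.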
History
In the first century B.C., at lines 1026-29 of the fourth book of his On the Nature of Things, Lucretius gave a high-style description of bed-wetting:
"Innocent children often, when they are bound up by sleep, believe they are raising up their clothing by a latrine or shallow pot; they pour out the urine from their whole body, and the Babylonian bedding with its magnificent splendor is soaked."
An early psychological perspective on bedwetting was given in 1025 by Avicenna in The Canon of Medicine:
"Urinating in bed is frequently predisposed by deep sleep: when urine begins to flow, its inner nature and hidden will (resembling the will to breathe) drives urine out before the child awakes. When children become stronger and more robust, their sleep is lighter and they stop urinating."
Psychological theory through the 1960s placed much greater focus on the possibility that a bedwetting child might be acting out, purposefully striking back against parents by soiling linens and bedding. However, more recent research and medical literature state that this is very rare.
See also
Enuresis
Nocturnal emission
References
== External links == |
Onchocerciasis | Onchocerciasis, also known as river blindness, is a disease caused by infection with the parasitic worm Onchocerca volvulus. Symptoms include severe itching, bumps under the skin, and blindness. It is the second-most common cause of blindness due to infection, after trachoma. The parasitic worm is spread by the bites of a black fly of the Simulium type. Usually, many bites are required before infection occurs. These flies live near rivers, hence the common name of the disease. Once inside a person, the worms create larvae that make their way out to the skin, where they can infect the next black fly that bites the person. There are a number of ways to make the diagnosis, including: placing a biopsy of the skin in normal saline and watching for the larvae to come out; looking in the eye for larvae; and looking within the bumps under the skin for adult worms. A vaccine against the disease does not exist. Prevention is by avoiding being bitten by flies. This may include the use of insect repellent and proper clothing. Other efforts include those to decrease the fly population by spraying insecticides. Efforts to eradicate the disease by treating entire groups of people twice a year are ongoing in a number of areas of the world. Treatment of those infected is with the medication ivermectin every six to twelve months. This treatment kills the larvae but not the adult worms. The antibiotic doxycycline weakens the worms by killing an associated bacterium called Wolbachia, and is recommended by some as well. The lumps under the skin may also be removed by surgery. About 15.5 million people are infected with river blindness. Approximately 0.8 million have some amount of vision loss from the infection. Most infections occur in sub-Saharan Africa, although cases have also been reported in Yemen and isolated areas of Central and South America. In 1915, the physician Rodolfo Robles first linked the worm to eye disease.
It is listed by the World Health Organization (WHO) as a neglected tropical disease. In 2013, Colombia became the first country to eradicate this disease.
Signs and symptoms
Adult worms remain in subcutaneous nodules, limiting access by the host's immune system. Microfilariae, in contrast, are able to induce intense inflammatory responses, especially upon their death. Wolbachia species have been found to be endosymbionts of O. volvulus adults and microfilariae, and are thought to be the driving force behind most of O. volvulus morbidity. Dying microfilariae have recently been discovered to release Wolbachia surface protein that activates TLR2 and TLR4, triggering innate immune responses and producing the inflammation and its associated morbidity. The severity of illness is directly proportional to the number of microfilariae and the power of the resultant inflammatory response. Skin involvement typically consists of intense itching, swelling, and inflammation. A grading system has been developed to categorize the degree of skin involvement:
Acute papular onchodermatitis – scattered pruritic papules
Chronic papular onchodermatitis – larger papules, resulting in hyperpigmentation
Lichenified onchodermatitis – hyperpigmented papules and plaques, with edema, lymphadenopathy, pruritus and common secondary bacterial infections
Skin atrophy – loss of elasticity, the skin resembles tissue paper, lizard skin appearance
Depigmentation – leopard skin appearance, usually on anterior lower leg
Glaucoma effect – eyes malfunction, begin to see shadows or nothing
Ocular involvement provides the common name associated with onchocerciasis, river blindness, and may involve any part of the eye from conjunctiva and cornea to uvea and posterior segment, including the retina and optic nerve. The microfilariae migrate to the surface of the cornea. Punctate keratitis occurs in the infected area. This clears up as the inflammation subsides. However, if the infection is chronic, sclerosing keratitis can occur, making the affected area become opaque. Over time, the entire cornea may become opaque, thus leading to blindness. Some evidence suggests the effect on the cornea is caused by an immune response to bacteria present in the worms. The infected person's skin is itchy, with severe rashes permanently damaging patches of skin.
Mazzotti reaction
The Mazzotti reaction, first described in 1948, is a symptom complex seen in patients after undergoing treatment of onchocerciasis with the medication diethylcarbamazine (DEC). Mazzotti reactions can be life-threatening, and are characterized by fever, urticaria, swollen and tender lymph nodes, tachycardia, hypotension, arthralgias, oedema, and abdominal pain that occur within seven days of treatment of microfilariasis.
The phenomenon is so common when DEC is used that this drug is the basis of a skin patch test used to confirm that diagnosis. The drug patch is placed on the skin, and if the patient is infected with O. volvulus microfilaria, localized pruritus and urticaria are seen at the application site.
Nodding disease
This is an unusual form of epidemic epilepsy associated with onchocerciasis, although a definitive link has not been established. This syndrome was first described in Tanzania by Louise Jilek-Aall, a Norwegian psychiatric doctor in Tanzanian practice, during the 1960s. It occurs most commonly in Uganda and South Sudan. It manifests itself in previously healthy 5–15-year-old children, is often triggered by eating or low temperatures, and is accompanied by cognitive impairment. Seizures occur frequently and may be difficult to control. The electroencephalogram is abnormal, but cerebrospinal fluid (CSF) and magnetic resonance imaging (MRI) are normal or show non-specific changes. If there are abnormalities on the MRI, they are usually present in the hippocampus. Polymerase chain reaction testing of the CSF does not show the presence of the parasite.
Cause
The cause is Onchocerca volvulus.
Life cycle
The life of the parasite can be traced through the black fly and the human hosts in the following steps:
A Simulium female black fly takes a blood meal on an infected human host and ingests microfilariae.
The microfilariae enter the gut and thoracic flight muscles of the black fly, progressing into the first larval stage (J1).
The larvae mature into the second larval stage (J2), and move to the proboscis and into the saliva in the third larval stage (J3). Maturation takes about seven days.
The black fly takes another blood meal, passing the larvae into the next human host's blood.
The larvae migrate to the subcutaneous tissue and undergo two more molts. They form nodules as they mature into adult worms over six to twelve months.
After maturing, adult male worms mate with female worms in the subcutaneous tissue to produce between 700 and 1,500 microfilariae per day.
The microfilariae migrate to the skin during the day, and black flies feed only during the day, so the parasite is in a prime position for the female fly to ingest it. Black flies take blood meals, ingesting these microfilariae and restarting the cycle.
Diagnosis
Diagnosis can be made by skin biopsy (with or without PCR) or antibody testing.
Classification
Onchocerciasis causes different kinds of skin changes, which vary in different geographic regions; it may be divided into the following phases or types:
Erisipela de la costa
An acute phase, it is characterized by swelling of the face, with erythema and itching. This skin change, erisípela de la costa, of acute onchocerciasis is most commonly seen among victims in Central and South America.
Mal morando
This cutaneous condition is characterized by inflammation accompanied by hyperpigmentation.
Sowda
A cutaneous condition, it is a localized type of onchocerciasis.
Additionally, the various skin changes associated with onchocerciasis may be described as follows:
Leopard skin
The spotted depigmentation of the skin that may occur with onchocerciasis
Elephant skin
The thickening of human skin that may be associated with onchocerciasis
Lizard skin
The thickened, wrinkled skin changes that may result with onchocerciasis
Prevention
Various control programs aim to stop onchocerciasis from being a public health problem. The first was the Onchocerciasis Control Programme (OCP), which was launched in 1974, and at its peak, covered 30 million people in 11 countries. Through the use of larvicide spraying of fast-flowing rivers to control black fly populations, and from 1988 onwards, the use of ivermectin to treat infected people, the OCP eliminated onchocerciasis as a public health problem. The OCP, a joint effort of the World Health Organization, the World Bank, the United Nations Development Programme, and the UN Food and Agriculture Organization, was considered to be a success, and came to an end in 2002. Continued monitoring ensures onchocerciasis cannot reinvade the area of the OCP.
Elimination
In 1995, the African Programme for Onchocerciasis Control (APOC) began covering another 19 countries, mainly relying upon the use of the drug ivermectin. Its goal was to set up community-directed treatment with ivermectin for those at risk of infection. In these ways, transmission has declined. APOC closed in 2015, and aspects of its work were taken over by the WHO Expanded Special Programme for the Elimination of Neglected Tropical Diseases (ESPEN). As in the Americas, the objective of ESPEN, working with government health ministries and partner NGDOs, is the elimination of transmission of onchocerciasis. This requires consistent annual treatment of 80% of the population in endemic areas for at least 10–12 years – the life span of the adult worm. No African country has so far verified elimination of onchocerciasis, but treatment has stopped in some areas (e.g. Nigeria), following epidemiological and entomological assessments that indicated that no ongoing transmission could be detected. In 2015, WHO facilitated the launch of an elimination program in Yemen, which was subsequently put on hold due to conflict. In 1992, the Onchocerciasis Elimination Programme for the Americas, which also relies on ivermectin, was launched. On July 29, 2013, the Pan American Health Organization (PAHO) announced that after 16 years of efforts, Colombia had become the first country in the world to eliminate onchocerciasis. In September 2015, the Onchocerciasis Elimination Program for the Americas announced that onchocerciasis only remained in a remote region on the border of Brazil and Venezuela. The area is home to the Yanomami indigenous people. The first countries to receive verification of elimination were Colombia in 2013, Ecuador in 2014, Mexico in 2015, and Guatemala in 2016. The key factor in elimination is mass administration of the antiparasitic drug ivermectin.
The initial projection was that the disease would be eliminated from remaining foci in the Americas by 2012. No vaccine to prevent onchocerciasis infection in humans is available. A vaccine to prevent onchocerciasis infection in cattle is in phase three trials. Cattle injected with a modified and weakened form of O. ochengi larvae have developed very high levels of protection against infection. The findings suggest that it could be possible to develop a vaccine that protects people against river blindness using a similar approach, but such a vaccine is still many years off.
Treatment
In mass drug administration (MDA) programmes, the treatment for onchocerciasis is ivermectin (trade name: Mectizan); infected people can be treated with two doses of ivermectin, six months apart, repeated every three years. The drug paralyses and kills the microfilariae, causing fever, itching, and possibly oedema, arthritis, and lymphadenopathy. Intense skin itching is eventually relieved, and the progression towards blindness is halted. In addition, while the drug does not kill the adult worms, it does prevent them for a limited time from producing additional offspring. The drug therefore prevents both morbidity and transmission for up to several months. Ivermectin treatment is particularly effective because it only needs to be taken once or twice a year, needs no refrigeration, and has a wide margin of safety, with the result that it has been widely given by minimally trained community health workers.
Antibiotics
For the treatment of individuals, doxycycline is used to kill the Wolbachia bacteria that live in adult worms. This adjunct therapy has been shown to significantly lower microfilarial loads in the host, and may kill the adult worms, due to the symbiotic relationship between Wolbachia and the worm. In four separate trials over ten years with various dosing regimens of doxycycline for individualized treatment, doxycycline was found to be effective in sterilizing the female worms and reducing their numbers over a period of four to six weeks. Research on other antibiotics, such as rifampicin, has shown them to be effective in animal models at reducing Wolbachia, both as an alternative and as an adjunct to doxycycline. However, doxycycline treatment requires daily dosing for at least four to six weeks, making it more difficult to administer in the affected areas.
Ivermectin
Ivermectin kills the parasite by interfering with the nervous system and muscle function, in particular, by enhancing inhibitory neurotransmission. The drug binds to and activates glutamate-gated chloride channels. These channels, present in neurons and myocytes, are not invertebrate-specific, but are protected in vertebrates from the action of ivermectin by the blood–brain barrier. Ivermectin is thought to irreversibly activate these channel receptors in the worm, eventually causing an inhibitory postsynaptic potential. The chance of a future action potential occurring in synapses between neurons decreases, and the nematodes experience flaccid paralysis followed by death. Ivermectin is directly effective against the larval stage microfilariae of O. volvulus; they are paralyzed and can be killed by eosinophils and macrophages. It does not kill adult females (macrofilariae), but does cause them to cease releasing microfilariae, perhaps by paralyzing the reproductive tract. Ivermectin is very effective in reducing microfilarial load and reducing the number of punctate opacities in individuals with onchocerciasis.
Moxidectin
Moxidectin was approved for onchocerciasis in 2018 for people over the age of 11 in the United States. The safety of multiple doses is unclear.
Epidemiology
About 21 million people were infected with this parasite in 2017; about 1.2 million of those had vision loss. As of 2017, about 99% of onchocerciasis cases occurred in Africa. Onchocerciasis is currently relatively common in 31 African countries, Yemen, and isolated regions of South America. Over 85 million people live in endemic areas, and half of these reside in Nigeria. Another 120 million people are at risk for contracting the disease. Due to the vector's breeding habitat, the disease is more severe along the major rivers in the northern and central areas of the continent, and severity declines in villages farther from rivers. Onchocerciasis was eliminated in the northern focus in Chiapas, Mexico, and the focus in Oaxaca, Mexico, where Onchocerca volvulus existed, was determined, after several years of treatment with ivermectin, to be free of transmission of the parasite. According to a 2002 WHO report, onchocerciasis has not caused a single death, but its global burden is 987,000 disability-adjusted life years (DALYs). The severe pruritus alone accounts for 60% of the DALYs. Infection reduces the host's immunity and resistance to other diseases, which results in an estimated reduction in life expectancy of 13 years.
History
Onchocerca originated in Africa and was exported to the Americas by the slave trade, as part of the Columbian exchange that introduced other Old World diseases such as yellow fever into the New World. Findings of a phylogenetic study in the mid-90s are consistent with an introduction to the New World in this manner. DNA sequences of savannah and rainforest strains in Africa differ, while American strains are identical to savannah strains in western Africa. The microfilarial parasite that causes the disease was first identified in 1874 by an Irish naval surgeon, John O'Neill, who was seeking to identify the cause of a common skin disease along the west coast of Africa, known as "craw-craw". Rudolf Leuckart, a German zoologist, later examined specimens of the same filarial worm sent from Africa by a German missionary doctor in 1890 and named the organism Filaria volvulus. Rodolfo Robles and Rafael Pacheco in Guatemala first mentioned the ocular form of the disease in the Americas about 1915. They described a tropical worm infection with adult Onchocerca that included inflammation of the skin, especially the face (erisipela de la costa), and eyes. The disease, commonly called the "filarial blinding disease", and later referred to as "Robles disease", was common among coffee plantation workers. Manifestations included subcutaneous nodules, anterior eye lesions, and dermatitis. Robles sent specimens to Émile Brumpt, a French parasitologist, who named it O. caecutiens in 1919, indicating the parasite caused blindness (Latin "caecus" meaning blind). The disease was also reported as being common in Mexico. By the early 1920s, it was generally agreed that the filaria in Africa and Central America were morphologically indistinguishable and the same as that described by O'Neill 50 years earlier. Robles hypothesized that the vector of the disease was the day-biting black fly, Simulium.
Scottish physician Donald Blacklock of the Liverpool School of Tropical Medicine confirmed this mode of transmission in studies in Sierra Leone. Blacklock's experiments included the re-infection of Simulium flies exposed to portions of the skin of infected subjects on which nodules were present, which led to elucidation of the life cycle of the Onchocerca parasite. Blacklock and others could find no evidence of eye disease in Africa. Jean Hissette, a Belgian ophthalmologist, discovered in 1930 that the organism was the cause of a "river blindness" in the Belgian Congo. Some of the patients reported seeing tangled threads or worms in their vision, which were microfilariae moving freely in the aqueous humor of the anterior chamber of the eye. Blacklock and Strong had thought the African worm did not affect the eyes, but Hissette reported that 50% of patients with onchocerciasis near the Sankuru river in the Belgian Congo had eye disease and 20% were blind. Hissette isolated the microfilariae from an enucleated eye and described the typical chorioretinal scarring, later called the "Hissette-Ridley fundus" after another ophthalmologist, Harold Ridley, who also made extensive observations on onchocerciasis patients in northwest Ghana, publishing his findings in 1945. Ridley first postulated that the disease was brought by the slave trade. The international scientific community was initially skeptical of Hissette's findings, but they were confirmed by the Harvard African Expedition of 1934, led by Richard P. Strong, an American physician of tropical medicine.
Society and culture
Since 1987, ivermectin has been provided free of charge for use in humans by Merck through the Mectizan donation program (MDP). The MDP works together with ministries of health and nongovernmental development organisations, such as the World Health Organization, to provide free ivermectin to those who need it in endemic areas. In 2015, William C. Campbell and Satoshi Ōmura were co-awarded half of that year's Nobel Prize in Physiology or Medicine for the discovery of the avermectin family of compounds, the forerunner of ivermectin. The latter has come to decrease the occurrence of lymphatic filariasis and onchocerciasis. Uganda's government, working with the Carter Center river blindness program since 1996, switched strategies for distribution of Mectizan. The male-dominated volunteer distribution system had failed to take advantage of traditional kinship structures and roles. The program switched in 2014 from village health teams to community distributors, primarily selecting women with the goal of assuring that everyone in the circle of their family and friends received river blindness information and Mectizan.
Research
Animal models for the disease are somewhat limited, as the parasite only lives in primates, but there are close parallels. Litomosoides sigmodontis, which will naturally infect cotton rats, has been found to fully develop in BALB/c mice. Onchocerca ochengi, the closest relative of O. volvulus, lives in intradermal cavities in cattle, and is also spread by black flies. Both systems are useful, but not exact, animal models. A study of 2501 people in Ghana showed the prevalence rate doubled between 2000 and 2005 despite treatment, suggesting the parasite is developing resistance to the drug. A clinical trial of another antiparasitic agent, moxidectin (manufactured by Wyeth), began on July 1, 2009 (NCT00790998). A Cochrane review compared outcomes of people treated with ivermectin alone versus doxycycline plus ivermectin. While there were no differences in most vision-related outcomes between the two treatments, there was low-quality evidence suggesting treatment with doxycycline plus ivermectin showed improvement in iridocyclitis and punctate keratitis over treatment with ivermectin alone.
See also
Carter Center River Blindness Program
List of parasites (human)
Neglected tropical diseases
Rodolfo Robles
United Front Against Riverblindness
Harold Ridley (ophthalmologist)
References
External links
CDC Parasites of public health concern |
Onychomycosis | Onychomycosis, also known as tinea unguium, is a fungal infection of the nail. Symptoms may include white or yellow nail discoloration, thickening of the nail, and separation of the nail from the nail bed. Toenails or fingernails may be affected, but toenails are affected more commonly. Complications may include cellulitis of the lower leg.
A number of different types of fungus can cause onychomycosis, including dermatophytes and Fusarium. Risk factors include athlete's foot, other nail diseases, exposure to someone with the condition, peripheral vascular disease, and poor immune function. The diagnosis is generally suspected based on the appearance and confirmed by laboratory testing. Onychomycosis does not necessarily require treatment. The antifungal medication terbinafine taken by mouth appears to be the most effective but is associated with liver problems. Trimming the affected nails when on treatment also appears useful. There is a ciclopirox-containing nail polish, but there is no evidence that it works. The condition returns in up to half of cases following treatment. Not using old shoes after treatment may decrease the risk of recurrence. Onychomycosis occurs in about 10 percent of the adult population, with older people more frequently affected. Males are affected more often than females. Onychomycosis represents about half of nail disease. It was first determined to be the result of a fungal infection in 1853 by Georg Meissner.
Etymology
The term is from Ancient Greek ὄνυξ onyx "nail", μύκης mykēs "fungus", and the suffix -ωσις ōsis "functional disease".
Signs and symptoms
The most common symptom of a fungal nail infection is the nail becoming thickened and discoloured: white, black, yellow or green. As the infection progresses the nail can become brittle, with pieces breaking off or coming away from the toe or finger completely. If left untreated, the skin underneath and around the nail can become inflamed and painful. There may also be white or yellow patches on the nailbed or scaly skin next to the nail, and a foul smell. There is usually no pain or other bodily symptoms, unless the disease is severe. People with onychomycosis may experience significant psychosocial problems due to the appearance of the nail, particularly when fingers – which are always visible – rather than toenails are affected. Dermatophytids are fungus-free skin lesions that sometimes form as a result of a fungus infection in another part of the body. This could take the form of a rash or itch in an area of the body that is not infected with the fungus. Dermatophytids can be thought of as an allergic reaction to the fungus.
Causes
The causative pathogens of onychomycosis are all in the fungus kingdom and include dermatophytes, Candida (yeasts), and nondermatophytic molds. Dermatophytes are the fungi most commonly responsible for onychomycosis in the temperate western countries; while Candida and nondermatophytic molds are more frequently involved in the tropics and subtropics with a hot and humid climate.
Dermatophytes
When onychomycosis is due to a dermatophyte infection, it is termed tinea unguium. Trichophyton rubrum is the most common dermatophyte involved in onychomycosis. Other dermatophytes that may be involved are T. interdigitale, Epidermophyton floccosum, T. violaceum, Microsporum gypseum, T. tonsurans, and T. soudanense. A common outdated name that may still be reported by medical laboratories is Trichophyton mentagrophytes for T. interdigitale. The name T. mentagrophytes is now restricted to the agent of favus skin infection of the mouse; though this fungus may be transmitted from mice and their danders to humans, it generally infects skin and not nails.
Other
Other causative pathogens include Candida and nondermatophytic molds, in particular members of the mold genus Scytalidium (name recently changed to Neoscytalidium), Scopulariopsis, and Aspergillus.
Candida species mainly cause fingernail onychomycosis in people whose hands are often submerged in water. Scytalidium mainly affects people in the tropics, though it persists if they later move to areas of temperate climate.
Other molds more commonly affect people older than 60 years, and their presence in the nail reflects a slight weakening in the nail's ability to defend itself against fungal invasion.
Nail injury and nail psoriasis can cause damaged toenails to become thick, discolored, and brittle.
Risk factors
Advancing age (usually over the age of 60) is the most common risk factor for onychomycosis due to diminished blood circulation, longer exposure to fungi, nails which grow more slowly and thicken, and reduced immune function increasing susceptibility to infection. Nail fungus tends to affect men more often than women and is associated with a family history of this infection.
Other risk factors include perspiring heavily, being in a humid or moist environment, psoriasis, wearing socks and shoes that hinder ventilation and do not absorb perspiration, going barefoot in damp public places such as swimming pools, gyms and shower rooms, having athletes foot (tinea pedis), minor skin or nail injury, damaged nail, or other infection, and having diabetes, circulation problems, which may also lead to lower peripheral temperatures on hands and feet, or a weakened immune system.
Diagnosis
The diagnosis is generally suspected based on the appearance and confirmed by laboratory testing. The four main tests are a potassium hydroxide smear, culture, histology examination, and polymerase chain reaction. The sample examined is generally nail scrapings or clippings, taken from as far up the nail as possible. Nail plate biopsy with periodic acid–Schiff stain appears more useful than culture or direct KOH examination. To reliably identify nondermatophyte molds, several samples may be necessary.
Classification
There are five classic types of onychomycosis:
Distal subungual onychomycosis is the most common form of tinea unguium and is usually caused by Trichophyton rubrum, which invades the nail bed and the underside of the nail plate.
White superficial onychomycosis (WSO) is caused by fungal invasion of the superficial layers of the nail plate to form "white islands" on the plate. It accounts for around 10 percent of onychomycosis cases. In some cases, WSO is a misdiagnosis of "keratin granulations", which are not a fungus, but a reaction to nail polish that can cause the nails to have a chalky white appearance. A laboratory test should be performed to confirm.
Proximal subungual onychomycosis is fungal penetration of the newly formed nail plate through the proximal nail fold. It is the least common form of tinea unguium in healthy people, but is found more commonly when the patient is immunocompromised.
Endonyx onychomycosis is characterized by leukonychia along with a lack of onycholysis or subungual hyperkeratosis.
Candidal onychomycosis is Candida species invasion of the fingernails, usually occurring in persons who frequently immerse their hands in water. This normally requires the prior damage of the nail by infection or trauma.
Differential diagnosis
In many cases of suspected nail fungus there is actually no fungal infection, but only nail deformity. To avoid misdiagnosis as nail psoriasis, lichen planus, contact dermatitis, nail bed tumors such as melanoma, trauma, or yellow nail syndrome, laboratory confirmation may be necessary. Other conditions that may appear similar to onychomycosis include: psoriasis, normal aging, yellow nail syndrome, and chronic paronychia.
Treatment
Medications
Most treatments are with antifungal medications, either topically or by mouth. Avoiding use of antifungal therapy by mouth (e.g., terbinafine) in persons without a confirmed infection is recommended, because of the possible side effects of that treatment. Medications that may be taken by mouth include terbinafine (76% effective), itraconazole (60% effective), and fluconazole (48% effective). They share characteristics that enhance their effectiveness: prompt penetration of the nail and nail bed, and persistence in the nail for months after discontinuation of therapy. Ketoconazole by mouth is not recommended due to side effects. Oral terbinafine is better tolerated than itraconazole. For superficial white onychomycosis, systemic rather than topical antifungal therapy is advised. Topical agents include ciclopirox nail paint, amorolfine, and efinaconazole. Some topical treatments need to be applied daily for prolonged periods (at least one year). Topical amorolfine is applied weekly. Efinaconazole, a topical azole antifungal, led to cure rates two or three times better than the next-best topical treatment, ciclopirox. In trials, about 17% of people were cured using efinaconazole, as opposed to 4% of people using placebo. Topical ciclopirox results in a cure in 6% to 9% of cases. Ciclopirox when used with terbinafine appears to be better than either agent alone. Although efinaconazole, P-3051 (ciclopirox 8% hydrolacquer), and tavaborole are effective at treating fungal infection of toenails, complete cure rates are low.
Other
Chemical (keratolytic) or surgical debridement of the affected nail appears to improve outcomes. As of 2014, evidence for laser treatment is unclear, as the evidence is of low quality and varies by type of laser. Tea tree oil is not recommended as a treatment, since it is not effective and can irritate the surrounding skin.
Cost
United States
According to a 2015 study, the cost in the United States of testing with the periodic acid–Schiff stain (PAS) was about $148. Even if the cheaper KOH test is used first and the PAS test is used only if the KOH test is negative, there is a good chance that the PAS will be done (because of either a true or a false negative with the KOH test). But the terbinafine treatment costs only $10 (plus an additional $43 for liver function tests). In conclusion, the authors say that terbinafine has a relatively benign adverse effect profile, with liver damage very rare, so it makes more sense cost-wise for the dermatologist to prescribe the treatment without doing the PAS test. (Another option would be to prescribe the treatment only if the potassium hydroxide test is positive, but it gives a false negative in about 20% of cases of fungal infection.) On the other hand, as of 2015, the price of topical (non-oral) treatment with efinaconazole was $2307 per nail, so testing is recommended before prescribing it. The cost of efinaconazole treatment can be reduced to $65 per 1-month dose using drug coupons, bringing the treatment cost to $715 per nail.
Canada
In 2019, a study by the Canadian Agency for Drugs and Technologies in Health found the cost for a 48-week efinaconazole course to be $178 for a big toe, and $89 for any other toe.
Prognosis
Recurrence may occur following treatment, with a 20–25% relapse rate within 2 years of successful treatment. Nail fungus can be painful and cause permanent damage to nails. It may lead to other serious infections if the immune system is suppressed due to medication, diabetes or other conditions. The risk is most serious for people with diabetes and with immune systems weakened by leukemia or AIDS, or medication after organ transplant. Diabetics have vascular and nerve impairment, and are at risk of cellulitis, a potentially serious bacterial infection; any relatively minor injury to feet, including a nail fungal infection, can lead to more serious complications. Infection of the bone is another rare complication.
Epidemiology
A 2003 survey of diseases of the foot in 16 European countries found onychomycosis to be the most frequent fungal foot infection and estimated its prevalence at 27%. Prevalence was observed to increase with age. In Canada, the prevalence was estimated to be 6.48%. Onychomycosis affects approximately one-third of diabetics and is 56% more frequent in people with psoriasis.
Research
Research suggests that fungi are sensitive to heat, typically 40–60 °C (104–140 °F). The basis of laser treatment is to try to heat the nail bed to these temperatures in order to disrupt fungal growth. As of 2013 research into laser treatment seemed promising. There is also ongoing development in photodynamic therapy, which uses laser or LED light to activate photosensitisers that eradicate fungi.
== References == |
Neonatal conjunctivitis | Neonatal conjunctivitis is a form of conjunctivitis (inflammation of the outer eye) which affects newborn babies following birth. It is typically due to neonatal bacterial infection, although it can also be non-infectious (e.g. chemical exposure). Infectious neonatal conjunctivitis is typically contracted during vaginal delivery from exposure to bacteria from the birth canal, most commonly Neisseria gonorrhoeae or Chlamydia trachomatis. Antibiotic ointment is typically applied to the newborn's eyes within 1 hour of birth as prevention for gonococcal ophthalmia. This practice is recommended for all newborns, and most hospitals in the United States are required by state law to apply eye drops or ointment soon after birth to prevent the disease. If left untreated, neonatal conjunctivitis can cause blindness.
Signs and symptoms
Neonatal conjunctivitis by definition presents during the first month of life. Signs and symptoms include:
Pain and tenderness in the eyeball
Conjunctival discharge: purulent, mucoid or mucopurulent (depending on the cause)
Conjunctival hyperaemia and chemosis, usually also with swelling of the eyelids
Corneal involvement (rare) may occur in herpes simplex ophthalmia neonatorum
Time of onset
Chemical causes: Right after delivery
Neisseria gonorrhoeae: Delivery of the baby until 5 days after birth (early onset)
Chlamydia trachomatis: 5 days after birth to 2 weeks (late onset – C. trachomatis has a longer incubation period)
Complications
Untreated cases may develop corneal ulceration, which may perforate, resulting in corneal opacification and staphyloma formation.
Cause
Non-infectious
Chemical irritants such as silver nitrate can cause chemical conjunctivitis, usually lasting 2–4 days. Thus, prophylaxis with a 1% silver nitrate solution is no longer in common use. In most countries, neomycin and chloramphenicol eye drops are used, instead.
However, newborns can develop neonatal conjunctivitis due to reactions with chemicals in these common eye drops. Additionally, a blocked tear duct may be another noninfectious cause of neonatal conjunctivitis.
Infectious
The two most common infectious causes of neonatal conjunctivitis are N. gonorrhoeae and Chlamydia, typically acquired from the birth canal during delivery. However, other bacteria and viruses can be the cause, including herpes simplex virus (HSV-2), Staphylococcus aureus, Streptococcus pyogenes, and Streptococcus pneumoniae. Ophthalmia neonatorum due to gonococci (N. gonorrhoeae) typically manifests in the first 5 days after birth and is associated with marked bilateral purulent discharge and local inflammation. In contrast, conjunctivitis secondary to infection with C. trachomatis produces conjunctivitis 3 days to 2 weeks after delivery. The discharge is usually more watery in nature (mucopurulent) and less inflamed. Babies infected with chlamydia may develop pneumonitis (chest infection) at a later stage (range 2–19 weeks after delivery). Infants with chlamydia pneumonitis should be treated with oral erythromycin for 10–14 days. Diagnosis is made after taking a swab from the infected conjunctiva.
Prevention
Antibiotic ointment is typically applied to the newborn's eyes within 1 hour of birth as prevention against gonococcal ophthalmia. This may be erythromycin, tetracycline, or rarely silver nitrate or Argyrol (mild silver protein).
Treatment
Prophylaxis needs antenatal, natal, and postnatal care.
Antenatal measures include thorough care of mother and treatment of genital infections when suspected.
Natal measures are of utmost importance, as most infection occurs during childbirth. Deliveries should be conducted under hygienic conditions, taking all aseptic measures. The newborn baby's closed lids should be thoroughly cleansed and dried.
If the cause is determined to be due to a blocked tear duct, gentle palpation between the eye and the nasal cavity may be used to clear the tear duct. If the tear duct is not cleared by the time the newborn is 1 year old, surgery may be required.
Postnatal measures include:
Use of 1% tetracycline ointment, 0.5% erythromycin ointment, or 1% silver nitrate solution (Credés method) into the eyes of babies immediately after birth
Single injection of ceftriaxone IM or IV should be given to infants born to mothers with untreated gonococcal infection.
Curative treatment: as a rule, conjunctival cytology samples and culture sensitivity swabs should be taken before starting treatment.
Chemical ophthalmia neonatorum is a self-limiting condition and does not require any treatment.
Gonococcal ophthalmia neonatorum needs prompt treatment to prevent complications. Topical therapy should include:
Saline lavage hourly until the discharge is eliminated
Bacitracin eye ointment four times per day (because of resistant strains, topical penicillin therapy is not reliable, but in cases with proven penicillin susceptibility, penicillin drops of 5,000 to 10,000 units per ml should be instilled every minute for half an hour, every five minutes for the next half hour, and then half-hourly until the infection is controlled)
If the cornea is involved, then atropine sulfate ointment should be applied.
The advice of both the pediatrician and ophthalmologist should be sought for proper management. Systemic therapy: newborns with gonococcal ophthalmia neonatorum should be treated for 7 days with ceftriaxone, cefotaxime, ciprofloxacin, or crystalline benzyl penicillin.
Other bacterial ophthalmia neonatorum should be treated by broad-spectrum antibiotics drops and ointment for 2 weeks.
Neonatal inclusion conjunctivitis caused by C. trachomatis should be treated with oral erythromycin. Topical therapy is not effective and also does not treat the infection of the nasopharynx.
Herpes simplex conjunctivitis should be treated with intravenous acyclovir for a minimum of 14 days to prevent systemic infection.
Epidemiology
The incidence of neonatal conjunctivitis varies widely depending on the geographical location. The incidence in England was 257 (95% confidence interval: 245 to 269) per 100,000 in 2011.
See also
List of systemic diseases with ocular manifestations
References
== External links == |
Osteogenesis imperfecta | Osteogenesis imperfecta (OI), colloquially known as brittle bone disease, is a group of genetic disorders that all result in bones that break easily.: 85  The range of symptoms—on the skeleton as well as on the body's other organs—may be mild to severe.: 1512  Symptoms found in various types of OI include whites of the eye (sclerae) that are blue instead, short stature, loose joints, hearing loss, breathing problems and problems with the teeth (dentinogenesis imperfecta). Potentially life-threatening complications, all of which become more common in more severe OI, include: tearing (dissection) of the major arteries, such as the aorta;: 333  pulmonary valve insufficiency secondary to distortion of the ribcage;: 335–341  and basilar invagination.: 106–107  The underlying mechanism is usually a problem with connective tissue due to a lack of, or poorly formed, type I collagen.: 1513  In more than 90% of cases, OI occurs due to mutations in the COL1A1 or COL1A2 genes. These mutations may be inherited from a person's parents in an autosomal dominant manner but may also occur spontaneously (de novo). There are four clinically defined types: type I, the least severe; type IV, moderately severe; type III, severe and progressively deforming; and type II, perinatally lethal. As of September 2021, 19 different genes are known to cause the 21 documented genetically defined types of OI, many of which are extremely rare and have only been documented in a few individuals. Diagnosis is often based on symptoms and may be confirmed by collagen biopsy or DNA sequencing. Although there is no cure, most cases of OI do not have a major effect on life expectancy,: 461  death during childhood from it is rare, and many adults with OI can achieve a significant degree of autonomy despite disability. Maintaining a healthy lifestyle by exercising, eating a balanced diet sufficient in vitamin D and calcium, and avoiding smoking can help prevent fractures.
Genetic counseling may be sought by those with OI to prevent their children from inheriting the disorder from them.: 101  Treatment may include acute care of broken bones, pain medication, physical therapy, mobility aids such as leg braces and wheelchairs, vitamin D supplementation, and, especially in childhood, rodding surgery. Rodding is an implantation of metal intramedullary rods along the long bones (such as the femur) in an attempt to strengthen them. Medical research also supports the use of medications of the bisphosphonate class, such as pamidronate, to increase bone density. Bisphosphonates are especially effective in children; however, it is unclear whether they increase quality of life or decrease the rate of fracture incidence. OI affects only about one in 15,000 to 20,000 people, making it a rare genetic disease. Outcomes depend on the genetic cause of the disorder (its type). Type I (the least severe) is the most common, with other types comprising a minority of cases. Moderate-to-severe OI primarily affects mobility; if rodding surgery is performed during childhood, some of those with more severe types of OI may gain the ability to walk. The condition has been described since ancient history. The Latinate term osteogenesis imperfecta was coined by Dutch anatomist Willem Vrolik in 1849; translated literally, it means "imperfect bone formation".: 683
Signs and symptoms
Orthopedic
The main symptom of osteogenesis imperfecta is fragile, low mineral density bones; all types of OI have some bone involvement. In moderate and especially severe OI, the long bones may be bowed, sometimes extremely so. The weakness of the bones causes them to fracture easily; a study in Pakistan found an average of 5.8 fractures per year in untreated children. Fractures typically occur much less after puberty, but begin to increase again in women after menopause and in men between the ages of 60 and 80.: 486 Joint hypermobility is also a common sign of OI, thought to be because the affected genes are the same as those that cause some types of Ehlers–Danlos syndrome.: 1513
Otologic
By the age of 50, about 50% of adults with OI experience significant hearing loss, much earlier than in the general population. Hearing loss in OI may or may not be associated with visible deformities of the ossicles and inner ear. Hearing loss frequently begins during the second, third, and fourth decades of life, and may be conductive, sensorineural, or a combination of both ("mixed"). If hearing loss does not occur by age 50, it is significantly less likely to occur in the years afterwards. Mixed hearing loss is most common among those with OI of all age groups, while conductive hearing loss is most likely to affect older people, and sensorineural hearing loss is most likely to affect children. Although relatively rare, OI-related hearing loss can also begin in childhood; in a study of forty-five children aged four to sixteen, two were found to be affected, aged 11 and 15. In a different 2008 study, the hearing of 41 people with OI was checked. The results showed that 88% of those over 20 years of age had some form of hearing loss, while only 38% of those under 20 did. Hearing loss is most common in type I OI; it is less common in types III and IV.: 294–296  Other parts of the inner ear may also be affected by OI, causing balance issues; however, only small studies have found links between vertigo and OI.: 308  OI may worsen the outcome of medical treatments which correct hearing loss. Besides OI's association with sensorineural hearing loss, OI is associated with a number of neurological abnormalities, usually involving the central nervous system, due to deformities in the skeletal structures surrounding it. Neurological complications, especially basilar invagination, may adversely affect life expectancy. In OI, this is most often due to upwards migration of the dens,: 106–107  a feature of the C2 vertebra.
Neurosurgery may be needed to correct severe abnormalities when they risk the patient's life or cause either great suffering or intolerable neurological deficits.: 106–107
Systemic
As its biological causes have been more precisely determined, it has become more widely recognized that, while the primary disease process of OI happens in the bones, the most common types of OI—those caused by type I collagen gene mutations—affect virtually all of the human body's organs in some way. Type I collagen is present throughout the circulatory and respiratory systems: from the ventricles of the heart itself, to the heart valves, to the vasculature,: 329  and it is an integral part of the connective tissue of the lungs.: 336  As such, cardiovascular complications, among them aortic insufficiency, aortic aneurysm, and arterial dissections, are sometimes comorbid with OI,: 333  but not as frequently as they are comorbid with Marfan syndrome.: 332  Respiratory illnesses are a major cause of death in OI.: 335  The most obvious source of respiratory problems in OI is pulmonary insufficiency caused by problems in the architecture of the thoracic wall.: 341  However, respiratory tract infections, such as pneumonia, are also more fatal among those with OI than in the general population. Those with more severe ribcage deformities were found to have worse lung restriction in a small-scale 2012 study involving 22 Italian patients with OI types III and IV, plus 26 non-affected controls. OI—especially its severe form type III—also has effects on the gastrointestinal system. It was found to be associated with recurrent abdominal pain and chronic constipation in two studies of patients affected by OI. Chronic constipation is especially common,: 377  and is thought to be aggravated by an asymmetric pelvis (acetabular protrusion).: 377  Especially in childhood, OI-associated constipation may cause a feeling of fullness and associated food refusal, leading to malnutrition.: 377
Classification
There are two typing systems for OI in modern use. The first, created by David Sillence in 1979, classifies patients into four types, or syndromes, according to their clinical presentation, without taking into account the genetic cause of their disease.: 114–115  The second system expands on the Sillence model, but assigns new numbered types genetically as they are found. Therefore, people with OI can be described as having both a clinical type and a genetic type, which may or may not be equivalent. Type I is the most common, and 90% of cases result from mutations to either COL1A1 or COL1A2. Symptoms vary widely between types, as well as from person to person, even within the same family. As of 2021, 21 types of OI have been defined:
Sillence types
Sillences four types have both a clinical and a genetic meaning; the descriptions below are clinical and can be applied to several genetic types of OI. When used to refer to a genetic as well as a clinical type, it indicates that the clinical symptoms are indeed caused by mutations in the COL1A1 or COL1A2 genes which are inherited in an autosomal dominant fashion.
Type I
Collagen is of normal quality but is produced in insufficient quantities.: 1516  Bones fracture more easily than in the general population, but not as easily as in more severe types of OI; there may be scoliosis, albeit mild compared to OI types III and IV, with a lower Cobb angle; the joints may be loose; blue sclerae may be apparent; hearing loss is likely to occur;: Table 1  and there may be a slight decrease in height. Because some cases lack one or more of these symptoms, OI type I sometimes goes undetected into adulthood.: 1513–1514  Some further split type I into types I–A and I–B, distinguished by the absence (I–A) or presence (I–B) of dentinogenesis imperfecta (opalescent teeth).: 217  People with type I generally have a normal lifespan.
Type II
Collagen is fatally defective at its C-terminus.: 1512 Most cases result in death shortly after birth, or within the first year of life, due to respiratory failure. Another common cause of death is intracranial bleeds from skull fractures present at, or sustained during or shortly after, birth.: 1511 In many cases, the newborn already has multiple broken bones at the time of birth. Type II infants also exhibit severe respiratory problems, and have severely deformed bones. Sixty percent of infants die less than 24 hours after being born, and survival after the first year is extremely unlikely and normally requires mechanical ventilation. In the rare cases of infants who survive their first year of life, severe developmental and motor delays are seen; neither of two infants studied in 2019, both aged around two years, had achieved head control, and both required a ventilator to breathe.Type II is also known as the "lethal perinatal" form of OI, and is not compatible with survival into adulthood. Due to similarly severely deformed bones, sometimes infants with severe type III are wrongly initially classified as type II; once long-term survival is shown, they are considered as having type III instead.: 1511
Type III
Collagen quantity is sufficient, but is not of a high enough quality.: 1512  Clinical differentiation between types III and IV is not always simple, and is further confounded by the fact that an untreated adult with type IV may have worse symptoms than a treated adult with type III;: 1511  features only found in type III are its progressively deforming nature: 1511–1512  and the presence of a face with a "triangular" appearance. Another differentiating factor between types III and IV is blue sclerae; in type III, infants commonly have blue sclerae that gradually turn white with age, but blue sclerae are not commonly seen in type IV,: 294–296  although they are seen in 10% of cases. OI type III causes osteopenic bones that fracture very easily, sometimes even in utero, often leading to hundreds of fractures during a lifetime; early scoliosis that progresses until puberty; dwarfism (a final adult height frequently less than 4 feet or 120 centimetres); loose joints; and possible respiratory problems due to low rib cage volume causing low lung volumes.: 1512  Due to the severity of the issues with the bones, neurological and seizure disorders are more likely to develop in type III.: 1512  Basilar invagination, which puts pressure on the brainstem, may cause or contribute to early death; surgical treatment of it is more complex in OI cases.: 1512 : 106–107
Type IV
Collagen quantity is sufficient, but is not of a high enough quality.: 1512  Type IV is for cases of variable severity, which do not fit into either type III or type I. While one of Sillence's required characteristics for type IV was having normal sclerae,: 294–296 : 114  modern classification allows even those with blue sclerae to fit the criteria for type IV if they meet the other clinical requirements of the type. In type IV, bone deformity can be mild to severe, bones fracture easily (especially before puberty), dwarfism is common, vertebral collapse and scoliosis are evident, and hearing loss is possible, although uncommon. Type IV OI is mostly defined in contrast to type III and type I, being the clinical classification for patients somewhere in the middle ground between the two.: 1511  As such, type IV OI is often termed "variable" OI,: 111  with the severity of even those in the same family (so, with the same genetic mutation) differing. Prepubertal bone fracture rates are another way of clinically assessing type IV OI—those with it tend to have fracture rates of ≈1 per year, compared to ≈3 per year for severe OI (type III). As in type I, some further split type IV into types IV–A and IV–B, defined again by the absence (IV–A) or presence (IV–B) of dentinogenesis imperfecta.: 217
Genetically defined types (types V–XXI)
As of 2020, seventeen types of OI are defined genetically:
Type V – Having the same clinical features as type IV, it can be clinically distinguished by observing a "mesh-like" appearance to a bone biopsy under a microscope. Type V can be further distinguished from other types of OI by the "V triad": an opaque band (visible on X-ray) adjacent to the growth plates; hypertrophic calluses (abnormally large masses of bony repair tissue) which form at fracture sites during the healing process; and calcification of the interosseous membrane of the forearm, which may make it difficult to turn the wrist.: 429 Other features of this condition may include pulled elbow, and, as in other types of OI, long bone bowing and hearing loss. Cases of this type are caused by mutations in the IFITM5 gene on chromosome 11p15.5. The separation of type V from type IV OI, its clinical type, was initially suggested even before its genetic cause was known, by Glorieux et al. in 2000. Type V is relatively common compared to other genetically defined types of OI—4% of OI patients at the genetics department of the Brazilian Hospital de Clínicas de Porto Alegre were found to have it.
Type VI – With the same clinical features as type III, it is distinguished by bones which have an appearance similar to that seen in osteomalacia.: 168 Type VI is caused by a loss-of-function mutation in the SERPINF1 gene on chromosome 17p13.3.: 170
Type VII – OI caused by a mutation in the gene CRTAP on chromosome 3p22.3; clinically similar to OI types II and III, depending on affected individual. Type VII was the first recessive OI type confirmed, initially found among First Nations people in Quebec.
Type VIII – OI caused by a mutation in the gene LEPRE1 on chromosome 1p34.2; clinically similar to OI types II and III, depending on affected individual.
Type IX – OI caused by homozygous or compound heterozygous mutation in the PPIB gene on chromosome 15q22.31.
Type X – OI caused by homozygous mutation in the SERPINH1 gene on chromosome 11q13.
Type XI – OI caused by mutations in FKBP10 on chromosome 17q21. The mutations cause a decrease in secretion of trimeric procollagen molecules. Other mutations in this gene can cause autosomal recessive Bruck syndrome, which is similar to OI.
Type XII – OI caused by a frameshift mutation in SP7 on chromosome 12q13.13. This mutation causes bone deformities, fractures, and delayed tooth eruption.
Type XIII – OI caused by a mutation in the bone morphogenetic protein 1 (BMP1) gene on chromosome 8p21.3. This mutation causes recurrent fractures, high bone mass, and hypermobile joints.
Type XIV – OI caused by mutations in the TMEM38B gene on chromosome 9q31.2. This mutation causes recurrent fractures and osteopenia, although the disease trajectory is highly variable.
Type XV – OI caused by homozygous or compound heterozygous mutations in the WNT1 gene on chromosome 12q13.12. It is autosomal recessive.
Type XVI – OI caused by mutations in the CREB3L1 gene on chromosome 11p11.2. The homozygous mutation causes prenatal onset of recurrent fractures of the ribs and long bones, demineralization, decreased ossification of the skull, and blue sclerae; it is clinically type II or type III. Family members who are heterozygous for OI XVI may have recurrent fractures, osteopenia and blue sclerae.
Type XVII – OI caused by homozygous mutation in the SPARC gene on chromosome 5q33, causing a defect in the protein osteonectin, which leads to severe disease characterized by generalized platyspondyly, dependence on a wheelchair, and recurrent fractures.
Type XVIII – OI caused by homozygous mutation in the FAM46A gene on chromosome 6q14.1. Characterized by congenital bowing of the long bones, Wormian bones, blue sclerae, vertebral collapse, and multiple fractures in the first years of life.
Type XIX – OI caused by hemizygous mutation in the MBTPS2 gene on chromosome Xp22.12. Thus far, OI type XIX is the only known type of OI with an X-linked recessive pattern of inheritance, making it the only type that is more common in males than females. OI type XIX disrupts regulated intramembrane proteolysis, which is critical for healthy bone formation.
Type XX – OI caused by homozygous mutation in the MESD gene on chromosome 15q25.1. Initial studies of type XX indicate that it may cause global developmental delay, a first among OI types. OI type XX disrupts the Wnt signaling pathway, which is thought to have a role in bone development.
Type XXI – OI caused by homozygous mutation in the KDELR2 gene on chromosome 7p22.1. Causes disease clinically similar to types II and III, thought to be related to the inability of the chaperone protein HP47 to unbind from collagen type I, as to do so it needs to bind to the missing ER lumen protein retaining receptor 2 protein encoded by KDELR2. Given the rapid rate of type discovery, it is extremely likely that there are other genes associated with OI that have yet to be reported.: 491–492
Genetics
Osteogenesis imperfecta is a group of genetic disorders, all of which cause bone fragility. OI has high genetic heterogeneity; that is, many different genetic mutations lead to the same or similar sets of observable symptoms (phenotypes). The main causes of the disorder are mutations in the COL1A1 and/or COL1A2 genes, which are jointly responsible for the production of collagen type I. Approximately 90% of people with OI are heterozygous for mutations in either the COL1A1 or COL1A2 genes. Several biological factors result from the dominant form of OI. These factors include: intracellular stress; abnormal tissue mineralization; abnormal cell-to-cell interactions; abnormal cell–matrix interactions; a compromised cell matrix structure; and abnormal interaction between non-collagenous proteins and collagen. Previous research led to the belief that OI was an autosomal dominant disorder with few other variations in genomes. However, with the lowering of the cost of DNA sequencing in the wake of 2003's Human Genome Project, autosomal recessive forms of the disorder have been identified. Recessive forms of OI relate heavily to defects in the collagen chaperones responsible for production of procollagen and the assembly of the related proteins. Examples of collagen chaperones that are defective in patients with recessive forms of OI include chaperone HSP47 (Cole-Carpenter syndrome) and FKBP65. Mutations in these chaperones result in an improper folding pattern in the collagen 1 proteins, which causes the recessive form of the disorder. There are three significant types of OI that are a result of mutations in the collagen prolyl 3-hydroxylation complex (components CRTAP, P3H1, and CyPB). These components are responsible for the modification of collagen α1(l)Pro986.
Mutations in other genes such as SP7, SERPINF1, TMEM38B and BMP1 can also lead to irregularly formed proteins and enzymes that result in other recessive types of osteogenesis imperfecta. Defects in the proteins pigment epithelium-derived factor (PEDF) and bone-restricted interferon-induced transmembrane protein (BRIL) are the causes of types V and VI osteogenesis imperfecta. Defects in these proteins lead to defective bone mineralization, which causes the characteristic brittle bones of osteogenesis imperfecta. A single point mutation in the 5′ untranslated region (5′ UTR) of the IFITM5 gene, which encodes BRIL, is linked directly to OI type V. In the rare case of type XIX, first discovered in 2016, OI is inherited as an X-linked genetic disorder, with its detrimental effects resulting ultimately from a mutation in the gene MBTPS2. Genetic research is ongoing, and it is uncertain when all the genetic causes of OI will be identified, as the number of genes that need to be tested to rule out the disorder continues to increase.: 491–492  In a study of 37 families, a 1.3% chance was found that OI recurs in multiple siblings born to two unaffected parents—this is a much higher rate than would be expected if all such recurrences were de novo. The cause is genetic mosaicism; that is, some or most of the germ cells of one parent have a dominant form of OI, but not enough of their somatic cells do to cause symptoms or obvious disability in the parent—the parent's different cells have two (or more) sets of slightly different DNA.: 1513  It has been clinically observed that ≈5–10% of cases of OI types II and III are attributable to genetic mosaicism.: 532
Pathophysiology
People with OI are either born with defective connective tissue, born without the ability to make it in sufficient quantities, or, in the rarest genetic types, born with deficiencies in other aspects of bone formation, such as chaperone proteins, the Wnt signaling pathway, the BRIL protein, et cetera. In type I, the collagen's structure itself is normal; it is just its quantity that is low.: 1516  Types II, III and IV are usually, but not always, related to a deficiency of type I collagen. One possible deficiency arises from an amino acid substitution of glycine to a bulkier amino acid, such as alanine, in the collagen protein's triple helix structure. The larger amino acid side-chains lead to steric effects that create a bulge in the collagen complex, which in turn influences both the molecular nanomechanics and the interaction between molecules, both of which are compromised. Depending on both the location of the substitution and the amino acid used instead, different effects are seen, which accounts for the type diversity in OI despite the same two collagen genes being responsible for most cases. Replacements of glycine with serine or cysteine are seen less often in fatal type II OI, while replacements with valine, aspartic acid, glutamic acid, or arginine are seen more often. At a larger scale, the relationship between the collagen fibrils and hydroxyapatite crystals to form bone is altered, causing brittleness. Bone fractures occur because the stress state within collagen fibrils is altered at the locations of mutations, where locally larger shear forces lead to rapid failure of fibrils even at moderate loads, because the homogeneous stress state normally found in healthy collagen fibrils is lost. OI is therefore a multi-scale phenomenon, in which defects at the smallest levels of tissues (genetic, nano, micro) domino to affect the macro level of tissues.
Diagnosis
Diagnosis is typically based on medical imaging, including plain X-rays, and symptoms. In severe OI, signs on medical imaging include abnormalities in all extremities and in the spine. As X-rays are often insensitive to the comparatively smaller bone density loss associated with type I OI, DEXA scans may be needed.: 1514  An OI diagnosis can be confirmed through DNA or collagen protein analysis, but in many cases, the occurrence of bone fractures with little trauma and the presence of other clinical features such as blue sclerae are sufficient for a diagnosis. A skin biopsy can be performed to determine the structure and quantity of type I collagen. While DNA testing can confirm the diagnosis, it cannot absolutely exclude it, because not all mutations causing OI are yet known and/or tested for.: 491–492  OI type II is often diagnosed by ultrasound during pregnancy, where multiple fractures and other characteristic features may already be visible. Relative to controls, OI cortical bone shows increased porosity, canal diameter, and connectivity in micro-computed tomography. OI can also be detected before birth by using an in vitro genetic testing technique such as amniocentesis.
Genetic testing
In order to determine whether osteogenesis imperfecta is present, genetic sequencing of the most commonly problematic genes, COL1A1, COL1A2, and IFITM5, may be done; if no mutation is found but OI is still suspected, the other 10+ genes known to cause OI may be tested. Duplication and deletion testing is also suggested for parents who suspect their child has OI. The presence of frameshift mutations caused by duplications and deletions is generally the cause of increased severity of disease.
Differential diagnosis
An important differential diagnosis of OI is child abuse, as both may present to a clinician with multiple fractures in various stages of healing.: 1514  Differentiating them can be difficult, especially when no other characteristic features of OI are present.: 391  This can become an issue in court; in the United States, several child abuse cases were resolved with a finding that osteogenesis imperfecta was the true cause of a child's fractures, leading to lawsuits seeking redress such as Alice Velasquez, et al. v. United States.: 391  Other differential diagnoses include rickets and osteomalacia, both caused by malnutrition, as well as rare skeletal syndromes such as Bruck syndrome, hypophosphatasia, geroderma osteodysplasticum, and Ehlers–Danlos syndrome.: 1513 : 253–256  Various