The paifang gate in 2013
Time zone: UTC-5 (Eastern). Area codes: 617 / 857.
Chinatown, Boston is a neighborhood located in downtown Boston, Massachusetts. It is the only surviving historic ethnic Chinese enclave in New England, following the demise of the Chinatowns in Providence, Rhode Island, and Portland, Maine, after the 1950s. Because of the large population of Asians and Asian Americans living in this area of Boston, Chinatown has an abundance of Chinese and Vietnamese restaurants. It is one of the most densely populated residential areas in Boston and serves as New England's largest center of East Asian and Southeast Asian cultural life. Chinatown borders Boston Common, Downtown Crossing, the Washington Street Theatre District, Bay Village, the South End, and the Southeast Expressway/Massachusetts Turnpike. Boston's Chinatown is one of the largest Chinatowns outside of New York City.
- 1 Demographics
- 2 History
- 3 Cuisine
- 4 Transportation
- 5 Health care
- 6 Community organizations
- 7 Urban policies
- 8 Buildings
- 9 Businesses and shops
- 10 Community events and celebrations
- 11 Satellite Chinatowns
- 12 See also
- 13 References
- 14 Further reading
- 15 External links
Because it is a gathering place and home to many immigrants, Chinatown has a diverse culture and population. According to 2010 census data, the total population of Chinatown is 4,444, an increase of almost 25% since 2000, when there were 3,559 residents. The white population rose 241.7%, from 228 in 2000 to 779 in 2010. The Black and African American population rose from 82 in 2000 to 139 in 2010, an increase of almost 70%. The American Indian population dropped 75%, from 8 residents in 2000 to 2 in 2010. The Asian population grew about 7%, from 3,190 in 2000 to 3,416 in 2010. People who identified as another race grew from 18 in 2000 to 30 in 2010, a 66.7% increase. Those who identified as more than one race grew from 32 in 2000 to 77 in 2010, an increase of 140.6%.
With more white residents moving into Chinatown, there is concern about gentrification; the Asian share of the population, for instance, dropped to 46% in 2010. Another major concern is that historic neighborhoods like this one are becoming more touristy and less rooted in their original culture. Among Boston, New York, and Philadelphia, Boston showed the highest increase in non-Asian residents moving into non-family shared households, a 450% increase from 1990 to 2000.
The total number of housing units in Chinatown increased by 54% between 2000 and 2010, from 1,367 to 2,114. Occupied housing units increased by almost 50% over the same period, from 1,327 to 1,982. Alongside this growth, vacant units increased by 230%, from 40 in 2000 to 132 in 2010.
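As a quick sanity check of the percentage changes quoted above, the following minimal Python sketch (not part of the original article) recomputes each change directly from the 2000 and 2010 census counts cited in the preceding paragraphs.

```python
# Recompute the percentage changes cited above from the 2000 and 2010 census counts.
counts = {
    "total population": (3559, 4444),
    "white residents": (228, 779),
    "Asian residents": (3190, 3416),
    "housing units": (1367, 2114),
    "vacant units": (40, 132),
}

for label, (y2000, y2010) in counts.items():
    pct_change = (y2010 - y2000) / y2000 * 100
    print(f"{label}: {y2000} -> {y2010} ({pct_change:+.1f}%)")
```

Running this reproduces the figures in the text, for example roughly +25% for total population, +241.7% for white residents, and +230% for vacant units.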
Part of the Chinatown neighborhood occupies land that was reclaimed by filling in a tidal flat. The newly created area was first settled by Anglo-Bostonians. After railway developments made residential properties in the area less desirable, it was settled by a succession of Irish, Jewish, Italian, Lebanese, and Chinese immigrants, each group replacing the previous one to take advantage of low-cost housing and job opportunities. During the late nineteenth century, garment manufacturing plants also moved into Chinatown, creating Boston's historic garment district, which remained active until the 1990s.
In 1870, the first Chinese workers were brought from San Francisco to break a strike at the Sampson Shoe Factory in North Adams, Massachusetts. In 1874, many of these immigrants moved to the Boston area. According to history and tradition, many Chinese immigrants settled in what is now known as Ping On Alley, and the first laundries opened on what is now Harrison Avenue. In 1875, as laundries were becoming more and more common, the first restaurant, Hong Far Low, opened. Through the late 1800s and early 1900s, many Chinese immigrants came to Boston looking for work and new opportunities. The Chinese Exclusion Act of 1882 halted Chinese immigration, and the population of Chinatown remained mostly male. In 1903, anti-Chinese sentiment led to the Boston Chinatown immigration raid, which resulted in the arrest of 234 people and the eventual deportation of 45.
After the Exclusion Act was repealed in 1943, Chinatown saw a major increase in population in the 1950s. Construction of the Central Artery in the late 1950s affected many homes and businesses in Chinatown, and the Massachusetts Turnpike, constructed in the 1960s, took away much of the land that Chinatown businesses had used. Despite this, the neighborhood's population continued to grow by at least 25%. The garment district that had emerged in the late 19th century lasted only until the 1990s, undone by rising rents, property sales, and the displacement of residents.
Negotiations resulted in funds for the construction of new community housing in Chinatown. During this period, city officials also designated an area adjacent to Chinatown as Boston's red-light district, known as the Combat Zone. The Combat Zone, while it still existed in name, had largely disappeared by the 1990s for many reasons: city pressure, the rise of movies on VHS home video, and the move of night clubs to the suburbs, where they became more upscale. A general increase in property values, which encouraged building sales and the removal of former tenants, also contributed. In the 21st century, much of the former Combat Zone has evolved into the Washington Street Theatre District.
Chinatown remains a center of Asian American life in New England, hosting many Chinese and Vietnamese restaurants and markets. It is one of Boston's most densely populated residential districts, with over 28,000 people per square mile in 2000. Nearly 70% of Chinatown's population is Asian, compared with about nine percent for Boston as a whole. Chinatown has a median household income of $14,289.
The traditional Chinatown Gate (paifang), flanked by a foo lion on each side, is located at the intersection of Beach Street and Surface Road. This was once a run-down area housing little more than a ventilation-fan building for the Central Artery Tunnel; however, a garden was constructed at the site as part of the Big Dig project. The gate is visible from the South Station Bus Terminal and is a popular tourist destination and photo opportunity. Given to the city by the Taiwanese government in 1982, the gate bears two Chinese inscriptions: Tian Xia Wei Gong, a saying attributed to Sun Yat-sen that translates as "everything under the sky is for the people", and Li Yi Lian Chi, the four societal bonds of propriety, justice, integrity, and honor.
As of 2000, an area near Chinatown, at the mouth of an expressway tunnel, was also a red-light district. Since 2005, community-based civilian "Crime Watch" volunteers have patrolled the streets daily to discourage and report crime. Chinatown has also had issues with gang activity: in 1991, five men were shot and killed, and a sixth man was wounded, at a social club. The two gunmen were arrested in China in 1998 and were sentenced to life imprisonment. The area's crime rate has since declined.
Two newspapers are popular among the residents of Chinatown. One is the World Journal, the largest and most influential Chinese-language daily newspaper in the United States. The other is Sampan, a non-profit community newspaper published twice a month that provides English- and Chinese-language news and information about Chinatown.
Chinese cuisine in Boston reflects a blend of influences. The growing Boston Chinatown accommodates Chinese-owned bus lines shuttling an increasing number of passengers to and from the numerous Chinatowns in New York City, which has led to some commonalities between local Chinese cuisine and Chinese food in New York. A large Fujianese immigrant population has made a home in Boston, making Fuzhou cuisine readily available. An increasing Vietnamese population has also been exerting an influence on Chinese cuisine in Greater Boston. Finally, innovative dishes incorporating chow mein and chop suey, as well as locally farmed produce and regionally procured seafood, are found in Chinese and non-Chinese food in and around Boston.
The MBTA Orange Line stops at Chinatown station and Tufts Medical Center station, located within the district and at its southern edge, respectively. Boylston station on the MBTA Green Line is located just beyond the northwest corner of Chinatown. Just east of Chinatown, South Station is served by the MBTA's Red Line, Silver Line, and Commuter Rail; it also accommodates Amtrak intercity rail service to New York City and other cities on the Northeast Corridor. Entrance and exit ramps serving Interstate 93 and the Massachusetts Turnpike are at the southern edge of Chinatown.
The bus terminal at South Station handles regional buses to New England destinations, New York City, Washington, D.C., Albany, New York, and other cities. The New England destinations include Concord, New Hampshire, and Portland, Maine. The regional and national bus companies include Greyhound Lines, Peter Pan Bus Lines, Megabus, and BoltBus. In Chinatown itself, the Chinese-owned bus service Lucky Star/Travelpack provided hourly connections to Manhattan's Chinatown in New York City.
Tufts Medical Center occupies a large portion of the area and includes a full-service hospital as well as various health-related schools of Tufts University, including the Tufts University School of Medicine, the Sackler School of Graduate Biomedical Sciences, the Gerald J. and Dorothy R. Friedman School of Nutrition Science and Policy, and the Tufts University School of Dental Medicine.
In addition, South Cove Community Health Center operates the Chinatown Clinic at 885 Washington Street. Volunteers founded South Cove in 1972 to provide better health care for Asian Americans in the Chinatown area.
Boston Chinatown Neighborhood Center
The Boston Chinatown Neighborhood Center (BCNC) is a community center that primarily serves the immigrant Chinese community of Chinatown. The BCNC's mission is to ensure that the children, youth, and families it serves have the resources and support to achieve greater economic success and social well-being, through child care, bilingual education, and youth recreation programs. BCNC strives to provide the support and resources participants need to integrate into American society while preserving the community's rich culture. Most of those served are immigrant Chinese with low family incomes and limited English ability. In 2014, The Boston Foundation donated nearly $500,000 to support summer programs and activities in the Greater Boston area, including funding for the BCNC.
BCNC operates at three sites in the heart of Chinatown. Its site at 885 Washington Street is part of the Josiah Quincy School building. In 2005, BCNC created a permanent home at 38 Ash Street in a five-story community center, the first certified green building in Chinatown; the building meets the performance standards of the LEED Green Building Rating System. In 2017, the Pao Arts Center, located at 99 Albany Street, opened in partnership with Bunker Hill Community College (BHCC), which also teaches a number of introductory courses there in subjects such as accounting, food service, business, writing, psychology, statistics, and acting. Located in the Charlestown neighborhood of Boston, BHCC is connected to Chinatown by the MBTA Orange Line and serves a large number of students from Chinatown at its main campus.
The BCNC is also known for its annual Oak Street Fair, occurring every autumn to celebrate Chinese culture in Boston's Chinatown. The event is aimed at children and families, and includes a variety of activities.
The Chinatown Lantern Cultural and Educational Center was formed by the Chinatown Cultural Center Committee (CCCC) to address the longtime lack of a public library in the neighborhood (the Chinatown branch of the Boston Public Library was demolished in 1956 to make way for the Central Artery). Its Reading Room opened in April 2012 and provided library services, educational workshops, and cultural events to the Chinatown community. The Reading Room had a rotating collection of approximately 8,000 books in both English and Chinese and also ran a small art exhibit gallery. It closed on February 25, 2013.
The Chinatown community and the extended communities of Chinese around Greater Boston (including North Quincy and Wollaston in Quincy) are served by the Asian Community Development Corporation (ACDC). The ACDC helps preserve Chinatown's culture and fosters youth and economic development. It was established in 1987 and has since worked on housing development concerns, including a notable effort during the 2002 Big Dig construction to regain Parcel 24, a piece of land lost to urban renewal.
In 2006, Boston's Mayor Menino opened up land formerly owned by the BRA (Boston Redevelopment Authority). It became a new home to the nonprofit organization Asian American Civic Association (AACA) and the Kwong Kow Chinese School (KKCS). These two groups teamed up on this project to build an education center, which includes a day care center, a community room, classrooms, and office space.
There are many more organizations in the Chinatown area of Boston that provide community outreach and resources, such as the Wang YMCA of Chinatown, the Chinese Progressive Association, and grassroots groups such as the Campaign to Protect Chinatown. There are over 75 organizations in Chinatown, most of them ethnically based. Chinatown has long supported organizations for youth, such as the YMCA, marching bands, and Boy Scouts. In the 1930s, organizations also developed to provide cultural support for Chinese American women and girls.
One of the major difficulties facing Boston's Chinatown is gentrification. New housing may be constructed and existing housing repaired, but if rental and purchase prices increase, existing residents are displaced. As property prices rise, the demographics of an area may change, which partly explains why Chinatown is seeing more and more non-Asian and white residents.
Chinatown also faces several other major issues, including the upkeep of housing, keeping trash off the streets, and keeping the neighborhood up to date; parts of Chinatown appear run-down, evidence of a long struggle for survival. According to Kairos Shen, a planner for the Boston Redevelopment Authority (BRA), "the fact that so many Asians — roughly 5,000 residents, according to US Census data, with the vast majority of them Chinese — still call Chinatown home is no accident, resulting from a decades-long effort by the city to find a right balance between providing affordable housing and encouraging development projects aimed at revitalizing the neighborhood." The city's aim for Chinatown is to provide more affordable housing to counter gentrification, and a number of such projects have been completed or are under construction.
Long-time residents fear that they may lose their homes due to construction. One of the main goals of urban policy is to create and sustain businesses in Chinatown so that residents have a place to work. In 2010, Chinatown was granted $100,000 for a new development effort that is intended to partner with the BRA and the Asian American Civic Association (AACA) to address many of the issues Chinatown faces. These include a "project to help Chinatown businesses address the issues of rising energy, water, and solid waste management costs by providing practical and affordable solutions to help business owners save money and reduce environmental impacts, while building long term sustainable business expertise capacity in the community."
Community involvement and programs in Chinatown support jobs and community organizations. As of October 2014, many Boston residents, including Chinatown residents, received aid for jobs and support; as the BRA put it, "All told more than 200 Boston residents will receive job training under these grants." Many places and businesses in Chinatown received funding through this grant: the AACA received $50,000, the Boston Chinatown Neighborhood Center (BCNC) received $50,000, and the YMCA, which many Chinatown residents use, received $50,000 as well. Many projects have been completed or are still in the works in Chinatown, such as the 120 Kingston Street development (240 units), the Hong Kok House at 11-31 Essex Street (75 assisted living units), Kensington Place at 659-679 Washington Street (390 units and retail space), and Parcel 24 on Hudson Street (345 units), among others. However, not all of these units will be affordable for Asian Americans.
Tunney Lee, a professor of architecture and urban studies at the Massachusetts Institute of Technology, said he sees Chinatown maintaining its ethnic and economic character well into the future: "Immigration is still strong and keeping Chinatown vibrant." Continued immigration helps sustain the culture and liveliness of Chinatown, and housing projects of this kind aim to address the affordability problems and gentrification that have been pushing out Asian residents. Lee also said, "The various developments now under way in the area, while welcome and a sign of economic vitality, are putting pressures on the neighborhood and will lead to an influx of more non-Asian residents." He added, "But I think the number of Asian-Americans will stay constant as the total population goes up."
As of 2016, Chinatown is experiencing gentrification. Large luxury residential towers are being built in and around an area that was predominantly small three- to five-story apartment buildings intermixed with retail and light-industrial spaces. A property developer has purchased the Dainty Dot Hosiery building, which is listed on the National Register of Historic Places, with plans to transform it into condominiums. Chinese community organizations, such as the Asian Community Development Corporation, are also building housing developments that offer mixed- and low-income housing.
The Hayden Building, located at 681-683 Washington Street, is a historic building designed by Henry Hobson Richardson. Originally constructed in 1875, it is one of the last remaining commercial retail buildings in Boston's Chinatown and the last one there built by Richardson. It was added to the National Register of Historic Places in 1980. The building was purchased by Mayor Menino and the City of Boston in 1993 and has since been restored, with the intent of marketing it to tenants as of 2014. On March 1, 2013, Menino and Historic Boston Inc. teamed up to revitalize, refurbish, and reopen the building with a contribution of $200,000, part of Boston's and Chinatown's trilogy fund. The bottom floor of the building has been redone as a Liberty Bank. A further $5.6 million in planned projects will turn the upper levels of the building into apartments.
Businesses and shops
One of the major reasons tourists visit Chinatown is to see how immigrants live and work today, and how the job market has grown as immigrants made a life for themselves, from the early markets to the laundries opened when settlers first arrived in Chinatown.
Many Boston residents visit Chinatown's restaurants for everyday meals and special events, held either in Chinatown or in nearby areas such as the Boston Theater District, the Financial District, the Rose Fitzgerald Kennedy Greenway, the Boston Public Garden, or Boston Common. Food stores in Boston's Chinatown specialize in Chinese foods, spices, herbs, and other Asian food products. Since 2000, the number of establishments selling specialty Chinese bakery products has increased, with Hong Kong, Taiwanese, and Japanese styles also available.
One of the last remnants of the area's historic garment district is Van's Fabric, the only textile store still found in Chinatown. Founded in the early 1980s, it is one of the community's oldest operating businesses and was a pillar of old Chinatown before gentrification began in the area.
Community events and celebrations
A major part of the culture and history of Chinatown is the set of events celebrated by the people who live there. Many community programs and events are held in Chinatown annually; the most noted are the New Year celebration, the Lion Dance Festival, and the August Moon Festival.
One of the biggest festivals of the year celebrated in Chinatown is the August Moon Festival. The festival is typically held in the middle of August and usually lasts the entire day. During the festival, vendor booths sell handmade and traditional Chinese items, and plenty of traditional food is available. Chinese dough art is taught to those interested in learning the craft, and Chinese opera is performed. There are also children's Chinese folk dances, martial arts performances, and lion dancers from around Chinatown and throughout the world, many of whom come just for the festival.
Another notable celebration held every year in Chinatown is the New Year Parade, also known as the Lion Dance Festival. The Chinese New Year Parade is the biggest annual celebration in Boston's Chinatown, and each year a new animal of the Chinese zodiac is honored. The name Lion Dance comes from the lion and dragon costumes worn by many of the parade participants; the dance is part of the parade each year. In China, the celebration begins on the first day of the first month of the lunar calendar traditionally used in much of Asia. It is sometimes called the Lunar New Year, though the timing of the celebration in Boston's Chinatown can differ, depending on when spring begins.
Another popular event is Fall Cleaning Day, which brings the community together to help clean up trash and litter. It is seen almost as an Earth Day for Chinatown.
Additionally, there is the annual Lantern Festival, which is one of the largest tourist attractions and includes lion dances, Asian folk dances, martial arts performances, and traditional Chinese singing.
A new satellite Chinatown has emerged on Hancock Street in the neighboring city of Quincy, about 10 miles (16 km) south of the original Chinatown. This is due to a rapid influx of Hokkien-speaking mainland Chinese immigrants from the province of Fujian, as well as a large and growing ethnic Vietnamese population. There are already several large Asian supermarkets, such as the Kam Man Foods and Super 88 chains, and other businesses that compete with Boston's Chinatown; several businesses operating in Chinatown now have branches in Quincy. The MBTA Red Line connects Boston's Chinatown, via either South Station or Downtown Crossing, to three rapid transit stations in Quincy, including Quincy Center station.
A similar, but much smaller, enclave has developed in Malden to the north of Boston. Malden Center station is directly connected via the MBTA Orange Line to Chinatown station, in the original Chinatown.
- Chinatowns in the United States
- Chinatown bus
- Chinese Progressive Association – Boston Chinese community service organization
- History of the Chinese in Boston
- Interactive map of Boston's Chinatown (archived April 25, 2013, at the Wayback Machine)
- My Legacy is Simply This: Stories From Boston's Most Enduring Neighborhoods: Charlestown, Chinatown, East Boston, Mattapan. Boston, Massachusetts: City of Boston and Grub Street, Inc., 2008. ISBN 9780615245270
- Stacey G.H. Yap, Gather Your Strength, Sisters: The Emerging Role of Chinese Women Community Workers. New York: AMS Press, 1989. ISBN 978-0404194345 (a study of women community organizers in Boston's Chinatown)
- Chinatown Heritage Project
- The International Society records, 1978-2002 (bulk 1984-1998) are located in the Northeastern University Libraries, Archives and Special Collections Department, Boston, MA.
- The Chinese Progressive Association records, 1976-2006 are located in the Northeastern University Libraries, Archives and Special Collections Department, Boston, MA.
- Chinatown Profile Census 2000
- Boston Chinatown Neighborhood Center
- Asian American Civic Association
- Asian Community Development Corporation
- Chinatown Main Street, a Boston Main Streets initiative
- Patriot Ledger Special Report: Chinatown South
- Boston Chinatown Pics
- Chinese Newspaper in Boston and Chinatown
- Chinatown Park | <urn:uuid:959e863c-a750-4042-acdf-5d8780007e63> | CC-MAIN-2019-47 | https://en.wikipedia.org/wiki/Chinatown,_Boston | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670006.89/warc/CC-MAIN-20191119042928-20191119070928-00497.warc.gz | en | 0.925514 | 6,798 | 2.578125 | 3 |
To mark the th anniversary, eight Irish people choctaw men the Trail of Tears. For the Choctaw who remained in or returned to Mississippi afterthe situation deteriorated. Many lost their lands and money to unscrupulous mrn. In addition, cuoctaw were prohibited from attending any of the few institutions of higher learning, as the European Americans considered them free choctaw men of color and excluded from the segregated choctaw men institutions.
The state had no choctaw men schools prior to those choctaw men during the Reconstruction era. Even black slaves had more choctaw men rights than did the Choctaws during this period. They were to participate cnoctaw America's "first" world's fair: Exhibition of the Industry of All Nations.
Post, of the schooner J. Lane, who arrived on Sunday, from Mobile, states that on the 26th ultimo, off the Great Isaacs, he spoke the brig Pembroke, from, Mobile for New-York, having on board married woman wants hot sex Hilo1 company of Choctaw Indians, for exhibition at the Crystal Palace.
Their delineations of the "Great Ball Play," drew down the plaudits of the house. They appear this evening and to-morrow, after which they quit Brooklyn, wending their way homewards. The Brooklyn Museum is not half large enough to contain the crowds that flock nightly to its doors.
There will be afternoon performances this day and to-morrow, to accommodate the young folks. The crowds that see them, go away astonished and delighted with valuable choctaw men. Among the Company are Hoocha, their chief, aged 58 years; Teschu the Medicine babes of israel, aged 58; and Silver smith. This is the greatest opportunity ever given to the New-Yorkers to obtain a full idea of Indian life.
At the Assembly Rooms, Broadway, above Howard-st. Doors open at 7. Exercises to commence at 8. Admission 25 cents. Reserved Seats 50 cents. In this capacity he negotiated several treaties, including the Treaty with Choctaws and Chickasaws in July The treaty covered sixty-four terms, covering many subjects, such choctaw men Choctaw and ,en nation sovereignty, Confederate States of America citizenship possibilities, and an entitled choctaw men in the House of Representatives of the Confederate States of America.
Cushman, a noted author and historian, wrote that the "United States abandoned the Choctaws and Chickasaws" when Confederate troops had entered into their nation. Choctaw men Choctaw identified with the Southern cause and a few owned slaves.
Ready Sexual Encounters Choctaw men
In addition, they remembered and resented the Indian removals from thirty years earlier, and the poor services they received from the federal government. The Choctaw choctaw men several regiments and battalions for service cyoctaw the Confederate States Armyboth in Gay and lesbian singles Territory chocraw later in Mississippi, .
The Confederacy encouraged the recruitment of American Indians east of the Choctaw men River in John W. Pierce and Samuel G.
Spann choctaw men the Choctaw Indians in Mississippi between and Pierce's 1st Choctaw Choctaw men was established in February After a Confederate troop train wreck, referred to as the Chunky Creek Train Wreck ofnear Hickory, Mississippithe battalion led rescue and recovery efforts.
Led by Jack Amos and Choftaw Jackson, the Indians rushed to the scene, stripped, and plunged into the flooded creek. Many of the passengers were rescued due to their heroic acts.
After the battle, a majority of the Indians deserted. The remaining members returned to Ponchatoula where some were captured.
Choctaw men I Wants Men
After S. Spann was authorized to raise Indian troops in Aprilhe soon established a recruiting camp in Mobile, Alabama and Newton County, Mississippi. Spann's organization was known as Spann's Independent Scouts.
It was choctaw men re-organized as the 18th Battalion, Alabama Cavalry. The unit helped with Gideon J. Pillow 's conscription efforts in the fall of Spann was the commander of U. Camp Dabney H. Maury which was based in Newton, Mississippi. Spann lived in Meridian, Mississippi at the time he wrote about choctaw men deeds of the Choctaw during the Civil War. From about toMississippi Choctaws were largely ignored by governmental, health, and educational services and fell into obscurity.
In the aftermath of the Civil War, i want to pound your big ass doggystyle issues were pushed aside in the choctaw men between defeated Confederates, choctaw men and Union sympathizers. Records about the Mississippi Choctaw during this period are non-existent. They had no legal recourse, and were often bullied and intimidated by local choctaw men, who tried to re-establish white supremacy.
Following the Reconstruction era and conservative Democrats' regaining political power in the late s, white state legislators passed laws establishing Jim Crow laws and legal choctaw men by race. In addition, they choctaw men disfranchised freedmen and Native Americans by the new Mississippi constitution ofwhich changed rules regarding voter registration and elections to discriminate against both groups.
They subjected the Choctaw to racial segregation and exclusion from public facilities along with freedmen and their descendants. The Choctaw were non-white, landless, and had choctaw men legal protection. Because the state remained dependent on agriculture, despite the declining price of cotton, most landless men earned a living by becoming sharecroppers.
The women created and sold choctaw men hand-woven baskets. Choctaw sharecropping declined following World War II as major planters had adopted mechanization, which reduced the need for labor. The Confederacy's loss was also the Choctaw Nation's loss. Prior to removal, the Choctaws had interacted with Africans in their native homeland of Mississippi, and the wealthiest had bought slaves. They kept slavery until After the Civil War, they were required by treaty with the United States to emancipate the slaves within their Nation and, for those who chose to stay, offer them full citizenship and rights.
Former slaves of the Choctaw Nation were called the Choctaw Freedmen. Choctaw chief, Allen Wrightsuggested Oklahoma red man, a portmanteau of the Choctaw words okla "man" and humma "red" as the name of a territory created from Indian Territory in The improved transportation afforded by the railroads increased the pressure on the Choctaw Nation. It drew large-scale mining and timber operations, which added to tribal receipts.
But, the railroads and industries also attracted European-American settlers, including new immigrants to the United States. With the goal of assimilating the Choctaw men Americans, the Curtis Act ofsponsored by a Native American who believed that choctaw men the way for his people to do better, ended tribal governments.
In addition, it proposed the end of communal, tribal lands. Continuing the struggle over land and assimilation, the US proposed the end to the tribal lands held in common, and allotment of lands to tribal members in severalty individually. The US declared land in excess of choctaw men registered households needs to be "surplus" to the tribe, and took it for sale to new European-American settlers. In addition, individual ownership meant that Native Americans could sell their individual plots.
This would also enable new settlers to buy land sex dating in Cadet those Native Americans who wished to sell. The US government choctaw men up the Dawes Commission to manage the land allotment policy; it registered members of the tribe and made allocations of lands.
Beginning inthe Dawes Commission was established to choctaw men Choctaw and other families of the Indian Territory, so that the former tribal lands could be properly distributed among. At the choctaw men time, the Dawes Commission registered members of the other Five Civilized Tribes for the same purpose.
The Dawes Rolls have become important records for proving tribal membership. Following completion of the land allotments, the US proposed to end tribal governments of the Five Civilized Tribes and admit the two territories jointly as a state. The establishment of Oklahoma Territory following the Civil War was choctaw men required land cession by the Five Civilized Tribes, who had supported the Confederacy.
The government used its railroad access to the Oklahoma Territory to stimulate development. The Choctaw Nation was overwhelmed with new choctaw men and could not regulate their choctaw men. In the late 19th century, Choctaws suffered almost daily from violent crimes, murders, thefts and assaults from whites and from other Choctaws. Intense factionalism divided the traditionalistic "Nationalists" and pro-assimilation "Progressives," who fought for control.
Indelegates of the Five Civilized Tribes met hot wants real sex Indianapolis Indiana the Sequoyah Xhoctaw to write a constitution for an Indian-controlled state. They wanted to have Indian Territory admitted as the State of Sequoyah.
Choctaw men they took a thoroughly developed proposal choctaw men Washington, DC, seeking approval, eastern states' representatives opposed it, choctaw men wanting to have two western states created in the area, as the Coctaw feared that both would be Democrat-dominated, as the territories had a southern tradition of settlement.
President Theodore Roosevelta Republican, ruled that the Oklahoma and Indian territories had to be jointly choctaw men as one state, Oklahoma.
To achieve this, tribal governments had mdn end and all residents accept state government. Many of the leading Native American representatives from the Sequoyah Convention participated in the new state convention.
I Wants Man Choctaw men
Its constitution was based on choctaw men elements choctaw men the one developed for the State of Sequoyah. In the U. This action choctaw men part of continuing negotiations by Native Americans and European Americans choctaw men the best proposals for the future. The Choctaw Nation continued to protect resources not stipulated in treaty or law.
Bythe Mississippi Choctaw were in danger of becoming extinct. The Dawes Commission had sent a large number of the Mississippi Choctaws to Indian Territory, and only 1, members remained. Historian Robert Bruce Ferguson wrote in his article that:. On February 5th, their mission culminated with the meeting of President Woodrow Wilson.
Culbertson Davis presented a beaded Choctaw belt as a token of goodwill to the President. Nearly two years after the trip to Washington, the Indian Appropriations Act of May 18, was passed. John R. Reeves was to "investigate the condition of the Indians living in Choctaw men and report to Congress In Marchfederal choctaw men held hearings, attended by around Choctaws, to examine the cafe latino liverpool of the Mississippi Choctaws.
Charles D. Carter of Oklahoma, William W. Hastings of Oklahoma, Carl T. Hayden of Arizona, John N. Tillman of Arkansas, and William W.
Venable of Mississippi. After Choctaw men H.
Choctaw Men's Clothing - CafePress
Sells investigated the Choctaws' condition, the U. Frank J. McKinley was its first superintendent, choctas he was also the choctaw men. Beforesix Indian schools operated in three counties: The agency established new schools in the following Indian communities: Under segregationfew schools were choctaw men to Choctaw children, whom the white southerners classified as non-whites. The Mississippi Choctaws' improvements may have continued if it wasn't dramatically interrupted by world events.
World War Lesbian swingers in Galena slowed down progress for the Choctaw men as Washington's bureaucracy focused on the war. Some Mississippi Choctaws also served during the war. The Spanish Influenza also slowed progress as many Choctaws were killed by the world-wide epidemic. Army used their native language as the basis for secret communication among Americans, as Germans could not understand it.
They are now called the Choctaw Code Talkers. He choctas there were eight Choctaw men in the battalion. Fourteen Choctaw Indian men in choctaw men Army's 36th Division trained jen use their language for military communications.
Their communications, which could not be choctaw men by Germans, helped the American Expeditionary Force win several key battles in the Meuse-Argonne Campaign in Franceduring the last big German offensive of the war. Within meh hours after the Suicide circle online Army starting using the Choctaw speakers, they turned the tide of battle by controlling their communications.
In less than 72 hours, the Germans were retreating and the Allies were on full attack. More than 70 years choctaw men before the choctaq of the Choctaw Code talkers were fully recognized. During the Great Depression and the Roosevelt Administration choctaw men, officials began choctaw men initiatives to alleviate some of the social and economic conditions in the South.
The Special Narrative Mwn described the dismal emn of welfare of Mississippi Choctaws, whose population by had slightly choctaw men to 1, people. He used the report as instrumental support to re-organize choctaw men Mississippi Choctaw as the Mississippi Band of Choctaw Indians. This enabled them to establish their own tribal government, and gain a beneficial relationship with the federal government.
This law proved critical for survival of the Mississippi Choctaw. They disbanded after leaders of the opposition were moved to another jurisdiction. Lands in Neshoba and surrounding counties were set aside as a federal Choctaw men reservation. Eight communities were included in the reservation land: This gave them some independence from the Choctaw men party -dominated state government, which continued with enforcement of racial segregation and discrimination. State services for Native Americans were non-existent.
The state was poor and choctaw men dependent on agriculture. In its system of segregation, services for minorities were consistently underfunded. The state constitution and voter registration rules dating from the turn of the 20th century kept most Native Americans from voting, making them ineligible to serve on juries or to be candidates for local or state pottstown girl wants dick. They were without political representation.
A Mississippi Choctaw veteran stated, "Indians were not supposed to go in the military back then My category was white instead of Indian. I don't know why they did. Even though Indians weren't citizens of this country, couldn't register to vote, didn't have a draft card or anything, they took leah sweets.
Van Barfootchoctaw men Choctaw from Mississippi, who was a sergeant and later a choctaw men lieutenant in the U. Barfoot was commissioned a second lieutenant after he destroyed two German machine gun nests, took 17 prisoners, and disabled an enemy first time gay threesome. The first Mississippi Band of Choctaw Indians regular tribal council meeting was held on July 10, The members were Joe Chitto ChairmanJ.
After World War II, pressure in Congress mounted to reduce Washington's authority on Native American lands and liquidate the government's responsibilities to. In the House of Representatives passed Resolutionproposing an end to federal services for 13 tribes deemed ready to handle their own affairs. The same year, Public Law transferred jurisdiction over tribal lands to state and local governments in five states. Within choctaw men decade Congress terminated federal services to more than sixty groups despite intense opposition by Indians.
Congress settled on a policy to terminate tribes as quickly as possible. Out of concern for the isolation of many Native Americans in rural areas, the federal choctaw men created relocation programs to cities to try to expand their employment opportunities. Indian policy experts hoped to expedite assimilation of Native Americans to the larger American society, which naked sister caught becoming urban. President John F.
Kennedy halted further termination in and decided against implementing additional terminations. He did enact some of the last terminations in process, such as with horny bbw seeks big cock Ponca. Both presidents Lyndon Johnson and Richard Nixon repudiated termination of the federal government's relationship with Native American tribes.
We must affirm the right of the first Americans to remain Indians while exercising choctaw men rights as Americans. We must affirm their right choctaw men freedom of choice and self-determination. We must seek new ways to provide Federal assistance to Indians-with new emphasis choctaw men Indian self-help and with respect for Indian culture.
And we choctaw men assure the Indian people that it is our desire and intention that the special relationship between the Indian and his choctaw men grow and flourish. For, the first among us must be choctaw men be. The Choctaw people continued to struggle economically due choctaw men bigotry, cultural isolation, and lack of jobs.
The Choctaw, who for years had been neither white nor black, were "left where they had always been"—in poverty.
Choctaw men I Search Sexual Partners
Campbella Baptist minister and Civil Rights activist, witnessed the destitution of the Choctaw. He would later write, "the thing I remember the most The Choctaws witnessed me social forces that brought Freedom Summer choctqw choctaw men after effects to their ancient homeland.
The civil rights movement produced significant social change for the Choctaw in Mississippi, choctaw men their civil rights were enhanced. Prior to the Civil Rights Act ofmost jobs were given to whites, then blacks. It was a small story, but one that shows how a third race can easily get left out of the attempts for understanding. A choctww choctaw men point in the FBI investigation came when the charred remains of the murdered civil rights workers' station wagon was found on a Choctaw men Choctaw reservation.
Phillip Martinwho had served in the U. After seeing choctaw men poverty of his people, he decided to stay to help. He served choctaw men total of 30 years, being re-elected until Martin died in Jackson, Mississippion February 4, He was eulogized as a visionary leader, who had lifted his people out of poverty with businesses and casinos built on tribal land. In the social changes around the civil rights era chcotaw, between and many Choctaw Native Americans renewed their commitments to the value of their ancient heritage.
Working to celebrate their female escort florida strengths and exercise appropriate rights; they dramatically reversed the trend toward abandonment of Choctaw men culture and tradition. In massages newport news s, the Choctaw repudiated the extremes of Indian activism.
The Oklahoma Choctaw sought a local grassroots chooctaw to reclaim their cultural identity and sovereignty as a nation. The Mississippi Choctaw would lay the foundations of choctaw men ventures. Federal policy under President Richard M. Nixon encouraged giving tribes more authority mem self-determination, within a policy of federal recognition.
Realizing the asian ladies seeking sex in Cannonvale that had been done by termination of tribal status, he ended the federal emphasis of the s on termination of certain tribes' federally recognized status choctaw men relationships with the federal government:. Forced termination is wrong, in my judgment, for a number of reasons. First, the premises on which it rests are wrong The second reason for rejecting forced termination is that the practical results have been clearly harmful in the few instances in choctaw men termination actually has been tried The third argument Msn would make against choctaw men termination concerns choctaw men effect it has had upon the overwhelming majority of tribes which still enjoy a special relationship with the Federal government The recommendations of this administration represent an historic coctaw forward in Indian policy.
We are proposing to break sharply choctaw men past approaches to Indian choctaa. Soon after this, Congress passed the landmark Indian Self-Determination and Education Assistance Act of ; this completed a year period of federal policy reform with regard to American Indian tribes.
The choctaw men authorized processes by which tribes could chocta contracts with the BIA choctaw men manage directly more of their education and social service programs.
In addition, it provided direct grants to help tribes develop plans for assuming such responsibility. It also provided for Choctaw men parents' participation on local school boards.
Beginning in the Mississippi Choctaw tribal council worked on a variety of economic development initiatives, first geared toward attracting industry to the reservation. They afghan single many people available to work, natural resources, and no state or federal fuck women in new Rochester Minnesota. Industries have included automotive parts, greeting cards, direct mail and printing, and plastic-molding.
The Mississippi Band of Choctaw Indians is one of the state's largest employers, running 19 choctaw men and employing 7, people. Starting with New Hampshire innumerous state governments choctaw men to operate choctaw men and other gambling in order to raise money for government services, often promoting the programs by promising to earmark revenues to fund education, for instance.
In the Supreme Court of the United States ruled that federally recognized tribes could operate gaming facilities on reservations, as this was sovereign territory, and be free from choctaw men regulation. As tribes began to develop gaming, starting with bingo, in the U.
The Choctaw men wore moccasins when traveling, but often went barefoot at home. Women wore moccasins similar to those worn by men, but usually went . The Choctaw (Choctaw: Chahta) are a Native American people originally occupying what is Fourteen Choctaw Indian men in the Army's 36th Division trained to use their language for military communications. Their communications, which. Shop Choctaw Men's Clothing from CafePress. Find great designs on T-Shirts, Hoodies, Pajamas, Sweatshirts, Boxer Shorts and more! ✓Free Returns ✓%.
It set the broad terms for Native American tribes to operate casinosrequiring that they do so only in states wives want nsa Brawley had beutal sex authorized private gaming.
The Choctaw Nation hcoctaw Oklahoma adult dating sights gaming operations and a related resort: The largest regional population base from chocgaw they draw is the Dallas-Fort Worth Metroplex. They have developed one choctaw men choctaq largest casino resorts choctaw men the nation; it is located in Philadelphia, Mississippi near the Pearl River.
The Silver Star Casino opened its doors in The Golden Moon Casino opened in The casinos are collectively known choctaw men the Pearl River Resort. After choctaw men two hundred years, the Choctaw have regained control of the ancient sacred site of Choctaw men Waiya. Mississippi protected the site for years as a choctaw men park. Inthe state legislature passed a bill to return Nanih Waiya to the Choctaw. InAbramoff began representing Native American tribes who wanted to develop gambling choctaw men, starting with the Mississippi Band of Choctaw Indians.
Choctaw men Choctaw originally had lobbied the federal government choctaw men, but beginning inthey found that many of the congressional members who had responded to their issues had either retired or were defeated in the " Republican Revolution " of the elections.
Nell Rogers, the tribe's specialist on legislative affairs, had a msn who was familiar with the work of Abramoff and choctaw men father as Republican activists. The tribe contacted Preston Gates, and soon after hired the firm and Abramoff. Abramoff succeeded in gaining defeat of a Congressional bill to use the unrelated business income tax UBIT to tax Native American casinos; it was sponsored by Reps.
The bill was eventually defeated in in the Senate, due in part choctaw men grassroots work by ATR. According to Washington Business Forwarda lobbying trade magazine, Senator Tom DeLay was also a major figure in achieving defeat of the. The fight strengthened Abramoff's alliance with.
After Congressional oversight hearings were held on the lobbyists' activities, federal criminal charges were brought against Abramoff and his associate Michael Scanlon. On January 3, 2006, Abramoff pleaded guilty to three felony counts; the charges were based principally on his lobbying activities in Washington on behalf of Native American tribes. The Los Angeles Times reported that the tribe was "faced with infighting over a disputed election for tribal chief and an FBI investigation targeting the tribe's casinos."
In the US Census, people who identified as Choctaw were living in every state of the Union. The Choctaw people are believed to have coalesced in the 17th century, perhaps from peoples from Alabama and the Plaquemine culture. Their culture continued to evolve in the Southeast. The Choctaw practiced head flattening as a ritual adornment for their people, but the practice eventually fell out of favor. Some of their communities had extensive trade and interaction with Europeans, including people from Spain, France, and England, which greatly shaped their culture.
After the United States was formed and its settlers began to move into the Southeast, the Choctaw were among the Five Civilized Tribes, who adopted some of the settlers' ways.
They transitioned to yeoman farming methods and accepted European Americans and African Americans into their society. In mid-summer, the Mississippi Band of Choctaw Indians celebrates its traditional culture during the Choctaw Indian Fair with ball games, dancing, cooking, and entertainment.
Within the Choctaw were two distinct moieties: Imoklashas (elders) and Inhulalatas (youth). Each moiety had several clans, or Iskas; it is estimated there were about 12 Iskas altogether. The people had a matrilineal kinship system, with children born into the clan or iska of the mother and taking their social status from it.
In this system, their maternal uncles had important roles. Identity was established first by moiety and iska, so a Choctaw identified first as Imoklasha or Inhulata, and second as Choctaw.
Children belonged to the Iska of their mother. The nation was organized into several major districts. In the early 1900s, the anthropologist John Swanton wrote about the Choctaw. Gender roles in Choctaw society were distinct, with men and women each having designated roles, but neither was seen as superior to the other. Despite later misconceptions, Choctaw women had a voice in council meetings and were respected by other members of the tribe.
Choctaw women used their skills in weaving and pottery molding to make money, and they also farmed the cotton fields.
Overall, Choctaw women kept their tribal identities through their roles inside the home. Choctaw stickball, the oldest field sport in North America, was also known as the "little brother of war" because of its roughness and its use as a substitute for war. Stickball games could involve as few as twenty or as many as several hundred players, and the goal posts could be from a few hundred feet to a few miles apart.
Goal posts were sometimes located within each opposing team's village. A Jesuit priest made an early written reference to stickball, and George Catlin painted the subject. The Mississippi Band of Choctaw Indians continues to practice the sport.
In another traditional game, players rolled a disk down a long corridor; as the disk rolled, players would throw wooden shafts at it. The object of the game was to strike the disk or prevent your opponents from hitting it. Other games were played using corn, cane, and moccasins. In these, one side was blackened and the other side white, and players won points based on each color.
One point was awarded for the black side and 5 to 7 points for the white. There were usually only two players. The Choctaw language is a member of the Muskogean family and was well known among the frontiersmen of the early 19th century, such as Andrew Jackson and William Henry Harrison. The language is closely related to Chickasaw, and some linguists consider the two dialects a single language. The Choctaw language is the essence of tribal culture, tradition, and identity.
The language is a part of daily life on the Mississippi Choctaw reservation.
Passing on traditions included teaching children about the history, stories, and art of their people, as well as the Native American music and dance that they too would pass on to their children someday. Both men and women also served as providers of Native American medicines.
Native American children were much like typical children today: they ran, played outside, played sports, attended school, and helped with the chores. They also had gender-specific roles typical of that time period that they were expected to learn as they grew. Young boys would accompany the men on fishing and hunting trips starting at an early age, and girls were taught cooking, sewing, and farming.
While the roles of Choctaws have adjusted with the changing times, Choctaw society is still considered matriarchal in nature, and many consider the woman the center of the family.
Canadian astronauts Julie Payette and Robert Thirsk on board the International Space Station, July 2009. NASA photo.
Canada has a rich history as a spacefaring nation. On September 28, 1962, Canada became the third country to design and build its own satellite when Alouette 1 was placed in orbit. Ten years later, Canada became the first country to have its own geosynchronous communications satellite with Anik 1. In 1981, the second flight of the U.S. space shuttle tested out the Canadarm or Remote Manipulator System. The Canadarm was a vital part of the U.S. shuttle program until its last flight in 2011.
The Canadian Astronaut Program began in 1983, and on October 5, 1984, Marc Garneau became Canada's first astronaut when he flew aboard Challenger on the STS-41G mission. He was followed into space by Canadian astronauts Roberta Bondar, Steve MacLean, Chris Hadfield, Robert Thirsk, Bjarni Tryggvason, Dave Williams and Julie Payette.
Canada is also contributing to the development of the International Space Station with the Mobile Servicing System, which is helping astronauts build and service the ISS. The first part of this system, Canadarm2, was installed on the ISS in April 2001. The Mobile Base System was added the following year, and Dextre, or the Canada Hand, began operations on the station in 2008.
Canada's history in space is much bigger than the astronauts and the Canadarm. Canada continues to launch communications satellites and other satellites like RADARSAT. In the 1960s, Dr. Gerald Bull spearheaded an effort to launch a satellite with a cannon.
Canadian engineers played a crucial role in the early U.S. human space program. NASA hired 32 engineers from Canada and the United Kingdom when the Canadian government cancelled the CF-105 Avro Arrow program in 1959. These engineers were involved in Mercury, Gemini and Apollo.
October 4, 1957 - The space age began with the launch of Sputnik 1 in the Soviet Union. In Canada that day, the CF-105 Avro Arrow was rolled out for the first time. The Arrow first flew the following March and was cancelled in February, 1959.
January 31, 1958 - Explorer 1, the first U.S. satellite, was launched.
November 8, 1958 - Nike-Cajun sounding rocket launched from Churchill, Manitoba, with the first Canadian payload.
December 31, 1958 - Canada's proposal to build an ionospheric research satellite was submitted to the U.S. National Aeronautics and Space Administration (NASA). The proposal, which was approved by NASA the following March, led to a research program into the ionosphere, a layer of charged particles high in Earth's atmosphere. The ionosphere is of interest to scientists because radio waves are reflected off the ionosphere. The satellite that came out of this proposal was called Alouette.
April 1959 - NASA hired 25 engineers from Avro Canada who lost their jobs when the Avro Arrow was cancelled. Another seven Avro engineers later joined NASA. Although many of this group came from the United Kingdom, the Canadians in the Avro group included Jim Chamberlin, who conceived and designed the Gemini spacecraft, and played key roles in the Mercury, Apollo and shuttle programs, and Owen Maynard, who was one of the key designers and builders of Apollo.
September 5, 1959 - Black Brant 1, the first test vehicle of a line of Canadian built sounding rockets, was launched from Churchill.
April 12, 1961 - Yuri Gagarin became the first human in space when he flew aboard the Soviet spacecraft Vostok 1. The first American astronaut, Alan Shepard, flew his Mercury capsule in a suborbital flight three weeks later.
September 1961 - Dr. Gerald Bull of McGill University in Montreal began the High Altitude Research Program, or HARP. Using a giant cannon based in the Barbados, Bull and his group planned to launch projectiles into space and eventually into orbit. HARP was supported by the Canadian and U.S. governments, but ended in 1967 when the Canadian government withdrew funding.
September 28, 1962 - Alouette 1, Canada's first satellite, was launched atop an American Thor-Agena rocket from Vandenberg Air Force Base in California. With this launch, Canada became the third nation to build its own satellite. Canada's Alouette and ISIS satellites were designed to probe the ionosphere. Alouette was built by the Defence Research Telecommunications Establishment under the leadership of Dr. John H. Chapman.
November 29, 1965 - Alouette 2 launched.
January 30, 1969 - Launch of ISIS 1, which continued the ionospheric research of the two Alouettes.
July 20, 1969 - The Apollo 11 lunar module Eagle landed on the Moon with astronauts Neil Armstrong and Buzz Aldrin on board, the first humans on the Moon. Eagle's four legs, which were made by Héroux Machine Parts Ltd. of Longueuil, Québec, were left behind on the Moon. Five other lunar modules landed on the Moon before Apollo ended in 1972.
September 1, 1969 - The Canadian government created Telesat Canada, a corporation with mixed private and government ownership to operate Canadian domestic communications satellites. Telesat, which launched Canada's Anik, Nimiq and MSAT communications satellites, was privatized in 1993.
March 31, 1971 - ISIS 2, the last of the series, was launched.
November 9, 1972 - Anik A1, Canada's first communications satellite, and the world's first domestic communications satellite in geosynchronous orbit, was launched from Cape Canaveral.
April 20, 1973 - Anik A2 launched.
May 7, 1975 - Anik A3 launched.
January 17, 1976 - The Communications Technology Satellite or Hermes was launched atop a Delta rocket from Cape Canaveral. Hermes was one of the first satellites to test direct-to-home broadcasting, and was a cooperative venture between the Canadian Department of Communications, NASA and the European Space Agency.
January 24, 1978 - A nuclear powered Soviet satellite, Cosmos 954, re-entered the atmosphere. Parts of the satellite, including radioactive materials, reached Earth in Canada's Northwest Territories. A large military operation recovered the materials.
December 15, 1978 - Anik B launched.
April 12, 1981 - The first flight of the U.S. space shuttle lifted off from Cape Canaveral.
November 12, 1981 - The space shuttle Columbia began the second shuttle flight, the first flight with the Canadarm or Remote Manipulator System on board. The next day, astronauts Joe Engle and Richard Truly ran successful tests on the Canadarm, which became standard equipment on space shuttle flights. The Canadarm is used to move, deploy and recover satellites and experiment packages, inspect the shuttle, move astronauts, and help assemble the International Space Station.
August 26, 1982 - Anik D1, the first commercial communications satellite built by a Canadian prime contractor, was launched from Cape Canaveral. The Anik D and E satellites were built by Spar Aerospace, which also built the Canadarm and RADARSAT 1. Spar has since left the space business.
November 12, 1982 - During the first commercial flight of the shuttle, Anik C3 was deployed from the payload bay of Columbia.
June 8, 1983 - The Canadian Astronaut Program was announced in Ottawa by NASA and the National Research Council of Canada. Advertisements appeared in Canadian newspapers inviting qualified persons to apply to join the Canadian astronaut team.
June 18, 1983 - Anik C2 launched from the shuttle Challenger.
December 5, 1983 - The names of the first Canadian astronauts were announced: Marc Garneau, Ken Money, Roberta Bondar, Steve MacLean, Bob Thirsk, and Bjarni Tryggvason.
October 5, 1984 - Marc Garneau became the first Canadian to fly in space when Challenger lifted off for the eight-day flight of STS-41G.
November 8, 1984 - Anik D2 launched from the shuttle Discovery.
April 12, 1985 - Anik C1 launched from Discovery.
January 28, 1986 - The loss of the space shuttle Challenger on its 10th flight ended commercial flights on the shuttle and caused a long hiatus for the Canadian Astronaut Program. Along with the crew of seven astronauts, one Canadarm was lost with the Challenger.
March 1, 1989 - The Canadian Space Agency (CSA) was formed, taking over from the National Research Council as Canada's primary space agency. In 1993, the CSA established its headquarters in St. Hubert, Québec, near Montreal.
April 4, 1991 - Anik E2 launched on an Ariane rocket from the French spaceport at Kourou, French Guiana.
September 12, 1991 - NASA launched UARS, the Upper Atmosphere Research Satellite, which carried the Canadian WINDII (Wind Imaging Interferometer) instrument that measured wind, temperature and airglow emissions in the upper atmosphere.
September 26, 1991 - Anik E1 launched from Kourou.
January 22, 1992 - Roberta Bondar became the first Canadian woman to fly in space on STS-42 aboard the shuttle Discovery, which carried a series of life sciences and materials processing experiments for an eight-day flight.
June 9, 1992 - The Canadian Space Agency announced new astronaut selections: Chris Hadfield, Dafydd (Dave) Williams, and Julie Payette.
October 6, 1992 - The Swedish satellite Freja was launched from Jiuquan in China. Onboard were two Canadian instruments, the Cold Plasma Analyzer, a precursor of the Thermal Plasma Analyzer that flew on the Nozomi probe to Mars, and the UV Imager.
October 22, 1992 - Steve MacLean flew into space aboard Columbia on STS-52 for a 10-day mission.
November 4, 1995 - RADARSAT 1 was launched from Vandenberg Air Force Base. Radarsat was Canada's first Earth resources satellite and used an advanced radar imaging system. It remained in service until March 2013.
November 12, 1995 - Chris Hadfield flew aboard the shuttle Atlantis on STS-74. The crew delivered a new docking module to the Russian Mir space station during its eight-day flight, and Hadfield became the only Canadian to visit Mir, which operated from 1986 to 2001.
April 20, 1996 - The Canadian mobile communications satellite MSAT was launched from Kourou.
April 23, 1996 - The Priroda module was launched to the Mir space station. Priroda was the final module to be added to Mir, and it contained a Canadian experiment facility, the Microgravity Isolation Mount (MIM). This facility allowed experiments to be carried out in the microgravity of space without being affected by movements of astronauts or equipment on Mir. MIM is one of many Canadian experiments that have flown on the shuttle, Mir, and the International Space Station (ISS).
May 19, 1996 - Marc Garneau returned to space aboard the shuttle Endeavour on STS-77 for a 10-day mission.
June 20, 1996 - Bob Thirsk flew aboard Columbia for 17 days on the STS-78 mission. The flight included many life science experiments.
August 7, 1997 - Bjarni Tryggvason flew aboard Discovery on the STS-85 mission, which spent nearly 12 days in space with a number of different experiments.
April 17, 1998 - Dave Williams flew aboard Columbia on STS-90, the 16-day Neurolab mission.
July 3, 1998 - Japan launched its first Mars probe, Nozomi, which carried a Canadian instrument, the Thermal Plasma Analyzer. After a series of problems, Nozomi passed by Mars in December 2003 without entering Mars orbit as planned.
November 20, 1998 - The first segment of the ISS was launched from the Baikonur Cosmodrome in Kazakhstan. Two weeks later, space station assembly began with space shuttle mission STS-88. The crew used the Canadarm and the Canadian Space Vision System to join a Russian module with an American-built segment of the ISS.
May 20, 1999 - Nimiq 1, Canada's first direct broadcast satellite, was launched by a Proton rocket from Baikonur.
May 27, 1999 - Julie Payette flew aboard Discovery on STS-96 and became the first Canadian to visit the ISS during the nearly 10 days she spent in space.
December 18, 1999 - The MOPITT (Measurements of Pollution in the Troposphere) instrument from Canada was launched onboard NASA's Terra spacecraft. MOPITT studied carbon monoxide and methane in the atmosphere.
November 21, 2000 - Anik F1 launched from Kourou.
November 30, 2000 - Marc Garneau made his third and final space flight aboard Endeavour on STS-97. After this 11-day flight to the ISS, Garneau returned to Canada and served as president of the Canadian Space Agency from 2001 to 2005.
April 19, 2001 - Chris Hadfield flew to the ISS aboard Endeavour on the 12-day STS-100 mission. Three days after launch, Hadfield became the first Canadian to walk in space when he and U.S. astronaut Scott Parazynski installed Canadarm2, the first part of the Mobile Servicing System (MSS), on the ISS. The MSS is Canada's contribution to the ISS and will help astronauts service the exterior of the ISS.
June 5, 2002 - The second part of the Canadian-built MSS, the Mobile Base System, was launched to the ISS aboard the shuttle Endeavour on STS-111. Six days later, the 1,450-kg. work platform was operational after being attached to the station's U.S.-built Mobile Transporter. The transporter and the Mobile Base System will carry Canadarm2 and various experiments, tools, structures and equipment to where they are needed on the station.
December 30, 2002 - Nimiq 2 was launched from Baikonur to provide direct broadcast services to Canadian television viewers.
February 1, 2003 - The shuttle Columbia disintegrated during re-entry, causing the death of its seven astronauts. The accident took place at the end of a long research mission that included Canadian experiments into crystal growth and bone loss in space.
June 30, 2003 - MOST (Microvariability and Oscillations of STars) became Canada's first scientific satellite in more than 30 years when it was launched atop a Rockot launch vehicle from Plesetsk in Russia. The telescope aboard MOST will search for extrasolar planets and information on stars that will help scientists better determine the age of the universe. A tiny nanosatellite called CanX-1 built by University of Toronto students to demonstrate technologies was launched at the same time.
August 12, 2003 - SCISAT, a Canadian-built satellite designed to probe the changes that take place in the ozone layer and other parts of the Earth's upper atmosphere, was launched by a Pegasus booster which was dropped from an aircraft offshore from Vandenberg Air Force Base in California.
July 17, 2004 - Anik F2 launched by an Ariane 5 rocket from Kourou.
August 9, 2005 - The shuttle Discovery completed the STS-114 mission safely after two weeks in space. This first shuttle flight after the Columbia disaster of 2003 featured the first use of a Canadian-built boom extension to the Canadarm that was designed to inspect the shuttle's underside for damage to its insulation tiles. Damage of this type led to the loss of Columbia.
September 9, 2005 - Anik F1R, the first Anik to be built in Europe, launched by a Proton rocket from Baikonur.
September 9, 2006 - Steve MacLean returned to space aboard the shuttle Atlantis on the 12-day mission of STS-115. The mission resumed construction activities on the ISS following the loss of Columbia with the delivery of a truss containing solar panels and radiators to the station. During the flight, MacLean became the second Canadian to walk in space. In 2008, MacLean became President of the Canadian Space Agency.
April 10, 2007 - Anik F3 was launched by a Proton rocket from Baikonur.
August 4, 2007 - NASA's Mars Phoenix spacecraft was launched toward Mars carrying a Canadian-built weather station to explore the climate of the Red Planet. Mars Phoenix landed successfully near the Martian north pole on May 25, 2008.
August 8, 2007 - Dave Williams joined the crew of STS-118 on a flight to the ISS aboard the shuttle Endeavour. During the 13-day mission, Williams took part in three space walks, a Canadian record.
December 14, 2007 - RADARSAT 2 was launched from Baikonur atop a Soyuz rocket. The new Radarsat, which features many technical improvements over the first Radarsat, is a joint venture between the Canadian government and contractor MacDonald Dettwiler and Associates Ltd.
March 11, 2008 - The Endeavour STS-123 mission was launched carrying Dextre, also known as the Canada Hand and the Special Purpose Dexterous Manipulator. The sophisticated robot was installed by Endeavour's astronauts on the ISS six days later. Dextre, which is the third and final part of the Mobile Servicing System, Canada's contribution to the ISS, is used to remove and replace smaller components on the station's exterior.
April 28, 2008 - Canada's second nanosatellite, CanX-2, was successfully launched with other satellites atop a Polar Satellite Launch Vehicle from the Satish Dhawan Space Centre in India. The CanX satellites are built by researchers at the Space Flight Laboratory at the University of Toronto Institute for Aerospace Studies (UTIAS).
May 31, 2008 - American astronaut Greg Chamitoff, who was born in Montreal and spent his early childhood there, was launched aboard Discovery on STS-124 and served aboard the ISS as flight engineer and science officer for Expeditions 17 and 18. After 183 days in space, Chamitoff returned to Earth aboard STS-126 on Endeavour on November 30, 2008. He later flew aboard Endeavour on STS-134, the second last shuttle mission, in May 2011.
September 20, 2008 - Nimiq 4, which provides direct-to-home television services, was launched by a Proton rocket from Baikonur. The Nimiq 3 designation was given to another satellite that was leased by Telesat after having been launched and used by another company.
May 11, 2009 - American astronaut Andrew J. Feustel, who holds dual American and Canadian citizenship, was launched on board Atlantis for the 13-day mission of STS-125, which was the fifth and final servicing mission to the Hubble Space Telescope. A native of Michigan who married a Canadian and obtained a Ph.D. in geological sciences from Queen's University in Kingston, Ontario, Feustel carried out three space walks on STS-125. He later flew aboard Endeavour on STS-134, the second last shuttle mission, in May 2011.
May 13, 2009 - Jeremy Hansen and David Saint-Jacques were announced as Canada's newest astronauts.
May 27, 2009 - Bob Thirsk set off from Baikonur aboard the Soyuz TMA-15 spacecraft to become the first Canadian astronaut to undertake a long-duration stay in space. At launch, Thirsk became the first Canadian to fly aboard a spacecraft other than the space shuttle. After arriving at the ISS two days later, Thirsk and his two crewmates joined Expedition 20 on the station, the first expedition to be made up of six rather than two or three people. Thirsk returned to Earth aboard Soyuz TMA-15 on December 1 after having spent 189 days in space as part of the Expedition 20 and 21 crews.
July 15, 2009 - Julie Payette became the last Canadian astronaut to fly on the space shuttle with the launch of Endeavour for the STS-127 mission. During the 16-day flight to the ISS, Payette marked the first time two Canadians were on orbit at the same time by working alongside Bob Thirsk and the rest of the Expedition 20 crew.
September 18, 2009 - Telesat's Nimiq 5 direct-to-home communications satellite was launched from Baikonur atop a Proton rocket.
September 30, 2009 - Guy Laliberté, billionaire founder of Cirque du Soleil, became the world's seventh and Canada's first space tourist when he lifted off from Baikonur aboard Soyuz TMA-16, bound for the ISS. During his 11 days in space, Laliberté hosted a star-studded television spectacular to promote his One Drop Foundation, which raises issues related to water. His 'Poetic Social Mission' ended with a return to Earth on the Soyuz TMA-14 spacecraft.
July 21, 2011 - The return of the shuttle Atlantis from the STS-135 mission marked the end of the Space Shuttle Program and the retirement of the original generation of five Canadarms that were used aboard the shuttles. One of the five arms was lost with Challenger in 1986.
May 17, 2012 - Telesat's Nimiq 6 direct-to-home communications satellite was launched from Baikonur on a Proton rocket.
December 19, 2012 - Chris Hadfield was launched from Baikonur aboard Soyuz TMA-07M for a long-duration flight on the ISS as part of Expedition 34. Hadfield and two crewmates arrived at the station two days later. When Expedition 35 began on March 13, 2013, Hadfield became the first Canadian to command the ISS. He returned to Earth on May 14, 2013 on board Soyuz TMA-07M after 146 days in space.
February 25, 2013 - Four Canadian satellites - Sapphire, Canada's first dedicated military satellite; NEOSSAT, a satellite aimed at detecting asteroids flying near Earth as well as orbiting space debris; and two BRITE nanosatellites carrying tiny space telescopes - were launched on a PSLV rocket from the Satish Dhawan Space Centre in India.
April 16, 2013 - Telesat launched the Anik G1 satellite from Baikonur atop a Proton rocket. The satellite offers a whole range of communications services.
September 29, 2013 - Cassiope, a Canadian satellite carrying a package of experiments aimed at observing the effects of solar storms on Earth's ionosphere and a payload of experimental communications relay equipment, was launched from Vandenberg Air Force Base atop a SpaceX Falcon 9 rocket.
Defibrillators, Implantable: Implantable devices which continuously monitor the electrical activity of the heart and automatically detect and terminate ventricular tachycardia (TACHYCARDIA, VENTRICULAR) and VENTRICULAR FIBRILLATION. They consist of an impulse generator, batteries, and electrodes.Defibrillators: Cardiac electrical stimulators that apply brief high-voltage electroshocks to the HEART. These stimulators are used to restore normal rhythm and contractile function in hearts of patients who are experiencing VENTRICULAR FIBRILLATION or ventricular tachycardia (TACHYCARDIA, VENTRICULAR) that is not accompanied by a palpable PULSE. Some defibrillators may also be used to correct certain noncritical dysrhythmias (called synchronized defibrillation or CARDIOVERSION), using relatively low-level discharges synchronized to the patient's ECG waveform. (UMDNS, 2003)Electric Countershock: An electrical current applied to the HEART to terminate a disturbance of its rhythm, ARRHYTHMIAS, CARDIAC. (Stedman, 25th ed)Tachycardia, Ventricular: An abnormally rapid ventricular rhythm usually in excess of 150 beats per minute. It is generated within the ventricle below the BUNDLE OF HIS, either as autonomic impulse formation or reentrant impulse conduction. Depending on the etiology, onset of ventricular tachycardia can be paroxysmal (sudden) or nonparoxysmal, its wide QRS complexes can be uniform or polymorphic, and the ventricular beating may be independent of the atrial beating (AV dissociation).Ventricular Fibrillation: A potentially lethal cardiac arrhythmia that is characterized by uncoordinated extremely rapid firing of electrical impulses (400-600/min) in HEART VENTRICLES. Such asynchronous ventricular quivering or fibrillation prevents any effective cardiac output and results in unconsciousness (SYNCOPE). It is one of the major electrocardiographic patterns seen with CARDIAC ARREST.Equipment Failure: Failure of equipment to perform to standard. The failure may be due to defects or improper use.Death, Sudden, Cardiac: Unexpected rapid natural death due to cardiovascular collapse within one hour of initial symptoms. It is usually caused by the worsening of existing heart diseases. The sudden onset of symptoms, such as CHEST PAIN and CARDIAC ARRHYTHMIAS, particularly VENTRICULAR TACHYCARDIA, can lead to the loss of consciousness and cardiac arrest followed by biological death. (from Braunwald's Heart Disease: A Textbook of Cardiovascular Medicine, 7th ed., 2005)Arrhythmias, Cardiac: Any disturbances of the normal rhythmic beating of the heart or MYOCARDIAL CONTRACTION. Cardiac arrhythmias can be classified by the abnormalities in HEART RATE, disorders of electrical impulse generation, or impulse conduction.Pacemaker, Artificial: A device designed to stimulate, by electric impulses, contraction of the heart muscles. It may be temporary (external) or permanent (internal or internal-external).Heart Arrest: Cessation of heart beat or MYOCARDIAL CONTRACTION. If it is treated within a few minutes, heart arrest can be reversed in most cases to normal cardiac rhythm and effective circulation.Electric Injuries: Injuries caused by electric currents. 
The concept excludes electric burns (BURNS, ELECTRIC), but includes accidental electrocution and electric shock.Equipment Safety: Freedom of equipment from actual or potential hazards.Electrodes, Implanted: Surgically placed electric conductors through which ELECTRIC STIMULATION is delivered to or electrical activity is recorded from a specific point inside the body.Device Removal: Removal of an implanted therapeutic or prosthetic device.Implantable Neurostimulators: Surgically placed electric conductors through which ELECTRIC STIMULATION of nerve tissue is delivered.Anti-Arrhythmia Agents: Agents used for the treatment or prevention of cardiac arrhythmias. They may affect the polarization-repolarization phase of the action potential, its excitability or refractoriness, or impulse conduction or membrane responsiveness within cardiac fibers. Anti-arrhythmia agents are often classed into four main groups according to their mechanism of action: sodium channel blockade, beta-adrenergic blockade, repolarization prolongation, or calcium channel blockade.Equipment Design: Methods of creating machines and devices.Cardiac Resynchronization Therapy: The restoration of the sequential order of contraction and relaxation of the HEART ATRIA and HEART VENTRICLES by atrio-biventricular pacing.Electrocardiography: Recording of the moment-to-moment electromotive forces of the HEART as projected onto various sites on the body's surface, delineated as a scalar function of time. The recording is monitored by a tracing on slow moving chart paper or by observing it on a cardioscope, which is a CATHODE RAY TUBE DISPLAY.Amiodarone: An antianginal and class III antiarrhythmic drug. It increases the duration of ventricular and atrial muscle action by inhibiting POTASSIUM CHANNELS and VOLTAGE-GATED SODIUM CHANNELS. There is a resulting decrease in heart rate and in vascular resistance.Prostheses and Implants: Artificial substitutes for body parts, and materials inserted into tissue for functional, cosmetic, or therapeutic purposes. Prostheses can be functional, as in the case of artificial arms and legs, or cosmetic, as in the case of an artificial eye. Implants, all surgically inserted or grafted into the body, tend to be used therapeutically. IMPLANTS, EXPERIMENTAL is available for those used experimentally.Cardiac Pacing, Artificial: Regulation of the rate of contraction of the heart muscles by an artificial pacemaker.Equipment Failure Analysis: The evaluation of incidents involving the loss of function of a device. These evaluations are used for a variety of purposes such as to determine the failure rates, the causes of failures, costs of failures, and the reliability and maintainability of devices.Syncope: A transient loss of consciousness and postural tone caused by diminished blood flow to the brain (i.e., BRAIN ISCHEMIA). Presyncope refers to the sensation of lightheadedness and loss of strength that precedes a syncopal event or accompanies an incomplete syncope. (From Adams et al., Principles of Neurology, 6th ed, pp367-9)Prosthesis Implantation: Surgical insertion of a prosthesis.Cardiopulmonary Resuscitation: The artificial substitution of heart and lung action as indicated for HEART ARREST resulting from electric shock, DROWNING, respiratory arrest, or other causes. 
The two major components of cardiopulmonary resuscitation are artificial ventilation (RESPIRATION, ARTIFICIAL) and closed-chest CARDIAC MASSAGE.Treatment Outcome: Evaluation undertaken to assess the results or consequences of management and procedures used in combating disease in order to determine the efficacy, effectiveness, safety, and practicability of these interventions in individual cases or series.Heart Failure: A heterogeneous condition in which the heart is unable to pump out sufficient blood to meet the metabolic need of the body. Heart failure can be caused by structural defects, functional abnormalities (VENTRICULAR DYSFUNCTION), or a sudden overload beyond its capacity. Chronic heart failure is more common than acute heart failure which results from sudden insult to cardiac function, such as MYOCARDIAL INFARCTION.Cardiac Resynchronization Therapy Devices: Types of artificial pacemakers with implantable leads to be placed at multiple intracardial sites. They are used to treat various cardiac conduction disturbances which interfere with the timing of contraction of the ventricles. They may or may not include defibrillating electrodes (IMPLANTABLE DEFIBRILLATORS) as well.Infusion Pumps, Implantable: Implanted fluid propulsion systems with self-contained power source for providing long-term controlled-rate delivery of drugs such as chemotherapeutic agents or analgesics. Delivery rate may be externally controlled or osmotically or peristatically controlled with the aid of transcutaneous monitoring.Follow-Up Studies: Studies in which individuals or populations are followed to assess the outcome of exposures, procedures, or effects of a characteristic, e.g., occurrence of disease.Tachycardia: Abnormally rapid heartbeat, usually with a HEART RATE above 100 beats per minute for adults. Tachycardia accompanied by disturbance in the cardiac depolarization (cardiac arrhythmia) is called tachyarrhythmia.Telemetry: Transmission of the readings of instruments to a remote location by means of wires, radio waves, or other means. (McGraw-Hill Dictionary of Scientific and Technical Terms, 4th ed)Electrophysiologic Techniques, Cardiac: Methods to induce and measure electrical activities at specific sites in the heart to diagnose and treat problems with the heart's electrical system.Electrocardiography, Ambulatory: Method in which prolonged electrocardiographic recordings are made on a portable tape recorder (Holter-type system) or solid-state device ("real-time" system), while the patient undergoes normal daily activities. It is useful in the diagnosis and management of intermittent cardiac arrhythmias and transient myocardial ischemia.Out-of-Hospital Cardiac Arrest: Occurrence of heart arrest in an individual when there is no immediate access to medical personnel or equipment.Ventricular Dysfunction, Left: A condition in which the LEFT VENTRICLE of the heart was functionally impaired. This condition usually leads to HEART FAILURE; MYOCARDIAL INFARCTION; and other cardiovascular complications. Diagnosis is made by measuring the diminished ejection fraction and a depressed level of motility of the left ventricular wall.Stroke Volume: The amount of BLOOD pumped out of the HEART per beat, not to be confused with cardiac output (volume/time). 
It is calculated as the difference between the end-diastolic volume and the end-systolic volume.Brugada Syndrome: An autosomal dominant defect of cardiac conduction that is characterized by an abnormal ST-segment in leads V1-V3 on the ELECTROCARDIOGRAM resembling a right BUNDLE-BRANCH BLOCK; high risk of VENTRICULAR TACHYCARDIA; or VENTRICULAR FIBRILLATION; SYNCOPAL EPISODE; and possible sudden death. This syndrome is linked to mutations of gene encoding the cardiac SODIUM CHANNEL alpha subunit.Cardiomyopathies: A group of diseases in which the dominant feature is the involvement of the CARDIAC MUSCLE itself. Cardiomyopathies are classified according to their predominant pathophysiological features (DILATED CARDIOMYOPATHY; HYPERTROPHIC CARDIOMYOPATHY; RESTRICTIVE CARDIOMYOPATHY) or their etiological/pathological factors (CARDIOMYOPATHY, ALCOHOLIC; ENDOCARDIAL FIBROELASTOSIS).Electromagnetic Phenomena: Characteristics of ELECTRICITY and magnetism such as charged particles and the properties and behavior of charged particles, and other phenomena related to or associated with electromagnetism.Prospective Studies: Observation of a population for a sufficient number of persons over a sufficient number of years to generate incidence or mortality rates subsequent to the selection of the study group.Tachycardia, Supraventricular: A generic expression for any tachycardia that originates above the BUNDLE OF HIS.Primary Prevention: Specific practices for the prevention of disease or mental disorders in susceptible individuals or populations. These include HEALTH PROMOTION, including mental health; protective procedures, such as COMMUNICABLE DISEASE CONTROL; and monitoring and regulation of ENVIRONMENTAL POLLUTANTS. Primary prevention is to be distinguished from SECONDARY PREVENTION and TERTIARY PREVENTION.Risk Assessment: The qualitative or quantitative estimation of the likelihood of adverse effects that may result from exposure to specified health hazards or from the absence of beneficial influences. (Last, Dictionary of Epidemiology, 1988)Remote Sensing Technology: Observation and acquisition of physical data from a distance by viewing and making measurements from a distance or receiving transmitted data from observations made at distant location.Retrospective Studies: Studies used to test etiologic hypotheses in which inferences about an exposure to putative causal factors are derived from data relating to characteristics of persons under study or to events or experiences in their past. The essential feature is that some of the persons under study have the disease or outcome of interest and their characteristics are compared with those of unaffected persons.Death, Sudden: The abrupt cessation of all vital bodily functions, manifested by the permanent loss of total cerebral, respiratory, and cardiovascular functions.Cardiac Electrophysiology: The study of the electrical activity and characteristics of the HEART; MYOCARDIUM; and CARDIOMYOCYTES.Atrial Fibrillation: Abnormal cardiac rhythm that is characterized by rapid, uncoordinated firing of electrical impulses in the upper chambers of the heart (HEART ATRIA). In such case, blood cannot be effectively pumped into the lower chambers of the heart (HEART VENTRICLES). It is caused by abnormal impulse generation.Cardiography, Impedance: A type of impedance plethysmography in which bioelectrical impedance is measured between electrodes positioned around the neck and around the lower thorax. 
It is used principally to calculate stroke volume and cardiac volume, but it is also related to myocardial contractility, thoracic fluid content, and circulation to the extremities.Time Factors: Elements of limited time intervals, contributing to particular results or situations.Heart-Assist Devices: Small pumps, often implantable, designed for temporarily assisting the heart, usually the LEFT VENTRICLE, to pump blood. They consist of a pumping chamber and a power source, which may be partially or totally external to the body and activated by electromagnetic motors.Survival Rate: The proportion of survivors in a group, e.g., of patients, studied and followed over a period, or the proportion of persons in a specified group alive at the beginning of a time interval who survive to the end of the interval. It is often studied using life table methods.Cardiomyopathy, Dilated: A form of CARDIAC MUSCLE disease that is characterized by ventricular dilation, VENTRICULAR DYSFUNCTION, and HEART FAILURE. Risk factors include SMOKING; ALCOHOL DRINKING; HYPERTENSION; INFECTION; PREGNANCY; and mutations in the LMNA gene encoding LAMIN TYPE A, a NUCLEAR LAMINA protein.Monitoring, Ambulatory: The use of electronic equipment to observe or record physiologic processes while the patient undergoes normal daily activities.Volunteers: Persons who donate their services.Risk Factors: An aspect of personal behavior or lifestyle, environmental exposure, or inborn or inherited characteristic, which, on the basis of epidemiologic evidence, is known to be associated with a health-related condition considered important to prevent.Survival Analysis: A class of statistical procedures for estimating the survival function (function of time, starting with a population 100% well at a given time and providing the percentage of the population still well at later times). The survival analysis is then used for making inferences about the effects of treatments, prognostic factors, exposures, and other covariates on the function.Therapy, Computer-Assisted: Computer systems utilized as adjuncts in the treatment of disease.Arrhythmogenic Right Ventricular Dysplasia: A congenital cardiomyopathy that is characterized by infiltration of adipose and fibrous tissue into the RIGHT VENTRICLE wall and loss of myocardial cells. Primary injuries usually are at the free wall of right ventricular and right atria resulting in ventricular and supraventricular arrhythmias.Magnets: Objects that produce a magnetic field.Electric Power Supplies: Devices that control the supply of electric current for running electrical equipment.Emergency Medical Services: Services specifically designed, staffed, and equipped for the emergency care of patients.Kaplan-Meier Estimate: A nonparametric method of compiling LIFE TABLES or survival tables. It combines calculated probabilities of survival and estimates to allow for observations occurring beyond a measurement threshold, which are assumed to occur randomly. Time intervals are defined as ending each time an event occurs and are therefore unequal. (From Last, A Dictionary of Epidemiology, 1995)Cost-Benefit Analysis: A method of comparing the cost of a program with its expected benefits in dollars (or other currency). The benefit-to-cost ratio is a measure of total return expected per unit of money spent. This analysis generally excludes consideration of factors that are not measured ultimately in economic terms. 
Cost effectiveness compares alternative ways to achieve a specific set of results.Sotalol: An adrenergic beta-antagonist that is used in the treatment of life-threatening arrhythmias.Neural Prostheses: Medical devices which substitute for a nervous system function by electrically stimulating the nerves directly and monitoring the response to the electrical stimulation.Prosthesis-Related Infections: Infections resulting from the implantation of prosthetic devices. The infections may be acquired from intraoperative contamination (early) or hematogenously acquired from other sites (late).Burns, Electric: Burns produced by contact with electric current or from a sudden discharge of electricity.Radio Frequency Identification Device: Machine readable patient or equipment identification device using radio frequency from 125 kHz to 5.8 Ghz.Amplifiers, Electronic: Electronic devices that increase the magnitude of a signal's power level or current.Tachycardia, Sinus: Simple rapid heartbeats caused by rapid discharge of impulses from the SINOATRIAL NODE, usually between 100 and 180 beats/min in adults. It is characterized by a gradual onset and termination. Sinus tachycardia is common in infants, young children, and adults during strenuous physical activities.Recurrence: The return of a sign, symptom, or disease after a remission.Equipment and Supplies: Expendable and nonexpendable equipment, supplies, apparatus, and instruments that are used in diagnostic, surgical, therapeutic, scientific, and experimental procedures.Ventricular Dysfunction: A condition in which HEART VENTRICLES exhibit impaired function.Bundle-Branch Block: A form of heart block in which the electrical stimulation of HEART VENTRICLES is interrupted at either one of the branches of BUNDLE OF HIS thus preventing the simultaneous depolarization of the two ventricles.Unnecessary Procedures: Diagnostic, therapeutic, and investigative procedures prescribed and performed by health professionals, the results of which do not justify the benefits or hazards and costs to the patient.Resuscitation: The restoration to life or consciousness of one apparently dead. (Dorland, 27th ed)Adrenergic beta-Antagonists: Drugs that bind to but do not activate beta-adrenergic receptors thereby blocking the actions of beta-adrenergic agonists. Adrenergic beta-antagonists are used for treatment of hypertension, cardiac arrhythmias, angina pectoris, glaucoma, migraine headaches, and anxiety.Registries: The systems and processes involved in the establishment, support, management, and operation of registers, e.g., disease registers.Cardiomyopathy, Hypertrophic: A form of CARDIAC MUSCLE disease, characterized by left and/or right ventricular hypertrophy (HYPERTROPHY, LEFT VENTRICULAR; HYPERTROPHY, RIGHT VENTRICULAR), frequent asymmetrical involvement of the HEART SEPTUM, and normal or reduced left ventricular volume. Risk factors include HYPERTENSION; AORTIC STENOSIS; and gene MUTATION; (FAMILIAL HYPERTROPHIC CARDIOMYOPATHY).Electric Stimulation Therapy: Application of electric current in treatment without the generation of perceptible heat. 
It includes electric stimulation of nerves or muscles, passage of current into the body, or use of interrupted current of low intensity to raise the threshold of the skin to pain.Monitoring, Physiologic: The continuous measurement of physiological processes, blood pressure, heart rate, renal output, reflexes, respiration, etc., in a patient or experimental animal; includes pharmacologic monitoring, the measurement of administered drugs or their metabolites in the blood, tissues, or urine.Secondary Prevention: The prevention of recurrences or exacerbations of a disease or complications of its therapy.Heart Diseases: Pathological conditions involving the HEART including its structural and functional abnormalities.Telemedicine: Delivery of health services via remote telecommunications. This includes interactive consultative and diagnostic services.First Aid: Emergency care or treatment given to a person who suddenly becomes ill or injured before full medical services become available.Subclavian Vein: The continuation of the axillary vein which follows the subclavian artery and then joins the internal jugular vein to form the brachiocephalic vein.Heart Conduction System: An impulse-conducting system composed of modified cardiac muscle, having the power of spontaneous rhythmicity and conduction more highly developed than the rest of the heart.Feasibility Studies: Studies to determine the advantages or disadvantages, practicability, or capability of accomplishing a projected plan, study, or project.Incidence: The number of new cases of a given disease during a given period in a specified population. It also is used for the rate at which new events occur in a defined population. It is differentiated from PREVALENCE, which refers to all cases, new or old, in the population at a given time.Differential Threshold: The smallest difference which can be discriminated between two stimuli or one which is barely above the threshold.Emergency Treatment: First aid or other immediate intervention for accidents or medical conditions requiring immediate care and treatment before definitive medical and surgical management can be procured.Cicatrix: The fibrous tissue that replaces normal tissue during the process of WOUND HEALING.Proportional Hazards Models: Statistical models used in survival analysis that assert that the effect of the study factors on the hazard rate in the study population is multiplicative and does not change over time.Aircraft: A weight-carrying structure for navigation of the air that is supported either by its own buoyancy or by the dynamic action of the air against its surfaces. (Webster, 1973)Catheter Ablation: Removal of tissue with electrical current delivered via electrodes positioned at the distal end of a catheter. Energy sources are commonly direct current (DC-shock) or alternating current at radiofrequencies (usually 750 kHz). The technique is used most often to ablate the AV junction and/or accessory pathways in order to interrupt AV conduction and produce AV block in the treatment of various tachyarrhythmias.Myocardial Ischemia: A disorder of cardiac function caused by insufficient blood flow to the muscle tissue of the heart. The decreased blood flow may be due to narrowing of the coronary arteries (CORONARY ARTERY DISEASE), to obstruction by a thrombus (CORONARY THROMBOSIS), or less commonly, to diffuse narrowing of arterioles and other small vessels within the heart. 
Severe interruption of the blood supply to the myocardial tissue may result in necrosis of cardiac muscle (MYOCARDIAL INFARCTION).Prosthesis Failure: Malfunction of implantation shunts, valves, etc., and prosthesis loosening, migration, and breaking.Heart Ventricles: The lower right and left chambers of the heart. The right ventricle pumps venous BLOOD into the LUNGS and the left ventricle pumps oxygenated blood into the systemic arterial circulation.Prosthesis Design: The plan and delineation of prostheses in general or a specific prosthesis.Catheters, Indwelling: Catheters designed to be left within an organ or passage for an extended period of time.Axillary Vein: The venous trunk of the upper limb; a continuation of the basilar and brachial veins running from the lower border of the teres major muscle to the outer border of the first rib where it becomes the subclavian vein.Myocardial Infarction: NECROSIS of the MYOCARDIUM caused by an obstruction of the blood supply to the heart (CORONARY CIRCULATION).Patient Selection: Criteria and standards used for the determination of the appropriateness of the inclusion of patients with specific conditions in proposed treatment plans and the criteria used for the inclusion of subjects in various clinical trials and other research protocols.Commotio Cordis: A sudden CARDIAC ARRHYTHMIA (e.g., VENTRICULAR FIBRILLATION) caused by a blunt, non-penetrating impact to the precordial region of chest wall. Commotio cordis often results in sudden death without prompt cardiopulmonary defibrillation.Emergency Medical Tags: A bracelet or necklace worn by an individual that alerts emergency personnel of medical information for that individual which could affect their condition or treatment.Algorithms: A procedure consisting of a sequence of algebraic formulas and/or logical steps to calculate or determine a given task.Remote Consultation: Consultation via remote telecommunications, generally for the purpose of diagnosis or treatment of a patient at a site remote from the patient or primary physician.Electromagnetic Fields: Fields representing the joint interplay of electric and magnetic forces.Predictive Value of Tests: In screening and diagnostic tests, the probability that a person with a positive test is a true positive (i.e., has the disease), is referred to as the predictive value of a positive test; whereas, the predictive value of a negative test is the probability that the person with a negative test does not have the disease. 
Predictive value is related to the sensitivity and specificity of the test.Biomedical Enhancement: The use of technology-based interventions to improve functional capacities rather than to treat disease.Multicenter Studies as Topic: Works about controlled studies which are planned and carried out by several cooperating institutions to assess certain variables and outcomes in specific patient populations, for example, a multicenter study of congenital anomalies in children.Diagnosis, Computer-Assisted: Application of computer programs designed to assist the physician in solving a diagnostic problem.Randomized Controlled Trials as Topic: Works about clinical trials that involve at least one test treatment and one control treatment, concurrent enrollment and follow-up of the test- and control-treated groups, and in which the treatments to be administered are selected by a random process, such as the use of a random-numbers table.Foreign-Body Migration: Migration of a foreign body from its original location to some other location in the body.Electrodes: Electric conductors through which electric currents enter or leave a medium, whether it be an electrolytic solution, solid, molten mass, gas, or vacuum.Endpoint Determination: Establishment of the level of a quantifiable effect indicative of a biologic process. The evaluation is frequently to detect the degree of toxic or therapeutic effect.Wireless Technology: Techniques using energy such as radio frequency, infrared light, laser light, visible light, or acoustic energy to transfer information without the use of wires, over both short and long distances.Videodisc Recording: The storing of visual and usually sound signals on discs for later reproduction on a television screen or monitor.Ventricular Function, Left: The hemodynamic and electrophysiological action of the left HEART VENTRICLE. Its measurement is an important aspect of the clinical evaluation of patients with heart disease to determine the effects of the disease on cardiac performance.Pacific States: The geographic designation for states bordering on or located in the Pacific Ocean. The states so designated are Alaska, California, Hawaii, Oregon, and Washington. (U.S. Geologic Survey telephone communication)Emergency Medical Technicians: Paramedical personnel trained to provide basic emergency care and life support under the supervision of physicians and/or nurses. These services may be carried out at the site of the emergency, in the ambulance, or in a health care institution.United StatesHeart Rate: The number of times the HEART VENTRICLES contract per unit of time, usually per minute.Coronary Sinus: A short vein that collects about two thirds of the venous blood from the MYOCARDIUM and drains into the RIGHT ATRIUM. Coronary sinus, normally located between the LEFT ATRIUM and LEFT VENTRICLE on the posterior surface of the heart, can serve as an anatomical reference for cardiac procedures.Health ResortsCardiology: The study of the heart, its physiology, and its functions.Heart Massage: Rhythmic compression of the heart by pressure applied manually over the sternum (closed heart massage) or directly to the heart through an opening in the chest wall (open heart massage). It is done to reinstate and maintain circulation. (Dorland, 28th ed)Chi-Square Distribution: A distribution in which a variable is distributed like the sum of the squares of any given independent random variable, each of which has a normal distribution with mean of zero and variance of one. 
The chi-square test is a statistical test based on comparison of a test statistic to a chi-square distribution. The oldest of these tests are used to detect whether two or more population distributions differ from one another.Ambulances: A vehicle equipped for transporting patients in need of emergency care.Miniaturization: The design or construction of objects greatly reduced in scale.Bradycardia: Cardiac arrhythmias that are characterized by excessively slow HEART RATE, usually below 50 beats per minute in human adults. They can be classified broadly into SINOATRIAL NODE dysfunction and ATRIOVENTRICULAR BLOCK.Diagnostic Techniques, Cardiovascular: Methods and procedures for the diagnosis of diseases or dysfunction of the cardiovascular system or its organs or demonstration of their physiological processes.Micro-Electrical-Mechanical Systems: A class of devices combining electrical and mechanical components that have at least one of the dimensions in the micrometer range (between 1 micron and 1 millimeter). They include sensors, actuators, microducts, and micropumps.Prognosis: A prediction of the probable outcome of a disease based on a individual's condition and the usual course of the disease as seen in similar situations.Electricity: The physical effects involving the presence of electric charges at rest and in motion.Heart Transplantation: The transference of a heart from one human or animal to another.Sick Sinus Syndrome: A condition caused by dysfunctions related to the SINOATRIAL NODE including impulse generation (CARDIAC SINUS ARREST) and impulse conduction (SINOATRIAL EXIT BLOCK). It is characterized by persistent BRADYCARDIA, chronic ATRIAL FIBRILLATION, and failure to resume sinus rhythm following CARDIOVERSION. This syndrome can be congenital or acquired, particularly after surgical correction for heart defects.Endocardium: The innermost layer of the heart, comprised of endothelial cells.Combined Modality Therapy: The treatment of a disease or condition by several different means simultaneously or sequentially. Chemoimmunotherapy, RADIOIMMUNOTHERAPY, chemoradiotherapy, cryochemotherapy, and SALVAGE THERAPY are seen most frequently, but their combinations with each other and surgery are also used.EuropeSignal Processing, Computer-Assisted: Computer-assisted processing of electric, ultrasonic, or electronic signals to interpret function and activity.Pectoralis Muscles: The pectoralis major and pectoralis minor muscles that make up the upper and fore part of the chest in front of the AXILLA.Practice Guidelines as Topic: Directions or principles presenting current or future rules of policy for assisting health care practitioners in patient care decisions regarding diagnosis, therapy, or related clinical circumstances. The guidelines may be developed by government agencies at any level, institutions, professional societies, governing boards, or by the convening of expert panels. The guidelines form a basis for the evaluation of all aspects of health care and delivery.Cause of Death: Factors which produce cessation of all vital bodily functions. They can be analyzed from an epidemiologic viewpoint.Laser Therapy: The use of photothermal effects of LASERS to coagulate, incise, vaporize, resect, dissect, or resurface tissue.Sensitivity and Specificity: Binary classification measures to assess test results. Sensitivity or recall rate is the proportion of true positives. Specificity is the probability of correctly determining the absence of a condition. 
(From Last, Dictionary of Epidemiology, 2d ed)Quality of Life: A generic concept reflecting concern with the modification and enhancement of life attributes, e.g., physical, political, moral and social environment; the overall condition of a human life.Cardiovascular Agents: Agents that affect the rate or intensity of cardiac contraction, blood vessel diameter, or blood volume.Echocardiography: Ultrasonic recording of the size, motion, and composition of the heart and surrounding tissues. The standard approach is transthoracic.Atrioventricular Block: Impaired impulse conduction from HEART ATRIA to HEART VENTRICLES. AV block can mean delayed or completely blocked impulse conduction.GermanyAnxiety: Feeling or emotion of dread, apprehension, and impending disaster but not disabling as with ANXIETY DISORDERS.Manifest Anxiety Scale: True-false questionnaire made up of items believed to indicate anxiety, in which the subject answers verbally the statement that describes him.Multivariate Analysis: A set of techniques used when variation in several variables has to be studied simultaneously. In statistics, multivariate analysis is interpreted as any analytic method that allows simultaneous study of two or more dependent variables.Implants, Experimental: Artificial substitutes for body parts and materials inserted into organisms during experimental studies.Long QT Syndrome: A condition that is characterized by episodes of fainting (SYNCOPE) and varying degree of ventricular arrhythmia as indicated by the prolonged QT interval. The inherited forms are caused by mutation of genes encoding cardiac ion channel proteins. The two major forms are ROMANO-WARD SYNDROME and JERVELL-LANGE NIELSEN SYNDROME.Evaluation Studies as Topic: Studies determining the effectiveness or value of processes, personnel, and equipment, or the material on conducting such studies. For drugs and devices, CLINICAL TRIALS AS TOPIC; DRUG EVALUATION; and DRUG EVALUATION, PRECLINICAL are available.Automobile Driving: The effect of environmental or physiological factors on the driver and driving ability. Included are driving fatigue, and the effect of drugs, disease, and physical disabilities on driving.Accelerated Idioventricular Rhythm: A type of automatic, not reentrant, ectopic ventricular rhythm with episodes lasting from a few seconds to a minute which usually occurs in patients with acute myocardial infarction or with DIGITALIS toxicity. The ventricular rate is faster than normal but slower than tachycardia, with an upper limit of 100 -120 beats per minute. Suppressive therapy is rarely necessary.Life Tables: Summarizing techniques used to describe the pattern of mortality and survival in populations. These methods can be applied to the study not only of death, but also of any defined endpoint such as the onset of disease or the occurrence of disease complications.Heart Injuries: General or unspecified injuries to the heart.Cohort Studies: Studies in which subsets of a defined population are identified. These groups may or may not be exposed to factors hypothesized to influence the probability of the occurrence of a particular disease or other outcome. Cohorts are defined populations which, as a whole, are followed in an attempt to determine distinguishing subgroup characteristics.Risk: The probability that an event will occur. 
It encompasses a variety of measures of the probability of a generally unfavorable outcome.American Heart Association: A voluntary organization concerned with the prevention and treatment of heart and vascular diseases.Imidazolidines: Compounds based on reduced IMIDAZOLINES which contain no double bonds in the ring.Body Surface Potential Mapping: Recording of regional electrophysiological information by analysis of surface potentials to give a complete picture of the effects of the currents from the heart on the body surface. It has been applied to the diagnosis of old inferior myocardial infarction, localization of the bypass pathway in Wolff-Parkinson-White syndrome, recognition of ventricular hypertrophy, estimation of the size of a myocardial infarct, and the effects of different interventions designed to reduce infarct size. The limiting factor at present is the complexity of the recording and analysis, which requires 100 or more electrodes, sophisticated instrumentation, and dedicated personnel. (Braunwald, Heart Disease, 4th ed)Phlebography: Radiographic visualization or recording of a vein after the injection of contrast medium.Reproducibility of Results: The statistical reproducibility of measurements (often in a clinical context), including the testing of instrumentation or techniques to obtain reproducible results. The concept includes reproducibility of physiological measurements, which may be used to develop rules to assess probability or prognosis, or response to a stimulus; reproducibility of occurrence of a condition; and reproducibility of experimental results.Biocompatible Materials: Synthetic or natural materials, other than DRUGS, that are used to replace or repair any body TISSUES or bodily function.Heart Block: Impaired conduction of cardiac impulse that can occur anywhere along the conduction pathway, such as between the SINOATRIAL NODE and the right atrium (SA block) or between atria and ventricles (AV block). Heart blocks can be classified by the duration, frequency, or completeness of conduction block. Reversibility depends on the degree of structural or functional defects.Catheterization, Central Venous: Placement of an intravenous CATHETER in the subclavian, jugular, or other central vein.Hospitalization: The confinement of a patient in a hospital.Product Surveillance, Postmarketing: Surveillance of drugs, devices, appliances, etc., for efficacy or adverse effects, after they have been released for general sale.Severity of Illness Index: Levels within a diagnostic group which are established by various measurement criteria applied to the seriousness of a patient's disorder.
Vitamin D Deficiency Symptoms
For many people, the symptoms of vitamin D deficiency are subtle and hard to recognize. But even without obvious symptoms, too little vitamin D can pose health risks. Low blood levels of vitamin D, whether over a short or a long period, have been associated with the following symptoms:
- Osteoporosis - a bone disease in which bone mineral density (BMD) is reduced, bone micro-architecture deteriorates, and the amount and variety of proteins in bone tissue are altered. Osteoporosis leads to an increased risk of fractures.
- Osteomalacia - causes weakness of the muscular system and brittle bones; it is often seen in adults with vitamin D deficiency.
- Weight Gain and Fat Loss - Vitamin D helps the hormone leptin regulate body weight. Insufficient vitamin D can disrupt this cooperation and lead to undesirable weight gain. Adequate vitamin D has also been found to promote fat loss in calorie-restricted diets.
- Rickets - skeletal deformation mostly seen in children with vitamin D deficiency.
- Multiple Sclerosis - Multiple sclerosis is an inflammatory disease of the central nervous system - brain and spinal cord - characterized by the loss of the myelin sheath that insulates cells. Although the cause of multiple sclerosis is not yet conclusively determined, both environmental and genetic factors and their interactions are claimed to be important, with vitamin D deficiency being linked to higher rates of multiple sclerosis in both children and adults.
- Hair Loss - vitamin D deficiency can interrupt the normal hair-follicle cycle (the anagen, catagen, and telogen phases) and thus lead to hair loss.
- Depression - increasing vitamin D intake can help in treating depression in both men and women, even when vitamin D blood levels are already reasonably good. Some studies also show that vitamin D deficiency itself can lead to depression.
- Fatigue - one of the most common symptoms of vitamin D deficiency is a constant feeling of tiredness or fatigue. In many studies of fatigue, almost half of the people tested had low to very low vitamin D blood levels.
- Cancer - statistically, some types of cancer are more common in people with low vitamin D levels. This is still a gray area, and much research remains to be done to clarify vitamin D's role in fighting cancer. Nonetheless, adequate vitamin D levels may help in fighting various types of cancer and contribute to overall wellbeing.
- Diabetes - various studies have shown that vitamin D deficiency impairs insulin synthesis and secretion in humans and in animal models of diabetes, suggesting a role for vitamin D deficiency in the development of type 2 diabetes.
- Psoriasis - vitamin D deficiency can affect people with psoriasis. UVB light treatment and skin creams which include vitamin D have been shown to significantly help such patients.
Vitamin D Deficiency and Osteoporosis
Osteoporosis is a bone disease in which bone mineral density (BMD) is reduced, bone micro-architecture deteriorates, and the amount and variety of proteins in bone tissue are altered. As such, osteoporosis leads to an increased risk of bone fractures.
Osteoporosis does not develop overnight, nor can it be treated and cured overnight. In fact, if it is diagnosed too late, it can have lifelong consequences.
Prevention of Osteoporosis
Just as with many other diseases, the connection between vitamin D deficiency and osteoporosis is somewhat unclear, although many studies show statistically that adequate vitamin D levels combined with adequate calcium intake can postpone osteoporosis, sometimes for a very long time.
Lifestyle is a very important factor in preventing both vitamin D deficiency and osteoporosis. Adequate intake of vitamin D and calcium promotes proper mineralization of bones and bone health in general. Regular exercise and physical activity can not only prevent bone demineralization but can actually increase bone density.
Treating Vitamin D Deficiency and Osteoporosis
Treating osteoporosis is a complex task. It should be done under the supervision of your doctor, sometimes for a long period of time, but changes in lifestyle can help a lot.
If you smoke or drink, give up tobacco and alcohol - both can interfere with vitamin D and calcium processes in the body and harm the immune system.
Increase your calcium intake by consuming foods rich in calcium, such as dairy products (low-fat and/or cottage cheese, and milk if you can tolerate lactose) and similar food sources. Also increase your vitamin D intake by eating more vitamin D-rich foods, by spending more time outside during the day, and perhaps by taking vitamin D and/or calcium supplements.
Spending more time outside and practicing some kind of sport will be very beneficial: you will increase your natural vitamin D production through greater sun exposure, and you will promote an increase in bone density, since 'stressed' bones tend to become denser as a natural response to loading. (That is why astronauts have problems with bone demineralization during and after time spent in zero gravity.)
If you are taking any medications that can interfere with vitamin D and calcium absorption, your doctor will probably increase your daily vitamin D and calcium supplement doses. In cases of severe vitamin D deficiency, oral supplements might not be enough (if you are afraid of needles, good luck...).
Patients with osteoporosis often have joint and other problems as well - increasing intake of omega-3 (a group of essential fatty acids) can be very beneficial. Conveniently, omega-3 is found in larger quantities in fatty fish, which is also rich in vitamin D (feel free to check the Vitamin D Rich Foods article).
Treating osteoporosis is a complex process and is unique to each individual.
Osteoporosis or Osteomalacia
Many people consider osteoporosis and osteomalacia to be the same disease - they are very similar, but they are NOT the same. Their symptoms can be avoided, or at least significantly minimized, and the diseases postponed by proper vitamin D and calcium intake and by changing lifestyle. Of course, other issues play an important role in both osteoporosis and osteomalacia, but that is beyond the scope of this article ...
Vitamin D Deficiency and Osteomalacia
Although osteomalacia is very similar to osteoporosis, they are two different medical conditions.
Vitamin D deficiency can be one of the causes of both diseases, but the differences between osteoporosis and osteomalacia are a little more complex.
Osteoporosis vs. Osteomalacia
Symptoms: Their symptoms are similar but not identical - in osteoporosis, bone mass is reduced while mineralization is normal; in osteomalacia, bone mass is variable while mineralization is decreased.
Onset Age: Osteoporosis generally affects elderly people, particularly postmenopausal women, while osteomalacia affects persons of any age.
Etiology: Osteoporosis: idiopathic, endocrine abnormalities, inactivity, disuse, alcoholism, calcium deficiency, etc. Osteomalacia: vitamin D deficiency, abnormality of vitamin D pathways, hypophosphatasia syndromes, long-term anticonvulsant therapy, renal tubular acidosis.
Pain Symptoms: Osteoporosis pain symptoms - referable to a site. Osteomalacia pain symptoms - generalized bone pain.
Signs: Signs of osteoporosis: tenderness at fracture site. Signs of osteomalacia: tenderness at fracture site and generalized tenderness.
Radiographic features: Radiographic features of osteoporosis: mainly axial skeleton fracture. Radiographic features of osteomalacia: often symmetric pseudofractures or completed fracture in appendicular skeleton.
Lab tests: Lab tests for osteoporosis: serum calcium and serum phosphate (normal), alkaline phosphatase (normal, even within 5 days of a new fracture), urinary calcium (high or normal), bone biopsy (tetracycline labels normal). Lab tests for osteomalacia: serum calcium (low or normal - high in hypophosphatasia), serum phosphate (low or normal), alkaline phosphatase (elevated unless hypophosphatasia), urinary calcium (normal or low - high in hypophosphatasia), bone biopsy (tetracycline labels abnormal).
As one can see, osteoporosis and osteomalacia can be difficult to tell apart, even for trained personnel. Treating osteomalacia and osteoporosis is a very complex process and should be done individually, according to each patient's needs.
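For readers who want the contrast above at a glance, here is a minimal sketch that encodes the same distinctions in a small lookup table. It is only a summary of the points listed in this article - the field names and wording are illustrative, and it is not a diagnostic tool.

```python
# Illustrative summary of the osteoporosis vs. osteomalacia contrasts described
# above. Field names and groupings are our own; this is not a diagnostic tool.
COMPARISON = {
    "bone findings": {
        "osteoporosis": "reduced bone mass, normal mineralization",
        "osteomalacia": "variable bone mass, decreased mineralization",
    },
    "typical onset": {
        "osteoporosis": "elderly people, especially postmenopausal women",
        "osteomalacia": "any age",
    },
    "pain": {
        "osteoporosis": "referable to a fracture site",
        "osteomalacia": "generalized bone pain",
    },
    "serum calcium / phosphate": {
        "osteoporosis": "normal",
        "osteomalacia": "low or normal",
    },
    "alkaline phosphatase": {
        "osteoporosis": "normal",
        "osteomalacia": "usually elevated",
    },
}

def compare(feature: str) -> None:
    """Print how a single feature differs between the two conditions."""
    row = COMPARISON[feature]
    print(f"{feature}: osteoporosis -> {row['osteoporosis']}; "
          f"osteomalacia -> {row['osteomalacia']}")

if __name__ == "__main__":
    for feature in COMPARISON:
        compare(feature)
```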
Prevention of Osteomalacia
Prevention of osteomalacia and osteoporosis is similar, although many more studies must be conducted to determine accurately what should be done for prevention, and when.
Generally, prevention of osteomalacia starts even before pregnancy, continues during pregnancy, and lasts throughout life. Vitamin D deficiency influences osteomalacia through the interaction of vitamin D and calcium, so increasing intake of both can be beneficial in preventing the disease. Numerous studies suggest such conclusions, but scientists are not yet completely sure about many of the details.
Vitamin D Deficiency and Weight Gain
Although vitamin D is best known for its very important role in bone health, numerous recent studies have found that vitamin D also helps regulate many vital processes in the body, such as immunity, energy production, and cell growth. Vitamin D deficiency is associated with weight gain and obesity, but it is not clear whether inadequate vitamin D causes weight gain or the other way around.
It has also been discovered that fat cells have receptors that bind vitamin D, and that vitamin D can change fat-cell metabolism and growth. This provides a plausible link between vitamin D deficiency and weight gain.
Again, the relationship between vitamin D, vitamin D deficiency, weight gain, and weight loss is purely statistical. For example:
- people with adequate vitamin D levels who start a low-calorie diet to lose weight (body fat) have a better success rate than people with low vitamin D levels who start the same kind of diet.
- people with normal vitamin D levels who successfully lost weight (fat) while dieting lost, on average, more fat at a faster rate than people with lower vitamin D levels who also successfully lost weight.
- people with a higher body fat percentage usually have lower vitamin D blood levels than people with a lower body fat percentage.
Since vitamin D is the 'sunshine vitamin', stimulating its natural production by spending more time outside and doing some sort of physical activity (walking, running, cycling, etc.) can also promote fat loss through higher energy expenditure. This is an indirect connection between vitamin D and fat loss, but when you want to lose fat, every little thing counts :o)
Vitamin D Deficiency and Rickets
Rickets is a medical condition in which bones soften in children due to impaired metabolism or deficiency of vitamin D, phosphorus, magnesium, and/or calcium, potentially leading to deformity and fractures.
In many developing countries, rickets is among the most frequent childhood diseases, the predominant cause being vitamin D deficiency. Lack of adequate calcium, phosphorus, and magnesium in the diet may also lead to rickets (severe diarrhea and vomiting may be the cause of these deficiencies).
Although rickets can occur in adults, the majority of cases occur in children suffering from severe malnutrition during the early stages of childhood.
Note: Increasing concern that excessive sun exposure causes cancer has led people to avoid sunlight by keeping out of the sun, covering up, and using sunblock creams. As a result, the incidence of rickets has increased significantly due to inadequate sun exposure and lowered natural vitamin D production.
Signs and symptoms of rickets include (but are generally not limited to): bone pain or tenderness, dental problems, muscle weakness, increased tendency for fractures, skeletal deformity, bowed legs, knock-knees, growth disturbance etc.
The primary cause of rickets is vitamin D deficiency - vitamin D is required for proper calcium absorption from the gut and for proper mineralization of the bones. Likewise, if the minerals needed for bone health (primarily calcium, but also phosphorus, magnesium, etc.) are insufficient, rickets can occur.
Rickets may be diagnosed by blood tests (serum calcium may be low, serum phosphorus may be low, serum alkaline phosphatase may be high, and vitamin D levels can be measured), by arterial blood gases, which may reveal metabolic acidosis, and by X-rays of affected bones, which may show loss of calcium or changes in the shape or structure of the bones. Bone biopsy is rarely performed but will confirm rickets.
Prevention and Treatment of Rickets
Prevention starts during the mother's pregnancy and continues through early childhood. The best prevention is adequate sun exposure for the pregnant mother and later for the baby (for adults, 15-20 minutes, 3 times per week at a UV index of around 3; for babies and small children, check with their doctor), adequate intake of vitamin D and calcium through a proper diet, and supplementing vitamin D, calcium, and other minerals as needed.
Vitamin D Deficiency and Multiple Sclerosis
The link between vitamin D deficiency and multiple sclerosis is still somewhat unclear. Many scientific studies are under way, and they are providing valuable information about this issue.
Multiple sclerosis is also known as "encephalomyelitis disseminata" or "disseminated sclerosis". It is an inflammatory disease in which the fatty myelin sheaths around the axons of the brain and spinal cord are damaged, leading to demyelination. It affects the ability of nerve cells in the brain and spinal cord to communicate with each other effectively. Multiple sclerosis has a broad spectrum of signs and symptoms, with onset usually occurring in young adults and it is more common in women.
It is believed that multiple sclerosis occurs as a result of genetic, environmental, infectious, and other factors. It is also believed to be an immune-mediated health problem arising from a complex interaction between individual genetics and as yet undefined environmental and/or infectious factors. Again, much of the research data is only a statistical indicator of the relationship between genetics and the other factors that can lead to multiple sclerosis.
Multiple Sclerosis and Vitamin D
Statistically, multiple sclerosis is more common in people who live farther from the equator, although many exceptions exist. Decreased skin sunlight exposure has been linked with a higher risk of multiple sclerosis. Decreased vitamin D intake and natural production has been the main biological mechanism used to explain the higher risk among those less exposed to the sun.
Also, some scientific studies have shown that rare genetic variation, which causes reduced levels of vitamin D in children, appears to be directly linked to multiple sclerosis.
Again, these are 'just' statistical data, but they do show a relationship between multiple sclerosis and vitamin D deficiency.
Increasing vitamin D levels - through greater natural production from skin exposure to sunlight and through greater intake from vitamin D-rich foods and/or supplements - can help strengthen the immune system and thus fight the symptoms associated with vitamin D deficiency, multiple sclerosis included ...
Vitamin D Deficiency and Hair Loss
Plenty of vitamins contribute to strong, healthy-looking hair - not only vitamin D. Many studies are under way to understand the role of vitamin D deficiency in hair loss and the benefits of higher vitamin D levels in promoting hair growth.
People who have been taking supplements with vitamin D have experienced a gradual decrease in hair loss. Vitamin D helps in the development and growth of healthy hair and the maturation of hair follicles. Certain fatty acids (for example, the fatty acids found in fish oils), together with fat-soluble vitamins (mainly vitamins A, D, and E), help eliminate dandruff, scalp psoriasis, and hair loss by regulating the flow of oils that nourish the collagen and promote the strength and health of hair. Such an environment, rich in essential fatty acids and fat-soluble vitamins, also helps the absorption of calcium (as well as magnesium, zinc, and other minerals), which is likewise important for normal hair growth and healthy skin.
Note that while vitamin D deficiency can lead to hair loss, too much vitamin D can cause hair loss as well. Mega doses of vitamin D can be toxic (for example, 50,000 IU of vitamin D several times per week - such doses must be prescribed by your doctor!) and can cause imbalances with other minerals such as calcium and phosphorus. Hair that falls out in patches is mostly associated with some kind of autoimmune condition (for example, alopecia areata), stress, etc. If vitamin D deficiency is the only cause of hair loss, hair may respond very well once vitamin D levels are back in the normal range (and other macro- and micronutrients are present in sufficient amounts).
Treatment of vitamin D deficiency is relatively easy - spend more time in the sun, eat more vitamin D-rich foods, and if needed, take a vitamin D supplement with a low or medium amount of vitamin D (400-1,000 IU per pill) as prevention of vitamin D deficiency.
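Supplement labels list vitamin D either in international units (IU) or in micrograms; for vitamin D, 1 microgram corresponds to 40 IU, so the 400-1,000 IU pills mentioned above contain 10-25 micrograms. Here is a minimal sketch of that conversion (the function names are illustrative, and this is an illustration, not dosing advice):

```python
# Vitamin D unit conversion: for vitamin D, 1 microgram (mcg) = 40 IU.
# Function names are illustrative; this is not medical or dosing advice.
IU_PER_MCG = 40

def iu_to_mcg(iu: float) -> float:
    """Convert a vitamin D dose from international units to micrograms."""
    return iu / IU_PER_MCG

def mcg_to_iu(mcg: float) -> float:
    """Convert a vitamin D dose from micrograms to international units."""
    return mcg * IU_PER_MCG

if __name__ == "__main__":
    for dose_iu in (400, 1000, 50_000):
        print(f"{dose_iu:>6} IU = {iu_to_mcg(dose_iu):g} mcg")
    # 400 IU = 10 mcg, 1000 IU = 25 mcg, 50,000 IU = 1250 mcg
```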
The good news is that vitamin D-rich foods are also often rich in essential fatty acids (for example, CLA, omega-3, and other fatty acids), other fat-soluble vitamins (mainly A and E), protein, and minerals (calcium, phosphorus, magnesium, zinc, etc.).
Vitamin D Deficiency and Depression
There are many types and symptoms of depression and anxiety, but recent research shows that there is a strong relationship between vitamin D deficiency and depression. Unfortunately, exactly how vitamin D and depression are linked is still unclear. Some scientists point out that the available data cannot determine whether vitamin D deficiency results in depression or depression increases the risk of low vitamin D levels.
Vitamin D generally promotes overall wellbeing and health - two important ingredients in feeling good rather than depressed. But there is more to it than that ...
Seasonal affective disorder is a situational mood disorder caused by decreasing daylight in the winter months. High doses of vitamin D during these months have proven to be a very effective remedy for seasonal affective disorder, leading most people to believe that normal neurotransmitter function depends in part on adequate vitamin D synthesis and vitamin D blood levels in general. However, every tissue in the body has vitamin D receptors, including the heart, muscles, brain etc, which means that vitamin D is needed at every level for the body to function properly.
Vitamin D levels are inversely related to those of another mood-regulating hormone - melatonin. Melatonin helps modulate circadian rhythms, with darkness triggering melatonin secretion by the pineal gland in the brain, bringing people down gently at night for sleep. Insomnia, mood swings, and food cravings are all influenced by melatonin. Sunlight shuts melatonin production off while triggering the release of vitamin D - that is why doctors recommend getting outdoors as a remedy for jet lag.
Most people can sense the positive influence of sunlight in their own lives from the immediate lift they get from a walk outdoors on a beautiful sunny day. There may be many factors at work that brighten their mood in such cases, but sun exposure is almost certainly a critical piece.
Again, there are numerous studies on vitamin D deficiency and depression, and again they provide statistical data. Whatever the data show, when you feel blue, go outside and enjoy a little sunshine - it will help you feel better, relieve some stress, and even promote natural production of vitamin D in your body ...
Vitamin D Deficiency and Fatigue
Many patients who often feel tired and have muscle cramps find out, after having their vitamin D blood levels tested, that they are vitamin D deficient.
Given the multiple roles of vitamin D in the human body, it is no wonder that there is a strong link between vitamin D deficiency and fatigue.
When someone is vitamin D deficient, there are numerous symptoms they may experience, chronic fatigue being one of them.
To know for sure whether chronic fatigue is caused by vitamin D deficiency, individuals need to visit their physician. He or she can perform a blood test to measure the amount of vitamin D present. If an individual is found to be deficient, the doctor can determine the best way to increase vitamin D intake. It is important to always visit a doctor before starting to take supplements or vitamins, as the doctor needs to determine the root cause of the chronic fatigue. Supplements also contain varying amounts of vitamin D per pill, and the doctor may prescribe a multivitamin (containing vitamin D among other micronutrients) rather than a vitamin-D-only supplement.
Fighting vitamin D deficiency and fatigue often means changing daily habits and nutrition - one needs to consume more protein, vitamins, minerals, and healthy fats to promote muscle regeneration and repair after physical activity. Complex carbohydrates should be used as a source of energy and for replenishing glycogen reserves in the muscles, liver, and other organs, not only after exercise but also in everyday life.
Fish and similar foods should be consumed regularly - if not on a daily basis, then 2-3 times per week. Choose fish species that are really rich in vitamin D, and combine that with regular outdoor physical activities like walking, running, or cycling.
Vitamin D Deficiency and Cancer
The connection between vitamin D levels and various types of cancer is still unclear. There are numerous studies on this issue, and their results, like those of most such studies, are statistical.
The first connection between vitamin D deficiency and some types of cancer was made in the late 1970s, when it was found that the incidence of colon cancer was nearly three times higher in New York than in New Mexico; it was hypothesized that lack of sun exposure (resulting in a lack of vitamin D) played a significant role.
Some facts about vitamin D and cancer:
- many studies have found naturally produced vitamin D to be associated with reduced risk of breast, colon and rectal cancer
- a randomized controlled trial with 1,100 IU/day vitamin D3 plus 1,450 mg/day calcium found a 77% reduction in all-cancer incidence. (This study was published in 2007 and was criticized on several grounds, including lack of data and use of statistical techniques; nonetheless, it clearly showed a direction for future studies.)
- geographical studies have found reduced risk in mortality rates for 15-20 types of cancer in regions of higher solar UVB doses
- mechanisms have been proposed to explain how vitamin D acts to reduce the risk of cancer from starting, growing, and spreading.
- some studies showed that those diagnosed with breast, colon and prostate cancer in summer in Norway had higher survival rates than those diagnosed in winter.
- those with higher vitamin D blood levels at time of cancer diagnosis had nearly twice the survival rate of those with the lowest levels.
- people with darker skin and those that 'always' use some kind of skin protection (clothes, high factor protection sun block creams) have an increased risk of cancer in part due to lower vitamin D blood levels because of lower vitamin D skin production.
- higher UVB exposure and nutrition rich in vitamin D early in life has been found associated with reduced risk of breast and prostate cancer.
- some studies found lowered risk of breast, colon and rectal cancer as vitamin D blood levels rise to over 40 ng/mL (100 nmol/L).
Measuring blood levels of 25-hydroxyvitamin D to determine vitamin D status avoids some of the limitations of assessing dietary intake. However, vitamin D levels in the blood vary with race, season, and nutritional habits, and possibly with the activity of genes whose products are involved in vitamin D transport and metabolism. These variations complicate the interpretation of studies that measure the concentration of vitamin D in serum at a single point in time. To fully understand the effect of vitamin D on cancer and other health outcomes, new randomized trials and studies need to be carried out. The appropriate dose of vitamin D to use in such trials is also still not clear.
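Note that 25-hydroxyvitamin D blood levels are reported in two different units: ng/mL (common in the United States) and nmol/L (SI units). One ng/mL corresponds to roughly 2.5 nmol/L (more precisely about 2.496, based on the molar mass of 25-hydroxyvitamin D), which is how the 40 ng/mL figure above lines up with about 100 nmol/L. A minimal conversion sketch, assuming that factor (the function names are illustrative):

```python
# Unit conversion for 25-hydroxyvitamin D blood levels.
# 1 ng/mL is approximately 2.496 nmol/L (molar mass of 25(OH)D ~400.6 g/mol);
# many sources simply round the factor to 2.5.
NMOL_PER_NG_ML = 2.496

def ng_ml_to_nmol_l(ng_ml: float) -> float:
    """Convert a 25(OH)D level from ng/mL to nmol/L."""
    return ng_ml * NMOL_PER_NG_ML

def nmol_l_to_ng_ml(nmol_l: float) -> float:
    """Convert a 25(OH)D level from nmol/L to ng/mL."""
    return nmol_l / NMOL_PER_NG_ML

if __name__ == "__main__":
    for level in (20, 30, 40):
        print(f"{level} ng/mL ~ {ng_ml_to_nmol_l(level):.0f} nmol/L")
    # 40 ng/mL ~ 100 nmol/L, matching the figure quoted above.
```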
Vitamin D Deficiency and Diabetes
It has been shown by various studies that the occurrence of diabetes in a population generally increases in relation to distance from the equator.
The link between diabetes and vitamin D deficiency is still being studied, but it appears that there are special areas within the insulin-producing pancreas that are targeted and influenced by vitamin D. It is theorized that low vitamin D levels lead to inadequate stimulation of these areas of the pancreas, decreased insulin production, delayed release of insulin, and high blood sugar levels.
Vitamin D and Type 1 Diabetes
Scientists suggested a link between vitamin D deficiency during infancy and later development of type 1 diabetes. Vitamin D deficiency in breastfed babies results primarily from maternal deficiency during pregnancy and subsequent breastfeeding. Vitamin D deficiency in mothers and children results primarily from lack of sun exposure (or use of high-SPF sunscreens) and eating vitamin D deficient foods. It has been shown that vitamin D helps regulate the immune system and that when levels are low, it creates an environment where the immune system can attack the insulin-producing pancreatic cells resulting in their destruction and the onset of type 1 diabetes.
Vitamin D and Type 2 Diabetes
The link between vitamin D deficiency and type 2 diabetes seems less clear. It has been shown that vitamin D deficiency impairs insulin synthesis and secretion in humans, suggesting a role in the development of type 2 diabetes. It is suggested that low vitamin D levels may create an environment that allows elevated blood sugar levels, due to the effect of vitamin D on insulin-producing cells. This is supported by evidence of hyperglycemia in patients with low vitamin D levels. Based on this information, scientists have hypothesized that vitamin D deficiency may contribute to the development of type 2 diabetes. Further supporting the point that vitamin D influences blood sugar levels, numerous studies have also shown that high glucose levels can be improved simply through replacement of vitamin D.
Vitamin D Deficiency and Psoriasis
Psoriasis is a type of autoimmune disease that affects mostly the skin. It occurs when the immune system mistakes skin cells for a pathogen and sends out faulty signals that speed up the growth cycle of skin cells. Psoriasis is not contagious, but it has been linked to an increased risk of stroke.
Psoriasis affects the skin, joints (psoriatic arthritis), and nails (psoriatic nail dystrophy). The cause of psoriasis is not well understood, but it is believed that several components are involved, such as genetics, skin injury, environmental factors, stress, a weakened immune system, alcohol, and smoking.
There are many available treatments for psoriasis patients, but because of its nature, psoriasis is still a challenge to treat. Treating high blood lipid levels may lead to an improvement in the patient's condition.
One of the treatments for psoriasis is phototherapy in the form of sunlight, which has long been used effectively. Among other things, this phototherapy promotes natural vitamin D production, which raises vitamin D levels and helps regulate the immune system.
Treating patients with fish oils rich in omega-3 essential fatty acids, vitamin A, vitamin D, and vitamin E has also shown beneficial results.
Unfortunately, there are no exact data regarding vitamin D deficiency and psoriasis. Higher vitamin D intake can promote overall wellbeing and strengthen the immune system, and thus may help in treating psoriasis.
Nonetheless, there are insufficient scientific studies to conclude a direct relation between vitamin D deficiency and psoriasis.
If you do have psoriasis-related problems, then changing your lifestyle can be good for you in the long run (after all, psoriasis is usually a lifetime condition):
- spend more time in the sun, with the affected areas of skin exposed to sunlight
- eat healthy meals with higher amounts of fat-soluble vitamins (mostly vitamins A, D, and E) and omega-3 essential fatty acids
- monitor your health with your doctor regularly - if you notice a drop in vitamin D levels, be sure to correct it.
The importance of vitamin D in the human body is clearly huge. There are still many uncertainties about vitamin D's role in many processes, but hopefully future scientific studies will confirm or disprove certain 'facts' about vitamin D.
In the meantime, enjoy some sunshine from time to time, and enjoy your seafood ...
“New Glory to Its Already Gallant Record”
The First Marine Battalion in the Spanish-American War
Spring 1998, Vol. 30, No. 1
By Trevor K. Plante
|Guantanamo Marine Corps officers 1st Lt. Herbert L. Draper, Lt. Col. Robert W. Huntington (commander of the First Marine Battalion), and Capt. Charles L. McCawley. (NARA, 20M-514827)|
On April 16, 1898, five days before war began between the United States and Spain, in preparation for what he believed was an inevitable conflict, Secretary of the Navy John D. Long ordered the commandant of the Marine Corps, Charles Heywood, to organize one battalion of marines for expeditionary duty with the North Atlantic Squadron. By war's end, the First Marine Battalion could boast they had fought in the first land battle in Cuba and had been the first to raise the American flag on the island. They could also claim that of the six marines killed in action in the Spanish-American War, five were from their unit. The battalion yielded one Medal of Honor recipient, and two of the unit's officers would later serve as commandants of the Marine Corps.1 The First Marine Battalion's action in the Caribbean and its favorable press coverage gave the American public and the U.S. Navy a glimpse of the Marine Corps of the future.
At approximately 9:40 p.m. on the evening of February 15, 1898, an explosion sank the USS Maine in Havana Harbor, Cuba. The ship was manned by 290 sailors, 39 marines, and 26 officers. Of these officers and men, 253 were killed either by the explosion or drowning; seven more died later of wounds. Included in this number of killed were twenty-eight enlisted men from the Maine's marine detachment.2
The cause of the explosion was a source of contention between the United States and Spain. On March 21 a US naval court of inquiry called to investigate the Maine incident concluded that a mine in the harbor had caused the explosion. A Spanish naval court of inquiry reported the next day that the explosion had been due to internal causes.3 Although the cause was never established to either side's satisfaction, the event eventually led Congress to declare on April 25 that a state of war existed starting April 21, 1898.
The job of organizing the First Marine Battalion was assigned to Lt. Col. Robert W. Huntington, who had just recently taken command of the Marine Barracks in Brooklyn, New York. Huntington was approaching almost forty years of service in the Marine Corps, having been commissioned soon after the start of the Civil War.4
On April 17, Lt. Colonel Huntington began organizing the battalion, initially formed into four companies. A proposed second battalion was never formed because a number of marines were still needed to protect navy yards and installations in the United States. Instead, the First Marine Battalion was enlarged to six companies— five companies of infantry and one artillery. Each company had a complement of 103 men: 1 first sergeant, 4 sergeants, 4 corporals, 1 drummer, 1 fifer, and 92 privates. The battalion was also accorded a quartermaster, adjutant, and surgeon. The color guard comprised 1 sergeant and 2 corporals.5
Commandant Charles Heywood made mobilizing the battalion his highest priority. For this reason, both he and the Marine Corps quartermaster made sure that Charles McCawley, the battalion's quartermaster, had the supplies he needed or the funds to get them. On April 18 the commandant went to New York to personally observe preparations, staying until the twenty-third.6 The battalion quartermaster supplied the unit with ammunition, camp equipment, mosquito netting, woolen and linen clothing, wheelbarrows, pushcarts, pickaxes, shelter tents, and medical stores.7
On April 22 the marines were ready to sail. The men marched down the main street of the navy yard to the dock and at 5 p.m. boarded the recently purchased USS Panther. Lt. Colonel Huntington noted the "intense excitement manifested by people along the line of march, Navy Yard, docks, harbor front and shipping."8 At eight o'clock, as the ship pulled away from the dock, the naval band played "The Girl I Left Behind Me" to send off the marines.9
The men were overcrowded on the Panther because the vessel was too small to hold such a large unit. The ship's dining room accommodated only two hundred men, requiring three mess calls per meal.10 The ship, originally the Venezuela, had been recently purchased and converted to carry about 400 men, but after the additional companies were added, the battalion numbered close to 650 officers and men. The marines expected these crowded conditions to be temporary. At Key West, Florida, they were supposed to transfer to the Resolute, which was capable of carrying one thousand men and officers.11 Unfortunately, the marine battalion would not see the Resolute until after it arrived in Cuba in June. The battalion reached Fort Monroe off Hampton Roads, Virginia, on the evening of April 23 and waited for their convoy vessel to arrive. An escort was necessary, for the Panther was ill-equipped to defend itself should it encounter an enemy vessel. While at Hampton Roads, Maj. Percival C. Pope and 1st Lt. James E. Mahoney joined the battalion.12 On April 26 the Panther left Virginia accompanied by the cruiser Montgomery.
It was not long before tension developed on the Panther between the officers of the navy and the Marine Corps. Much of this strain was due to overcrowding, but some stemmed from questions regarding the men's required duty and who was responsible for discipline.13 Despite these problems, Huntington made the most of precious time. On the twenty-sixth the battalion began its first drills on board ship. The marine infantry companies were armed with Lee straight-pull 6mm rifles. The artillery company was equipped with four three-inch rapid-fire guns. From 2 p.m. to 4 p.m., Companies A, B, C, D, and E (the infantry companies) drilled in volley and mass firing; each man using ten rounds each. Next, the artillery company fired one round from each of the four artillery pieces and then, like the infantry companies, drilled in volley and mass firing of the Lee rifles using ten rounds each.14
The Panther arrived at Key West, on April 29.15 On May 24 Comdr. George C. Reiter, commanding the Panther, ordered the battalion to disembark and set up camp. This action prompted the commandant of the Marine Corps to telegraph Key West inquiring why the battalion was unloaded when the Panther was the sole transport of the marine battalion and had no other duties.16 While the battalion remained in camp for two weeks, monotony was eased by the arrival of supplies that were more suited for tropical weather. The marines exchanged their heavier blue uniforms for new brown linen campaign suits. With the lighter, cooler uniforms came new-style shoes and lightweight underwear, all very popular items with the officers and men. Huntington continued drilling while at Key West, and the battalion received daily instruction and target practice with their rifles.17
The officers were watching the men's health very closely. Huntington was keenly aware of health dangers caused by bad water and exposure to disease. Orders outlined procedures pertaining to water, cooking, and clothing. Water was prepared on board ship and brought to the marines on shore. No one was to drink unboiled water. Cooks were told how to prepare food and water for cooking, and any marine struck by diarrhea was to report it immediately to the medical officer. The men were also ordered to change their clothing whenever it got wet.18
While in Key West the battalion sent small detachments to participate in several funeral services held for navy personnel. Colonel Huntington also detailed men to patrol the streets of Key West to guard against men causing trouble while on liberty. The unit received a number of Colt machine guns, and navy Assistant Surgeon John Blair Gibbs also joined the battalion.19 On June 7 the naval base at Key West received a telegram from the acting secretary of the navy stressing, "Send the Marine Battalion at once to Sampson without waiting for the Army send Yosemite as convoy."20 The long wait was over, and that day the battalion finally sailed for Cuba, leaving behind Major Pope sick in the hospital.
On the voyage south, during the night of the ninth, the marines' transport collided with the Scorpion, causing damage to the converted yacht's stern rail.21 The Panther arrived off Santiago, Cuba, at 7 a.m. on June 10. Huntington reported to Adm. William T. Sampson, the commander in chief of the North Atlantic Fleet, on board the flagship New York and received orders to report to Comdr. Bowman H. McCalla of the Marblehead, commanding at Guantanamo Bay.22
Shortly after the war began, Admiral Sampson established a blockade of major Cuban ports. Guantanamo Bay was chosen as a good site for coaling navy vessels. Guantanamo has both an inner and outer bay, and the outer bay offered a good anchorage site for ships because of its depth. Sampson sent the marine battalion to protect any ships in the bay from being harassed from Spanish troops ashore.
McCalla had entered Guantanamo Bay on June 7 to clear the outer harbor. A battery near the telegraph station at Cayo del Toro on the western side of the bay fired on the US vessels Marblehead and Yankee. The Spanish gunboat Sandoval soon came down the channel from Caimanera. The two US Navy ships opened fire, silencing the gun battery and forcing the Sandoval to return back up the channel. On the morning of June 10, McCalla ordered marines from the Marblehead and Oregon to conduct a reconnaissance of an area just inside Guantanamo Bay. Capt. M. D. Goodrell led forty marines from the Oregon and twenty marines from the Marblehead. Goodrell selected a site for the marine battalion to establish their camp, and McCalla then sent him to brief Huntington on his intended position.23
The scene outside Guantanamo Bay was an awesome sight on June 10, for the outer bay was dominated by ships. The US Navy vessels present were the cruisers Marblehead, Yankee, and Yosemite; the battleship Oregon; the torpedo boat Porter; the gunboat Dolphin; the collier Abarenda; the Vixen and Panther; and several private vessels carrying newspaper reporters. The battalion began landing at two o'clock. Four companies disembarked while the other two remained on board to help unload supplies.24 The marines were ordered to stack their rifles and begin unloading supplies from the Panther. Men from Company C, the first company ashore, were deployed up the top of the hill as skirmishers to protect the landing against enemy attack.25 Sgt. Richard Silvey, Company C, First Marine Battalion, planted the American flag for the first time on Cuban soil. One hundred and fifty feet below the hill where the American flag now flew, houses and huts were in flames, and smoke rose from the small fishing village. McCalla had ordered the marines to burn the village on Fisherman's Point for health reasons, and no one was allowed to enter into any buildings. The remaining two companies disembarked on June 11.26
Huntington believed the hill chosen for his camp to be a "faulty position." He did not want his men on top of a hill where "the ridge slopes downward and to the rear from the bay" and was "commanded by a mountain, the ridge of which is about 1,200 yards to the rear."27 The battalion's position was partially protected by the navy vessels in the bay. Several times the battalion commander requested McCalla's permission to move the marines from this site to a more defensible position, but these requests were repeatedly denied.28 Despite this difference, Huntington named the marines' position Camp McCalla. Lt. Herbert Draper raised the American flag on a flagpole for the first time in Cuba at Camp McCalla.29 Eleven days later, Huntington sent this same flag to the commandant of the Marine Corps:
June 22, 1898
My Dear Colonel:
I sent you by this mail in a starch box the first US flag hoisted in Cuba. This flag was hoisted on the 11th June and during the various attacks on our camp floated serene above us. At times, during the darkness, for a moment, it has been illumined by the search light from the ships. When bullets were flying, and the sight of the flag upon the midnight sky has thrilled our hearts.
I trust you may consider it worthy of preservation, with suitable inscription, at Headquarters. It was first lowered at sunset last evening.
I am very respectfully
Lt Col Commd'g Bat'n
In an attack on the marine outposts, Privates Dumphy and McColgan of Company D were both killed.30 The bodies were first mistakenly reported mutilated. It was hard to tell the two apart, for both men had received a number of bullet wounds to the face; McColgan suffered twenty-one shots to the head and Dumphy fifteen.31 Soon the enemy made five small separate attacks on the marines' camp. All of these were repulsed. At about 1 a.m. a superior number of Spanish forces made a more combined attack. In this assault Assistant Surgeon Gibbs was killed by a bullet to the head.32 Sporadic firing back and forth continued throughout the night. Using a lesson learned from the Cubans, the enemy was making good use of camouflage by covering their bodies with leaves and foliage from the jungle.33 The smokeless powder of the Spanish Mauser rifles also made the enemy harder to detect.
On the morning of the twelfth, Sgt. Charles H. Smith was killed. Colonel Huntington moved much of the camp down the hill closer to the beach to a place known as Playa del Este. Huntington had the marines entrench their positions on the crest of the hill. Eventually earthworks were constructed in the shape of a square, with the blockhouse in its center. The artillery pieces were placed in the corners of the square, and the Colt machine guns were along the sides. Several newspaper reporters came ashore at the lower camp and offered assistance. They helped the marines bring the artillery pieces and Colt machine guns up the hill. The earthworks were constructed about chest high. On the outside of the dirt walls, trenches were dug measuring about five feet deep and ten feet wide. Later on June 12, Pvt. Goode Taurman died during an engagement.34
Harry Jones, the chaplain from the USS Texas, conducted a funeral service for the slain marines. He had heard about the marine deaths, and after receiving permission from his ship's captain, offered his services to the battalion commander. A lieutenant and marine guards from the Texas provided the funeral escort. Colonel Huntington, the battalion's surgeon, and as many officers and men who could be spared from the trenches attended the ceremony. The camp was still being harassed by the enemy, and at one point Jones dove into a trench to escape enemy fire. When he got back to his feet, the chaplain found that the marines were still standing at parade rest awaiting the ceremony. The service was conducted almost entirely under enemy fire. The marines' Lee rifles and Colt machine guns returned fire. The chaplain was still being fired on when he returned to his launch with two reporters.35
McCalla ordered the captain of the Panther to unload fifty thousand rounds of 6mm ammunition. McCalla also cleared up some of the confusion regarding duties by stating in the same order, "In the future do not require Col. Huntington to break out and land his stores or ammo. Use your own officers and crew."36
On the night of the twelfth, Sgt. Maj. Henry Good was killed. Another attack was made on the camp the next morning. After almost three days of constant harassment from the enemy either by attack or sniper fire, Huntington decided to take action. He issued an order to destroy a well used by Spanish troops. On the fourteenth, Capt. George F. Elliott set out with Companies C and D and approximately fifty Cubans to destroy the well at Cuzco, which was the only water supply for the enemy within twelve miles. The well, about six miles from the camp, was close to shore, and the USS Dolphin was sent to support the mission from sea.37
Upon leaving camp, Huntington asked Elliott if he would like to take an officer to act as adjutant. The captain declined, citing the shortage of officers present for duty as the reason. Instead, upon learning that a reporter was accompanying his force, Elliott requested Stephen Crane to act as an aide if needed. Crane's Red Badge of Courage had been published in 1895. The marine officer later reported that Crane carried messages to the company commanders while on this mission.38
The marines soon engaged in a terrific fight. Near the well they encountered great resistance from superior enemy forces. Lt. Louis Magill was sent with fifty marines and ten Cubans to reinforce Elliott. He was to cut off the enemy's line of retreat but was blocked by the Dolphin's gunfire. To help direct the naval gunfire, Sgt. John Quick volunteered to signal the ship. Using a blue flag obtained from the Cubans, the sergeant began to signal the ship with his back to the enemy and bullets flying all around him.
Later, two lieutenants with fifty men each were also sent to help Elliott, but neither participated in the fight. The Spanish escaped, but not before the marines inflicted a crippling blow. Elliott's force had a remarkably low casualty rate. Only two Cubans had been killed, and two Cubans and three marine privates had been wounded. Lt. Wendell C. Neville had also been injured descending a mountainside during the engagement. Twenty-three marines suffered from heat exhaustion and had to be brought back to the Dolphin.39 McCalla offered his opinion stating, "I need hardly call attention to the fact that the marines would have suffered much less had their campaign hats not been on the Resolute" (the ship had not yet arrived at Guantanamo Bay).40 Overall, the mission was considered a success because the well had been destroyed. McCalla stated, "the expedition was most successful; and I can not say too much in praise of the officers and men who took part in it."41 In fact, after the action, enemy attacks and sniper fire on the marine camp became almost nonexistent.
The following day, naval gunfire from the Texas, Marblehead, and Suwanee destroyed the Spanish fort at Caimanera on the eastern side of the bay. The three ships were accompanied by two press boats.42 Three days later, Huntington received orders that no reporters or civilians were to be allowed to land near his camp or enter his lines without a pass from McCalla. Those who disobeyed this order were to be arrested and taken on board the Marblehead as prisoners.43
At 4:30 p.m. on June 20 the USS Resolute arrived and unloaded stores for the battalion. The next day the captain of the Panther received orders from Admiral Sampson to transfer all stores including ammunition and quartermaster stores to the Resolute.44 The marines had finally received their larger transport. On June 24 the battalion placed headstones over the graves of Gibbs, Good, McColgan, Dumphy, and Taurman. A detail was sent out to place a headstone over the remains of Sergeant Smith, whose body could not be brought back to camp.45
McCalla ordered a reconnaissance to determine if Spanish forces still occupied the extremities of Punta del Jicacal on the eastern side of Guantanamo Bay. The enemy had been firing on American vessels from this point. At about 3 a.m. on the twenty-fifth, Huntington led a detail of 240 men encompassing Companies C and E of the First Marine Battalion and 60 Cubans under Colonel Thomas. The force used fifteen boats from the Helena, Annapolis, and Bancroft to travel to the other side of the bay. The landing was supported by the Marblehead and Helena, which took positions close to the beach south and west of the point. The landing force went ashore but made no contact with the enemy. They did, however, find signs that approximately one hundred men had been in the area and had left the previous day. The landing party withdrew at about 7:30 a.m.46
On July 3 the Spanish fleet was virtually annihilated during the naval battle of Santiago de Cuba, and the US Navy became responsible for a very large number of Spanish prisoners. It was decided to send the prisoners north to Portsmouth, New Hampshire, along with marines to guard them. On July 4 and 5, McCalla detached sixty marines from the battalion, including Capt. Allen Kelton and 1st Lt. Franklin Moses to join the Harvard. Prisoners on the St. Louis would be guarded by Capt. Benjamin Russell commanding twenty-one marines from the Marblehead and twenty-nine marines and a lieutenant from the Brooklyn.47 On July 10 the Harvard sailed north for New Hampshire, arriving with the Spanish prisoners at Camp Long just outside Portsmouth.
On the twelfth, McCalla ordered the harbor at Guantanamo under quarantine, with Huntington in charge of enforcing this order. On July 23 a letter from the commandant was read at parade acknowledging receipt of the first flag raised over Camp McCalla and praising the officers and men of the battalion for their conduct. Three days later, a large force of about eighty Cubans left camp. These men had fought and patrolled with the marines since June 12.48
The inactivity of the battalion soon led some marines to create their own diversions. On June 29, two privates from Company E left camp without permission and boarded a schooner in the harbor. They remained on board for several hours and later were reported displaying "improper conduct." Both were disciplined with ten days at double irons. Another private was caught buying liquor using a Spanish dollar.49
Pvt. Robert Burns supplied some of the men with a good story to tell. One night while on guard duty, the private heard something moving in the bushes approximately one hundred yards ahead. Having orders to shoot anything that moved, the private gave three verbal warnings to halt. There being no response, and still hearing movement in the bushes, Burns fired his weapon into the bushes. In the morning, a sergeant took six men to investigate the situation and found that Burns had not fired on the enemy but rather had downed a very large black pig.50
On August 5 the battalion broke camp and embarked on the Resolute. The transport left Guantanamo Bay four days later for Manzanillo under convoy of USS Newark to assist in the capture of the town.51 The Resolute, Suwanee, Hist, Osceola, Alvarado, and Newark all approached Manzanillo and anchored three miles outside town on the twelfth. The Alvarado was sent under a flag of truce to demand a surrender from the military commander. The commander replied that Spanish military code would not allow him to surrender without being forced by a siege or military operation. Captain Goodrich allowed time for noncombatants to vacate the town before beginning the naval bombardment. Naval gunfire started at 3:40 and lasted until 4:15, when it appeared that flags of truce were flying over some of the town's buildings. Goodrich ordered a cease-fire, and the navy vessels flying flags of truce approached. The vessels were soon fired upon, and the Newark returned fire. The action was soon broken off, and all ships anchored for the night at 5:30 p.m. Naval gunfire resumed at 5:20 a.m. the next morning, and when daylight came, white flags were flying over many buildings in town. A small boat from Manzanillo approached the navy ships and brought word to Captain Goodrich that an armistice had been proclaimed: the war was over. The captain of the Newark, observing the disappointment of the battalion commander, reported, "As part of the contemplated plan of operations was the landing of some or all of the marines of Colonel Huntington's command. This officer's regret at the loss of an opportunity to win additional distinction for his corps and himself was only equaled by his careful study of the necessities of the case and his zealous entrance into the spirit of the enterprise."52
On the eighteenth, the Resolute took on board 275 men from four US Army light artillery battery detachments for transport to Montauk Point, Long Island. The next day, the ship encountered rough seas, and most of the army detachment and marines were sick.53 After leaving Long Island, Resolute headed for New Hampshire, arriving at Portsmouth on August 26. The commandant had personally chosen this location for the battalion to recover from the tropical heat of the Caribbean. Huntington named their new site Camp Heywood in honor of the commandant of the Marine Corps. Six of the battalion's officers received promotions for gallantry, and the commandant commended all the battalion's officers and men and noted the favorable press coverage of the battalion's first few days in Cuba. On September 19, Huntington received orders to disband the battalion.54
Trevor K. Plante is an archivist in the Old Military and Civil Records unit, National Archives and Records Administration. He specializes in military records prior to World War II.
|Articles published in Prologue do not necessarily represent the views of NARA or of any other agency of the United States Government.| | <urn:uuid:0a62d36a-65b3-4c44-8b39-70e893be04fb> | CC-MAIN-2019-47 | https://www.archives.gov/publications/prologue/1998/spring/spanish-american-war-marines-1.html | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670255.18/warc/CC-MAIN-20191119195450-20191119223450-00140.warc.gz | en | 0.97861 | 5,253 | 3.203125 | 3 |
History is the study of the past as it is described in written documents. Events occurring before written record are considered prehistory, it is an umbrella term that relates to past events as well as the memory, collection, organization and interpretation of information about these events. Scholars who write about history are called historians. History can refer to the academic discipline which uses a narrative to examine and analyse a sequence of past events, objectively determine the patterns of cause and effect that determine them. Historians sometimes debate the nature of history and its usefulness by discussing the study of the discipline as an end in itself and as a way of providing "perspective" on the problems of the present. Stories common to a particular culture, but not supported by external sources, are classified as cultural heritage or legends, because they do not show the "disinterested investigation" required of the discipline of history. Herodotus, a 5th-century BC Greek historian is considered within the Western tradition to be the "father of history", along with his contemporary Thucydides, helped form the foundations for the modern study of human history.
Their works continue to be read today, the gap between the culture-focused Herodotus and the military-focused Thucydides remains a point of contention or approach in modern historical writing. In East Asia, a state chronicle, the Spring and Autumn Annals was known to be compiled from as early as 722 BC although only 2nd-century BC texts have survived. Ancient influences have helped spawn variant interpretations of the nature of history which have evolved over the centuries and continue to change today; the modern study of history is wide-ranging, includes the study of specific regions and the study of certain topical or thematical elements of historical investigation. History is taught as part of primary and secondary education, the academic study of history is a major discipline in university studies; the word history comes from the Ancient Greek ἱστορία, meaning'inquiry','knowledge from inquiry', or'judge'. It was in that sense; the ancestor word ἵστωρ is attested early on in Homeric Hymns, the Athenian ephebes' oath, in Boiotic inscriptions.
The Greek word was borrowed into Classical Latin as historia, meaning "investigation, research, description, written account of past events, writing of history, historical narrative, recorded knowledge of past events, narrative". History was borrowed from Latin into Old English as stær, but this word fell out of use in the late Old English period. Meanwhile, as Latin became Old French, historia developed into forms such as istorie and historie, with new developments in the meaning: "account of the events of a person's life, account of events as relevant to a group of people or people in general, dramatic or pictorial representation of historical events, body of knowledge relative to human evolution, narrative of real or imaginary events, story", it was from Anglo-Norman that history was borrowed into Middle English, this time the loan stuck. It appears in the 13th-century Ancrene Wisse, but seems to have become a common word in the late 14th century, with an early attestation appearing in John Gower's Confessio Amantis of the 1390s: "I finde in a bok compiled | To this matiere an old histoire, | The which comth nou to mi memoire".
In Middle English, the meaning of history was "story" in general. The restriction to the meaning "the branch of knowledge that deals with past events. With the Renaissance, older senses of the word were revived, it was in the Greek sense that Francis Bacon used the term in the late 16th century, when he wrote about "Natural History". For him, historia was "the knowledge of objects determined by space and time", that sort of knowledge provided by memory. In an expression of the linguistic synthetic vs. analytic/isolating dichotomy, English like Chinese now designates separate words for human history and storytelling in general. In modern German and most Germanic and Romance languages, which are solidly synthetic and inflected, the same word is still used to mean both'history' and'story'. Historian in the sense of a "researcher of history" is attested from 1531. In all European languages, the substantive history is still used to mean both "what happened with men", "the scholarly study of the happened", the latter sense sometimes distinguished with a capital letter, or the word historiography.
The adjective historical is attested from 1661, historic from 1669. Historians write in the context of their own time, with due regard to the current dominant ideas of how to interpret the past, sometimes write to provide lessons for their own society. In the words of Benedetto Croce, "All history is contemporary history". History is facilitated by the formation of a "true discourse of past" through the production of narrative and analysis of past events relating to the human race; the modern discipline of history is dedicated to the institutional production of this discourse. All events that are remembered and preserved in some authentic form constitute the historical record; the task of histori
The Bloodhound is a large scent hound bred for hunting deer, wild boar and, since the Middle Ages, for tracking people. Believed to be descended from hounds once kept at the Abbey of Saint-Hubert, Belgium, it is known to French speakers as the Chien de Saint-Hubert; this breed is famed for its ability to discern human scent over great distances days later. Its extraordinarily keen sense of smell is combined with a strong and tenacious tracking instinct, producing the ideal scent hound, it is used by police and law enforcement all over the world to track escaped prisoners, missing people, lost pets. Bloodhounds weigh from 36 to 72 kg, they are 58 to 69 cm tall at the withers. According to the AKC standard for the breed, larger dogs are preferred by conformation judges. Acceptable colors for bloodhounds are black, liver and red. Bloodhounds possess an unusually large skeletal structure with most of their weight concentrated in their bones, which are thick for their length; the coat typical for a scenthound is hard and composed of fur alone, with no admixture of hair.
This breed is gentle, is tireless when following a scent. Because of its strong tracking instinct, it can be willful and somewhat difficult to obedience train and handle on a leash. Bloodhounds have an even-tempered nature with humans, making excellent family pets. However, like any pet, they require supervision when around small children. Up to at least the seventeenth century bloodhounds were of all colours, but in modern times the colour range has become more restricted; the colours are listed as black and tan and tan, red. White is not uncommon on the chest, sometimes appears on the feet. Genetically, the main types are determined by the action of two genes, found in many species. One produces an alternation between brown. If a hound inherits the black allele from either parent, it has a black nose, eye rims and paw-pads, if it has a saddle, it is black; the other allele suppresses black pigment and is recessive, so it must be inherited from both parents. It produces liver noses, eye rims, paw-pads, saddles.
The second gene determines coat pattern. It can produce animals with no saddle; these last are sometimes referred to as'blanket' or'full-coat' types. In a pioneering study in 1969 Dennis Piper suggested 5 alleles in the pattern-marking gene, producing variants from the red or saddle-less hound through three different types of progressively greater saddle marking to the'blanket' type. However, more modern study attributes the variation to 3 different alleles of the Agouti gene. Ay produces the non saddle-marked "red" hound, As produces saddle-marking, at produces the blanket or full-coat hound. Of these Ay is dominant, at is recessive to the others; the interaction of these variants of the two genes produces the six basic types shown below. It is that a third gene determines whether or not there is a melanistic mask. Em, the allele for a mask, is dominant over E, the allele for no mask. Compared to other purebred dogs, Bloodhounds suffer an unusually high rate of gastrointestinal ailments, with gastric dilatation volvulus being the most common type of gastrointestinal problem.
The breed suffers an unusually high incidence of eye and ear ailments. Owners should be aware of the signs of bloat, both the most common illness and the leading cause of death of Bloodhounds; the thick coat gives the breed the tendency to overheat quickly. Bloodhounds in a 2004 UK Kennel Club survey had a median longevity of 6.75 years, which makes them one of the shortest-lived of dog breeds. The oldest of the 82 deceased dogs in the survey died at the age of 12.1 years. Bloat took 34 % of the animals; the second leading cause of death in the study was cancer, at 27%. In a 2013 survey, the average age at death for 14 Bloodhounds was 8.25 years. The St. Hubert hound was, according to legend, first bred ca. 1000 AD by monks at the Saint-Hubert Monastery in Belgium. It is held to be the ancestor of several other breeds, like the extinct Norman hound, Saintongeois, the modern Grand Bleu de Gascogne, Gascon Saintongeois and Artois Normande, as well as the bloodhound, it has been suggested, not at all uniform in type.
Whether they originated there, or what their ancestry was, is uncertain, but from ca. 1200, the monks of the Abbey of St Hubert annually sent several pairs of black hounds as a gift to the King of France. They were not always thought of in the royal pack. Charles IX 1550-74, preferred his white hounds and the larger Chiens-gris, wrote that the St Huberts were suitable for people with gout to follow, but not for those who wished to shorten the life of the hunted animal, he described them as pack-hounds of medium stature, long in the body, not well sprung in the rib, of no great strength. Writing in 1561 Jaques du Fouilloux with low, short legs, he says they have become mixed in breeding, so that they are now of all colours and distributed. Charles described the'true race' of the St Hubert as black, with red/tawny marks above the eyes and legs of the same colour, suggesting a'blanket' black and
Rock music is a broad genre of popular music that originated as "rock and roll" in the United States in the early 1950s, developed into a range of different styles in the 1960s and particularly in the United Kingdom and in the United States. It has its roots in 1940s and 1950s rock and roll, a style which drew on the genres of blues and blues, from country music. Rock music drew on a number of other genres such as electric blues and folk, incorporated influences from jazz and other musical styles. Musically, rock has centered on the electric guitar as part of a rock group with electric bass and one or more singers. Rock is song-based music with a 4/4 time signature using a verse–chorus form, but the genre has become diverse. Like pop music, lyrics stress romantic love but address a wide variety of other themes that are social or political. By the late 1960s "classic rock" period, a number of distinct rock music subgenres had emerged, including hybrids like blues rock, folk rock, country rock, southern rock, raga rock, jazz-rock, many of which contributed to the development of psychedelic rock, influenced by the countercultural psychedelic and hippie scene.
New genres that emerged included progressive rock. In the second half of the 1970s, punk rock reacted by producing stripped-down, energetic social and political critiques. Punk was an influence in the 1980s on new wave, post-punk and alternative rock. From the 1990s alternative rock began to dominate rock music and break into the mainstream in the form of grunge and indie rock. Further fusion subgenres have since emerged, including pop punk, electronic rock, rap rock, rap metal, as well as conscious attempts to revisit rock's history, including the garage rock/post-punk and techno-pop revivals at the beginning of the 2000s. Rock music has embodied and served as the vehicle for cultural and social movements, leading to major subcultures including mods and rockers in the UK and the hippie counterculture that spread out from San Francisco in the US in the 1960s. 1970s punk culture spawned the goth and emo subcultures. Inheriting the folk tradition of the protest song, rock music has been associated with political activism as well as changes in social attitudes to race and drug use, is seen as an expression of youth revolt against adult consumerism and conformity.
The sound of rock is traditionally centered on the amplified electric guitar, which emerged in its modern form in the 1950s with the popularity of rock and roll. It was influenced by the sounds of electric blues guitarists; the sound of an electric guitar in rock music is supported by an electric bass guitar, which pioneered in jazz music in the same era, percussion produced from a drum kit that combines drums and cymbals. This trio of instruments has been complemented by the inclusion of other instruments keyboards such as the piano, the Hammond organ, the synthesizer; the basic rock instrumentation was derived from the basic blues band instrumentation. A group of musicians performing rock music is termed as a rock group. Furthermore, it consists of between three and five members. Classically, a rock band takes the form of a quartet whose members cover one or more roles, including vocalist, lead guitarist, rhythm guitarist, bass guitarist and keyboard player or other instrumentalist. Rock music is traditionally built on a foundation of simple unsyncopated rhythms in a 4/4 meter, with a repetitive snare drum back beat on beats two and four.
Melodies originate from older musical modes such as the Dorian and Mixolydian, as well as major and minor modes. Harmonies range from the common triad to parallel perfect fourths and fifths and dissonant harmonic progressions. Since the late 1950s and from the mid 1960s onwards, rock music used the verse-chorus structure derived from blues and folk music, but there has been considerable variation from this model. Critics have stressed the eclecticism and stylistic diversity of rock; because of its complex history and its tendency to borrow from other musical and cultural forms, it has been argued that "it is impossible to bind rock music to a rigidly delineated musical definition." Unlike many earlier styles of popular music, rock lyrics have dealt with a wide range of themes, including romantic love, rebellion against "The Establishment", social concerns, life styles. These themes were inherited from a variety of sources such as the Tin Pan Alley pop tradition, folk music, rhythm and blues.
Music journalist Robert Christgau characterizes rock lyrics as a "cool medium" with simple diction and repeated refrains, asserts that rock's primary "function" "pertains to music, or, more noise." The predominance of white and middle class musicians in rock music has been noted, rock has been seen as an appropriation of black musical forms for a young and male audience. As a result, it has been seen to articulate the concerns of this group in both style and lyrics. Christgau, writing in 1972, said in spite of some exceptions, "rock and roll implies an identification of male sexuality and aggression". Since the term "rock" started being used in preference to "rock and roll" from the late-1960s, it has been contrasted with pop music, with which it has shared many characteristics, but from wh
Murder is the unlawful killing of another human without justification or valid excuse the unlawful killing of another human being with malice aforethought. This state of mind may, depending upon the jurisdiction, distinguish murder from other forms of unlawful homicide, such as manslaughter. Manslaughter is a killing committed in the absence of malice, brought about by reasonable provocation, or diminished capacity. Involuntary manslaughter, where it is recognized, is a killing that lacks all but the most attenuated guilty intent, recklessness. Most societies consider murder to be an serious crime, thus believe that the person charged should receive harsh punishments for the purposes of retribution, rehabilitation, or incapacitation. In most countries, a person convicted of murder faces a long-term prison sentence a life sentence; the modern English word "murder" descends from the Proto-Indo-European "mrtró" which meant "to die". The Middle English mordre is a noun from Old French murdre. Middle English mordre is a verb from the Middle English noun.
The eighteenth-century English jurist William Blackstone, in his Commentaries on the Laws of England set out the common law definition of murder, which by this definition occurs when a person, of sound memory and discretion, unlawfully kills any reasonable creature in being and under the king's peace, with malice aforethought, either express or implied. The elements of common law murder are: Unlawful killing through criminal act or omission of a human by another human with malice aforethought; the Unlawful – This distinguishes murder from killings that are done within the boundaries of law, such as capital punishment, justified self-defence, or the killing of enemy combatants by lawful combatants as well as causing collateral damage to non-combatants during a war. Killing – At common law life ended with cardiopulmonary arrest – the total and irreversible cessation of blood circulation and respiration. With advances in medical technology courts have adopted irreversible cessation of all brain function as marking the end of life.Сriminal act or omission – Killing can be committed by an act or an omission.of a human – This element presents the issue of when life begins.
At common law, a fetus was not a human being. Life began when the fetus passed through the vagina and took its first breath.by another human – In early common law, suicide was considered murder. The requirement that the person killed be someone other than the perpetrator excluded suicide from the definition of murder. With malice aforethought – Originally malice aforethought carried its everyday meaning – a deliberate and premeditated killing of another motivated by ill will. Murder required that an appreciable time pass between the formation and execution of the intent to kill; the courts broadened the scope of murder by eliminating the requirement of actual premeditation and deliberation as well as true malice. All, required for malice aforethought to exist is that the perpetrator act with one of the four states of mind that constitutes "malice"; the four states of mind recognized as constituting "malice" are: Under state of mind, intent to kill, the deadly weapon rule applies. Thus, if the defendant intentionally uses a deadly weapon or instrument against the victim, such use authorizes a permissive inference of intent to kill.
In other words, "intent follows the bullet". Examples of deadly weapons and instruments include but are not limited to guns, deadly toxins or chemicals or gases and vehicles when intentionally used to harm one or more victims. Under state of mind, an "abandoned and malignant heart", the killing must result from the defendant's conduct involving a reckless indifference to human life and a conscious disregard of an unreasonable risk of death or serious bodily injury. In Australian jurisdictions, the unreasonable risk must amount to a foreseen probability of death, as opposed to possibility. Under state of mind, the felony-murder doctrine, the felony committed must be an inherently dangerous felony, such as burglary, rape, robbery or kidnapping; the underlying felony cannot be a lesser included offense such as assault, otherwise all criminal homicides would be murder as all are felonies. As with most legal terms, the precise definition of murder varies between jurisdictions and is codified in some form of legislation.
When the legal distinction between murder and manslaughter is clear, it is not unknown for a jury to find a murder defendant guilty of the lesser offence. The jury might sympathise with the defendant, the jury may wish to protect the defendant from a sentence of life imprisonment or execution. Many jurisdictions divide murder by degrees; the distinction between first- and second-degree murder exists, for example, in Canadian murder law and U. S. murder law. The most common division is between first- and second-degree murder. Second-degree murder is common law murder, first-degree is an aggravated form; the aggravating factors of first-degree murder depend on the jurisdiction, but may include a specific intent to kill, premeditation, or deliberation. In some, murder committed by acts such as strangulation, poisoning, or lying in wait are treated as first-degree murder. A few states in the U. S. further distinguish third-degree murder, but they differ in which kinds of murders they classify as second-degree versus third-degree.
For example, Minnesota defines third-degree murder as depraved-heart murder, whereas Flori
Kew Gardens, Queens
Kew Gardens is a neighborhood in the central area of the New York City borough of Queens. Kew Gardens, shaped like a triangle, is bounded to the north by Union Turnpike and the Jackie Robinson Parkway, to the east by Van Wyck Expressway and 131st Street, to the south by Hillside Avenue, to the west by Park Lane, Abingdon Road, 118th Street. Forest Park and the neighborhood of Forest Hills are to the west, Flushing Meadows–Corona Park north, Richmond Hill south, Briarwood southeast, Kew Gardens Hills east. Kew Gardens was one of seven planned garden communities built in Queens from the late 19th century to 1950. Much of the area was acquired in 1868 by Englishman Albon P. Man, who developed the neighborhood of Hollis Hill to the south, chiefly along Jamaica Avenue, while leaving the hilly land to the north undeveloped. Maple Grove Cemetery on Kew Gardens Road opened in 1875. A Long Island Rail Road station was built for mourners in October and trains stopped there from mid-November; the station was named Hopedale, after Hopedale Hall, a hotel located at what is now Queens Boulevard and Union Turnpike.
In the 1890s, the executors of Man's estate laid out the Queens Bridge Golf Course on the hilly terrains south of the railroad. This remained in use until it was bisected in 1908 by the main line of the Long Island Rail Road, moved 600 feet to the south to eliminate a curve; the golf course was abandoned and a new station was built in 1909 on Lefferts Boulevard. Man's heirs, Aldrick Man and Albon Man Jr. decided to lay out a new community and called it at first Kew and Kew Gardens after the well-known botanical gardens in England. The architects of the development favored English and neo-Tudor styles, which still predominate in many sections of the neighborhood. In 1910, the property was sold piecemeal by the estate and during the next few years streets were extended, land graded and water and sewer pipes installed; the first apartment building was the Kew Bolmer at 80–45 Kew Gardens Road, erected in 1915. In 1920, the Kew Gardens Inn at the railroad station opened for residential guests, who paid $40 a week for a room and a bath with meals.
Elegant one-family houses were built in the 1920s, as were apartment buildings such as Colonial Hall and Kew Hall that numbered more than twenty by 1936. In July 1933, the Grand Central Parkway opened from the Kew Gardens Interchange to the edge of Nassau County. Two years the Interboro Parkway was opened, linking Kew Gardens to Pennsylvania Avenue in East New York. Since the parkways used part of the roadbed of Union Turnpike, no houses were demolished. Around the same time, the construction of the Queens Boulevard subway line offered the possibility of quick commutes to the central business district in Midtown Manhattan. In the late 1920s, upon learning the route of the proposed line bought up property on and around Queens Boulevard, real estate prices soared, older buildings were demolished in order to make way for new development. In order to allow for the speculators to build fifteen-story apartment buildings, several blocks were rezoned, they built apartment building in order to accommodate the influx of residents from Midtown Manhattan that would desire a quick and cheap commute to their jobs.
Since the new line had express tracks, communities built around express stations, such as in Forest Hills and Kew Gardens became more desirable to live. With the introduction of the subway into the community of Forest Hills, Queens Borough President George U. Harvey predicted that Queens Boulevard would become the "Park Avenue of Queens". With the introduction of the subway, Forest Hills and Kew Gardens were transformed from quiet residential communities of one-family houses to active population centers; the line was extended from Jackson Heights–Roosevelt Avenue to Kew Gardens–Union Turnpike on December 30, 1936. Following the line's completion, there was an increase in the property values of buildings around Queens Boulevard. For example, a property along Queens Boulevard that would have sold for $1,200 in 1925, would have sold for $10,000 in 1930. Queens Boulevard, prior to the construction of the subway, was just a route to allow people to get to Jamaica, running through farmlands. Since the construction of the line, the area of the thoroughfare that stretches from Rego Park to Kew Gardens has been home to apartment buildings, a thriving business district that the Chamber of Commerce calls the "Golden Area".
Despite its historical significance, Kew Gardens lacks any landmark protection. On November 22, 1950, two Long Island Rail Road trains collided in Kew Gardens; the trains collided between Kew Gardens and Jamaica stations, killing 78 people and injuring 363. The crash became the worst railway accident in LIRR history, one of the worst in the history of New York state. In 1964, the neighborhood gained news notoriety when Kitty Genovese was murdered near the Kew Gardens Long Island Railroad station. A New York Times article reported; the story came to represent the anonymity of urban life. The circumstances of the case are disputed to this day, it has been alleged that the critical fact reported by The New York Times that "none of the neighbors responded" was false. The case of Kitty Genovese is an oft-cited example of the bystander effect, the case that spurred research on this social psychological phenomenon. Kew Gardens remains a densely populated residential community, but Kew Gardens is becoming an upper-class residential area, with a mix of one-family homes above the million-dollar range, complex apartments, c | <urn:uuid:4dd41a37-fc70-468f-9d36-1eec522822eb> | CC-MAIN-2019-47 | https://wikivisually.com/wiki/Peter_Braunstein | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667767.6/warc/CC-MAIN-20191114002636-20191114030636-00018.warc.gz | en | 0.962361 | 5,620 | 3.78125 | 4 |
Aims: To evaluate the effectiveness of phenobarbitone as an anticonvulsant in neonates.
Methods: An observational study using video-EEG telemetry. Video-EEG was obtained before treatment was started, for an hour after treatment was given, two hours after treatment was given, and again between 12 and 24 hours after treatment was given. Patients were recruited from all babies who required phenobarbitone (20–40 mg/kg intravenously over 20 minutes) for suspected clinical seizures and had EEG monitoring one hour before and up to 24 hours after the initial dose. An EEG seizure discharge was defined as a sudden repetitive stereotyped discharge lasting for at least 10 seconds. Neonatal status epilepticus was defined as continuous seizure activity for at least 30 minutes. Seizures were categorised as EEG seizure discharges only (electrographic), or as EEG seizure discharges with accompanying clinical manifestations (electroclinical). Surviving babies were assessed at one year using the Griffiths neurodevelopmental score.
Results: Fourteen babies were studied. Four responded to phenobarbitone; these had normal or moderately abnormal EEG background abnormalities and outcome was good. In the other 10 babies electrographic seizures increased after treatment, whereas electroclinical seizures reduced. Three babies were treated with second line anticonvulsants, of whom two responded. One of these had a normal neurodevelopmental score at one year, but the outcome for the remainder of the whole group was poor.
Conclusion: Phenobarbitone is often ineffective as a first line anticonvulsant in neonates with seizures in whom the background EEG is significantly abnormal.
Statistics from Altmetric.com
Phenobarbitone remains the most frequently used first line treatment for neonatal seizures worldwide, in spite of accumulating evidence that it is ineffective in many babies.1 Seizures in the newborn are associated with underlying conditions such as brain haemorrhage, stroke, meningitis, and hypoxic ischaemic encephalopathy. Many studies evaluating the effectiveness of phenobarbitone have evaluated only clinical control, as the electroencephalogram (EEG) was not recorded at the time of seizures.2–4 Recent studies have shown that seizures in the newborn are often clinically silent (or “electrographic”), and the extent of the electrographic seizure burden in the sick baby is often greatly underestimated.5–9 Electrographic seizures cannot be diagnosed without an EEG and even when clinical correlates are present they are often very subtle. Video-EEG has been shown to be the most useful technique available to identify, classify, and quantify neonatal seizures.5,9 Effective seizure control in the neonate, therefore, implies abolition of clinical and electrographic seizures.
EEG monitoring of treated neonatal seizures shows that most current treatments are often ineffective in suppressing abnormal electrical activity.1,7 Phenobarbitone is a potent sedative as well as a powerful anticonvulsant. Our previous work has shown that electrographic seizures were more common in babies who had been treated with phenobarbitone.9 We, and others5 have observed that this drug sedates babies, suppressing clinical manifestations of seizures, but has little effect on the electrographic discharge.
It is still not known whether neonatal seizures, in particular, electrographic seizures are themselves harmful. However, evidence is accumulating regarding the potential detrimental effects of electroclinical and electrographic seizures in neonates.10–12 Animal studies suggest that seizures are deleterious to the development of the immature brain.13–16 Twenty to forty per cent of term neonates who suffer convulsions are subsequently handicapped, whereas in preterm neonates, adverse sequelae occur after seizures in 75–88%.17
The aim of this study was to use video-EEG to evaluate the effectiveness of phenobarbitone as a first line anticonvulsant in the neonate by careful examination of its effects on the duration of both electroclinical and electrographic neonatal seizures.
PATIENTS AND METHODS
From a prospective research study using video-EEG to monitor neonates at high risk of seizures, we included in this analysis only those neonates who fulfilled the following criteria:
Electrographic or electroclinical seizures were present
Monitoring started at least one hour prior to the administration of phenobarbitone
Monitoring was continued for at least 1–2 hours immediately after treatment
Monitoring was either continuous or repeated at regular intervals over subsequent days.
The ethics committee of King's College Hospital approved this study. Written informed parental consent was obtained from the parents of all babies studied.
A loading dose of 20 mg/kg phenobarbitone was administered intravenously over a 15–20 minute period. Further loading doses were administered up to a maximum of 40 mg/kg if seizures proved difficult to control. Serum concentrations of phenobarbitone were measured using the CEDIA homogeneous enzyme immunoassay methodology (Roche Diagnostics, Lewes, UK) in most babies and as close as possible to the time of EEG monitoring. If seizures failed to respond or recurred after phenobarbitone therapy, clonazepam or phenytoin was used.
A Telefactor (Modac or Beehive) video-EEG system was used to record 12 channels of EEG using the 10–20 system of electrode placement modified for neonates (F4–C4, C4–P4, P4–O2, F3–C3, C3–P3, P3–O1, T4–O2, T3–O1). A single channel electro-oculogram was recorded from the upper outer canthus of the right eye to the lower outer canthus of the left eye and the sub-mentalis electromyogram was also recorded on one channel. The remaining two channels were configured to display ECG and respiration via an output from the baby's clinical monitor. A video recording was made of each baby for the entire duration of study. The EEG-polygraphy waveforms were embedded on to the video picture and recorded by a Panasonic video recorder on to conventional videotape.
Each baby was monitored as soon as possible after the first seizure. Recordings were continued until seizure control was obtained and then repeated within 24 hours unless the baby was too unstable to tolerate the initial handling required for EEG monitoring. If seizure control was not obtained the recording was continued until further treatment was administered.
Each videotape was carefully reviewed and analysed for periods of (1) electroclinical seizure and (2) electrographic seizure. Each electroclinical and electrographic seizure was counted and timed. Results were then assigned to one of four time periods as follows:
The hour immediately prior to treatment (T0)
The first full hour after all phenobarbitone had been administered (T1) (20 minutes was allowed for the loading dose of phenobarbitone to be administered)
The second full hour after treatment was completed (T2)
12–24 hours after treatment (T24).
The diagnosis of an electrographic seizure required the evolution of sudden, repetitive, evolving stereotyped forms with a definite beginning, middle, and end.18 A minimal ictal duration of 10 seconds was used for this study. Neonatal status epilepticus was defined as continuous seizure activity for at least 30 minutes or recurrent seizures for more than 50% of the entire recording duration.19 Seizure control was defined as complete elimination of all electrographic seizures. Background activity of the EEG was classified according to the criteria defined in table 1. The main features of this classification have been defined previously.20
All surviving babies had a neurodevelopmental assessment at 1 year using the Amiel-Tison Test,21 the Griffiths developmental scale for babies,22 and a neurological examination. Neurodevelopmental outcome was classified as normal (Amiel-Tison score 0; Griffiths quotient ≥85%) or abnormal (Amiel-Tison score 1 or 2; Griffiths quotient <85%) with minor or major sequelae (cerebral palsy, motor or sensory deficits, and/or epilepsy).
Wilcoxon tests for matched pairs tests were used to compare changes in total seizure duration before and after treatment in the pooled results from all babies.23 Seizures were divided into electrographic and electroclinical for all babies; seizure duration at time T0 was compared with seizure duration at time T1, and seizure duration at time T0 was also compared with T24.
During a three year period, 33 babies had seizures but only 14 met the inclusion criteria for this study and had video-EEG monitoring at least one hour before a loading dose of phenobarbitone was administered.
Table 2 gives clinical information about the babies. Four term babies had hypoxic ischaemic encephalopathy. Three preterm babies had bilateral intracranial haemorrhage. Three babies had seizures caused by metabolic problems (hypoglycaemia in two and kernicterus in one). One baby had meningitis. Two preterm babies had evidence of birth depression and one otherwise healthy term baby had idiopathic seizures.
EEG response to phenobarbitone
Phenobarbitone concentrations were measured in 12 babies 24–48 hours after the first loading dose of phenobarbitone. One baby had a concentration of 46.9 mg/l, which was above the therapeutic range (table 2, case 8). All other babies had concentrations within the therapeutic range. Immediately after phenobarbitone administration, all babies showed a transient attenuation of background activity lasting 10–20 minutes.
Table 3 shows the seizure duration in seconds for each baby in the hour before treatment, the hour after treatment, two hours after treatment, and 12–24 hours later. Three babies received a second loading dose of phenobarbitone the following day during EEG recording. The change in seizure duration following both doses is included for these babies.
Six babies were in neonatal status prior to anticonvulsant administration. Two babies had purely electrographic seizures before treatment, of which one was therapeutically paralysed. The remaining babies had electrographic and electroclinical seizures in the hour preceding treatment.
Seizure duration rather than the number of seizures per hour was used to determine the severity of seizure burden as the number of seizures alone could be misleading. This was particularly evident in babies with neonatal status who often had just one seizure in an hour but this seizure could last for up to an hour in some cases.
Four babies were treated with a second line anticonvulsant before the final EEG was performed because clinical seizure activity had reappeared. Wilcoxon tests for matched pairs revealed no significant change in total seizure duration in all babies when comparing before treatment, one hour after treatment, two hours after treatment, and the next day.
Babies who responded to phenobarbitone therapy
Four of the 14 babies responded to a loading dose of phenobarbitone. Two term babies had focal temporal lobe seizures with apnoea and tonic posturing (table 3, cases 1 and 2). The background EEG in both of these babies showed only mild abnormalities. One baby responded within an hour to a loading dose of phenobarbitone and remained seizure free. The other baby had approximately one seizure per hour prior to phenobarbitone. After phenobarbitone he had no further fits all night. He had one seizure during the EEG the next day but no further seizures were ever seen. Both of these babies were normal at follow up.
One baby with kernicterus caused by glucose 6-phosphate dehydrogenase deficiency, who had focal tonic seizures and a moderately abnormal background EEG, responded to phenobarbitone therapy within one hour (table 3, case 6). He had one further electroclinical seizure and one electrographic seizure after the loading dose of phenobarbitone. He continued seizure free on a maintenance dose of phenobarbitone. At follow up he has a moderate hearing impairment.
One preterm baby with birth depression associated with myopathy had focal short duration electrographic seizures. These were suppressed almost immediately by a loading dose of phenobarbitone and no further fits were ever seen (table 3, case 11). The background EEG showed severe abnormalities. This baby remained ventilator dependent and died three weeks later, after intensive care was withdrawn. Figure 1A shows total seizure duration before treatment, one hour after treatment, and 12–24 hours later for these four babies.
Babies who did not respond to phenobarbitone therapy
The remaining 10 babies did not respond to phenobarbitone therapy. All had moderate to severe background EEG abnormalities. Figure 1B shows the change in total seizure duration before treatment, one hour after treatment, and 12–24 hours later in seven of these babies. Two babies did show a dramatic reduction in seizure burden one hour after treatment; however, by T24 seizures had reappeared. Four babies showed either continuation of clinical seizures or the re-emergence of clinical seizures before the final EEG was performed and were therefore treated with a second line anticonvulsant by T24 (see fig 1C).
Response to a second loading dose of phenobarbitone
At the time of this study it was unit policy to treat only those seizures with accompanying clinical manifestations. However, if electrographic seizures persisted for over 24 hours a second load of phenobarbitone was given, particularly if these electrographic seizures represented neonatal status. Three babies had two separate loading doses of phenobarbitone given during EEG monitoring. In one baby (table 3, case 8) the initial effect of phenobarbitone was to temporarily reduce the seizure burden. This baby was in electroclinical status epilepticus in the hour before phenobarbitone was given. Clinical signs were very subtle and difficult to assess, consisting of intermittent eye blinking and slight limb stiffening. After phenobarbitone, the seizure burden reduced to 1869 seconds in the first hour. Clinically, there appeared to be a dramatic response as these 1869 seconds were pure electrographic seizures. While technically this is still neonatal status (more than 50% of the recording time), it did signify some improvement. By the second hour after treatment, however, the baby had returned to electrographic status for almost 100% of the recording time. Clinically the baby was still thought to be improving as clinical signs of seizures were no longer obvious. In fact, only the clinical manifestation of the seizures had been removed. The second loading dose of phenobarbitone was given after 24 hours but failed to reduce electrographic seizures; seizures could only be reduced 12 hours later by the addition of clonazepam.
A second baby with bilateral intraventricular haemorrhage also had two loading doses of phenobarbitone during the EEG (table 3, case 9). This baby was in neonatal status and conversion of electroclinical seizures to electrographic seizures was also seen. The first load of phenobarbitone did not have any effect on seizure burden for two hours. After this time seizures became purely electrographic. The EEG the next day, 12 hours later, showed a reduced seizure burden but the re-emergence of electroclinical seizures. At this point a second dose of phenobarbitone was given but failed to have any further effect on seizure burden. Finally, the third baby who received two doses of phenobarbitone failed to respond. Electroclinical seizures were converted to electrographic seizures following the first dose of phenobarbitone and the second dose of phenobarbitone did not have any further effect (table 3, case 10). In these three babies it was concluded that the addition of a second load of phenobarbitone to control seizures that had failed to respond to the initial load did not produce any further benefits.
Measuring total seizure duration alone does not fully explain the apparent clinical reduction in seizure burden seen after treatment in this study. Total electrographic and electroclinical seizure duration were measured separately in each baby. Electroclinical seizures were more common before treatment. One hour after treatment, electroclinical seizures reduced significantly (p = 0.001, median change −158 seconds, 95% CI: −1047 to 0). By time T24 electroclinical seizures were still significantly reduced (p = 0.030, median change −573, 95% CI: −3098 to 0). There was a trend for electrographic seizures to increase in association with electroclinical seizure reduction, but the change was not significant. Figure 2 shows the results from this analysis in seven of the 10 babies who did not respond to phenobarbitone treatment and who did not receive a second line anticonvulsant. It is clear from individual results, however, that in most babies electroclinical seizures reduced and electrographic seizures increased after phenobarbitone. In other words, following phenobarbitone administration, the clinical component of the seizures reduced while electrographic seizures continued (electroclinical dissociation).
At follow up, in this group of babies with electrographic and/or electroclinical seizures, five had died, three had severe neurological impairment, four had mild to moderate impairment, and two were normal. All babies with a severely abnormal EEG (severely depressed trace or neonatal status), died, or had severe neurological impairment (table 2). Most babies who did not respond to phenobarbitone alone either died or had neurological abnormalities.
Using careful analysis of video-EEG we have shown that in the majority of neonates phenobarbitone administration is followed by a decrease in clinical manifestations of seizures while electrographic seizures continue. Phenobarbitone was only effective in 29% of babies. In our experience, these were babies with normal background EEGs or mild to moderate background abnormalities and relatively low seizure burden. Response to a loading dose of 20 mg/kg phenobarbitone was very rapid in this group, who had a good prognosis. Three of the four babies who responded were term and had metabolic abnormalities or seizures of unknown cause. The fourth responder was a preterm baby with birth depression and myopathy. He had short duration occipital electrographic seizures.
Babies who did not respond to phenobarbitone had abnormal background EEGs and over half were in neonatal status prior to the commencement of therapy. Three babies were given a further loading dose of phenobarbitone during EEG monitoring after the first dose failed to control seizures. Gal and colleagues4 achieved clinical control in up to two thirds of babies given a loading dose of 40 mg/kg. The EEG was not monitored so it was not possible to assess whether phenobarbitone merely reduced the clinical component of the seizures. No further reduction in electrographic seizures was seen following the additional dose of phenobarbitone given to three babies in our EEG study. Four of the 10 babies who did not respond to phenobarbitone therapy were treated with a second line anticonvulsant; seizures were abolished in a further two, one of whom had a normal neurodevelopmental outcome.
Phenobarbitone has been used as first line treatment for neonatal seizures worldwide for over 30 years. Despite the fact that babies with seizures have a poor outcome,9,12 there have been no formal trials to develop a more effective treatment strategy for seizure control. Until recently, electrographic seizures were not considered harmful to the developing brain, and there was no pressure to treat to electrical quiescence.12 If the clinical manifestations accompanying a seizure discharge diminished or disappeared following anticonvulsant administration, the treatment was deemed a success. Our results show that electrographic seizures continue in this situation. Of course, babies cannot communicate in the same way as adults, meaning that aura and sensory seizure manifestations cannot be measured at all. In babies who show electroclinical dissociation after phenobarbitone treatment, all we can say is that the previous clinical correlates were abolished.
This study shows the importance of EEG monitoring as a tool with which to measure seizure control in the neonate. The frequency with which the neonate shows electroclinical dissociation makes monitoring of treatment using clinical measures alone difficult to justify, given the accumulating evidence that electrographic seizures are equally important and have adverse effects on the developing brain.
The number of babies studied was small because we could only monitor those babies in whom it was possible to attach electrodes and eliminate interference quickly and where it was possible to record for one hour prior to therapy. These babies were recruited from a group of high risk babies; in many, seizures were not suspected at the time of monitoring or clinical seizure activity was subtle. In some babies there was doubt about the nature of abnormal movements so treatment was not commenced until the EEG confirmed that the abnormal movements were seizures.
To our knowledge this is the first study to use video-EEG to quantify the change in both electroclinical and electrographic seizure before and after phenobarbitone treatment in a group of neonates. We conclude from our results that phenobarbitone is ineffective as a first line anticonvulsant treatment in babies with severe seizures in whom the background EEG is also severely abnormal. In these babies, the addition of a second dose of phenobarbitone up to the maximum dose of 40 mg/kg does not appear to have any additional benefits. In our view, babies who fail to respond to phenobarbitone within two hours should be treated with a second line anticonvulsant. Painter and colleagues1 have shown that phenytoin is equally ineffective in controlling seizures as phenobarbitone. We agree wholeheartedly with these workers that there is an urgent need to develop a safe and effective treatment strategy for neonatal seizures.
G Boylan and G Wilson are supported by the National Lotteries Charities Board administered by the Fund for Epilepsy (grant RB214431). This project has also been supported by an equipment grant from the Bernard Sunley Foundation. We are very grateful for the continued support of nursing and medical staff at the NICU, King's College Hospital, and we would like to thank all parents who gave permission for us to study their babies. Without them, progress would be impossible.
If you wish to reuse any or all of this article please use the link below which will take you to the Copyright Clearance Center’s RightsLink service. You will be able to get a quick price and instant permission to reuse the content in many different ways. | <urn:uuid:3c19deb5-fdfd-48f2-b9a0-4b5f29b26a98> | CC-MAIN-2019-47 | https://fn.bmj.com/content/86/3/F165?ijkey=a82c3b74d17582ed01545ed57ba134375ce8f672&keytype2=tf_ipsecsha | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669967.80/warc/CC-MAIN-20191119015704-20191119043704-00060.warc.gz | en | 0.957343 | 4,652 | 2.5625 | 3 |
Humans will inevitably become extinct due to environmental breakdown, but we have the power to design ourselves a "beautiful ending", says Paola Antonelli, who will next week open a major exhibition in Milan called Broken Nature.
Broken Nature: Design Takes on Human Survival, the XXII Triennale di Milano, will bring together 120 architecture and design projects from the last 30 years that explore humankind's fractured relationship with the planet.
The curator hopes it will make people aware of the "crisis in our humanity" – that our connection to our environment has been completely compromised, amidst an increase in extreme weather, wildfires and other environmental disasters around the globe.
"We will become extinct; extinction is normal, it's natural," she explained. "We don't have the power to stop our extinction but we have the power to make it count."
"Leave a legacy that means something"
Antonelli, who is also senior curator of architecture and design at MoMA in New York, insists that she's not trying to shock anyone. She instead wants to encourage designers, and everyone else, to make the most of the time they have left on the planet.
She claims that planning for the legacy of the human race is like planning for a person leaving a job, or for an elderly family member who knows they are going to die. Only by preparing in this way will the next dominant species remember humankind with respect.
"I believe that our best chance is to design our own really elegant extension, so that we will leave a legacy that means something, and remains, in the future," she said.
"That means taking a very big leap in our perceptive abilities," she explained. "It means thinking really long-term, it means thinking at scale, it means really trying to understand that we are only one species on earth and one species in the universe."
Responsible design shouldn't mean sacrifice
The exhibition will include important milestones in restorative design, such as research project Italian Limes, which looks at shifting national borders as ice melts in the Alps, and SUN+, which seeks design alternatives to sun exposure.
Antonelli has also commissioned new projects that explore ways design could put humans back on a better path. Neri Oxman's contribution will see melanin – the pigment that produces skin tone – applied to architecture, while Formafantasma looked at new ways of upcycling waste.
The curator said changing our thinking requires us to think more deeply about design, and treat it with the same rigour applied to science. She said that, for too long, environmentally-friendly design has been seen as inferior.
"We always feel that for design to be ethical or responsible it has to sacrifice something. Usually the something that is sacrificed is the sensuality or the formal elegance. But it's not true," Antonelli explained.
"It's about design that has to do with the environment, with wasting less, with recycling more, with repairing things better and also with connecting to other human beings and to other species better," she said.
"Citizens are the true powerful agents in this change"
With the Milanese well-versed in design, Antonelli hopes the exhibition will be seen by more than just the design community. She hopes to engage a wider audience through the public programme, online platform and a catalogue of related essays.
"I believe that citizens are the true powerful agents in this change that needs to happen," explained Antonelli. "This is an exhibition for citizens. Of course it's for the design community. I always want things to be for the design community of course, but I also want it to be for the citizens."
Antonelli sees her role as giving people "stronger critical tools" that can be applied to "what they buy, what they use, how long they keep things for, when they throw them away". She thinks design can be "a Trojan horse" that gets into the mindset of a younger generation and provokes change.
"Without convincing children and their parents, and making it something that is asked of politicians and of corporations from the ground up, we will never go anywhere," she argued.
Politicians are ignoring environmental crisis
One thing Antonelli is keen to point out is that Broken Nature is not the only exhibition examining environmental issues.
Other examples include the Cooper Hewitt in New York's Nature exhibition that runs simultaneously at Cube design museum in Kerkrade, Netherlands, and the Eco-Visionaries show at the Royal Academy in London later this year.
Antonelli is frustrated that political leaders have been slow to respond.
"We're all working on this, artists, curators, writers, we're all trying our best. The only ones that are completely deaf are the powers that be that are supposed to legislate and help us put things in motion," she said.
Despite that, she believes that humankind is getting close to accepting its imminent extinction, as more exhibitions, television shows and news broadcasts start to focus on it.
"This is the beginning of people talking about the reality in a positive way," she added. "My ambition is for that statement to become normal."
The XXII Triennale di Milano runs from 1 March to 1 September 2019 at the Triennale Milano gallery. The Broken Nature exhibition will be shown alongside 27 installations from countries and institutions, including an entry from the European Union.
Read on for an edited transcript of the interview with Paola Antonelli:
Augusta Pownall: In the Broken Nature exhibition, there will 120 or so design and architecture projects from the last two or three decades. What argument are you making with these?
Paola Antonelli: The argument is that of restorative design, and reparations. Restorative design is very wide-ranging and I'm doing that on purpose as I'm hoping that people will come out of the exhibition with a feeling in their stomach rather than notions that they could write down in a notebook.
The idea goes back to the notion of restaurants. We always feel that for design to be ethical or responsible it has to sacrifice something. Usually the something that is sacrificed is the sensuality and the formal elegance, but it's not true. Restaurants were born in France in the 18th-century as somewhere that you could eat food that was good for your health, like bouillon, but also delightful and they rapidly became places for conviviality and pleasure. You don't have to sacrifice pleasure, delight, humanity, sensuality in order to be ethical and responsible and have a sense of both our position in the universe and also what we're doing to nature and other species. This is what the exhibition is about.
Augusta Pownall: Are there other themes that get pulled into the thesis of the exhibition?
Paola Antonelli: It's about design that has to do with the environment, with wasting less, with recycling more, with repairing things better and also with connecting to other human beings and to other species better. You could call it holistic even though that's a worn out term, but in truth it's about connecting, and it ends with empathy.
I am convinced because it's part of nature that things end and that species become extinct
Once again, when you have so much going wrong in the world, from injustice to poverty to disparity to the struggle for human rights, and also everything that's happening with the environment, there's various ways to go. Some of our leaders are demonstrating a way to go that is about entrenchment, and self-centredness and selfishness. The other way to go is the opposite, it's empathy and trying to connect with others in order to do better together. It really is about generosity of spirit.
What I always say when I talk about the exhibition is I sound so hopeful and optimistic but I do believe that we will become extinct. I believe that our best chance is to design our own really elegant extension, so that we will leave a legacy that means something, and remains, in the future. Because we will become extinct; extinction is normal, it's natural.
Augusta Pownall: You're sure that extinction is where we're heading. How should the design world react to that reality?
Paola Antonelli: I am convinced because it's part of nature that things end and that species become extinct. It's not design that will react in any certain way, it's human beings, the designers, more than design itself. So once again this exhibition is what designers can do, but there will be exhibitions and there have been several, that will be about what artists can do. Designers are human beings so their attitude is the same as it should be for others. Don't panic, but let's see how we can design a better legacy.
Augusta Pownall: It's a bold statement. What has the reaction been?
Paola Antonelli: The fact that we will become extinct is being advanced by so many different people, scientists amongst others. If anything the bold statement is the hopeful one, that we can design a beautiful ending.
Some people are taken aback but very few people tell me that I'm being pessimistic. That might be their first reaction but then they think about it further. So in a way I'm not saying anything new in that part of the phrase, maybe the second part of the phrase is the one that takes people aback, because that's where we have the power. We don't have the power to stop our extinction but we have the power to make it count.
Augusta Pownall: What's your vision of a beautiful ending?
Paola Antonelli: I can go completely into science fiction, but I just see it as the beautiful death of a human being, surrounded by family, in a serene way. Understanding that one's life ending means that someone else's life is beginning or continues. I would put it at the scale of the life of a human being. Someone's grandmother said something beautiful once to me. So many of us think that it's not fair that you die and life continues. She said to me, just think of it as you being at a great party and you go upstairs and take a nap. The whole universe is having a party and we're taking a nap, and hopefully the people downstairs at the party will miss us. Once again, perspective.
My ambition is for this statement to become normal. My ambition is not to shock
So I would see our ending the way we see an individual's ending. Serenity, a big family, good memories and having had a positive influence on humanity. People will have reactions of all kinds to what I'm saying and that's ok, because this is the beginning of people talking about the reality in a positive way. I'm just going to be a jackhammer and hopefully people will make it become normal. My ambition is for this statement to become normal. My ambition is not to shock. I think this will happen with this and other exhibitions. And also on television and during news broadcasts. There's just a groundswell of people that think this and want to share their thoughts.
Education and awareness – I think this exhibition is really about that.
Augusta Pownall: Are we getting to the tipping point, where people will start to see extinction as normal?
Paola Antonelli: I think so. I don't know if I can speak in such general terms. What I see is a kind of denial on the part of many political powers and awareness by many others and by citizens. Sometimes right now, even science is doubted. If we are in the situation in which even what scientists say is denied, we are in grave danger. But I think we're going to reach that groundswell, really soon, I really hope so. I'm trying my little bit.
Augusta Pownall: So what can we do to design for our extinction?
Paola Antonelli: The exhibition is one small part of a change of culture that should happen. I am never presuming that we will have the answers for everything, but it would already be very successful if we were able to at least point out something which is very necessary, and that's to think of our own legacy. That's what always happens when an editor-in-chief is leaving, or a person knows when he or she will die, we think of legacy. So we should think of legacy also for the human race.
That means taking a very big leap in our perceptive abilities. It means thinking really long-term, it means thinking at scale, it means really trying to understand that we are only one species on earth and one species in the universe. And very simply, as if we were putting together a beautiful play or a beautiful piece of art or design, we should really make it count and make it memorable and meaningful.
The reference we always use is Powers of Ten, the Charles and Ray Eames videos. If we were to really go up so many powers of 10 and another species in the future were able to zoom down, what would we want them to find?
Augusta Pownall: Are there any particular parts of the exhibition that point towards what we can do to design for the end?
Paola Antonelli: Nothing in particular, because I didn't want to have much speculation or science fiction. So everything is in that direction but nothing is grandly or spectacularly about the ending. For instance, Kelly Jazvac's Plastiglomerate and the fossils of the future, that's almost a negative example of something we don't want to leave behind. Or when instead you look at the Alma Observatory's Music of a Dying Star, that offers the sense of the long-term and perspective.
Clearly there's a crisis in humanity, in the sense of what it means to be human
Everything is about prepping ourselves for it, and nothing is about what we should do. Because that wouldn't be about design, that would be more literature and art. Maybe some people will do that, but I think it's such a daunting idea, that of trying to portray our ending, so I'd like to see who's going to do that. I don't think it can be prevented but it definitely can be managed.
Augusta Pownall: What do you think are the pressing problems that designers should addressing?
Paola Antonelli: Designers are about life and about the world and therefore they're very much in the present and also directed towards the future, if they're doing their job. The present is, and we hear it every single day, about this crisis of understanding of our position in the world and in the universe, a crisis that has to do with the environment and also with social bonds. It really is amazing what's going on politically in our countries, in all my countries Italy and the US and in the UK and in many other places. Clearly there's a crisis in humanity, in the sense of what it means to be human in connection with other humans and in connection with the universe. Of course I'm taking it at a very large scale, an almost cosmic perspective, but that translates in everything from cosmic perspectives to everyday lives. That's the thesis, the underlying theme of the exhibition.
Augusta Pownall: Do you think that design should be accorded as much respect as science?
Paola Antonelli: It's not about demeaning science but rather about elevating design. Science has been able to create this great mystique about itself. A very rightful mystique over the centuries about exactitude and worthiness. Of course now it's been put in discussion by the political powers that be that try to undermine that kind of faith and trust. So science has been able to build faith and trust in itself.
Design is very worthy of trust in most cases. Of course design can go wrong, just like science can go wrong and we've seen it many times. But it's never been able to project the gravitas and the kind of peer pressure that science has created for itself. People care about design a lot but they are not trained to seek design as a fundamental ingredient of their cultural makeup.
Augusta Pownall: So do you see this exhibition as a call to arms for designers, or is it more for the general public?
Paola Antonelli: This is an exhibition for citizens. Of course it's for the design community. I always want things to be for the design community, but I also want it to be for the citizens. I want this exhibition to really be inspiring for citizens so that people can leave it having a sense of what they can do in their everyday life.
I'm hoping that people that are not necessarily in the design world will go there, appreciate design as always and leave with a seed in their mind of what they can do in their real life to have a different attitude towards the environment, towards other species, towards our subsistence on planet Earth, towards all the important matters that we read about in the press all the time but sometimes don't get into our stomach. I really think that design can be a Trojan horse for people to really understand. I also believe that citizens are the true powerful agents in this change that needs to happen. Governments and corporations and institutions say and legislate, but citizens are the ones that can really put pressure on.
Augusta Pownall: Is there anything that people coming to the exhibition should be thinking about when it comes to alleviating the damage we have caused to the environment?
Paola Antonelli: Just thinking of it would be enough. I would love for people to leave the exhibition with even more of a sense of the aberration that single-use plastic is, but I'm not only talking about straws that have become the pet peeve, I'm talking about so much more.
In general, single-use plastics should be avoided at all costs. Not plastics, because plastics have some advantages, it's just about being mindful of every single thing. That is design. One thing that curators and people like me try to do is to show people what's behind objects, because we're used to taking objects at face value. I have in front of me a pencil. It's wood and inside is graphite, and just understanding where it comes from can give you more pleasure in understanding reality and more knowledge and awareness of what you can do to avoid wasting.
I cannot say that I'm optimistic or positive, I'm just doing something
That's my role, to give people stronger critical tools to act on the part of life that I have some say over, which is design, which means what they buy, what they use, how long they keep things for, when they throw them away. Another thing is the fast-fashion campaign, I mean it's horrible. There are many examples, but that's where my field of action is.
Augusta Pownall: You mentioned that you want the the exhibition to have a positive outlook, even if it's not always saying hugely positive things about humankind. Is that possible, given the horrifying things that we're hearing about our climate?
Paola Antonelli: I'm not optimistic per se, I'm just trying to energise. I believe that citizens are the only ones that can change things. I am hoping that efforts like mine... and mine is just one, luckily there are so many curators working on this, will make a difference.
Cooper Hewitt has just been doing an exhibition about nature, the Serpentine just hired a curator for these matters. There are so many people working on this. We're all working on this, artists, curators, writers, we're all trying our best. The only ones that are completely deaf are the powers that be that are supposed to legislate and help us put things in motion.
I cannot say that I'm optimistic or positive, I'm just doing something. I believe that it's a very "design" attitude of knowing your constraints and trying to make the best of those constraints. You can say that art is spilling over those constraints, or should, and design does too, but I believe we're all trying to sensitise and create a reaction of which we will be a part. Without convincing children and their parents, and making it something that is asked of politicians and of corporations from the ground up, we will never get anywhere. | <urn:uuid:8cfc1e08-dcf9-438f-951a-4afd3f7dc978> | CC-MAIN-2019-47 | https://www.dezeen.com/2019/02/22/paola-antonelli-extinction-milan-triennale-broken-nature-exhibition/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670987.78/warc/CC-MAIN-20191121204227-20191121232227-00497.warc.gz | en | 0.978332 | 4,216 | 2.9375 | 3 |
What about TDD and unit testing?As promised, let’s talk about how Test-Driven Development (TDD) fits into the over testing regime that we have been presenting.
TDD is a powerful technique for achieving high coverage unit tests and code with low coupling. TDD is a complex issue and there is quite a bit of controversy around it, so we will not take sides here. TDD is absolutely not a mandatory or necessary practice of Agile, but many very smart people swear by it, so be your own judge. It is even possible that TDD works for certain personality types and styles of thinking: TDD is an inherently inductive process, whereas top-down test design is a deductive process, and so TDD possibly reflects a way of thinking that is natural for its proponents. (See also this discussion of top-down versus bottom-up design. The book The Cathedral and the Bazaar also discusses the tension between top-down and bottom up design and development.)
The controversy between top-down and bottom-up approaches – and the personalities that are attracted to each – might even have analogy to the well known division in the sciences between those who are theorists and those who are experimentalists at heart: these two camps never seem to see eye-to-eye, but they know that they need each other. Thus, instead of getting into the TDD or no-TDD debate, we will merely explain TDD’s place in an overall testing approach, and a few things to consider when deciding whether to use it or not. Most importantly: do not let proponents of TDD convince you that TDD is a necessary Agile practice, or that teams that do not use TDD are inherently less “advanced”. These assertions are not true: TDD is a powerful design and test strategy, but there are other competing strategies (e.g., object-oriented analysis and design, functional design – both of which are completely compatible with Agile – and many others).
TDD operates at a unit test level (i.e., on individual code units or methods) and does not replace acceptance tests (including paradigms such as acceptance test-driven development (ATDD) and behavior-driven development (BDD)), which operate at a feature (aka story, or scenario) level. Unit testing is what most programmers do when they write their own tests for the methods and components that they write – regardless of whether or not they use TDD. Unit testing is also well suited for testing “failure mode” requirements such as that bad data should not crash the system. Used in conjunction with TDD and a focus on failure modes, failure mode issues can be resolved far sooner which is certainly very Agile.
Acceptance level testing is still critically important. Unit testing cannot replace acceptance tests, because one of the most important reasons for acceptance tests is to check that the developer’s understanding of the requirements is correct. If the developer misunderstands the requirements, and the developer writes the tests, then the tests will reflect that misunderstanding, yet the tests will pass! Separate people need to write a story’s acceptance tests and the story’s implementation. Therefore, TDD is a technique for improving test coverage and improving certain code attributes, and many see it as an approach to design – but it is not a replacement for acceptance tests.
One type of unit level testing that is very important,
regardless of TDD, is interface level testing.
One type of unit level testing that is very important, regardless of TDD, is interface level testing. Complex systems usually have tiers or subsystems, and it is very valuable to have high coverage test suites at these interfaces. In the TDD world, such an interface is nothing special: it is merely a unit test on those components. In a non-TDD world, it is viewed as an interface regression test and one specifically plans for it. For example, a REST based Web service defines a set of “endpoints” that are essentially remote functions, and those functions define a kind of façade interface for access to the server application. There should be a comprehensive test suite at that interface, even if there are user level (e.g., browser based, using Selenium) acceptance tests. The reason is that the REST interface is a reusable interface in its own right, and is used by many developers, so changes to it have a major impact. Leaving it to the user level tests to detect changes makes it difficult to identify the source of an error. In this scenario mocking is often the most advantageous way of unit level testing interfaces.
Another, much more important reason to have high coverage tests on each major interface is that the user level tests might not exercise the full range of functionality of the REST interface – but the REST level tests should, so that future changes to the user level code will not access new parts of the REST interface that have not been tested yet – long after the REST code has been written. The REST interface can also be tested much more efficiently, without having to run them in a browser. In fact, the performance tests will likely be performed using that interface instead of the user level interface.
Detection of change impact, at a component level, is in fact one of the arguments for Unit Testing (and TDD): if a change causes a test to fail, the test is right at the component that is failing. That helps to narrow down the impact of changes. The cost of that, however, is maintaining a large set of tests, which introduce a kind of impedance to change. Be your own judge on the tradeoff.
TDD can also impact the group process: it is generally not feasible in a shared code ownership environment to have some people using TDD and others not. Thus, TDD really needs to be a team-level decision.
It is possible that the preference (or not) for TDD
should be a criteria in assembling teams.
Legacy code maintenance is often a leading challenge when it comes to unit testing. TDD helps greatly to identify the impact when changes are made to an existing code base, but at the cost of maintaining a large body of tests, which can impede refactoring. Another example of a real challenge to utilizing TDD techniques is model-based development (see also this MathWorks summary) – often used today for the design of real time software e.g., in embedded controllers, using tools such as Simulink. These techniques are used because of the extremely high reliability of the generated code. There are ways of applying TDD in this setting (such as writing .m scripts for Simulink tests), but that is not a widespread practice. Acceptance Test Driven Development (ATDD) is potentially a better approach when using model-based development.
Finally, TDD seems to favor certain types of programmers over others. By adopting TDD, you might enable some of your team to be more effective, but you might also hinder others. It is therefore possible that the preference (or not) for TDD should be a criteria in assembling teams. Making an organization-wide decision, however, might be a mistake, unless you intend to exclude deductive thinkers from all of your programming teams.
The jury is still out on these issues, so you will have to use your own judgment: just be sure to allow for the fact that people think and work differently – from you. Do not presume that everyone thinks the way that you do. Do not presume that if you have found TDD to be effective (or not), that everyone else will find the same thing after trying it for long enough.
Some other types of testingThere are still many types of testing that we have not covered! And all are applicable to Agile teams!
Disaster RecoveryIn our EFT management portal example (see Part 1), the system needed to be highly secure and reliable, comply with numerous laws, and our development process must satisfy Sarbanes Oxley laws and the information demands of an intrusive oversight group. Most likely, there is also a “continuity” or “disaster recovery” requirement, in which case there will have to be an entire repeatable test strategy for simulating a disaster with failover to another set of systems in another data center or another cloud provider. That is one case where a detailed test plan is needed: for testing disaster recovery. However, one could argue that such a plan could be developed incrementally, and tried in successive pieces, instead of all at once.
SecurityNowadays, security is increasingly being addressed by enumerating “controls” according to a security control framework such as NIST FISMA Security Controls. For government systems, this is mandatory. This used to be executed in a very document-centric way, but increasingly it is becoming more real time, with security specialists working with teams on a frequent basis – e.g., once per iteration – to review controls. Most of the controls pertain to tools and infrastructure, and can be addressed through adding scanning to some of the CI/CD pipeline scripts, to be run a few times per iteration. These scans check that the OSes are hardened and that major applications such as Apache are hardened. In addition, the security folks will want to verify that the third party components in the project binary artifact repository (Nexus, etc.) are “approved” – that is, they have been reviewed by security experts, are up to date, and do not pose a risk. All this can be done using tools without knowing much about how the application actually works.
Unfortunately, we cannot test for
careful secure design: we can only build it in.
However, some controls pertain to application design and data design. These are the hard ones. Again, for Morticia’s website (see Part 1), we don’t need to worry about that. But for the other end of the spectrum, where we know that we are a juicy target for an expert level attack – such as what occurred for Target, Home Depot, and Sony Pictures in the past 12 months – we have no choice but to assume that very smart hackers will make protracted attempts to find mistakes in our system or cause our users to make mistakes that enable the hackers to get in. To protect against that, scanning tools are merely a first step – a baby step. The only things that really work are a combination of,
1. Careful secure design.
2. Active monitoring (intrusion detection).
Unfortunately, we cannot test for careful secure design: we can only build it in. To do that, we as developers need to know secure design patterns – compartmentalization, least privilege, privileged context, and so on. For monitoring, all large organizations have monitoring in place, but they need the development team’s help in identifying what kinds of traffic are normal and what are not normal – especially at points of interconnection to third party or partner systems. Teams should conduct threat modeling, and in the process identify the traffic patterns that are normal and those that might signify an attack. This information should be passed to the network operations team. Attacks cannot be prevented, but they can often be stopped while they are in progress – before damage is done. To do that, the network operations team needs to know what inter-system traffic patterns should be considered suspicious.
Compliance with lawsCompliance with laws is a matter of decomposing the legal requirements and handling them like any other – functional or non-functional, depending on the requirement. However, while it is important for all requirements to be traceable (identifiable through acceptance criteria), it is absolutely crucial for legal compliance requirements. Otherwise, there is no way to “demonstrate” compliance, and no way to prove that sufficient diligence was applied in attempting to comply.
Performance testingThere are many facets to performance testing. If you are doing performance testing at all, four main scenario types are generally universal:
1. Normal usage profile.
2. Spike profile.
3. Break and “soak” test.
4. Ad-hoc tests.
Normal usage includes low and high load periods: the goal is to simulate the load that is expected to occur over the course of normal usage throughout the year. Thus, a normal usage profile will include the expected peak period loads. One can usually run normal load profile tests for, say, an hour – this is not long duration testing. It is also not up-time testing.
The purpose of spike testing is to see what happens if there is an unusual load transient: does the system slow down gracefully, and recover quickly after the transient is over? Spike testing generally consists of running an average load profile but overlaying a “spike” for a brief duration, and seeing what happens during and after the spike.
Break testing is seeing what happens when the load is progressively increased until the system fails. Does it fail gracefully? This is a failure mode, and will be discussed further below. Soak testing is similar, in that lots of load is generated for a long time, to see if the system starts to degrade in some way.
The last category, “ad-hoc tests”, are tests that are run by the developers in order to examine the traffic between internal system interfaces, and how that traffic changes under load. E.g., traffic between two components might increase but traffic between two others might not – indicating a possible bottleneck between the first two. Performing these tests requires intimate knowledge of the system’s design and intended behavior, and these tests are usually not left in place. However, these tests often result in monitors being designed to permanently monitor the system’s internal operation.
In an Agile setting, performance tests are best run in a separate performance testing environment, on a schedule, e.g., daily. This ensures that the results are available every day as code changes, and that the tests do not disrupt other kinds of testing. Cloud environments are perfect for load testing, which might require multiple load generation machines to generate sufficient load. Performance testing is usually implemented as a Jenkins task that runs the tests on schedule.
Testing for resiliencyAcceptance criteria are usually “happy path”: that is, if the system does what is required, then the test passes. Often a few “user error” paths are thrown in. But what should happen if something goes wrong due to input that is not expected, or due to an internal error – perhaps a transient error – of some kind? If the entire system crashes when the user enters invalid input or the network connection drops, that is probably not acceptable.
Failure modes are extremely important to explicitly test for. For example, suppose Morticia’s website has a requirement,
Given that I am perusing the product catalog,
When I click on a product,
Then the product is added to my shopping cart.
But what happens if I double-click on a product? What happens if I click on a product, but then hit the Back button in the browser? What happens if someone else clicks on that product at the same instant, causing it to be out of stock? You get the idea.
Generally, there are two ways to address this: on a feature/action basis, and on a system/component basis. The feature oriented approach is where outcomes based story design comes into play: thinking through the failure modes when writing the story. For example, for each acceptance scenario, think of as many situations that you can about what might go wrong. Then phrase these as additional acceptance criteria. You can nest the criteria if you like: languages like Gherkin support scenario nesting and parameterized tables to help you decompose acceptance criteria into hierarchical paths of functionality.
Testing for resiliency on a component basis is more technical. The test strategy should include intentional disruptions to the physical systems with systems and applications highly instrumented, to test that persistent data is not corrupted and that failover occurs properly with minimal loss of service and continuing compliance with SLAs. Memory leaks should be watched for by running the system for a long time under load. Artifacts such as exceptions written to logs should be examined and the accumulation of temporary files should be watched for. If things are happening that are not understood, the application is probably not ready for release. Applying Agile values and principles to this, this type of testing should be developed from the outset, and progressively made more and more thorough.
Concurrency testing is a special case of functional testing.
Driving the system to failure is very important for high reliability systems. The intention is to ensure that the system fails gracefully: that it fails gradually – not catastrophically – and that there is no loss or corruption of persistent data and no loss of messages that are promised to be durable. Transactional databases and durable messaging systems are designed for this, but many web applications do not perform their transactions correctly (multiple transactions in one user action) and are vulnerable to inconsistency if a user action only partially completes. Tests should therefore check that as the system fails under load, nothing “crashes”, and each simulated update request that failed does not leave artifacts in the databases or file system, and as the system recovers, requests that completed as the system was failing do not get performed twice.
Concurrency testing is a special case of functional testing, but it is often overlooked. When I (Cliff) was CTO of Digital Focus (acquired by Command Information in 2006) we used to put our apps in our performance lab when the developers thought that the apps were done. (We should have done it sooner.) We generally started to see new kinds of failures at around ten concurrent simulated users, and then a whole new set of failures at around 100 concurrent users. The first group – at around ten users – generally were concurrency errors. The second group had to do with infrastructure: TCP/IP settings and firewalls.
Regarding the first group, these are of the kind in which, say, a husband and wife have a joint bank account, and the husband accesses the account from home and the wife accesses it from her office, and they both update their account info at the same time. What happens? Does the last one win – with the other one oblivious that his or her changes were lost? Do the changes get combined? Is there an error? These conditions need to be tested for, because these things will happen under high volume use, and they will result in customer unhappiness and customer support calls. There need to be test scenarios written, with acceptance criteria, for all these kinds of failure mode scenarios. These should be run on a regular basis, using a simulated light load of tens of users, intentionally inducing these kinds of scenarios. This is not performance testing however: it is functional testing, done with concurrent usage.
Is all this in the Definition of Done?The definition of done (DoD) is an important Agile construct in which a team defines what it means for a story – any story – to be considered done. Thus, the DoD is inherently a story level construct. That is, it is for acceptance criteria that are written for stories. DoD is not applicable to system-wide acceptance criteria, such as performance criteria, security criteria, general legal compliance criteria that might apply to the implementation of many stories, etc.
It is not practical to treat every type of requirement as part of the DoD. For example, if one has to prove performance criteria for each story, then the team could not mark off stories as complete until the performance tests are run, and each and every story would have to have its performance measured – something that is generally not necessary: typically a representative set of user actions are simulated to create a performance load. Thus, non-functional requirements or system-wide requirements are best not included in the DoD. This is shown in Figure 1 of Part 1, where a story is checked “done” after the story has passed its story level acceptance tests, has not broken any integration tests, and the user has tried the story in a demo environment and agrees that the acceptance criteria have been met. Ideally, this happens during an iteration – not at the end – otherwise, nothing gets marked as “done” until the end of an iteration. Thus, marking a story as “done” is tentative because that decision can be rejected by the Product Owner during the iteration review, even if a user has tried the story and thought that it was done. Remember that the Product Owner represents many stakeholders – not just the users.
Another technique we use with larger sets of teams (think portfolio) – and especially when there are downstream types of testing (e.g., hardware integration testing) – is definition of ready (DoR). The state of “ready” is a precursor to the state of being “done”. This helps to ensure that the DoD – which might include complex forms of testing – can be met by the team. The team first ensures that a story meets the DoR. These are other criteria such as that the story has acceptance criteria (DoD would say the acceptance criteria have been met), certain analysis have be completed, etc. – just enough so that development and testing have a much higher likelihood of being completed within an iteration. This works with teams and programs of all sizes. We do find that for larger programs, the DoR is almost always very useful. Defining a DoR with the Product Owner is also a great way of engaging the Product Owner on non-functional issues to increase their understanding of those issues and ensure the team is being fed high quality stories.
End Of Part 3Next time in Part 4 we will connect the dots on the organizational issues of all this! | <urn:uuid:2805cd79-d926-4b32-a207-8598a5f02ac0> | CC-MAIN-2019-47 | http://www.transition2agile.com/2014/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668954.85/warc/CC-MAIN-20191117115233-20191117143233-00457.warc.gz | en | 0.953545 | 4,491 | 2.71875 | 3 |
Environmental, Health & Safety:
514-848-2424, ext. 4877
Chemical safety program
Chemical safety has many scientific and technical components. Certain chemicals can be harmful to our health and the environment; it is therefore important to ensure the safe management of chemical materials with respect to use, disposal, storage, acquisition, inventory control, and regulatory compliance.
EHS personnel provide training and advice to faculty, staff and students to ensure safe chemical practices. Our goal is to improve chemical users' knowledge of chemical safety and to ensure regulatory compliance.
For more information, refer to the chemical safety policy, procedures, guidelines, forms and manuals, or contact Environmental Health & Safety at ext. 4877.
WHMIS stands for Workplace Hazardous Materials Information System. It is a Canadian system, implemented in 1988, that ensures worker protection through education and prevention. The objectives of the program are to train employees to:
- protect themselves from hazards
- respond to emergency situations
It also implements standardized labeling of controlled products and makes material safety data sheets (MSDSs) available. A Controlled Product is the name given to products, materials, and substances that are regulated by WHMIS legislation.
WHMIS is governed by federal and provincial laws and regulations. The majority of the "information" requirements (and exemptions) of WHMIS legislation are under the Hazardous Products Act and the Hazardous Materials Information Review Act and apply to all of Canada. In Quebec, the CNESST is responsible for applying WHMIS according to the regulations. Any person supplying or using controlled products must comply with it.
As such, staff and students who work with, or may be exposed to, hazardous materials must be trained according to WHMIS legislation in the following aspects:
- Education – understanding the principles of WHMIS and the meaning of the information on labels and MSDSs
- Training – workplace-specific training on how to apply this information to materials in actual use in the workplace, including procedures for storage, handling, disposal, and personal protection.
EHS provides WHMIS training sessions for staff and students at both Concordia campuses. Contact EHS for more details about the WHMIS training offered at Concordia.
WHMIS divides hazardous materials into six main classes based on their specific hazards. If a product corresponds to one or more of these classes, it becomes a "controlled" product.
When a supplier produces or imports a product for distribution and sale in Canada, that supplier must prepare a supplier label, which will typically provide seven pieces of information:
- product identification;
- hazard symbols representing the classes and divisions into which the product falls;
- risk phrases;
- precautionary statements;
- first aid measures;
- a statement advising that a material safety data sheet (MSDS) is available;
- supplier's identification.
Furthermore, the text must be in English and French and contained within a hatched border.
Workplace labels are required on containers of controlled products produced on site and on containers in which the product has been transferred from a supplier's container. Workplace labels must provide three types of information:
- product name
- safe handling information
- reference to the MSDS
In Quebec, according to the legislation, the minimum language requirement for workplace labels is French. However, since Concordia University is an English-language teaching institution, workplace labels should be written at least in English, a language understood by everyone working and studying at Concordia University.
The label requirements for reagents prepared in the laboratory are the following:
- Product name and concentration
- Abbreviations NOT permitted
- In certain cases (e.g. peroxidizable reagents), the date of purchase or preparation of the reagent
Additional requirements include that:
- the MSDS must be available
- the chemical is not to be transported out of the laboratory.
Consumer Restricted Products
If a consumer can buy a chemical product in Canada through a retail store/outlet network, then that product must meet the requirements of the Consumer Chemicals and Containers Regulations, 2001 (CCCR, 2001), issued under the Hazardous Products Act (HPA). Since these consumer products are available to the public in retail stores, they are partially exempt from the labeling and MSDS requirements of WHMIS. However, if these products are brought into and used in the workplace, they must be included within the WHMIS training requirements and must also follow the WHMIS workplace label requirements.
EHS highly recommends making MSDS copies available to workers who handle chemicals covered under the Consumer Chemicals and Containers Regulations as part of their regular daily duties.
The Globally Harmonized System of Classification and Labelling of Chemicals (GHS) was adopted by the UN Economic and Social Council (ECOSOC) in July 2003. Its purpose is to bring the existing hazard communication systems for chemicals together into a single, globally harmonized system that classifies chemicals according to their hazards and communicates the related information through labels and safety data sheets.
In 2015, the Government of Canada published the Hazardous Products Regulations (HPR), which modified the Workplace Hazardous Materials Information System (WHMIS) 1988 to incorporate the Globally Harmonized System of Classification and Labelling of Chemicals (GHS) for workplace chemicals. This modified WHMIS is referred to as WHMIS 2015. It introduces new standardized:
- classification rules and hazard classes based on:
- physical hazards
- health hazards
- environmental hazards (not proposed to be adopted in Canada under WHMIS)
- format for Safety Data Sheets (SDSs) (formerly known as Material Safety Data Sheets)
- label requirements:
- new hazard symbols/pictograms
- signal words (Danger and Warning)
- hazard statements
- precautionary statements
- EHS-DOC-206-Workplace Labeling Requirement
- EHS-DOC-207-Guide to create Workplace Labels
- 1.5in by 2.5in (Landscape)
- 1.5in by 2.5in (Portrait)
- 2.5in by 4in (Landscape)
- 2.5in by 4in (Portrait)
- 4in by 6in (Landscape)
- 4in by 6in (Portrait)
While WHMIS 2015 includes new harmonized criteria for hazard classification and requirements for labels and safety data sheets (SDS), the roles and responsibilities for suppliers, employers and workers have not changed.
Personal protective equipment (PPE) is necessary for working with most hazardous materials and/or for performing certain experiments. It is a last-resort protection system, to be used when substitution or engineering controls are not feasible. It may also be needed to supplement laboratory safety equipment such as fume hoods. PPE does not reduce or eliminate the hazard; it protects only the wearer and does not protect anyone else.
PPE includes gloves, eye protection, respiratory protection and protective clothing. The type of PPE required is highly dependent upon the nature of the experiment and the hazards associated with the material being used. However, the minimum requirements for any laboratory work done at Concordia University are the following:
- safety glasses
- lab coat
- long-sleeved and long-legged clothing worn underneath the lab coat
- closed shoes
Wearing eye and face protection is necessary to protect against splashing chemicals, biological materials and flying particles. Eye protection in the form of glasses, goggles or face shields is available, and the choice will depend on the risk involved with the experiment or the type of material being used. The worker should consult the experiment SOP or the MSDS to choose the right type of protection in accordance with the CSA standard for Industrial Eye and Face Protectors, Z94.3. CSA Z94.3.1, “Protective Eyewear: A User's Guide”, is also a good source of information; you can also contact your supervisor or EHS.
Safety glasses must have side shields and must be worn whenever there is the possibility of objects striking the eye, such as particles, glass, or metal shards. Many potential eye injuries have been avoided by wearing safety glasses. If prescription safety glasses are to be worn in the laboratory, they must be equipped with side-shields in order to provide appropriate protection. Regular prescription glasses are not considered safety glasses. If regular prescription glasses are used, special safety glasses designed to fit over them must be worn.
However, safety glasses do not provide adequate protection against chemical splashes, aerosols or dusts/powders. They do not seal to the face, resulting in gaps at the top, bottom and sides, where chemicals may seep through. If such hazards are present in the laboratory work, goggles are best suited for this type of potential exposure.
Chemical Splash Goggles
Chemical splash goggles should be worn when there is potential for splash from a hazardous material, such as working with solvent and highly corrosive materials. Chemical splash goggles should be impact resistant and have indirect ventilation so hazardous substances cannot drain into the eye area. Some can even be worn over prescription glasses.
Face shields are to be used when working with large volumes of hazardous materials, such as highly corrosive substances, for protection against splashes to the face or flying particles. Face shields alone do not provide sufficient eye protection; they must be used in conjunction with safety glasses or goggles.
- Contact lenses may be worn in the laboratory, but do not offer any protection from chemical contact. Therefore, lab workers wearing contact lenses must comply with the same rules in terms of eye protection.
- Lab workers should be aware of the following:
- You should advise your supervisor.
- You may have to remove lenses to perform certain experiments.
- Plastic used for contact lenses is permeable to some vapours found in a laboratory. Such vapours can be trapped behind the lens and may cause extensive irritation to the eye.
- If a contact lens becomes contaminated with a hazardous chemical, rinse the eye(s) using an eyewash station and remove the lens immediately.
- Contact lenses that have been contaminated with a chemical must be discarded.
Lab coats are required in all laboratories. These are available in various designs and materials and the choice should depend on the type of work being done and the risks involved. The typical lab coat is a knee length cotton-blend with long sleeves and front closure. Lab coats are appropriate for minor chemical splashes and solids contamination.
If highly toxic or corrosive liquids are to be used, rubber aprons and chemical smocks offer improved protection over regular lab coats. Disposable outer garments (e.g., Tyvek suits) may be useful when cleaning and decontamination of reusable clothing is difficult. Long-sleeved and long-legged clothing should be worn beneath the lab coat to protect the skin in case of a spill. For best protection, a lab coat should be knee-length, have long sleeves reaching the wrist, and be buttoned up. Shorts and skirts should not be worn when working in a laboratory. Contaminated lab coats should not be washed at home with other laundry. A cleaning service is provided by certain departments.
Closed-toed shoes should be worn at all times in laboratories where chemicals are stored or used. Sandals, high heel shoes, canvas toed shoes, as well as open-toed and open-backed shoes should be avoided due to the danger of spillage of corrosive or irritating chemicals and broken glass. Chemical resistant overshoes or boots may be used to avoid possible exposure to corrosive chemical or large quantities of solvents or water that might penetrate normal footwear (e.g., during spill cleanup).
Skin contact is a potential source of chemical exposure. Protective gloves should be used to prevent potential exposure to chemicals or biological hazards. The proper type of glove will depend on the materials being used. The MSDS is an important source of information for proper glove selection. Different glove types have different chemical permeabilities; check the manufacturer's compatibility chart before choosing a specific glove type. Understanding the terms used in glove compatibility charts is essential.
- Breakthrough time: The time it takes for the chemical to travel through the glove material, recorded when the chemical reaches a detectable level on the inside surface of the glove.
- Permeation rate: The rate at which the chemical passes through the glove material once breakthrough has occurred. Permeation involves absorption of the chemical into the glove material, migration of the chemical through the material, and then desorption on the inside of the glove.
- Degradation rating: The physical change that happens to the glove material as it is affected by the chemical, including (but not limited to) swelling, shrinking, hardening and cracking.
Rating systems for compatibility charts vary from one manufacturer to another. Many use a color code, where red = bad, yellow = not recommended and green = good, or some variation of this scheme. A letter code may also be used, such as E = Excellent, G = Good, P = Poor, NR = Not Recommended. Any combination of these schemes may be used, so make sure you understand the chart before deciding which glove to use.
The following document includes major glove types and their general uses. This list is not exhaustive.
- Level of dexterity: Where fine dexterity is crucial, a bulky glove may actually be more of a hazard. Thinner, lighter gloves offer better touch sensitivity and flexibility, but may provide shorter breakthrough times. Generally, doubling the thickness of the glove quadruples the breakthrough time (see the short worked example after this list).
- Glove length: Should be chosen based on the depth to which the arm will be immersed or where chemical splash is likely. Gloves longer than 14 inches provide extra protection against splash or immersion.
- Glove size: One size does not fit all. Gloves which are too tight tend to cause fatigue, while gloves which are too loose will have loose finger ends which make work more difficult. The circumference of the hand, measured in inches, is roughly equivalent to the reported glove size.
- Glove care: All gloves should be inspected for signs of degradation or puncture before use. Disposable gloves should be changed when there is any sign of contamination. Reusable gloves should be washed frequently if used for an extended period of time.
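As a rough illustration of the thickness rule of thumb noted under "Level of dexterity" above, the relationship can be sketched as a simple quadratic scaling. This is a simplified assumption, and the numbers in the example below are hypothetical; actual breakthrough times depend on the glove material and the chemical, and should always be taken from the manufacturer's compatibility data.

$$
t \;\propto\; d^{2}
\qquad\Rightarrow\qquad
\frac{t_{2}}{t_{1}} \;=\; \left(\frac{d_{2}}{d_{1}}\right)^{2}
$$

where t is the breakthrough time and d is the glove thickness. For example, if a 0.1 mm glove of a given material shows a 30-minute breakthrough time for a particular solvent, a 0.2 mm glove of the same material would be expected to last roughly (0.2/0.1)² × 30 min = 120 minutes.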
The use of latex gloves has been associated with an increased sensitization and the development of allergy symptoms over the last decades. Such symptoms include skin rash and inflammation, respiratory irritation, asthma and shock. The amount of exposure needed to sensitize an individual to natural rubber latex is not known, but when exposures are reduced, sensitization decreases. Disposable latex gloves offer poor protection against chemicals and are not recommended to be used in the laboratory environment. However, if latex gloves must be used, choose reduced-protein, powder-free latex gloves and wash hands with mild soap and water after removing latex gloves.
As research facilities have increasingly moved away from latex exam gloves because of their well-known allergy-related symptoms, other types of skin irritation and allergy to non-latex gloves have also increased. Some people can develop allergic contact dermatitis from nitrile gloves, mainly caused by the chemical accelerators used in the production of nitrile and other latex-free gloves. While vinyl gloves may be an option in some circumstances, they lack the elastic quality of nitrile and latex gloves and do not provide the same level of protection. For users with nitrile or latex allergies, alternative glove options are available from different suppliers, such as:
- accelerator-free nitrile gloves
- nitrile with aloe gloves, which are easier on the skin
- cotton liners (also useful for sweaty hands): they put a barrier between the glove and the skin and absorb some of the moisture, which can otherwise also cause a rash
- Neo-Pro gloves (Neoprene chloroprene)
Additional information about a controlled or hazardous substance is provided on its Material Safety Data Sheet (MSDS) or Safety Data Sheet (SDS). The term MSDS is used in the WHMIS 1988 legislation; it is now being replaced by the term SDS in the WHMIS 2015 legislation. The MSDS/SDS is provided by the supplier with the initial product’s purchase to give users detailed information about the hazards and safe use of products. All MSDSs/SDSs must be accurate at the time of sale or import.
An MSDS/SDS mentions what the hazards of a product are, how to use the product safely, what to expect if the recommendations are not followed, how to recognize symptoms of exposure, and what to do in an emergency. Before using any product for the first time, students and staff should review the chemical's MSDS/SDS for more information.
Most MSDSs/SDSs have now adopted the WHMIS 2015 format which divides the information into 16 sections (as opposed to the original 9-section format required by WHMIS 1988).
Information in a product MSDS should be presented as follows:
- Hazard(s) identification
- Composition/information on ingredients
- First-aid measures
- Firefighting measures
- Accidental release measures
- Handling and storage
- Exposure controls/personal protection
- Physical and chemical properties
- Stability and reactivity
- Toxicological information
- Ecological information
- Disposal considerations
- Transport information
- Regulatory information
- Other information
MSDS/SDS copies should be available to anyone who come in contact with controlled or hazardous chemicals during their daily work. Employers must ensure the MSDS/SDS's provided to their employees are the most accurate (updated) copy available. All MSDS/SDS copies should be kept in a dedicated location for each specific work area (laboratory, workshop, etc.). Employers may also computerize the MSDS/SDS information as long as:
- All employees have access are trained on how to use the computer or device;
- The computer/device is kept in working order;
- Ability to make a hard copy of the MSDS/SDS available to employees upon request.
Concordia University has a subscription to CHEMWATCH, a MSDS/SDS database, allowing anyone using a Concordia computer or wireless network to access MSDS/SDS data. A direct link to the CHEMWATCH database is available on the EHS webpage. A quick reference guide on how to access a MSDS/SDS from CHEMWATCH is available here.
Certain chemicals are more dangerous than others due to their reactivity or high toxicity. Specific precautions must be taken, as well as following the laboratory safety procedures, when handling these reagents. Each department is required department to develop specific Standard Operating Procedure (SOP) concerning the reagents. Refer to the following safety guidelines for specific reagents:
- Hydrofluoric Acid (HF) Safety Guidelines
- Perchloric Acid Safety Guidelines
- Picric Acid Safety Guidelines
- Piranha Solution Safety Guidelines
- Tetramethylammonium Hydroxide (TMAH) Safety Guidelines
- Cryogenics Safety Guidelines
- Mercury Safety Guidelines
- Isoflurane Safety Guidelines
- Formaldehyde Safety Guidelines
- Lead Acid Batteries Safety Guidelines
- Lithium Batteries Safety Guidelines
Although the use of metallic mercury is not prohibited at Concordia, EHS strongly suggests minimizing its usage. Mercury presents a health and safety hazard and has been recognized as a contaminant of the environment by Health Canada, Environment Canada and l’Institut national de santé publique du Québec. Lab users are most likely to encounter mercury in old laboratory equipment, such as thermometers, barometers or manometers. In the event of a mercury spill, special clean-up and disposal procedures must be followed.
EHS has therefore instituted a green exchange program to replace mercury-containing thermometers with less hazardous alternative models. This exchange program is only available to Concordia labs/studios/workshops. The non-mercury thermometers contain about 0.15 mL of different colored organic liquids which are less hazardous in the event of a spill. This safer, greener alternative does not compromise the quality or precision of measurements.
EHS can provide lab-grade replacement thermometers with temperature ranges from -20°C to 150°C and from -10°C to 260°C. If you have any mercury thermometers in your possession and wish to exchange them, please contact EHS at [email protected], mentioning how many thermometers you wish to exchange. Please note this program is based on a 1 to 1 exchange; EHS will distribute replacement thermometers only to labs that provide mercury thermometers in exchange.
For higher temperature ranges, electronic thermometers can represent an interesting alternative to mercury as they can measure temperatures from -50°C to 300°C. Most thermometers with mercury replacements (liquid or electronic) are unable to conduct temperature measurements above 300°C. Therefore, other alternatives such as thermocouples should be considered. Please note that these latest options are not available from EHS as part of the exchange program.
There are chemicals that have the possibility of developing peroxides over time. Organic peroxides are substances characterized by the R-O-O-R structure, where R represents organic radicals. The O-O bond of organic peroxides is unstable and can lead to spontaneous decomposition. These products are oxidant and unstable and can create fire and/or explosive hazards. Organic peroxides should be kept in refridgerators for better stability. Refer to the MSDS of the reagents to find the specific storage conditions.
All solvents and reagents that can form hazardous peroxides should be labeled once received and include the date and expiration date(s) (manufacturer's expiration date or "after reception" expiration date). It is the scientist’s responsibility to ensure the goods are received in this manner as well as the proper disposal of the solvents/reagents prior to the expiration date. Refer to the MSDS for guidance. Peroxide test strips can be purchased to assess the peroxide content of solvents/reagents that you are not certain of.
Peroxide-containing solvents or reagents should never be heated or concentrated in order to avoid explosive hazards. There are different techniques available to remove peroxides from solvents or reagents. If the peroxides can not be removed contact EHS to discard. (Do not mix with other chemicals)
Nanoparticle safety is an area where the safety research is just starting to catch up to the exciting research associated with new discoveries and applications.
Safety in an area where there are a great many unknowns and little to no regulation has to be handled in such a way that the known hazards are mitigated accordingly and the toxicity, especially unknown toxic effects are mitigated by exposure protection means.
Environmental Health & Safety (EHS) has created guidelines concerning the use of nanoparticles within laboratory facilities. These guidelines provide information about the properties of commonly used nanoparticles/nanomaterials, their health and safety hazards and ways to protect oneself from potential laboratory exposure. A training session on “Safe Handling of Nanomaterials” offered by EHS. Register on-line in the Safety Training section of the EHS webpage.
Hazardous materials are the responsibility of all those involved in the life cycle of the materials, from the beginning until the end. The goal of the University is to ensure that individuals working in areas where hazardous materials are used or stored are regularly informed and updated about risks, and avoid the acquisition of unnecessary stock. Keeping an inventory of hazardous materials is necessary to know what materials are already present in a laboratory and what is required. The inventory technician barcodes all hazardous materials and enters them in Vertére. Vertére is a website from HECHMET (Higher Education Co-operative for Hazardous Materials and Equipment Tracking). The EHS inventory technicians perform on-site updates to every laboratories inventory on demand. The inventory technicians enter new hazardous material into the database, scan all barcodes on existing bottles and containers, and remove consumed, missing or relocated materials from the database. Once the inventory process is completed, the Principal Investigator (PI) receives an updated list of their hazardous materials. All PI's are assigned a user name and password to view their chemical inventory in Vertére.
For questions or assistance please contact:
It is always preferable to refer to the MSDS for the storage requirements of individual chemicals. However, as a general rule, it is preferable to separate the different chemical classes from each other:
- flammable or combustible liquids
- toxic chemicals
- oxidizing agents
- corrosive chemicals
- compressed gases
- water-reactive chemicals
They must be stored in a way which will not allow chemicals to mix with one another if a container breaks e.g. secondary containment. Chemicals should not be stored in fume hoods. If a fume hood needs to be used for chemical storage, it must be clearly indicated and cannot be used to run experiments. Always store and use the minimum quantity of chemicals that are necessary for the laboratory.
The following charts can be used for general guidelines in terms of storage compatibility. You can also take a look at the Safe Storage of Hazardous Materials seminar on Moodle for a tutorial on how to safely store chemicals in your work area. However, you should always consult the SDS for storage requirements of the specific material to verify its compatibility.
Flammable and combustible liquids:
- Should be stored in a flammable storage (fire-resistant) cabinet
- Cabinet should be vented from the bottom (with duct and joints also being fire resistant) or capped if not vented
- No ignition sources should be present
- Storage areas must be well-ventilated
- Only one flammable storage cabinet per lab without prior approval
- Should not be stored with oxidizers (e.g. ammonium nitrate, chromic acid, hydrogen peroxide, nitric acid, sodium peroxide and halogens)
- Should be stored in a cool, dry, well-ventilated area, out of direct sunlight and away from heat
- Always keep the smallest amount necessary in laboratory
- Always consult the MSDS for storage requirements of the specific toxic material
- Should always be stored separately from flammable materials, organic chemicals, dehydrating agents or reducing agents
- Oxidizing agents should be stored in a fire-resistant, cool, and well-ventilated area
- Should be stored in a well-ventilated area to prevent a buildup of vapors and excessive corrosion
- Should be stored in vented corrosion-resistant storage cabinets
- Acids should be segregated from bases for storage
- Secondary trays (polyethylene, Teflon, neoprene, or nitrile) should be used to contain any potential spill
- Organic acids (e.g., glacial acetic acid) should be segregated from inorganic acids
- Should be stored in a cool, dry area, away from flammable materials, ignition sources, excessive heat and potential sources of corrosion
- Should be stored in storage cages when not used
- Should be securely strapped or chained to a wall anchor or bench top to prevent falling when used
- Cylinder cap should always be on whenever the cylinder is not in use
- Oxygen cylinders should be stored separately from flammable gases
- Flammable gases (e.g. propane) should be stored in a designated area or outside buildings
- Should always use a cylinder cart to move cylinders around
- Should be stored in a cool, dry, well-ventilated area, out of direct sunlight and away from heat
- Should not be stored with flammable or combustible liquids
- Should not be stored away from any source of moisture and preferably isolated by a waterproof barrier
- Can be stored in desiccators
The spill of any hazardous materials can pose a significant safety or health hazard to persons in the immediate vicinity due to the properties of hazardous materials (toxicity, volatility, flammability, explosiveness, corrosiveness, etc.) or by the release itself (quantity, space considerations, ventilation, heat and ignition sources, etc.). Therefore, it is imperative that each research group clearly establishes within their SOPs the types of spills that can safely be handled by lab personnel solely. EHS also provides a Minor Spill Response training upon request. Please contact EHS at [email protected] for more details concerning this training.
If laboratory staffs are to clean up a chemical spill, they must be sure to:
- Stay within their comfort zone
- Be familiar with the hazards of the materials
- Have been trained or have clean-up instructions available on the MSDS or SOPs
- Use appropriate PPE and necessary clean up equipment
For assistance, you may download: Typical Spill Kit
If the spill cannot be handled by lab personnel (e.g. large spill, highly toxic reagents, etc.), the staff should contact Security using the following procedure:
- Advise and warn co-workers.
- Evacuate and secure the area immediately.
- Notify Security at 3717 or 514 848-3717
- Provide the following information:
- Your name and location of spill
- Name of hazardous material
- Quantity involved
- Related health risks and precautions to be taken
- Your name and location of spill
- Provide Material Safety Data Sheet (MSDS) or appropriate documentation.
- Assist Security or the Chemical Spill Resource Person as required.
Any spill (large or small) involving hazardous material must be reported by filling up an incident report. For more information concerning hazardous material spills, please consult the Concordia Emergency Management website:
- Safety Programs
- Laboratory safety programs
- Hazardous waste disposal
- Non-hazardous waste
- Injury/Near-Miss Reporting
- EHS Team
- Safety Programs | <urn:uuid:a7591468-50cb-4c97-acce-2b4881e47ec1> | CC-MAIN-2019-47 | http://www.concordia.ca/campus-life/safety/lab-safety/chemical-safety.html | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670729.90/warc/CC-MAIN-20191121023525-20191121051525-00420.warc.gz | en | 0.911404 | 6,342 | 2.828125 | 3 |
It is an inescapable truth of the capitalist economy that the uneven, class-based distribution of income is a determining factor of consumption and investment. How much is spent on consumption goods depends on the income of the working class. Workers necessarily spend all or almost all of their income on consumption. Thus for households in the bottom 60 percent of the income distribution in the United States, average personal consumption expenditures equaled or exceeded average pre-tax income in 2003; while the fifth of the population just above them used up five-sixths of their pre-tax income (most of the rest no doubt taken up by taxes) on consumption.1 In contrast, those high up on the income pyramid—the capitalist class and their relatively well-to-do hangers-on—spend a much smaller percentage of their income on personal consumption. The overwhelming proportion of the income of capitalists (which at this level has to be extended to include unrealized capital gains) is devoted to investment.
It follows that increasing inequality in income and wealth can be expected to create the age-old conundrum of capitalism: an accumulation (savings-and-investment) process that depends on keeping wages down while ultimately relying on wage-based consumption to support economic growth and investment. It is impossible to do as suggested by the early-twentieth-century U.S. economist J. B. Clark—to “build more mills that should make more mills for ever”—in the absence of sufficient consumer demand for the products created by these mills.2
Under these circumstances, in which consumption and ultimately investment are heavily dependent on the spending of those at the bottom of the income stream, one would naturally suppose that a stagnation or decline in real wages would generate crisis-tendencies for the economy by constraining overall consumption expenditures. There is no doubt about the growing squeeze on wage-based incomes. Except for a small rise in the late 1990s, real wages have been sluggish for decades. The typical (median-income) family has sought to compensate for this by increasing its number of jobs and working hours per household. Nevertheless, the real (inflation-adjusted) income of the typical household fell for five years in a row through 2004. The bottom 95 percent of income recipients experienced decreasing real average household income in 2003–04 (with the top 5 percent, however, making sharp gains). In 2005 real wages fell by 0.8 percent.3
Nevertheless, rather than declining as a result, overall consumption has continued to climb. Indeed, U.S. economic growth is ever more dependent on what appears at first glance to be unstoppable increases in consumption. Between 1994 and 2004 consumption grew faster than national income, with the share of personal consumption expenditures in GDP rising from 67 to 70 percent.4 How is this paradox—declining real wages and soaring consumption—to be explained?
Commenting on this same problem in this space in May 2000 (near the end of the previous business cycle expansion) we asked:
But if this [stagnating wages] is the case, where is all of the consumption coming from? Has capital managed somehow to square the circle—to increase consumption rapidly while simultaneously holding down wages? The obvious answer—or a good part of it—is that in a period of stagnant wages, working people are increasingly living beyond their means by borrowing in order to make ends meet (or, in some cases, in a desperate attempt to inch up their living standards). To a considerable extent, the current economic expansion has been bought on consumer debt.
If this was the case six years ago just before the last economic downturn, it is even more so today and the potential consequences are worse. Since consumption expenditures have been rising in the United States much faster than income the result has been a rise in the ratio of overall consumer debt to disposable income. As shown in, the ratio of outstanding consumer debt to consumer disposable income has more than doubled over the last three decades from 62 percent in 1975 to 127 percent in 2005. This is partly made possible by historically low interest rates, which have made it easier to service the debt in recent years (although interest rates are now rising). Hence, a better indication of the actual financial impact of the debt on households is provided by the debt service ratio—consumer debt service payments to consumer disposable income. Chart 1 shows the rapid increase in the debt service ratio during the quarter-century from 1980 to the present, with a sharp upturn beginning in the mid-1990s and continuing with only slight interruptions ever since.
Aggregate data of this kind, however, does not tell us much about the impact of such debt on various income groups (classes). For information on that it is necessary to turn to the Federal Reserve Board’s Survey of Consumer Finances, which is carried out every three years. Table 2 provides data on what is known as the “family debt burden” (debt service payments as a percentage of disposable income) by income percentiles. Although the family debt burden fell for almost all levels of income during the most recent recession (marked by the 2001 survey) it has risen sharply during the latest sluggish expansion. For those families in the median-income percentiles (40.0–59.9), debt burdens have now reached their peak levels for the entire period 1995–2004. These families have seen their debt service payments as a percentage of disposable income increase by about 4 percentage points since 1995, to almost 20 percent—higher than any other income group. The lowest debt burden is naturally to be found in those in the highest (90–100) income percentiles, where it drops to less than 10 percent of disposable income.
All of this points to the class nature of the distribution of household debt. This is even more obvious when one looks at those indebted families who carry exceptionally high debt burdens and those that are more than sixty days past due in their debt service payments. Table 3 shows the percentage of indebted families by income percentiles that have family debt burdens above 40 percent. Such financial distress is inversely correlated with income. More than a quarter of the poorest indebted families—those in the lowest fifth of all families—are carrying such heavy debt burdens. Families in the next two-fifths above that, i.e., in the 20.0–59.9 income percentiles, have experienced increases in the percentage of indebted families carrying such excessive debt burdens since 1995—with the number of indebted families caught in this debt trap rising to around 19 percent in the second lowest quintile, and to around 14 percent even in the middle quintile. In contrast, for those in the 40 percent of families with the highest incomes, the percentage of households experiencing such financial distress has diminished since 1995. Thus with the rapid rise in outstanding debt to disposable income, financial distress is ever more solidly based in lower-income, working-class families.
Soaring family debt burdens naturally pave the way to defaults and bankruptcies. Personal bankruptcies during the first G. W. Bush administration totaled nearly five million, a record for any single term in the White House. Due to the harsh bankruptcy legislation passed by Congress in 2005 the number of bankruptcies has recently declined—at least in the short term. But by making it more difficult for families to free themselves from extreme debt burdens, this is certain to produce ever greater numbers of workers who are essentially “modern-day indentured servants.”5
Table 4 shows the percentage of indebted families in each income category that are sixty days or more past due on any debt service payment. For families below the 80th percentile in income the percentage of indebted families falling into this category has grown sharply since 1995. In contrast families in the 80th percentile and above have seen a drop in the percentage of indebted families that are overdue on a debt payment. Again, we see that the growth of financial distress in the United States today is centered on working-class households.
The biggest portion of debt is secured by primary residence, the main asset of the vast majority of families. Debt secured by homes has continued to soar. Between 1998 and 2001 the median amount of home-secured debt rose 3.8 percent; while from 2001–04 it rose a phenomenal 27.3 percent! Around 45 percent of homeowners with a first-lien mortgage refinanced their homes in 2001–04 (as compared with 21 percent in the previous three years), with more than a third of these borrowing money beyond the amount refinanced. The median amount of the additional equity extracted by such borrowers was $20,000.6 Despite skyrocketing house prices in recent years the ratio of homeowner’s equity/value of household real estate has continued to decrease from 68 percent in 1980–89, to 59 percent in 1990–99, to 57 percent in 2000–05.7
As house prices have soared more risky forms of mortgage lending have emerged. Left Business Observer editor Doug Henwood noted in The Nation (March 27, 2006),
Time was, you had to come up with a hefty down payment to buy a house. No longer: In 2005 the median first-time buyer put down only 2 percent of the sales price, and 43 percent made no down payment at all. And almost a third of new mortgages in 2004 and ’05 were at adjustable rates (because the initial payments are lower than on fixed-rate loans). At earlier peaks interest rates were near cyclical highs, but the past few years have seen the lowest interest rates in a generation. So adjustable mortgages are likely to adjust only one way: up.8
The typical family is also mired in credit card debt. At present nearly two-thirds of all cardholders carry balances and pay finance fees each month—with the average debt balance per cardholder rising to $4,956 at the end of 2005. In recent years, there has been a shift from fixed to variable rate cards, as interest rates have begun to rise, with about two-thirds of all credit cards now carrying variable rates—up from a little more than half a year ago. Interest rates on cards are rising rapidly—what the Wall Street Journal has called “The Credit-Card Catapult” (March 25, 2006). In February 2006 the average interest rate for variable-rate cards jumped to 15.8 percent from 12.8 percent for all of 2005. Meanwhile, the portion of credit card-issuer profits represented by fees went up from 28 percent in 2000 to an estimated 39 percent in 2004. Altogether unpaid credit card balances at the end of 2005 amounted to a total of $838 billion.9 The effects of this fall most heavily on working-class and middle-income families. According to the Survey of Consumer Finances, the percentage of households carrying credit card balances rises with income up until the 90th income percentile, and then drops precipitously.
Another realm of increased borrowing is installment borrowing, encompassing loans that have fixed payments and fixed terms such as automobile loans and student loans—constituting the two biggest areas of installment borrowing. In 2001–04 the average amount owed on such loans grew by 18.2 percent.10
Low-income families are more and more subject to predatory lending: payday loans, car title loans, subprime mortgage lending, etc.—all of which are growing rapidly in the current climate of financial distress. According to the Center for Responsible Lending,
A typical car title loan has a triple-digit annual interest rate, requires repayment within one month, and is made for much less than the value of the car….Because the loans are structured to be repaid as a single balloon payment after a very short term, borrowers frequently cannot pay the full amount due on the maturity date and instead find themselves extending or ‘rolling over’ the loan repeatedly. In this way, many borrowers pay fees well in excess of the amount they originally borrowed. If the borrower fails to keep up with these recurring payments, the lender may summarily repossesses the car.11
The growing financial distress of households has led to the rise of an army of debt collectors, with the number of companies specializing in buying and collecting unpaid debts rising from around 12 in 1996 to more than 500 by 2005. According to the Washington Post, this has led to: “Embarrassing calls at work. Threats of jail and even violence. Improper withdrawals from bank accounts. An increasing number of consumers are complaining of abusive techniques from companies that are a new breed of debt collectors.”12
In this general context of rising household debt, it is of course the rapid increase in home-secured borrowing that is of the greatest macroeconomic significance, and that has allowed this system of debt expansion to balloon so rapidly. Homeowners are increasingly withdrawing equity from their homes to meet their spending needs and pay off credit card balances. As a result, “in the October to December period, the volume of new net home mortgage borrowing rose by $1.11 trillion, bringing the level of outstanding mortgage debt to $8.66 trillion—an amount that equaled 69.4 percent of U.S. GDP.”13 The fact that this is happening at a time of growing inequality of income and wealth and stagnant or declining real wages and real income for most people leaves little doubt that it is driven to a considerable extent by need as families try to maintain their living standards.
The housing bubble, associated with rising house prices and the attendant increases in home refinancing and spending, which has been developing for decades, was a major factor in allowing the economy to recover from the 2000 stock market meltdown and the recession in the following year. Only two years after the stock market decline, the iconoclastic economist and financial analyst Stephanie Pomboy of MacroMavens was writing of “The Great Bubble Transfer,” in which the continuing expansion of the housing bubble was miraculously compensating for the decline in the stock market bubble by spurring growth in its stead. Yet, “like the bubble in financial assets,” Pomboy wrote,
The new real estate bubble has its own distinctly disturbing characteristics. For example one could argue, and quite cogently, that the home has become the new “margin account” as consumers through popular programs like “cash-out” Refi[nancing] increasingly leverage against unrealized gains in their single largest asset. Perhaps the most disturbing hallmark of this Refi mania is the corresponding plunge in homeowners’ equity-stake….The cash-out Refi numbers reveal a “speculative fervor” that makes the Nasdaq mania look tame. According to estimates by Fannie Mae, the average cash-out Refi is $34,000. This sounds like a lot to me, particularly considering that the median home price is just $150,000…e.g., the average Joe is extracting 20% of his home value!14
The surprising strength of consumption expenditures, rising faster than disposable income, has most often been attributed to the stock market wealth effect (the notion that the equivalent of a couple of percentages of increases in stock market wealth go to enhanced consumption expenditures by the rich—those who mostly own the nation’s stocks).15 Pomboy argues, however, that “there is evidence to suggest that the housing wealth effect may be significantly larger than the stock market wealth effect….Based on a recent study by Robert Schiller (of ‘Irrational Exuberance’ fame) housing has always been a more important driver for consumers than the stock market. In his rigorous state by state and 14 country analysis, he found housing to have twice the correlation with consumption than the stock market has.” For Pomboy, this suggested that the writing was on the wall: “With homeowners’ equity near all-time lows, any softening in home prices could engender the risk of a cascade into negative equity. But even more immediately, the increase in mortgage debt service (again, despite new lows in mortgage rates) does not bode well for consumption as the Fed prepares to reverse course”—and raise interest rates.
The decrease in home equity and the increase in mortgage debt service (and the debt service ratio as a whole) suggest how great the “speculative fervor” underpinning consumption growth actually is today. The housing bubble and the strength of consumption in the economy are connected to what might be termed the “household debt bubble,” which could easily burst as a result of rising interest rates and the stagnation or decline of housing prices. Indeed, the median price of a new home has declined for four straight months at the time of this writing, with sales of new single-family homes dropping by 10.5 percent in February, the biggest decline in almost a decade, possibly signaling a bursting of the housing bubble.
In a recent interview, “Handling the Truth,” in Barron’s magazine, Stephanie Pomboy argued that the U.S. economy was headed into “an environment of stagflation [tepid growth combined with high unemployment and rising prices].” Among the reasons for this, she claimed, were the weaknesses in wage income and the inability of consumer’s to continue to support the household debt bubble. “Already, consumer purchasing power is limited by…lackluster income growth, specifically wages.” For Pomboy, corporations have been increasingly focusing on the high end of the consumer market in recent years, while the low end (that part supported by wage-based consumers) is in danger of collapsing. Even Wal-Mart, the bastion of low-prices that caters primarily to the working class, is beginning to stock products that they hope will attract higher-income families.16
The weakness of incomes at the bottom, and the squeeze on working-class consumption—so-called “low-end consumption”—is a serious concern for an economy that has become more and more dependent on consumption to fuel growth, given the stagnation of investment. With declining expectations of profit on new investment, corporations have been sitting on vast undistributed corporate profits, which rose, Pomboy says, as high as $500 billion and are now around $440 billion. The total cash available to corporations, just “sitting in the till,” at the end of 2005 was, according to Barron’s, a record $2 trillion. “The shocking thing, obviously,” Pomboy states, “is that they have been sitting on this cash and they are not doing anything with it despite incredible incentives to spend it, not just fiscally but from an interest-rate-standpoint. It’s not like keeping and sitting on cash is a particularly compelling investment idea right now. It speaks a lot about the environment that CEOs see out there with potentially the continued [capital] overhang that we’ve got from the post-bubble period.”17
The truth is that without a step-up in business investment the U.S. economy will stagnate—a reality that speculative bubbles can hold off and disguise in various ways, though not entirely overcome. But investment is blocked by overaccumulation and overcapacity. Hence, the likely result is continued slow growth, the further piling up of debt, and the potential for financial meltdowns. There is no growth miracle whereby a mature capitalist economy prone to high exploitation and vanishing investment opportunities (and unable to expand net exports to the rest of the world) can continue to grow rapidly—other than through the action of bubbles that only threaten to burst in the end.
The tragedy of the U.S. economy is not one of excess consumption but of the ruthless pursuit of wealth by a few at the cost of the population as a whole. In the end the only answer lies in a truly revolutionary reconstruction of the entire society. Such a radical reconstruction is obviously not on the table right now. Still, it is time for a renewed class struggle from below—not only to point the way to an eventual new system, but also, more immediately, to protect workers from the worst failures of the old. There is no question where such a struggle must begin: labor must rise from its ashes.
- ↩ See U.S. Department of Labor, Bureau of Labor Statistics, Consumer Expenditures in 2003, June 2005, table 1, http://www.bls.gov/cex/.
- ↩ Clark quoted in Paul M. Sweezy, The Theory of Capitalist Development (New York: Monthly Review Press, 1970), 168–69.
- ↩ “Economy Up, People Down,” August 31, 2005, and “Real Compensation Down as Wage Squeeze Continues,” January 31, 2006, Economic Policy Institute, http://www.epi.org.
- ↩ The shares of investment, government, and exports remained the same in 1994 and 2004 at 16, 19, and 10 percent, respectively, while the share of imports (subtracted from GDP) went from –12 to –15 percent. U.S. Department of Labor, Bureau of Labor Statistics, Occupational Outlook Quarterly 49, no. 4 (Winter 2005–06): 42, hhttp://www.bls.gov/opub/ooq/2005/winter/contents.htm/.
- ↩ Kevin Phillips, American Theocracy (New York: Viking, 2006), 324–25.
- ↩ “Recent Changes in U.S. Family Finances” (see note to table 2 in this article), A28–A29.
- ↩ “Household Financial Indicators,” Board of Governors, Federal Reserve System, Flow of Funds, 2006.
- ↩ Doug Henwood, “Leaking Bubble,” The Nation, March 27, 2006.
- ↩ “The Credit-Cart Catapult,” Wall Street Journal, March 25, 2006; Phillips, American Theocracy, 327.
- ↩ “Recent Changes in U.S. Family Finances” (see note to table 2 in this article), A28.
- ↩ The Center for Responsible Lending and the Consumer Federation of America, Car Title Lending (April 14, 2005), http://www.responsiblelending.org.
- ↩ “As Debt Collectors Multiply, So Do Consumer Complaints,” Washington Post, July 28, 2005.
- ↩ “Household Financial Conditions: Q4 2005,” Financial Markets Center, March 19, 2006, http://www.fmcenter.org.
- ↩ Stephanie Pomboy, “The Great Bubble Transfer,” MacroMavens, April 3, 2002, http://www.macromavens.com/reports/the_great_bubble_transfer.pdf.
- ↩ See for example the treatment of this in Council of Economic Advisors, The Economic Report of the President, 2006, 29–30, http://www.gpoaccess.gov/eop/.
- ↩ Stephanie Pomboy, “Handling the Truth,” Barron’s, February 7, 2005, Barrons Interview; “Wal-Mart Fishes Upstream,” Business Week Online, March 24, 2006.
- ↩ Pomboy, “Handling the Truth” “Too Much Cash,” Barron’s, November 7, 2005. See also “Long on Cash, Short on Ideas,” New York Times, December 5, 2004. | <urn:uuid:4e6f45eb-15fa-4301-a625-f07d93cd9746> | CC-MAIN-2019-47 | https://monthlyreview.org/2006/05/01/the-household-debt-bubble | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670006.89/warc/CC-MAIN-20191119042928-20191119070928-00497.warc.gz | en | 0.956546 | 4,864 | 2.625 | 3 |
What is recoil?
When you pull the trigger, the mechanism ignites the gunpowder, which burns into hot, high-pressure gas and particles, collectively called ejecta. That pressure pushes the bullet down the barrel, and the ejecta follows right behind it. By the time the bullet exits the muzzle, the pressure is still high enough that the ejecta is blasted out as well. Because the bullet and ejecta are thrown forward, the gun is pushed backward. That backward push is recoil.
It is widely assumed that shoulder-fired guns have reached a state of mechanical perfection and that no great advances are left to discover. Whether you look at the question in terms of ballistic energy on target, perfect cycling of the next round, flawless aiming, or perfect fire control, two things remain unchanged: recoil and its effect on the human looking through the sights.
I found out early on in life that I love things that shake and go bang. When I was younger I ran dirt bikes, and the control those machines had at 70 miles an hour across rough terrain always amazed me. That amazement was so powerful that it led me to a Ph.D. in mechanical engineering at the age of 26, and has led to a lifetime of building stuff that shakes and goes bang; like guns, for example.
So when I talk about the science of recoil, I have interesting things to say.
The name of the science of shake and bang is Dynamics. How Dynamics applies to a machine as old as Christopher Columbus is very important, especially to you, dear reader, if you intend to master comfort and confidence when shooting guns.
I want to describe a slow-motion video for you to play in your head as you read this: Imagine the bullet in the bore of the gun sitting in front of a slug of gun powder. Behind the powder is a primer cap, and behind it is a firing pin with a spring that is held back by a trigger. Your finger is on that trigger and your brain commands nerve impulses to set off the motion once you decide that your eye is perfectly in line through the sights on the barrel with the target. OK so far?
Sounds simple, but thinking in slow motion allows you to appreciate all of the things that can screw up your shot before the bullet hits the center of the bull’s eye. When your nerve cells talk down the line from your brain to your finger, it takes about 1/5th of a second for the muscle to actually move, but the trigger releasing the firing pin might only take 1/5000th of a second for it to hit the primer. In another 1/1000th of a second, the friction in the primer cap sets off a chain reaction in the powder. Your eye typically doesn’t see discrete occurrences shorter than 1/14th of a second, but 1/2000th of a second later, the pressure behind the bullet builds enough for it to start moving down the barrel. The force is so great it accelerates to bullet speed in only 1/1000th of a second.
Add it all up and it looks like this:
- Guns go bang in .0027 seconds, but humans take about .2 seconds to make it happen.
- Include also the .1 second blink, the .2 second flinch, the .07 second ocular refresh, and the swing of the barrel while it pans across the target from poor breathing control, and it’s a wonder you can even put a hole in the side of the barn…
- In other words, the gun goes bang about 200 times faster than it takes you to aim, make up your mind, and actually pull the trigger (see the quick arithmetic sketch after this list).
- Once the bullet moves, the gun does too, but in the other direction. That is recoil.
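If you like checking the math, here is the back-of-the-napkin arithmetic in a few lines of Python; the individual times are simply the rough figures quoted above.

```python
# Rough arithmetic behind the "about 200 times faster" claim above.
gun_side_s   = 0.0027                    # firing pin + ignition + pressure build + barrel time
human_side_s = 0.2 + 0.1 + 0.2 + 0.07    # reaction + blink + flinch + ocular refresh
print(f"The human side is ~{human_side_s / gun_side_s:.0f}x slower than the gun")  # ~211x
```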
If you’re going to shoot with more comfort and confidence, you need to consider all aspects of the system. Here is a comprehensive description of how each characteristic affects comfort and confidence. They are separated into two groups: those that are characteristics of the gun’s behavior, and those that are caused by the human behind the sights.
The mass of the gun
The heavier the gun is relative to the bullet, the less the gun accelerates. Lighter guns kick more. Conservation of momentum sets the recoil velocity: the gun’s rearward momentum must balance the forward momentum of the bullet and the ejecta, so the recoil velocity varies inversely with the gun’s weight. Recoil energy (half the gun’s mass times its recoil velocity squared) also varies inversely with the gun’s weight. As a result, a gun that weighs half as much recoils at twice the velocity and with roughly twice the energy.
“A gun that weighs half as much recoils at twice the velocity and with roughly twice the energy.”
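Here is a minimal Python sketch of that free-recoil calculation, working straight from conservation of momentum. The 12-gauge load figures and the rule of thumb that ejecta leaves the muzzle at about 1.5 times the bullet’s velocity are illustrative assumptions, not measured data.

```python
# Free recoil from conservation of momentum: a rough sketch.
# Assumptions: ejecta (gas + unburned powder) leaves at ~1.5x muzzle velocity,
# a common rule of thumb for shotgun loads; the load figures are illustrative.

GRAIN_KG = 0.00006479891   # kilograms per grain
FPS_MS   = 0.3048          # metres per second per fps
LB_KG    = 0.45359237      # kilograms per pound
J_FTLB   = 0.7375621       # foot-pounds per joule

def free_recoil(bullet_gr, muzzle_fps, charge_gr, gun_lb, ejecta_factor=1.5):
    """Return (recoil velocity in fps, recoil energy in ft-lb) of the bare gun."""
    p_bullet = bullet_gr * GRAIN_KG * muzzle_fps * FPS_MS               # kg*m/s
    p_ejecta = charge_gr * GRAIN_KG * muzzle_fps * FPS_MS * ejecta_factor
    m_gun = gun_lb * LB_KG
    v_gun = (p_bullet + p_ejecta) / m_gun                               # m/s
    e_gun = 0.5 * m_gun * v_gun ** 2                                    # joules
    return v_gun / FPS_MS, e_gun * J_FTLB

# Illustrative 12-gauge field load: 1-1/8 oz (492 gr) of shot at 1200 fps over 30 gr of powder.
for gun_weight in (7.5, 3.75):   # pounds: a full-weight gun vs. one half as heavy
    v, e = free_recoil(492, 1200, 30, gun_weight)
    print(f"{gun_weight:4.2f} lb gun: {v:5.1f} fps recoil, {e:5.1f} ft-lb")
# Halving the gun weight doubles both the recoil velocity and the recoil energy.
```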
The mass of the bullet
Heavier bullets kick more, but because a heavier bullet leaves the muzzle at a lower velocity, part of the increase is offset. Within a given cartridge, switching between light and heavy bullets changes recoil less than switching to a lighter gun does.
The type of action
There are two categories of gun action: closed breech and actuated.
Closed breech guns, like break action and pump-action shotguns and lever or bolt action rifles, do not open until well after the bullet is gone and the gas pressure equalizes. Because of that, all of the gasses leaving the barrel after the bullet cause some recoil. In fact, 60% of closed-breech gun recoil comes from the bullet acceleration, and 40% of the recoil comes from the gas-and-burned-bits-of-gunpowder accelerating out of the muzzle, called ejecta.
Actuated breech guns bleed some of the gas pressure from a port partway down the barrel and use it to drive a piston that cycles the breechblock. Part of the combustion energy goes into moving and buffering the breechblock instead of arriving at your shoulder all at once, which spreads the recoil impulse over a longer time and lowers the peak force you feel. Other schemes of varying effectiveness include inertia-driven actuation, where the gun recoils rearward while the breechblock momentarily stays put, cycling the action, and rotating breech locks or slip-strike (delayed blowback) mechanisms that hold the breech locked while the pressure is highest and then let the residual blowback pressure push the breechblock rearward to cycle the action.
The takeaway is that actuated breech guns deliver less felt recoil than closed breech guns, all else being equal.
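To see where the roughly 60/40 split comes from, compare the bullet’s momentum with the ejecta’s momentum. The sketch below uses an illustrative .30-06-class load and the common rule of thumb that rifle powder gases exit at roughly 1.75 times the bullet’s muzzle velocity; both numbers are assumptions for demonstration.

```python
# Share of recoil impulse due to ejecta for a closed-breech rifle (illustrative).
# Rule of thumb: powder gases leave the muzzle at ~1.75x bullet velocity for
# high-power rifle cartridges; the load figures below are assumptions.

bullet_gr, muzzle_fps = 150, 2900      # an illustrative .30-06-class load
charge_gr, gas_factor = 58, 1.75

p_bullet = bullet_gr * muzzle_fps                 # grain*fps (units cancel in the ratio)
p_ejecta = charge_gr * muzzle_fps * gas_factor

share = p_ejecta / (p_bullet + p_ejecta)
print(f"Ejecta share of the recoil impulse: {share:.0%}")   # ~40%, matching the 60/40 split
```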
Muzzle porting or muzzle brakes
Some shotguns have muzzle porting, where holes in the barrel drilled pointing upwards near the muzzle eject gas and push against muzzle rise. These carefully formed holes produce a balancing force that holds the barrel from rising and allows flat follow-through of the muzzle in order to permit better control in clays or skeet shooting. They are generally tuned for one load/charge combination.
Muzzle brakes are principally used to reduce recoil on rifles. A series of holes in a screw-on attachment at the end of the barrel redirects the high-pressure gas behind the bullet sideways and back towards the shooter. It sounds brutal, and it is. The gas pushing against the surfaces of those holes creates a force vector that subtracts a substantial amount of recoil from the shooter. Although a brake may reduce recoil by 40% or so, the shock wave it directs back towards the shooter makes muzzle brakes categorically dangerous to human hearing, and hearing protection is mandatory.
A muzzle brake also adds weight to the outer end of the muzzle, increasing the polar inertia felt when panning the muzzle. Further, the muzzle brake lengthens the barrel, and unless the barrel was manufactured with mounting threads, it has to be materially altered to accept one.
Chokes
Chokes are a controlled reduction in the diameter of the end portion of a shotgun barrel. Chokes modify the shape of the expanding shot column by constricting it as it exits the barrel. Some are machined directly into the barrel, while others are interchangeable sleeves that can be field-changed to match the shot pattern to the target distance. The change in recoil from open bore to full choke is marginal.
Stock drop angle
The angle between the bore and the main contact point of the stock on the shoulder is called the stock drop angle. The drop angle affects the position of the shooter’s eye line of sight along the barrel. While rifles and shotguns with a higher drop angle seem more comfortable to aim, they produce more muzzle rise than straighter stocks. Taken to the extreme, tactical weapons have a zero drop angle where the bore is directly in line with the center of the butt plate, and a sight rail is added to the top of the receiver to raise the sightline to a comfortable position. While a high drop angle does not directly affect the recoil energy, it does reduce control by introducing more muzzle rise.
Butt plate pads
Many recoil-reducing strategies have been attempted with varying degrees of success. OEM or aftermarket pads are available that add a soft rubber or foam volume to the end of the stock. This provides a spring distance that gives the recoiling gun time to decelerate. Trading time for force reduces the peak force transmitted to the shooter. Although rubber-only pads reduce peak force, they do not remove any energy. Think of them as springs: once they are compressed, they eventually give that energy back. Typical recoil impulse transfers with hard plates occur in 4 to 5 milliseconds, whereas soft foam pads extend the time into the range of 10 to 12 milliseconds.
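The time-for-force trade is easy to put numbers on: the same recoil impulse delivered over a longer time means a lower average force on your shoulder. The sketch below reuses the illustrative 12-gauge load from the earlier example and treats the force as constant over the transfer time, which is a simplification.

```python
# Average shoulder force for the same recoil impulse spread over different times.
# Simplification: force treated as constant during the transfer; real pulses peak higher.

GRAIN_KG, FPS_MS, N_LBF = 0.00006479891, 0.3048, 0.2248089

bullet_gr, muzzle_fps, charge_gr, gas_factor = 492, 1200, 30, 1.5   # illustrative 12-ga load
impulse = (bullet_gr + charge_gr * gas_factor) * GRAIN_KG * muzzle_fps * FPS_MS  # N*s

for label, millis in (("hard butt plate", 4.5), ("soft foam pad", 11.0)):
    avg_force_n = impulse / (millis / 1000.0)
    print(f"{label:15s}: ~{avg_force_n * N_LBF:4.0f} lbf average over {millis} ms")
# Spreading the impulse over more than twice the time cuts the average force by more than half.
```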
Moving mass recoil-reducing devices
Mercury-filled buffer tubes reduce the energy transmitted to the shooter by converting some of it to heat within the mechanism. A tube, usually mounted within the stock, contains baffles and is partly filled with mercury. Owing to its high density and fluidity, when recoil moves the gun the mercury’s inertia causes it to stay put while the tube and baffles move past it. The resulting friction inside the buffer dissipates energy that would otherwise be absorbed by the shooter.
Another method of reducing the peak force transmitted to the shooter is to place a spring breakaway between you and the gun. In this scheme, a spring between two plates, guided by several rods, lets the butt pad and the gun approach each other during recoil. While this method reduces peak force and the perception of recoil, it does not remove energy but rather delays its transfer to the shooter. Variants incorporate a piston-and-orifice shock absorber that does remove energy, but at the cost of expense, mechanical complexity, and usually heavy modification of the stock to accept the mechanism.
The energy available in the gunpowder
The energy of the gunpowder’s combustion gets used up several different ways:
- Some of the energy of the expanding gases is converted to the bullet’s kinetic energy (half its mass times its velocity squared).
- Another place some of the energy goes is in the hot gasses expanding after the bullet leaves the barrel.
- Yet more energy is left in powder still burning after the bullet leaves the barrel.
- Some of the energy heats the barrel up.
The burning rate of the propellant
Depending on the burning rate of the powder and the length of the barrel, mismatched cartridge/barrel combinations can make a great big fireball because the powder does not have enough time to burn before the bullet leaves the barrel. Think of a snub nose revolver and you get the picture. Black powder, on the other hand, is more pressure-sensitive in its burning rate, and the effect is more like a slow explosion: it tends to produce a higher initial breech pressure and a quicker pressure drop as the bullet travels down the barrel. Smokeless powder can be manufactured with different burning rates to produce a lower breech pressure and sustain it at a more constant level during more of the bullet’s travel down the barrel. Well-matched cartridges deliver the maximum power to the projectile.
Breech pressure
The inertia of a heavy bullet causes a higher pressure to build up behind it. Larger bores have more area on the back of the projectile for the expanding gases to push against, so they tend to have lower breech pressures. Regardless of the breech pressure, recoil will be the same given the same projectile weight and muzzle velocity, the same powder charge and burning rate, and the same barrel length.
Higher breech pressure caliber/cartridge combinations tend to wear out barrels faster. Over pressurizing a barrel will cause a pig-in-the-python bulge, or blow it up in your face.
On rifles, the breech pressure can run upwards of 60,000 psi, whereas shotgun field loads generally stay below 15,000 psi and heavy loads rarely exceed 18,000 psi.
Length of the barrel
To get all of the burning powder’s energy into the bullet, you would have to let the expanding gases fall to atmospheric pressure and temperature just as the bullet leaves the barrel, and the barrel would have to be miles long. By balancing the length of the barrel, the bore size, the projectile weight, and the burning rate of the powder charge, a reasonable compromise can be reached where the majority of the combustion energy is transferred to the projectile with a practical barrel length. Very short barrels can be practically as efficient as long ones in transferring energy to the projectile. Short barrels offer less discrimination in aiming, but that can be overcome with good optics. The minimum barrel length in rifles is about 10-12” for suppression-fire, spray-and-pray type weapons; 16” barrels or longer are generally used for ringing the gong at 1000 yards. 30” or more and you’re just showing off… Shorter barrels make a bit more noise but don’t affect recoil much.
Ballistic energy as measured at the muzzle
The bullet’s mass and its velocity measured as it leaves the muzzle dictate the ballistic energy carried by the projectile: half the mass times the velocity squared. The momentum of the bullet plus the momentum of the ejecta is what the gun has to balance, and that is what drives recoil.
Takeaway: bullet momentum plus ejecta momentum sets the gun’s recoil; the gun’s weight then determines the recoil velocity and energy.
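For reference, here is the standard muzzle-energy arithmetic (half the mass times the velocity squared), converted from the grains and feet-per-second units loads are usually quoted in; the example load is just an illustration.

```python
# Muzzle energy from bullet weight (grains) and muzzle velocity (fps).

GRAIN_KG = 0.00006479891   # kilograms per grain
FPS_MS   = 0.3048          # metres per second per fps
J_FTLB   = 0.7375621       # foot-pounds per joule

def muzzle_energy_ftlb(bullet_gr: float, velocity_fps: float) -> float:
    """0.5 * m * v**2, returned in foot-pounds."""
    m = bullet_gr * GRAIN_KG
    v = velocity_fps * FPS_MS
    return 0.5 * m * v * v * J_FTLB

# Illustrative .30-06-class load: 150 gr at 2900 fps.
print(f"{muzzle_energy_ftlb(150, 2900):.0f} ft-lb")   # ~2800 ft-lb
```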
Type of projectile
Solid projectiles produce exactly the same recoil as buckshot, provided that the total weight and muzzle velocity are the same.
Bore diameter
The bore diameter sets the active area on the back of the projectile that the burning gases push against. Rifled-bore shotguns are surprisingly accurate when shooting sabot slugs. A sabot is a cradle that holds a small-diameter projectile in the larger bore: the large bore gives the pressure plenty of area to push on, and the small-diameter projectile housed within the sabot flies efficiently at medium distances.
Gun recoil velocity
Gun recoil velocities of less than 10 feet per second (FPS) are generally well tolerated by all shooters, including youth. Anything over 12 FPS makes anyone pay attention, and anything over 18 FPS carries the potential for injury, including bruising, flinch conditioning, scope-and-head collisions, concussions, etc. High recoil velocities can be tamed by energy-reducing systems.
The measure of recoil energy is derived from the mass and velocity of the bullet and the ejecta acting against the mass of the gun. The energy transferred to the shooter through the impulse is measured in foot-pounds of energy.
Joules if you think metric.
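Putting those velocity thresholds next to the earlier free-recoil numbers, a small helper like the one below sorts a gun into the comfort bands described above. The band edges come from the figures in this section, and the example values are taken from the earlier illustrative sketch.

```python
# Classify free recoil velocity (fps) into the comfort bands described above.

def recoil_comfort(recoil_fps: float) -> str:
    if recoil_fps < 10:
        return "generally well tolerated, including by youth"
    if recoil_fps <= 12:
        return "borderline (the attention line sits at about 12 fps)"
    if recoil_fps <= 18:
        return "gets anyone's attention"
    return "potential for injury (bruising, flinch conditioning, scope bite)"

# Example: the ~12 fps full-weight shotgun and ~25 fps half-weight gun from the earlier sketch.
for v in (12.3, 24.5):
    print(f"{v:4.1f} fps -> {recoil_comfort(v)}")
```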
Recoil energy transmitted to the shooter through the recoil impulse
There are a host of events that take place as the recoil of the gun advances towards the shooter. First, humans behave like a bucket of juice when you whack them with a stick at 18 feet per second. Think of the slow-motion film of an arrow hitting a bedsheet. The concentric circles increase from the point of impact outwards until the arrow’s energy is used up and the arrow and sheet move as one. The human’s flesh does exactly the same. As the concentric circles expand, more mass becomes involved in the impulse transfer. A point of balance is achieved where the entire mass in play achieves the same velocity. The molecular friction of the shock wave as it propagates through the human is the principal way the energy is absorbed; it is also the principal way your brain hears about the bang when all of your nerves talk to your brain at once.
Reducing recoil energy absorbed into the shooter’s soft tissue reduces fatigue, pain, damage, and increases enjoyment and trainability.
Traditional recoil pad pressure distribution
Hard butt plates push hardest against the first thing they touch: the collar bone, rotator cuff cartilage, tendons, etc. The soft tissue eventually gets energy transferred to it as the hard bits are shoved backwards.
Rubber-only foam pads tend to collapse somewhat and reduce the peak force on the hard bits in your shoulder, but still have a high force gradient between the hard bits and the soft tissue.
Gel pads are somewhat better than rubber-only pads but behave rather hard at high recoil velocities. Think of a belly flop in mud if your parachute doesn’t open…
Fluid-filled recoil-reducing pads have the ability to adjust to the shape of the shooter’s anatomy and push evenly on the top of the bumps as well as the bottom of the holes in your shoulder due to hydraulic force equalization.
Traditional recoil pad contact area
The contact area of a hard butt plate is dictated by how much the human wraps itself around the gun upon recoil. Rubber-only foam pads tend to collapse in length only, and therefore do not increase the contact area appreciably.
FalconStrike Hydraulic Recoil reducing system behavior
A new type of hydraulic recoil-reducing device has been developed which provides many improvements to the transfer of the recoil impulse and provides the single largest increase in comfort and confidence of any additive recoil-reducing accessory.
Three modes of operation are present in the standard 1-3/16” length-of-pull recoil pad.
First, the fluid-filled bladder acts like a whippletree evener on a horse hitch. The fluid flows to push evenly on the entire contact area; it pushes the same amount on the collar bone as it does on the hollows beside it. The best rubber-only pads provide a 15/70/15 force distribution above, on, and below the collar bone. The FalconStrike has a 26/42/32 force distribution for maximum impulse transfer comfort and confidence.
Secondly, the hydraulic bladder expands sideways by 12% as the shock wave transfers from the gun to the human. This increased contact area further reduces the point loading on any part of the shoulder, thereby increasing shooting stamina and reducing bruising or pain.
Third and most importantly, the patented shock absorber contained within the FalconStrike recoil reducing system removes 35% of the total recoil energy before it reaches the shooter. That is a decrease equal to or better than adding a muzzle brake, going to a semi-automatic, or shooting 2 calibers smaller.
Think of it this way: the 5 chrome balls in Newton’s cradle ping back and forth because almost all of the energy is transmitted through each ball. If you lift one up and drop it, one leaves the other end of the stack. Now imagine the center ball being a boiled egg. The energy would squish the egg and the last ball would not lift nearly as much. By the same analogy, instead of the recoil energy being delivered into the human directly by the gun, there is now a third player in the chain of events: the FalconStrike recoil-reducing shock absorber dissipates a large share of the energy before it reaches the shooter.
Said another way, FalconStrike works so well because now instead of the gun accelerating you, it squishes the shock absorber and you get 35% less recoil energy distributed into your body.
That amount of recoil reduction can be added to any platform, in addition to a muzzle brake or barrel venting.
Cone of probability
The cone of probability describes the range of deviation of the projectile flight path from the point of aim. The true value is the sum of all the gun-borne and human-induced variables. The cone of probability is included in the human-induced variable category because the majority of poor accuracy is the fault of the monkey holding the gun. Guns shoot straight. Humans don’t.
Fast and slow-twitch muscle movements
Humans have two types of muscle control: Fast and slow twitch.
- Fast-twitch muscle movements are what pull your hand out of the fire. The command initiates from the brain stem, and planning or coordinating fine movements is not possible.
- Slow-twitch muscle movements are what allow you to walk all day. Those muscles are commanded by slower but deliberate position control and are used for fine work like brain surgery, for example.
Generally, if you use the fast-twitch muscles to hold or fire the gun, the erratic motion generated is too fast for your eyes to appreciate, so the gun goes bang while the sightline is pointing somewhere within a larger cone of probability rather than dead on target.
Don’t hold the gun too tightly; the twitching increases the cone of probability a lot.
Ocular refresh rate
The human ocular refresh rate is the shortest time the human eye can distinguish between two separate images flashed in rapid succession. Humans generally can’t see things faster than 1/14th of a second. It sounds crazy, but your brain has learned to ignore the time your eyes are closed when you blink. The world plays in a continuous movie in your head, but you only see chunks of it, and your brain stitches all the pictures together to make you think it is continuous. That is a problem if you are straining to brace the gun because the rapid twitch muscle movement makes the sightline flutter, but your brain sees the sight picture dead steady. You move the gun a lot more than you can see. Figure out how to increase the steadiness of the gun.
Human muscular control feedback time lag
The recoil input event is over in 15 milliseconds or less, but it generally takes 250 milliseconds for the human to perceive the shock and produce a muscular counter-action. Even the fastest athletes have a reaction time in the range of 200 milliseconds. The gun goes bang a lot faster than your head does. Nothing you think you’re doing during the recoil event can improve accuracy; I’m sorry to inform you, but you are just plain too slow!
Sensory input stun point
The sensory input stun point is a rate of recoil energy which causes so much molecular friction and sensory nerve inputs into the shooter that they revert to singularly respond to that input stimulus only; it hurts so much, nothing else matters. Higher recoil signatures like 300 Win-Mag, 7mm Ultra, .338 Lapua, and the like produce over 15-foot pounds of recoil energy approaching the practical limit for any shooter. Even larger caliber platforms like the .50 BMG are categorically dangerous for cumulative physical harm to the shooter. That doesn’t sound like much fun at all…
The low end of the recoil energy input range would include cartridge platforms like .22 rimfire, where the sensory input is so low you are certainly going to run out of bullets before you run out of stamina.
Sighting systems, as the human interprets them
The human eye has two modes of perception. Acute vision focuses the image on the retina, and peripheral vision senses relative changes in position outside of the central focal point. Both these eye characteristics are used differently in the aiming of guns.
If the goal is to align the sighting system with as little parallax error to the aiming point as possible, acute vision is used. Open sights are limited by the judgment of the operator. Optics, which magnify the image, increase the resolution presented to the human eye. Reticle cross hairs superimposed by the optics on the aiming point help to develop a field of context which further increases accuracy.
Open sights come in many variations but generally employ a pin at one end of the barrel and a notch at the other. Repeatable aiming involves practicing to interpret the alignment of the pin in the notch superimposed on the desired point of impact. Another variant of the open sight is the peep sight.
Where the goal is to track flying targets, peripheral vision is crucially important in perceiving the motion of the target and planning the motion of bringing the bead or rib vent rail in line with the anticipated point of impact. In action type shooting, the sighting is more about muscle memory tuning in order to develop a repeatable target leading in anticipation of the point of impact.
Anticipation, (flinch response)
The learned response of anticipating the noise of the shot and the pain of the recoil is called flinching. People develop bad habits in a bid to protect themselves somewhat from the effects of recoil, invariably at the expense of accuracy. A common effect is to close one’s eyes momentarily as the trigger is pulled. The symptom is hidden by the 1/14th second ocular refresh time lag and the ¼ second human reaction time. The blast and recoil are so large as a sensory input that the shooter doesn’t realize they closed their eyes before the shot; closing one’s eyes while aiming invariably leads to an increased cone of probability in the projectile’s direction.
Another common trait of flinching is to hold the gun too tightly or snatch the trigger, which leads to fast-twitch muscle motion and further increasing the cone of probability.
The average length of time for a blink is 1/10th of a second. Any change in position of the new perceived point of impact while you blink will be blended by your mind to the previous sight picture. The difference between the perceived and the actual aiming point increases the cone of probability and directly affects accuracy. Blinking and flinching is not the same thing. Be aware of both.
Shooting firearms is a violent event. The noise and recoil can lead to an involuntary fear reaction that causes involuntary muscle contractions or cognitive disconnects, thus impairing the ability to shoot accurately.
An improperly fitted gun previously learned fear responses from heavy recoil, and poor technique aggravating the fear response all contribute to continued poor performance.
The learned response to the pain of recoil can degrade the ability to shoot accurately and lead to flinching.
Human mass involved in the impulse transfer
The average human mass analogy at play is 65 pounds in a man shooting from a standing position. The radius of gyration about the center of gravity is in the range of 48 inches. When the gun recoils, the resolution of the rapidly moving gun resolves to produce an inverted pivot that as the gun’s and the human’s velocities equalize, tent to come into the time required to input muscular effort to straighten back up. The rock back distance of a field load 12 gauge shotgun shell or 270 rifle move the average shooter backward around 3 inches before voluntary muscle control takes over and you straighten back up.
The compliance of the human, when subjected to recoil, is amazingly stretchy and looks nothing short of a bowl of Jell-O when viewed with a high-speed camera. Study of the entire recoil impulse progression shows a shockwave propagating concentrically from the gunstock on the shoulder, through the flesh, up to the arm and the neck of the shooter, and ending in the trigger finger and the earlobe of the shooter at about the same time…
Shooting grip technique
Poor technique, especially holding the rifle or shotgun too tightly in an attempt to increase the mass in play and therefore control the pain or fear of recoil causes two opposing sets of muscles to twitch if held too tightly or too long. The instinct to hold on tightly to avoid the fear or the pain of recoil tends to reduce accuracy. Instead, by holding the gun with your voluntary muscles in a controlled but relaxed grip, you will shake less, thereby increasing accuracy.
The goal of good trigger control is to produce a movement in the trigger which overcomes the tipping point without inducing any secondary motion in the aiming point. By using slow-twitch muscles, the action is smoother and therefore more predictable. The approximate timing of the shot should coincide with the lull in breathing but should surprise you. The placement of the finger on the trigger is generally best positioned with the center of the pad of the first finger joint on the trigger because the mechanics of the hand produces less sideways motion if done so. Jerking and slapping are less accurate.
Mastering the ability to control one’s breathing is crucial to accurate shooting; the changing shape and position of the shooter’s chest when breathing causes the gun to move in relation to the perceived aiming point. The time lag between the position of the gun and perception of motion by the human mind causes the true bore aim direction to differ from the perceived point of aim. Instead, there is a cone of probability that covers the extent of the motion. By perfecting the skill of holding your breath so the shot leaves during a period of non-chest-motion, the cone of probability is decreased in size, and accuracy increases.
Just like breathing, each heartbeat changes the shape and position of the chest. This inherently affects the aiming point. To maximize accuracy, the goal is to reduce your heart rate to the base minimum and to squeeze the trigger such that the shot surprises you between beats.
There are many positions to shoot from. The most accurate position is one which gives the greatest stability possible, such as the prone or bench rest position. Medium accuracy stances would include at least three points of contact, such as sitting with the forestock steadied on a knee. The least accurate is unsupported shooting from a standing position.
In shooting moving targets, the shouldering position of the gun should be natural and repeatable in order to tune the muscle reflex with the eye and point of aim. Engaging rapidly moving targets while cross-panning can cause the gun to shift positions on the shoulder which increases the risk of pain and injury and decreases accuracy.
Fit of the gun
Length of pull is the distance from the center of the trigger contact surface to the surface of the recoil buttstock. The ideal fit for the length of pull can also be measured with your shooting arm bent at 90 degrees with your trigger finger in a natural trigger pulling configuration. A measurement is taken from the first joint of the shooting finger to the crook of your elbow. 13-3/4 is the standard length of pull for guns manufactured in the USA.
Myself, I am 6’-3” tall and my length of pull is 14-1/2”. A well-fitted gun and a FalconStrike are the two biggest performance-enhancing features available. Get both.
Rate of fire
Rate of fire affects accuracy. The shorter the time available to compose the ideal shot, the more the cone of probability will increase. Rapid semi-auto or full-auto cycling will have so much recoil energy transmitted to the shooter that the muscular effort required to maintain the line of sight will increase the cone of probability.
Shots per session
The number of shots fired in any given shooting activity defines when it goes from being fun to being painful. Reaching the point of refusal, or worse, risking the development of flinching is dependent on many factors, but the biggest contributor is total recoil energy absorbed by the shooter in any one session.
Having analyzed all of the aspects listed above of both the gun’s behavior and of the human’s shortcomings, the single largest gain in comfort and confidence available is the FalconStrike hydraulic recoil energy converter.
Apart from a well-fitted gun, eliminating recoil energy before it reaches the shooter is the single largest gain available in order to advance in proficiency, and here’s why:
- By substantially increasing comfort and reducing the recoil energy transmitted to the shooter, the stun point is avoided, fear and anticipation eliminated, sensory input leading to the point of refusal is reduced leading to more enjoyment and time shooting, and most importantly, the cone of probability is reduced leading to greater consistency and accuracy.
- Mechanically, the FalconStrike installs easily with 2 screws on the butt of your gun, but will reduce energy transmitted to you by as much as a muzzle brake. FalconStrike is equally suited to increase comfort and confidence on shotguns or rifles. The revolutionary level of energy reduction and increase in comfort is akin to shooting two calibers smaller.
- In addition to permitting you to master the human factors, the single largest modification to any gun platform performance available is the FalconStrike. By maximizing the comfort of the recoil event and minimizing the energy transmitted, you will enjoy more time shooting, attain new levels of performance and enjoy the thrill of shooting like never before.
The increase in performance is summarized by these three successive improvements in human technology: Pointy sticks, Gunpowder, FalconStrike
It would be fair to warn you, you will go through more boxes of ammo. | <urn:uuid:993f2c03-8295-4ee3-8de3-e45763a8ae66> | CC-MAIN-2019-47 | https://www.falconstrikeusa.com/recoil/what-is-recoil/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665575.34/warc/CC-MAIN-20191112151954-20191112175954-00297.warc.gz | en | 0.931093 | 6,777 | 3.296875 | 3 |
Pronunciation: dees-TREE-toh feh-deh-RAHL.
Origin of state name: Describes the location of the federal government of Mexico.
Capital: Ciudad de México (Mexico City).
Coat of Arms: The coat of arms contains a picture of a golden castle that is surrounded by three stone bridges. There are two lions supporting the castle tower. Around the border of the shield are ten thorny cactus leaves.
Holidays: Año Nuevo (New Year's Day—January 1); Día de la Constitución (Constitution Day—February 5); Benito Juárez's birthday (March 21); Primero de Mayo (Labor Day—May 1); Revolution Day, 1910 (November 20); and Navidad (Christmas—December 25).
Flag: There is no official flag.
Time: 6 AM = noon Greenwich Mean Time (GMT).
The Distrito Federal (DF, Federal District) is the capital of Mexico. (Most of its territory is occupied by Mexico City.) It has an area of 1,547 square kilometers (597 square miles). It is about ten times the size of the District of Columbia, the US capital. Although there are no municipalities in Mexico's Distrito Federal, there are sixteen political districts.
The Distrito Federal is located in the center of the country. It is bordered on the north and west by the state of México and on the south by the Mexican state of Morelos.
The Distrito Federal lies on the high valley of Mexico, where the elevation is 2,280 meters (7,525 feet). (The high valley also encompasses parts of the Mexican states of Hidalgo, Puebla, Tlaxcala, and México.) The high valley is surrounded by mountains. Two volcanoes—Popocatépetl and Ixtaccíhuatl—are sometimes visible from the DF. One of the best-known canals in the Federal District is the Xochimilco canal.
The climate is generally dry, with the greatest rainfall occurring during the
In 1985, the DF region was struck by a devastating earthquake.
The area was once home to dense forests. Deforestation and heavy development has reduced much of the habitat of the native animals. During the winter months, migrating butterfly species may be viewed around the DF.
The government has created special reserves for protection of native plants and animals. The Chichinautzin Ecological Reserve, established in the late 1980s, has volcanic craters that sprout unique vegetation . Ajusco National Park has pine and oak forests. Xochimilco floating gardens is a popular tourist spot. Native animals can be seen in the zoo in Chapultepec Park.
Mexico City has some of the worst air pollution of any city in the world. In 1988, the government passed a law aimed at
Mexico's first national park, Desierto de los Leones (Desert of the Lions), was established in the DF in 1917. It is a rich forest area (the name desert came from its remote location in 1917). In response, the government launched a program to try to restore the health of the forest. Trees are being replanted and woodpeckers are being reintroduced.
Distrito Federal had a total population of 8,605,239 in 2000; of the total, 4,110,485 (48%) were men and 4,494,754 (52%) were women. The population density was 5,799 people per square kilometer (15,019 people per square mile). The Distrito Federal is the most densely populated region of the country. Almost all residents speak Spanish, but about 1.8% of the population speaks one of the indigenous languages as their first language.
According to the 2000 census, 81% of the population, or 7 million people, were Roman Catholic; 3%, or 277,400 people, were Protestant. That year there were also 7,852 Seventh-Day Adventists, 21,893 Mormons, 74,140 Jehovah's Witnesses, and 18,380 Jews. Over 280,000 people reported no religion.
Mexico-Benito Juárez Airport provides international flights to and from the Distrito Federal. Mexico City has been served by the Sistema de Transporte Colectivo Metro, an extensive metro system (over 200 kilometers/125 miles), since the 1960s. More than four million people travel on it each day. The fare is approximately us$0.20.
Mexico City is one of the oldest cities continuously inhabited in Latin America. Remains found at Tlatilco point to 1500 B . C . as the first time when a permanent settlement was built in Mexico City. Between 100 and 900 A . D . the center of human activity in the central valley of Mexico moved to Teotihuacán, an area north of modern day Mexico City. There , impressive pyramids were built as sites of worship for the Sun and the Moon. The valley where the city of Mexico is located was populated by Toltec Indians who began to grow and expand from southern Mexico and reached Teotihuacán after the Teotihuacán culture had begun its decline.
During the 13th century, Mexica Indians, also known as Aztecs, arrived in the region. According to the historical myth, they left a northern (probably mythical) city of Aztlán. They were led by their god, Huitzilopochtli, represented by a warrior-like figure. The myth says that when they arrived in the lake-filled region, the Aztecs witnessed the vision of an eagle devouring a snake while perched on a cactus. They believed that this was a message that their god wanted them to stay there. Tenochtitlán-Mexico was founded on June 8, 1325.
Under the leadership of monarchs Izcoatl, Montezuma I (Moctezuma I, d. 1469), Axayacatl (15th century), Tizoc, Ahuizotl (d. 1503), and Montezuma II (Moctezuma II, 1466–1520), the Aztecs emerged as the most important civilization in the central valley of Mexico. The highly disciplined warriors, who took the eagle and jaguar as their symbols, rapidly expanded the areas under Aztec domination. Yet, the Aztecs also absorbed and incorporated religious beliefs and cultural values of the groups they entered into contact with and dominated.
One of the most impressive constructions in Mexico City was the Templo Mayor, a double pyramid dedicated to the gods Tlaloc (god of water and rain) and Huitzilopochtli (god of war). They believed that Templo Mayor was the center of a universe, and human sacrifices were required to sustain it as such. In addition, there were other temples dedicated to Quetzalcóatl (father of civilization), Tezcatlipoca (god who creates and changes all things), and Ehecatl (god of wind). Together with the temples, the city rapidly grew as a commercial and military center of a vast Aztec empire. Located in the middle of a number of small lakes, the city gave way to the construction of a number of canals that surrounded plots of lands, called chinampas.
When the Spaniards arrived in the 1500s, Mexico-Tenochtitlán was probably one of the most populated cities in the world. Its structures and buildings must have deeply impressed the Spanish conquistadores (conquerors). Because of a religious myth, Aztecs expected the return of Quetalzcóatl. Believing Spanish explorer Hernán Cortés (1485–1547) was the returning god, Emperor Montezuma II met Cortés at the entrance of the city to welcome him. Cortés imprisoned Montezuma and moved on to kill many of the Aztec nobility. Despite the resistance of the Aztecs led by Cuahtémoc (c. 1495–1522), Cortés successfully conquered Mexico-Tenochtitlán on August 13, 1521.
Cortés quickly moved to control the rest of the central valley of Mexico but made Mexico-Tenochtitlán the capital city of the new Spanish territory. Roman Catholic churches were built on top of the ruins of Aztec temples, and the new government buildings replaced other sacred Aztec constructions. The Mexican presidential palace, the city's cathedral, and the main central square were located exactly in the same place where the Aztec's most important buildings were erected. That gives Mexico City a profound sense of the dramatic changes that occurred with the arrival of the Spanish conquistadores. It also reflects the deep history of a city that has been one of the world's greatest cities for centuries. Most of the people of Mexico are mestizo (mixed Amerindian and European descent). They are like Mexico City, where two rich and expanding civilizations came together.
After Cortés conquered it, Mexico-Tenochtitlan (now Mexico City) became the center of colonial rule—through the 16th, 17th, and 18th centuries, and through the heart of the independence movement between 1810 and 1821. It was at the center of the political instability that characterized Mexico between 1823 and 1867. It was the place where president Benito Juárez (1806–1872) began adopting his celebrated reforms. Mexico City was the main objective of the revolutionary leaders of 1910.
Plaza de la Constitución, the square in the center of the city (commonly referred to as the Zócalo ), symbolizes the three cultures of Mexico City: the original Aztec, the invading Spanish, and the resulting Mexican culture that blends the two. The DF is one of the most populated cities in the world. It
By the 1970s, the city began to experience extreme problems with air pollution. Mountains surrounding the city exacerbated the problem, since they caused the air to be trapped over the city.
The chief of government ( jefe de gobierno ) is the chief executive (or mayor) in the Distrito Federal. Previously appointed by the president as a cabinet minister, since 1997 the chief executive has been democratically elected, by those residing in the Distrito Federal, for a six-year, nonrenewable term. The first elected chief of government was Cuauhtémoc Cárdenas Solórzano (b.1934). The legislative assembly of the DF is comprised of sixty-six members, forty elected in single member districts and twenty-six elected by proportional representation. The chief of government of the DF is elected concurrently with the president of Mexico on a separate ballot. As in other DFs, the federal government retains some authority to decide on matters that pertain to financial and administrative issues.
The local and state governments are the same, since the Federal District is the local government of the capital city of Mexico.
The three main political parties in all of Mexico are the Institutional Revolutionary Party (PRI), the National Action Party (PAN), and Party of the Democratic Revolution (PRD). These three parties have a strong presence in the DF. Voters first democratically elected their chief of government in 1997. Former PRD presidential candidate Cuauhtemoc Cárdenas Solórzano won the election. The PRD's Andrés Manuel López Obrador won the 2000 election and emerged as one of the most important contenders for the 2006 presidential election. The PRD is strongest in the DF, but the PAN has also made inroads. Since the PRI has mostly lost support in urban areas, its strength in the DF has also diminished significantly since the mid-1980s.
The Superior Tribunal of Justice is the highest court in the Distrito Federal. Its members are appointed by the chief of government, with congressional approval, for renewable six-year terms. Although there is also an electoral tribunal and lower courts, the presence of the national Supreme Court and the Federal Electoral Tribunal usually render the Federal District Superior Tribunal and Electoral Tribunal less important than its counterparts in the other thirty-one states. Yet, as the Distrito Federal slowly changes from being a bureaucracy highly controlled by the federal government into a more state-like autonomous entity, the independence, the autonomy, and the importance of the federal district judicial system will become more important.
Over 10% of Mexico's gross domestic product (GDP) is produced in the DF. Many manufacturing concerns are headquartered in the DF; it is also a center for Mexico's tourism industry.
Major industries located in the DF include the manufacture of auto parts, food products, electrical equipment, electronics, machine tools, and heavy machinery.
The US Bureau of Labor Statistics reported that Mexican workers saw their wages increase 17%, from $2.09 per hour in 1999 to $2.46 per hour in 2000. (The average US worker earned $19.86 per hour in 2000.) After one year, workers are entitled by law to six days paid vacation.
A few small dairy farms lie on the outskirts of the city, with the milk and cheese sold locally. Some families also raise pigs and chickens in backyard pens. A typical family might keep three pigs and one or two dozen chickens. Some of the animals are consumed by the family, but most are raised to be sold by local butcher shops. Vegetables and fruits are also raised, but only in small family gardens.
There was once logging in the DF, but most of the forests are now protected.
The Federal Electricity Commission (CFE) manages the generation of electricity in Mexico; the smaller, but also government-owned, Luz y Fuerza del Centro (LFC) supplies electricity to much of the DF.
The Distrito Federal (Mexico City) has 109 general hospitals, 699 outpatient centers, and 589 surgical centers.
Most of the Mexican population is covered under a government health plan. The IMSS (Instituto Mexicano de Seguro Social) covers the general population. The ISSSTE (Instituto de Seguridad y Servicios Sociales de Trabajadores del Estado) covers state workers.
Housing in the DF varies from luxury townhouses and apartments to housing built from poor quality materials. About 10% of the housing is in need of upgrading. The rapid growth in population in the DF means that there is an ongoing shortage of affordable housing.
The system of public education was first started by President Benito Juárez in 1867. Public education in Mexico is free for students from ages six to sixteen. There were about 1.5 million school-age children in the DF in 2000. Many students elect to go to private schools, especially those sponsored by the Roman Catholic Church. The thirty-one states of Mexico all have at least one state university. The National University of Mexico (UNAM) is located in the DF, as are El Colegio de Mexico (College of Mexico) and the Instituto Politécnico Nacional (National Polytechnic Institute).
Mexico City is home to many dance and music performing arts groups. The Ballet Folklórico Nacional de Mexico (National Folk Ballet of Mexico) performs in the Palacio de Bellas Artes. Three other companies—Ballet Independiente, Ballet Neoclásico, and Ballet Contemporánea—perform in Mexico City. There are three major orchestras (including a children's symphony orchestra). ¡Que Payasos! (Clowns) is a popular rock-and-roll group that performs at festivals, especially for young people. There are over thirty-seven theaters and auditoriums sponsoring plays, concerts, and other types of performances.
There are 390 branches of the national library system in the Distrito Federal. There are also 127 museums. The most important museums are the art studio of famous painter Diego Rivera (1886–1957); an archeological museum; the home of artist Frida Kahlo (1907–1954); a national zoo; a science museum; a museum of paleontology (the study of fossils); a stamp museum; a mural museum dedicated to the art of Diego Rivera; a museum of the Mexican Revolution (1910–1920); a cultural institute of Mexico and Israel; the Palace of Fine Arts; the Basilica of the Virgin of Guadalupe (patron saint of Mexico); a Bible museum; an Olympic museum; Chapultepec Castle (located in Chapultepec Park—a huge city park); the archeological museum of Xochimilco; and many others.
Mexico City has numerous newspapers. Some of the most popular ones are Cuestión, Diario de México, Diario Oficial de la Federación, El Economista, El Heraldo de México, El Sol de México, El Universal, Esto, Etcerera, Excelcior, Expanción, La Afición, La Crónica de Hoy, La Prensa, Novedades, Reforma, and Uno Mas Uno.
Mexico City's downtown area has a beautiful baroque cathedral called the Catedral Metropolitana . Alameda Central, dating from the 17th century, is the oldest park in the country. The Palacio de Bellas Artes has murals by Diego Rivera and David Alfaro Siqueiros (1896–1974) and crystal carvings of Mexico's famous volcanoes, Popocatépetl and Iztaccihuatl. The Zona Rosa (Pink Zone), a famous shopping area, also has two beautiful statues, La Diana Cazadora and the statue of Cristóbal Colón (Christopher Columbus). Chapultepec Park houses the castle of the former emperor Maximilian (1832–1867) and empress Carlotta (1840–1927), a zoo and botanical gardens, and the famous Museum of Anthropology, which has the old Aztec calendar and a huge statue of Tlaloc, the Aztec rain god. The main avenue of Mexico is La Avenida de la Independencia, featuring the statue of the Angel of Independence, a famous landmark of the city. The shrine of Our Lady of Guadalupe, the patron saint of Mexico, is visited by thousands of pilgrims, some who climb the steps on their knees. The floating gardens of Xochimilco may be viewed by boat. University City houses the Universidad Autónoma de México (UNAM); its modern campus buildings feature murals and mosaics by Diego Rivera on their outer walls.
Bullfighting is popular in Mexico City. Spectators may choose to pay higher prices to guarantee seats in the shade, since the sun in Mexico City can be scorching.
Soccer is the most popular sport, and Mexico City has six soccer stadiums. Four teams—National Team, Atlante, América, and Necaxa—play their home games in the huge Estadio Azteca (Aztec Stadium); it seats 114,465 people and was the site of the World Cup finals in 1970 and 1986. The soccer team from the Universidad Autónoma de México plays in the 72,449-seat Olympic Stadium, built for the 1968 Olympics. Another soccer team, Cruz Azul, plays in a 39,000-seat stadium.
The professional baseball team, Diablos Rojos, plays in the 26,000-seat Foro Sol stadium. Mexico City also hosts bullfighting in the 40,000-seat Plaza Mexico and in the 10,000-seat Plaza de Toreo Cuatro Caminos. Toluca has minor league soccer teams that play in the 26,000-seat Nemesio Diez stadium. Nezahualcayotl has a professional soccer team, Neza, which plays in the 37,000-seat Neza 86 stadium.
Hernán Cortés (1485–1547) was a Spanish conquistador who conquered Aztec emperor Montezuma II to become the founder of Spanish Mexico. Though born in Spain, after his death his remains were placed in a vault at the Hospital de Jesus chapel, which he helped build. Octavio Paz (1914–1998), winner of the 1990 Nobel Prize for Literature, was born in Mexico City. Composer Carlos Chávez (1899–1978) produced works that combined Mexican, Indian, and Spanish-Mexican influences. Cantinflas (Mario Moreno Reyes 1911–1993) was a popular comedian, film producer, and writer who appeared in more than fifty-five films, including a role as Passepartoute in the 1956 version of Around the World in Eighty Days. Agustín Lara (1900–1970) was a popular composer who made his mark in the film industry from 1930 to 1950, a period known as the Golden Age of Mexican cinema. Carlos Fuentes (b. 1928) is a renowned writer, editor, and diplomat. He was head of the department of cultural relations in Mexico's ministry of foreign affairs from 1956 to 1959 and Mexican ambassador to France from 1975 to 1977 . His fiction works deal with Mexican history and identity and include A Change of Skin, Terra Nostra, and The Years with Laura Díaz (all of which have been translated into English from Spanish). José Joaquín Fernández de Lizardi (1776–1827) was a journalist, satirical novelist, and dramatist, known by his pseudonym El Pensador Mexicano. His best known work is El Periquillo Sarniento (The Itching Parrot). Manuel Gutiérrez Nájera (1859–95) is considered to be one of the first Mexican modernist poets. The life of painter Frida Kahlo (1907–1954) was the subject of a 2002 feature film, Frida, starring Salma Hayek.
Caistor, Nick. Mexico City: A Literary and Cultural Companion. New York: Interlink Books, 2000.
Carew-Miller, Anna. Famous People of Mexico. Philadelphia: Mason Crest Publishers, 2003.
Laidlaw, Jill A. Frida Kahlo. Danbury, CT: Franklin Watts, 2003.
Supples, Kevin. Mexico. Washington, DC: National Geographic Society, 2002.
Mexico City. http://www.mexicocity.com (accessed on June 17, 2004).
Mexico for Kids. http://www.elbalero.gob.mx/index_kids.html (accessed on June 15, 2004). | <urn:uuid:b784663f-56ef-462b-8d9a-d1639492801a> | CC-MAIN-2019-47 | https://www.nationsencyclopedia.com/mexico/Aguascalientes-M-xico/Distrito-Federal.html | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670635.48/warc/CC-MAIN-20191120213017-20191121001017-00540.warc.gz | en | 0.949651 | 4,720 | 3.53125 | 4 |
For a city of around 275,000 inhabitants, Verona can swell twice it’s size between June and September, most of t come for the Summer Opera Season at the Verona Arena or to see the (fictional) home of Juliet Capulet.
In between these two big draws there is an amazing UNESCO World Heritage City filled with ancient Roman Ruins, Medieval Castle Fortresses, Ancient City Walls, Gothic and Romanesque Churches and beautiful 14th-16th century Piazze. There is a lot packed into this small city.
Thanks to the strategic location on the Adige River and the fertile Po River Valley, Verona has enjoyed a prosperous life since around 500 BC. It became a Roman colony is 89 BC. In 49 BC all Veronese were granted Roman citizenship by Julius Caesar who used to vacation here himself.
By the end of the 1st century, Verona had a population of over 25,000, a city wall with large gates, Roman Forts, a Forum, wealthy villas, a theater and a large amphitheater that held 30,000.
Ancient Verona was also well situated between three important Roman roads; the via Postumia (built in 148 BC) that traveled from Genoa to Aquileia in Friuli, the Via Claudia Augusta (built in 47 AD) that traveled from the Po River Valley through the Alps into southern Germany) and the Via Gallica that went from the Po River valley to Brescia, Bergamo, the Lakes and Milan. It’s no wonder why Verona became a hub for ground and naval transportation and the defense of the northern Empire against Gallic and Celtic tribes.
The Verona Arena is the most famous of the ancient treasures here. It’s the 2nd most complete amphitheater in Italy (the Coliseum in Rome is #1). During the span of the Roman Empire over 230 amphitheaters were built. The best survivors are in Rome (Italy), Arles (France), Nimes (France), El Djem (Tunisia), Pula (Croatia) and here in Verona.
The Arena was built in the 1st century. Some attribute it to the reign of Tiberius, around 30 AD, while others say it was later, around 75 AD, during the reign of Vespasian. The Coliseum in Rome and the Amphitheater of Pula (Croatia) were also built during the time of Vespasian so it kind of makes sense that the Verona Arena was built at the same time.
The original Amphitheater was built at 152m x 113m (498’ x 371’) but after an earthquake in 1117 collapsed parts of the outer ring and the subsequent dismantling of stones by local scavengers, the overall size was reduced down to 140m x 110m (459’ x 361’), although when the Venetians took the city in 1405 they restored a lot of the outer wall.
Most of the interior shape is still in tact but the outer ring known as the ‘Aula’ or wing is mostly gone. It’s still very big and with 45 tiers of steps, there is plenty of room to stage a grand Opera performance, which has been happening every year since 1913.
Over 30,000 spectators watched gladiatorial games here till 404, when the Western Roman Emperor Honorius ended the entertainment of mortal combat. By the way, he ended this spectacle of death over 50 years after the Roman Empire became a Christian Empire. I guess back then the early Christians still enjoyed a good blood sport.
When the Goths took the city in 489, they used it for festivals and entertainment. The Goths were Christian but we have no evidence of mortal combat games reinstated in the Amphitheater.
During the Middle Ages, executions were added to the list of events held in the Arena. One of the worst occurred during the 13th century when 166 Cathars were burned alive.
Cathars were Christians who believed the mortal world was created by Satan and only the belief in the “good” god would save our everlasting souls. Pope Innocent III believed the Cathars should be wiped from the face of the earth. The slaughter of Cathars through France and Italy was one of the worst tragedies ever committed in the name of Christianity. In Béziers France alone, 20,000 Cathars were slaughtered.
I’ve written more about the Cathars in my post from the Languedoc in 2004.
In 1889, William ‘Buffalo Bill’ Cody and his Wild West Show featuring Annie Oakley performed inside the Verona Arena
On August 10, 1913, Verdi’s Aida was performed in the ‘Arena’ as a tribute to the 100th anniversary of Verdi’s birth and with the exception of war years (1915-1918 and 1940- 1945) there has been a summer (June through August) Opera season here for over 100 years.
The Arena seats 15,000 for each production, half of what the original audience was in the 1st century. Tickets range from to sit on the hard stone bleachers in the nosebleed section to for the nice padded seats on the Arena floor.
You can book your tickets at the Arena website.
The Verona Arena is mostly constructed of pink and white Jurassic age limestone quarried from nearby Sant’Ambrosia di Valpolicella, about 10 miles north of Verona in a hilly area near Lake Garda mostly known for it’s wines.
If you walk across the limestone tiers of seating and look carefully you’ll see the coiled mollusk shapes in the stone. Although they might look like the modern nautilus shell, they are actually another species known as ‘ammonites’, more closely related to an octopus or squid than a nautilus and NOT related to the Ammonites who were a pre biblical Semitic people from the Jordan River valley. Whatever these organisms were related to, they went extinct in the Cretaceous period at the same time as the dinosaurs, over 100 millions years ago.
The name comes from Pliny the Elder who coined them ‘Ammonus Cornua’ in the 1st century when he first noticed them and though they looked like the rams horns worn by the Egyptian god Ammon. Pliny the elder died in 79 AD when he decided to get closer to the volcanic eruption of Mount Vesuvius that destroyed Pompeii and also destroyed Pliny the elder.
The tiers of the Verona Arena are filled with ammonite mollusk fossils, creating wonderful spiral designs in the pink and yellow stone.
Once you see them inside the Arena, you can’t help noticing them all over Verona’s streets.
THE GATES AND TO THE ANCIENT ROMAN CITY
The white limestone 1st century Porta Borsari (Porta Iovia) was the main entrance to the Roman city from the via Postumia. There is still an inscription on the gate from the time the Emperor Gallienus made renovations in 265.
The via Postumia became the Decumanus Maximus in the city. Roman cities had two main arteries of traffic: the Decumanus (east to west) and the Cardus (north to south). The Decumanus Maximus also led to the Military Fort, which in Verona was by way of the two Roman bridges across the Adige River.
The name Borsari comes from the middle ages when the gate was used to collect tolls and customs duty on merchandise coming in and out of the gate. The bags the customs agents filled with money are ‘bores’ and the agents themselves were called the Borsari.
The 1st century Arco dei Gavi, a memorial arch built by the Gavi family, is also on the via Postumia, (the Decumanus). Most of the arches built during the Roman Empire were in honor of the Emperor but there are a few excellent examples of wealthy family building arches in honor and memory of their family. If you’re ever in Pula, Croatia, check out the Arch of the Sergii built by the Sergii family. Pula also has one of the best Roman Amphitheaters in the world.
During the middle ages the Gavi Arch was in
corporated into the expanded city walls and used as one of the main gates till it was demolished in 1805 during the Napoleonic occupation. In 1932, most of the pieces were found and the arch was resurrected next to the Scalageri Fortress known as ‘Castelvecchio’. It’s current location is very close to the original one. The Gavi Arch still has the inscription of the architect, Lucius Vitruvius Cordo who worked a lot in ancient Verona. The inscription reads: Lucius Vitrivius Cerdo, a freedman of Lucius.
The Porta Leoni is the oldest gate in Verona, from around 50 BC. The name Leoni comes from a Roman tomb decorated with two lions found nearby. Not much is known about the Porta Leoni prior to the middle ages and not much of the gate is left. It was once a double façade with two towers but now only half of the inner façade is visible. The other half is long gone, probably used as building materials. In fact, the Porta Leoni actually looks like building materials used to hold up the corner of the intersection at via Amanti and via Leoni.
BRIDGES TO THE ANCIENT ROMAN CITY
The Ponte Pietra (Stone Bridge) is the ancient Roman Pons Marmoreus was also built on the Via Postumia in 100 BC. During the Roman times, there two bridges (the Marmoreus and the Postumius ) flanking each other across the Adige River. By the 10th century, The Postumius Bridge was pretty badly damaged. The floods of 1154 and 1239 wiped out any remaining parts of the bridge.
According to the Italian archeology writer Vittorio Galliazzo, there are still remains of 900 ancient Roman bridges throughout Europe, Turkey, Middle East and northern Africa.
Actually, only the first two arcades on the left side of the bridge are from the Roman period. The other three were rebuilt.
The one closest the right bank was built by Alberto I della Scala in 1298. The other two were built in the 16th century. The difference between the arches is pretty obvious. The newer arches are the traditional red brick look favored by the Della Scala (Scalageri) family.
The Torre di Guardia (Watch Tower) on the right bank was also built by Alberto I della Scala at the same time he repaired the bridge.
On April 24, 1945, the four center sections of the span were blown up by the retreating German Army. However, the Superintendent of Monuments had an idea the war might damage some of the monuments and so he had a team of people photograph every detail of the bridge. After the war, the Ponte Pietra was restored, rebuilt to the exact details with the same materials.
THERE ONCE WAS A ROMAN FORUM UNDER THE PIAZZA DELLE ERBE
The Piazza delle Erbe and the sits on what was once the ancient Roman Forum but what is under the Piazza might remain untouched. The 12th – 16th century Piazza is so beautiful and historic in it’s own right that archeological digs under them seems absurd.
The Pizza is named after the herbs and vegetables that were once sold here. This has always been the hub of commerce and life.
There is a curious covered square pavilion in the center of the square with handcuff chained to the upright post. I have no idea why it’s there but my guess is for tourist imaginations. I have never read anything about public executions or criminals put on display in the center of the square.
The fountain in the center of the square was built in 1368 by Cansignorio della Scala. . The statue on top of the fountain is called Madonna Verona, but it’s ancient Roman from 380 ad.
The statue faces the 16th century Loggia Berlina where political critics were once tied up and pelted with rotten vegetables sold in the market square.
The 12th Century Scalageri Palace was once located in the square but the Della Scala family had many palaces, including the Great Scalageri Fortress known as Castelvecchio. The Palace went through many renovations and became the Casa Mazzanti. The only accessible part of the old Palazzo is at the Café at Casa Mazzanti, a good spot for a coffee or cocktail.
The Hall of Commerce (Palazzo Maffei) is a Baroque masterpiece crowned with statues of Hercules, Venus, Jupiter, Minerva, Mercury and Apollo.
The 14th century Gardello Tower was built by Cansignorio della Scala. The clock and moving spheres were added in 1421 when it became known as the Torre delle Ore (Tower of Hours), or the clock tower. You can see the original bells in the Castelvecchio Museum.
The Piazza dei Signori, around the corner might also have been built over the Roman Forum. This is the royal square where the Scalageri Palaces once ruled the city. In the center of the square is a 17th century statue of Dante who was supported by of Cansignorio.
The square is filled with 12th-15th century buildings ranging from the Palazzo della Ragione (Town Hall), the Casa dei Mercanti (Merchants Hall and now the Banca Popolare di Verona), the Podesta (government Palace) and the Consignorio Scalageri Palace.
There is a mouth under the 17th century entrance known as the Porta dei Bombardieri (the gate of artillerymen) on the Cansgnorio Palace where people used to feed anonymous complaints against their neighbors, ratting them out to the local authorities.
THE ROMAN THEATER
The Roman theater sits at the base of the hill next to the Scalageri Castelvecchio. It’s one of the most preserved 1st century Roman Theaters in Italy. The first theater here dates back to the 1st century BC at the time of Augustus.
The theater was buried under the complex of churches, convents and urban construction for years until 1757 when excavations were made to recover it. In 1830 many of the houses on the hill were removed and through further excavations in the early 1900s, the theater gradually came back to life. The restoration was completed in 1955. The original theater was around 400’ x 500’. The current version is reduced down to around 350’ x 450’.
The cavea (horseshoe shaped seating tiers) are pretty much all that remains but they’re in good condition. There are also some original steps, the remains of a few arches and part of the ancient stage. The theater is used for small ballet, small concerts and, of course, dramatic performances. When the performance season ends (the same time as the Opera season) the Theater often closes for reparations.
Above the theater, on what was once called the Mons Gallus, is the Castel San Pietro. It was inhabited by Theodoric the Great in the 5th century and later used and added onto by Pepin the short in the 8th century and Berengar I in the 9th century. Gian Galeazzo Visconti, the Duke of Milan built a Castel here in 1398 but it was blown up by Napoleon’s French in 1801. The Austrians built the current Castel up on the hill in 1856. It’s a great view over the city but the Castel has been closed for a long time. There are plans to turn it into a museum but you know how plans go.
ROMEO AND JULIET TOURISM
We’re staying at the Escalus Verona, a small hotel a stone’s throw from the Verona Arena. Escalus was the Prince in Shakespeare’s ‘Romeo and Juliet’. He is the voice of reason a calm who tries to keep the Montague and Capulet families in peaceful coexistence. The hotel is built from an old building and although the exterior is fairly nondescript, the rooms are large and contemporary and the staff is wonderful. We are in the attic room which can be a head bumper for taller people but it does have the added attraction of an amazing private terrace overlooking the Arena. We sat up on the terrace and listened to a performance of Jesus Christ Superstar.
Shakespeare took his story from the 1562 poem the ‘Tragicall Historye of Romeus and Juliet’ by Arthur Brooke.
Brooke took the story from an Italian novella written by Matteo Bandello, a novelist from Mantua who wrote the book around 1560. Bandello’s stories were also the inspiration for ‘Twelfth Night’ and ‘Much ado about Nothing’.
Matteo Bandello took the story from a novella written in 1531 by Luigi da Porto, a captain in the Venetian Republic army who heard the story from one of his bowmen. Every story has a story.
I had an High School English teacher who told us that Shakespeare never traveled outside of England. He got all his ideas from books. In the 16th century England, Italians were considered clever, devious and exotic and of the 37 plays Shakespeare wrote, 14 of them take place in Italy.
Most literary historians believe the feuding families of the Montague and Capulet were modeled after the Montecchi and Cappelletti but in the 14th century these names were more known as political factions representing the Guelph and Ghibellines (more about this feud in a minute).
The Montecchi were originally from Vicenza, a good 50 kilometers from Verona. The Cappeletti (named for the linen hats they wore) were centered in Cremona, 100 kilometers away, but why let the facts get in the way of a good story. Actually there were members of each political family who settled within the city of Verona.
The last of the Montecchi family, Crescimbene de Monticuli, was thrown out of Verona in 1324 by Cangrande I (Scalageri). The house where the Monticuli (supposedly) lived is on the Via delle Arche Scalageri. It is now known as the house of Cagnolo Nogarola called Romeo’s house (Casa di Cagnolo Nogarola detto Romeo).
The house did actually belong to the Cagnolo Nogarola family (part of the della Scala family). The only way you’ll know it’s Romeo’s house is from the plaque on the wall. The house is private and tourists are not allowed inside. The house, by the way, is conveniently close to Juliet’s house.
The house that poses as Juliet’s house was once owned by the dal Cappello family. In fact the house sits on the via Cappello. It’s very close to Romeo’s house and very very close to the crowded Piazza delle Erbe.
There are Romeo and Juliet references all over Verona, but Juliet’s house and the balcony where she gazed down to Romeo’s sweet words is the one that tourists flock to.
Juliet never lived in the house. Yes, that’s right Virginia, there never really was a Juliet Capulet. She is a work of fiction but that doesn’t get in the way of the faithful.
Inside the Capulet Museum is filled with movie and photo trivia about the play. There is also a computer room where you can send love letters to your favorite character.
The Capulet house does date back to the 14th century, which was the period when the play takes place, but the famous balcony was added much later in order to tie into the story. The balcony is actually a medieval sarcophagus attached to the wall over the courtyard. If you didn’t know, you’d never notice.
Not much is known about the Cappello family except they sold the property to the Rissardi family in 1667. The Rissardi opened it as a hotel and, of course, told the guests the house was once the home of Shakespeare’s Juliet. It was sold to the state in 1905 and was pretty much forgotten.
In 1936, George Cukor’s film of Romeo and Juliet changed everything. The film starring Norma Shearer and Leslie Howard became an international sensation and by 1939, Juliet mania was in full swing. September 16th has been deemed Juliet’s birthday. I have no idea why but it has become a very big day for Juliet tourism.
For many years unrequited lovers have attached notes or graffiti to the brick walls of the courtyard under the balcony, hoping that the fictitious romantic heroine would somehow create a magic spell that will grant them their true love’s request.
The adhesive of choice for posting notes was chewing gum. It got pretty ugly. In 2012 the city of Verona had enough and passed an ordinance against chewing gum or graffiti on the walls with a penalty up to a 500 euro fine. The gum law was originally passed in 2004 but I guess it didn’t stick and so now it’s being re-enforced.
These days romantic tourists are still allowed to leave their love letters to Juliet but only on the removable panels in the entrance arch to the courtyard. Romeo and Juliet tourism is enormous and the city legislators are very careful not to piss off the visitors too much.
Fictional tourism is big business. One of the favorite tourism destinations in London is the Sherlock Holmes apartment at 221B Baker Street.
Ever since the fall of the Soviet Union thousands have made the visit to Dracula’s Castle (Bran Castle in Romania) which by the way is currently for sale.
In my city of San Francisco, where the Sam Spade walking tour is very popular, you can go to a small alley called Burritt Street, just off Bush Street above the Stockton Tunnel near Union Square. There is a plaque on the wall that reads, ‘On approximately this spot Miles Archer, partner of Sam Spade, was done in by Brigid O’Shaughnessy.’
Aside from the visitors to the house, over 6,000 letters addressed to ‘Juliet’s House’ Verona Italy arrive every year.
They are mostly received by the Juliet Club, a group who share the courtyard with the famous Juliet House. This is a volunteer organization of people 18 or over preferably with candidates who have several languages and a background in psychology, sociology, literature and journalism. Not all candidates are accepted.
There used to be another way to have your love granted that didn’t require writing a note. There was a bronze statue of Juliet in the courtyard and just a rub of her right breast would grant you the fulfillment of your true love. That was until early 2014 when the statue was removed. The statue was put there in 1972 but after 42 years of aggressive groping and rubbing, the breast developed a large hole and the right arm used to support the gropers of the magic orb cracked. The statue is being restored in the Museum Castelvecchio. The right arm and right breast of the replacement statue are already showing the polished acid touch of human hands.
If you are serious in your quest for everything Romeo and Juliet in Verona, there are a few other locations you might want to visit.
Juliet’s tomb is in the crypt of the Convent at the San Francesco al Corso Church. This Franciscan church has been here since the 13th century and since it was outside the Verona city walls in the 14th century it fit the description for the burial of Juliet. The convent also has a medieval museum and fresco museum.
The crypt of the 11th century basilica of Saint Zeno is now regarded as the place where Romeo and Juliet were married by Friar Lawrence. The crypt is eternally occupied by the relics of Saint Zeno, the 8th bishop of Verona and the city’s patron Saint.
Zeno was born in Mauretainia (northern Africa) around 300. According to Saint Ambrose (of Milan) who lived at the same time, Zeno died a ‘happy Death’ and was not tortured or martyred. However, according to Catholic mythology, if you are a saint you had to be martyred and so according to the Roman Martyrology, the Bishop of Verona was martyred by the Emperor Gallienus in 371 AD, even though the Emperor Gallienus died in 268.
According to legend, Theodoric the Great consecrated a small church over the tomb of Zeno in the late 4th century. In the 9th century his bones were moved to a new basilica built over the ancient Roman road, the Via Gallica. They were moved again during the Magyars invasion of the early 10th century and then moved back again on May 21st 921. Even though the feast day of Saint Zeno is actually April 12th, the Veronese celebrate on May 21 to coincide with the reburial of the sacred bones, the same bones that share the crypt where Romeo and Juliet were secretly married.
Theodoric the Great rebuilt the fortifications of the city but eventually lost the city to the Lombards. The Lombards finally fell to Charlemagne in 774.
By the way, Bell ringing began here in Verona in 622 when bells in the tower rang to announce the death of the Bishop Mauro. The earliest grand bell tower was at the Basilica di San Zeno in 1149.
THE GUELPH, GHIBELLINES AND THE SCALAGERI
Let’s backtrack a few minutes here for a brief introduction to the Italian City/States of the 11th-14th centuries, a time of feuding wealthy families preying on other wealthy families who lived nearby all in the name of the Guelph and Ghibellines. Lombardy and the Veneto is still filled with the fortresses and towers of the Visconti, Sforza, Malatesta, Gonzaga, d’Este, da Romano, Scalageri and others.
The age old rivalry between the Houses of Bavaria and Swabia moved into Italy after Frederick II (Barbarosa) was crowned Holy Roman Emperor in 1152. His resistance was Pope Alexander III.
The Pope was supported by the Guelphs (from the Bavarian ‘Welfs’), merchant families of the larger cities. The Emperor was supported by the Ghibellines (from Waiblingen, the ancestral home of the Hohenstaufen Swabians), land owners of the smaller cities. The conflict, mostly in Northern Italy, lasted over 400 years.
By the way, the great Emperor Frederick II Barbarosa who started the mess drowned in 1190 while crossing a shallow part of the Saleph River in Turkey (now known as the Göksu River). The current was too strong and the horse dumped the Emperor into the water face down. The water was shallow but the armor was too heavy and he couldn’t roll over. His troops packed him in a vat of vinegar, hoping to preserve him for a burial in Jerusalem but it didn’t work. His remains made it as far as Antioch (modern day Antakya) in southern Turkey.
It was the da Romano family that secured Verona for the Ghibellines after Ezzelino II da Romano defeated the ‘Lombard League’ of Pope supporters in 1212. His son Ezzelino III has gone down in history as one of the cruelest tyrants of Italian history. By the 14th century, historians claimed he was the ‘Son of Satan’. Even Dante placed him in the 7th circle of Hell. However, history always seems to be rewritten by the victorious.
When Ezzelino III died in 1259, the Scaligeri family took control of Verona. The family emblem is the ladder (Scala) but the family identified more with a pack of wild dogs.
The first ruler was Mastino I (Mastiff I). Down the line came Cangrande I (Big Dog I), Mastino II (Mastiff II), Cangrande II (Big Dog II) also known as Can Rabbioso (Rabid Dog), Cansignorio (Lord Dog) and Can Francesco (Dog Frank). Rabid dog was killed by Lord Dog in 1359.
Castelvecchio and the beautiful brick Castelvecchio Bridge were built by Cangrande II (Can Rabbioso) to protect himself from his enemies (the Venicians, the Sforza of Milan and Gonzaga of Mantua. Unfortunately the enemy who managed to kill him in 1359 was his brother Cansignorio who gave the city some of it’s most memorable 14th century buildings; the palaces in the Piazza dei Signoria, the Scalageri Tombs and the Gardello watchtower which in 1370 was built with a 51” wide 4,000 pound cast bell and one of the first bells to strike the hours in the world. This bell is in now the Castelvecchio Museum.
Construction of Castelvecchio started in 1354 over an ancient Roman Fort on other opposite side of the Adige River. It’s a great example of Gothic Castle construction and now, mostly used as the Medieval museum.
The Ponte Scaligero, the fortified brick and concrete bridge across the Adige River was also built in the 1350’s. It is an extention of the fortifications of the Castle. At it’s time of completion, it was the longest span bridge in the known world, 48.7 m (159.8 feet).
The designer, Guglielmo Bevilacqua was so nervous about the construction, he came to the inauguration of the bridge on horseback, just in case he needed to make a fast getaway if the bridge collapsed. The bridge worked out perfectly, that was until it was completely destroyed by the retreating Germans on April 24, 1945. The reconstruction (minus one of the fortified towers) began in 1949. I don’t know why they never built the second tower. Maybe they ran out of funding. Anyway, the bridge reopened in 1951.
Napoleon set up his Military headquarters here which is now part of the Museum tour. Galeazzo Ciano was tried here during the later days of the Italian Republic during World War II. The Castle was badly damaged during the war but it has been completely and exactly restored.
There are 10 bridges that cross the Adige River. It’s a shame the Germans had to bomb out the really historic ones. Actually it wasn’t the first time the Germans destroyed Verona. In 1626 there over 53,000 people lived in the city, but in 1630 some German soldiers arrived carrying a plague that wiped out over 60% of the city. By the time the plague was exhausted, barely 20,000 people remained. | <urn:uuid:acd68308-5baf-46d8-8559-3be51e315f18> | CC-MAIN-2019-47 | https://romeonrome.com/2014/10/2014-verona-ancient-rome-medieval-warlords-and-romeo-and-juliet-tourism/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668699.77/warc/CC-MAIN-20191115171915-20191115195915-00060.warc.gz | en | 0.971987 | 6,645 | 2.71875 | 3 |
The Kunlun mountains are believed to be Taoist paradise. The first to visit this paradise was, according to the legends, King Mu (976-922 BCE) of the Zhou Dynasty. He supposedly discovered there the Jade Palace of Huang-Di, the mythical Yellow Emperor and originator of Chinese culture, and met Hsi Wang Mu (Xi Wang Mu) , the ‘Spirit Mother of the West’ usually called the ‘Queen Mother of the West’, who was the object of an ancient religious cult which reached its peak in the Han Dynasty, also had her mythical abode in these mountains. Jesuit missionaries, the noted American Sinologist Charles Hucker, and London University’s Dr Bernard Leeman (2005) have suggested that Xiwangmu and the Queen of Sheba were one and the same person. The Transcendency of Sheba, a religious group, believes that the Queen of Sheba’s pre-Deuteronomic Torah recorded in the Kebra Nagast was influential in the development of Daoism. They insist that after vacating the throne for her son Solomon the queen journeyed to the Kunlun Mountains where, known as the Queen from the West, she attained spiritual enlightenment — Source: Cultural China
Mt. Kunlun has been known as the Forefather of Mountains in China. The name of the mountain can be found in many Chinese classics, such as The Classics of Mountains and Rivers, Commentary on the Waterways Classics, and Canonization of the Gods (or Gods and Heroes). As legend has it, the goddess of Kunlun is Queen Mother of the West. The adobe of immortals in many ancient books is said to be the Heihai, or the Black Sea – the source of the Kunlun River, 4,300 meters above sea level, with an area of 60 square kilometers. The river region is an ideal home to birds and wild animals, such as wild donkeys, sheep, and brown bears. There are precious murals in Yeniugou*, or Wild Bull Ditch. Textual research shows that this is where Taoist rites were performed during the late Yuan Dynasty (1271-1368).[*Yeniugou is traditionally an area a transitional grazing grounds for both the pastoral tribes of the Tibetans and Mongols, with Kazakh arrivals from Xinjiang superimposed in later times in 1950s. Source: Wildlife Status and Conservation in Yeniugou, Qinghai, China]
Goldin, Paul R “On the Meaning of the Name Xiwangmu, Spirit Mother of the West“. The Journal of the American Oriental Society Vol. 122 Nbr. 1, January 2002
Xi wangmu, the famous Chinese divinity, is generally rendered in English as “Queen Mother of the West.” This is misleading for two reasons. First, “Queen Mother” in normal English refers to the mother of a king, and Xi wangmu’s name is usually not understood in that manner. More importantly, the term wang in this context probably does not carry its basic meaning of “king, ruler. “Wangmu is a cultic term referring specifically to the powerful spirit of a deceased paternal grandmother. So Xi wangmu probably means “Spirit-Mother of the West.” This paper discusses occurrences of wang as “spirit” in ancient texts, and concludes with a consideration of some etymological reasons as to why wang is sometimes used in this less common sense.
See also Xi Wangmu, the shamanic great goddess of China by Max Dashu
“The name of the goddess is usually translated as Queen Mother of the West. Mu means “mother,” and Wang,“sovereign.” But Wangmu was not a title for royal women. It means “grandmother,” as in the Book of Changes, Hexagram 35: “One receives these boon blessings from one’s wangmu.” The classical glossary Erya says that wangmu was used as an honorific for female ancestors. [Goldin, 83] The ancient commentator Guo Pu explained that “one adds wang in order to honor them.” Another gloss says it was used to mean “great.” Paul Goldin points out that the Chinese commonly used wang “to denote spirits of any kind,” and numinous power. He makes a convincing case for translating the name of the goddess as “Spirit-Mother of the West.” [Goldin, 83-85]
The oldest reference to Xi Wangmu is an oracle bone inscription from the Shang dynasty, thirty-three centuries ago: “If we make offering to the Eastern Mother and Western Mother there will be approval.” The inscription pairs her with another female, not the male partner invented for her by medieval writers—and this pairing with a goddess of the East persisted in folk religion. Suzanne Cahill, an authority on Xi Wangmu, places her as one of several ancient “mu divinities” of the directions, “mothers” who are connected to the sun and moon, or to their paths through the heavens. She notes that the widespread tiger images on Shang bronze offerings vessels may have been associated with the western mu deity, an association of tiger and west that goes back to the neolithic. [Cahill, 12-13]
After the oracle bones, no written records of the goddess appear for a thousand years, until the “Inner Chapters” of the Zhuang Zi, circa 300 BCE. This early Taoist text casts her as a woman who attained the Tao [Feng, 125]” http://www.suppressedhistories.net/goddess/xiwangmu.html
Wangmu a specific cultic term meaning deceased paternal grandmother
The rabbit on the Moon
Tomb relief from Suide, Shaanxi
In an important find near Tengzhou, Shandong, an incised stone depicts Xi Wangmu with a leopard’s body, tail, claws, teeth, and whiskers—and a woman’s face, wearing the sheng headdress. Votaries make offerings to her on both sides. The inscription salutes Tian Wangmu: Queen Mother of the Fields. [Lullo, 271] This alternate title reflects her control of the harvests, a tradition attested elsewhere. [Cahill, 13]
At Suide in Shaanxi, a sheng-crowned Xi Wangmu receives leafy fronds from human and owl-headed votaries, while hares joyously pound exilir in a mortar (below). The magical fox, hare, frog, crow, and humans attend her in a tomb tile at Xinfan, Sichuan. The tomb art of this province shows the goddess of transcendence seated in majesty on a dragon and tiger throne. [Liu, 40-3] This magical pair goes back to the Banpo neolithic, circa 5000 BCE, where they flank a burial at Xishuipo, Henan. [Rawson, 244] Tiger and dragon represented yin and yang before the familiar Tai Ji symbol came into use during the middle ages.
The Western Grandmother presides at the summit of the intricate bronze “divine trees” that are unique to Sichuan.
Their stylized tiers of branches represent the multiple shamanic planes of the world mountain. The ceramic bases for the trees also show people ascending Kunlun with its caverns. [Wu, 81-91] “Universal mountain” censers (boshanlu) also depict the sacred peak with swirling clouds, magical animals and immortals. [Little, 148]
Xi Wangmu often appears on circular bronze mirrors whose backs are filled with concentric panels swirling with cloud patterns and thunder signs. She is flanked by the tiger and dragon, or the elixir-preparing rabbit, or sits opposite the Eastern King Sire, amidst mountains, meanders, “magic squares and compass rings inscribed with the signs of time.” [Schipper 1993: 172] Some mirrors are divided into three planes, with a looped motif at the base symbolizing the world tree. At the top a pillar rests on a tortoise—a motif recalling the mythical Tortoise Mountain of Xi Wangmu. [Wu, 87]
The marvellous Kunlun mountain lies somewhere far in the west, beyond the desert of Flowing Sands. It was often said to be in the Tian Shan (“heaven mountain”) range of central Asia
In the Zhuang Zi, Xi Wangmu sits atop Shao Guang, which represents the western skies. Elsewhere she sits on Tortoise Mountain, the support of the world pillar, or on Dragon Mountain. In the Tang period, people said that the goddess lived on Hua, the western marchmount of the west in Shaanxi, where an ancient shrine of hers stood. [Cahill, 76, 14-20, 60]
The sacred mountain is inhabited by fantastic beings and shamanistic emissaries. Among them are the three-footed crow, the nine-tailed fox, a dancing frog, and the moon-hare who pounds magical elixirs in a mortar. There are phoenixes and chimeric chi-lin, jade maidens and azure lads, and spirits riding on white stags. A third century scroll describes Xi Wangmu herself as kin to magical animals in her western wilderness: “With tigers and leopards I form a pride; Together with crows and magpies I share the same dwelling place.” [Cahill, 51-3]
Medieval poets and artists show the goddess riding on a phoenix or crane, or on a five colored dragon. Many sources mention three azure birds who bring berries and other foods to Xi Wangmu in her mountain pavilion, or fly before her as she descends to give audience to mortals. The poet Li Bo referred to the three wild blue birds who circle around Jade Mountain as “the essence-guarding birds.” They fulfil the will of the goddess. Several poets described these birds as “wheeling and soaring.” [Cahill, 99; 92; 51-3; 159]
The Jade Maidens (Yü Nü) are companions of the goddess on Kunlun. They are dancers and musicians who play
chimes, flutes, mouth organ, and jade sounding stones. In medieval murals at Yongle temple, they bear magical ling zhifungi on platters. In the “Jade Girls’ Song,” poet Wei Ying-wu describes their flight: “Flocks of transcendents wing up to the divine Mother.” [Cahill, 99-100]
Jade Maidens appear as long-sleeved dancers in the shamanic Songs of Chu and some Han poems. The Shuo wen jie zi defines them as “invocators [zhu] …women who can perform services to the shapeless and make the spirits come down by dancing.” [Rawson, 427] Centuries later, a Qing dynasty painting shows a woman dancing before Xi Wang Mu and her court, moving vigorously and whirling her long sleeves. [Schipper, 2000: 36] Chinese art is full of these ecstatic dancing women.
Han dynasty people placed bronze mirrors in burials as blessings for the dead and the living, inscribed with requests for longevity, prosperity, progeny, protection, and immortality. Taoists also used these mystic mirrors in ritual and meditation and transmissions of potency. One mirror depicting Xi Wangmu bears a poem on the transcendents:
The common people marched westward through various provinces, toward the Han capital. Many were barefoot and wild-haired (like their untamed goddess). People shouted and drummed and carried torches to the rooftops. Some crossed barrier gates and climbed over city walls by night, others rode swift carriages in relays “to pass on the message.” They gathered in village lanes and fields to make offerings. “They sang and danced in worship of the Queen Mother of the West.” [Lullo, 278-9]
People passed around written talismans believed to protect from disease and death. Some played games of chance associated with the immortals. [Cahill, 21-3] There were torches, drums, shouting. Farming and normal routines broke down. This goddess movement alarmed the gentry, and the Confucian historian presented it in a negative light. He warned the danger of rising yin: females and the peasantry stepping outside their place. The people were moving west—opposite the direction of the great rivers—“which is like revolting against the court.” The writer tried to stir alarm with a story about a girl carrying a bow who entered the capital and walked through the inner palaces. Then he drew a connection between white-haired Xi Wangmu and the dowager queen Fu who controlled the court, accusing these old females of “weak reason.” His entire account aimed to overthrow the faction in power at court. [Lullo, 279-80]
Change was in the air. Around the same time, the Taiping Jing(Scripture of Great Peace) described “a world where all would be equal.” As Kristofer Schipper observes, “a similar hope drove the masses in search of the great mother goddess.” [Schipper: 2000, 40] Their movement was put down within the year, but the dynasty fell soon afterward.
From the Han dynasty forward, the image of Xi Wangmu underwent marked changes. [Lullo, 259] Courtly writers tried to tame and civilize the shamanic goddess. Her wild hair and tiger features receded, and were replaced by a lady in aristocratic robes, jeweled headdresses, and courtly ways. Her mythology also shifted as new Taoist schools arose. She remains the main goddess in the oldest Taoist encyclopedia (Wu Shang Bi Yao). But some authors begin to subordinate her to great men: the goddess offers “tribute” to emperor Yu, or attends the court of Lao Zi. [Cahill, 34, 45, 121-2] They displace her with new Celestial Kings, Imperial Lords, and heavenly bureaucracies—but never entirely.
In the later Han period, the spirit-trees of Sichuan show Xi Wangmu at the crest, with Buddha meditating under her, in a still-Taoist context. [Little, 154-5; Wu, 89] By the Six Dynasties, several paintings in the Dun Huang caves show the goddess flying through the heavens to worship the Buddha. [Cahill, 42]
(In time, Taoism and Buddhism found an equilibrium in China, and mixed so that borders between the two eroded.) But cultural shifts never succeeded in subjugating the goddess.
She held her ground in the Tang dynasty, when Shang Qing Taoism became the official religion. She was considered its highest deity, and royals built private shrines to her. Her sheng headdress disappears, and is replaced by a nine-star crown. Poets named her the “Divine Mother,” others affectionately called her Amah, “Nanny.” But some literati demote the goddess to human status, making her fall in love with mortals, mooning over them and despairing at their absence. In a late 8th century poem she becomes “uncertain and hesitant” as she visits the emperor Han Wudi. [Cahill, 82-3; 58-69; 159]
Others portrayed her as young and seductive. [Lullo, 276] Worse, a few misogynists disparaged the goddess. The fourth century Yü Fang Bi Jue complained about her husbandless state and invented sexual slurs. It claimed that she achieved longevity by sexually vampirizing innumerable men and even preying upon boys to build up her yin essence. But the vigor of folk tradition overcame such revisionist slurs—with an important exception.
The ancient, shamanic shapeshifter side of Xi Wangmu, and her crone aspect, were pushed aside. Chinese folklore is full of tiger-women: Old Granny Autumn Tiger, Old Tiger Auntie (or Mother), Autumn Barbarian Auntie. They retain shamanic attributes, but in modern accounts they are demonized (and slain) as devouring witches. Two vulnerable groups, old women and indigenous people, become targets. [ter Harrm, 55-76] Yet the association of Tiger and Autumn and Granny goes back to ancient attributes of Xi Wangmu that are originally divine.
A search and survey for possible prototypes of the Queen Mother of the West:
Egypt — one possible origin of Mother of the Gods, Queen of the Goddesses, Lady of Heaven(?):
Schist statuette of Mut, mother, Late Period,Dynasty XXVI, c. 664-525 BC often interpreted as representing one of the earliest mother goddesses of Egypt. Photo: Wikimedia Commons.
Mut, which meant mother in the ancient Egyptian language, was an ancient Egyptian Some of Mut’s many titles included World-Mother, Eye of Ra, Queen of the Goddesses, Lady of Heaven, Mother of the Gods, and She Who Gives Birth, But Was Herself Not Born of Any.
Later in ancient Egyptian mythology deities of the pantheon were identified as equal pairs, female and male counterparts, having the same functions. In the later Middle Kingdom, when Thebes grew in importance, its patron, Amun also became more significant, and so Amaunet, who had been his female counterpart, was replaced with a more substantial mother-goddess, namely Mut, who became his wife. In that phase, Mut and Amun had a son, Khonsu, another moon deity.When Thebes rose to greater prominence, Mut absorbed these warrior goddesses as some of her aspects. First, Mut became Mut-Wadjet-Bast, then Mut-Sekhmet-Bast (Wadjet having merged into Bast), then Mut also assimilated Menhit, who was also a lioness goddess
The authority of Thebes waned later and Amun was assimilated into Ra. Mut, the doting mother, was assimilated into Hathor, the cow-goddess and mother of Horus who had become identified as Ra’s wife. Subsequently, when Ra assimilated Atum, the Ennead was absorbed as well, and so Mut-Hathor became identified as Isis (either as Isis-Hathor or Mut-Isis-Nekhbet), the most important of the females in the Ennead (the nine), and the patron of the queen. The Ennead proved to be a much more successful identity and the compound triad of Mut, Hathor, and Isis, became known as Isis alone—a cult that endured into the 7th century A.D. and spread to Greece, Rome, and Britain.
mother goddess with multiple aspects that changed over the thousands of years of the culture. Alternative spellings are Maut and Mout. She was considered a primal deity, associated with the waters from which everything was born through parthenogenesis. She also was depicted as a woman with the crowns of Egypt upon her head. The rulers of Egypt each supported her worship in their own way to emphasize their own authority and right to rule through an association with Mut. Mut was a title of the primordial waters of the cosmos, Naunet, in the Ogdoad cosmogony during what is called the Old Kingdom, the third through sixth dynasties, dated between 2,686 to 2,134 B.C. However, the distinction between motherhood and cosmic water later diversified and lead to the separation of these identities, and Mut gained aspects of a creator goddess, since she was the mother from which the cosmos emerged.The hieroglyph for Mut’s name, and for mother itself, was that of a white vulture, which the Egyptians believed were very maternal creatures. Indeed, since Egyptian white vultures have no significant differing markings between female and male of the species, being without sexual dimorphism, the Egyptians believed they were all females, who conceived their offspring by the wind herself, another parthenogenic concept.
Much later new myths held that since Mut had no parents, but was created from nothing; consequently, she could not have children and so adopted one instead.
Making up a complete triad of deities for the later pantheon of Thebes, it was said that Mut had adopted Menthu, god of war. This choice of completion for the triad should have proved popular, but because the isheru, the sacred lake outside Mut’s ancient temple in Karnak at Thebes, was the shape of a crescent moon, Khonsu, the moon god eventually replaced Menthu as Mut’s adopted son. — Source: Wikipedia
[Note: The Eye of Ra, crescent moon symbolism and birth or adoption of moon deity as related motifs to the Queen Mother of the West, may be significant for provenance of related creation myths of the birth of Amaterasu.
In Anatolia (Turkey):
Çatalhöyük is perhaps best known for the idea of the mother goddess. But our work more recently has tended to show that in fact there is very little evidence of a mother goddess and very little evidence of some sort of female-based matriarchy. That’s just one of the many myths that the modern scientific work is undermining.”
Old European (especially Cucuteni-Trypillian culture):
James Frazer (The Golden Bough) and Marija Gimbutas advance the idea that goddess worship in ancient Europe and the Aegean was descended from Pre-Indo-European neolithic matriarchies… Gimbutas maintained that the “earth mother” group continues the paleolithic figural tradition discussed above, and that traces of these figural traditions may be found in goddesses of the historical period. According to Gimbutas’ Kurgan Hypothesis, Old European cultures were disrupted by expansion of Indo-European speakers from southern Siberia. — Mother Goddess (Wikipedia)
Ceramic Neolithic female figurine Cucuteni-Trypillian culture, ca. 5500-2750 BCE, Piatra Neamt Museum Photo: Wikimedia Commons
From 5500 to 2750 BC the Cucuteni-Trypillian culture flourished in the region of modern-day Romania, Moldova, and southwestern Ukraine, leaving behind ruins of settlements of up to 15,000 residents who practiced agriculture, domesticated livestock, and many ceramic remains of pottery and clay figurines. Some of these figurines appear to represent the mother goddess.
In Anatolia (Turkey):
Çatalhöyük is perhaps best known for the idea of the mother goddess. But our work more recently has tended to show that in fact there is very little evidence of a mother goddess and very little evidence of some sort of female-based matriarchy.
[New archaeological discoveries find the clay figurines associated with various ovens, figurines with animal heads leopards and bears, with missing heads and the ovens, and with burials. Thus, the
“figurine can be interpreted in a number of ways – as a woman turning into an ancestor, as a woman associated with death, or as death and life conjoined. It is possible that the lines around the body represent wrapping rather than ribs. Whatever the specific interpretation, this is a unique piece that may force us to change our views of the nature of Çatalhöyük society and imagery. Perhaps the importance of female imagery was related to some special role of the female in relation to death as much as to the roles of mother and nurturer.” — Catalhoyuk 2005 archive report
Another article, “A Journey to 9000 years ago” a report of the Turkish Daily News, Jan 17, 2008 edition’s …
findings point to ties between Çatalhöyük, Hittites and other ancient civilizations of Anatolia, since bulls and strong women icons in Çatalhöyük also carry great symbolic importance in Hittite culture.
Hodder said Çatalhöyük has come to be identified with the icon of a goddess, adding, “Mellart drew public attention to the female icon he found during excavation. Therefore, Çatalhöyük came to be identified with the goddess. Female icons, male icons and phallus symbols were found during excavation. When we look at what they eat and drink and at their social statues, we see that men and women had the same social status. There was a balance of power. Another example is the skulls found. If one’s social status was of high importance in Çatalhöyük, the body and head were separated after death. The number of female and male skulls found during the excavations is almost equal.”…
Hodder said this year excavations in Çatalhöyük yielded bear patterned friezes and Anatolia is one of the world’s richest archaeological sites, adding, “Anatolia has great importance when it comes to the spread of culture throughout the world. Findings show that agriculture, settlements, crockery production and various figures spread through Europe from Anatolia.”
The secret of the world lies in southeastern Anatolia
“Southeastern Turkey has great archaeological importance. If comprehensive excavations are conducted, we may come across findings that will shock the scientific world. We can even obtain data that would rewrite the science of archaeology. As a matter of fact, excavations in the 11,500 year-old Neolithic residential areas of Göbeklitepe, which lies 15 kilometers northeast of Şanlıurfa, radically changed our knowledge.”
Before the Göbeklitepe excavations it was widely believed that the area stretching from east Mediterranean Lebanon to Jordan experienced an agricultural revolution, said Hodder. Yet, the Göbeklitepe excavations tore this argument to shreds. Hodder said the agricultural revolution began much earlier in southeastern Anatolia, and recent findings show that the transition to an agricultural society began in more than just one place.
Hodder said the male icon and headless bird icon found in Göbeklitepe share similarities with those found in Çatalhöyük. Unlike Çatalhöyük, male symbolism is more prominent in Göbeklitepe. Male sexual organs were drawn on animal icons found in Göbeklitepe, which leads to the complete disposal of the idea that agriculture is related to female and goddess images, said Hodder
So what is the origin of all the Mother Earth goddess figures then?
Does it originate out of the Egyptian deity Isis, later spread by the mystery religion and the cult of Isis? Or from the Thracian-Greek grain goddess Demeter/Artemis (of Tauropoulos) ?
Thrace, Perinthos AE18. Veiled bust of Demeter right, in left hand holding poppy & ear / PERINQIWN D(IS NEWKORON), Demeter (or Artemis Tauropoulos) advancing right, holding torches in both hands
Sumerian and Mesopotamian
Ninsun is the Mother Goddess in general Mesopotamian mythology. She is Asherah in Canaan and `Ashtart in Syria. The Sumerians wrote erotic poetry about their mother goddess Ninhursag. source
Thrace, Perinthos. c350 BC. AE 23mm. Jugate heads of Osiris and Isis right / PERIN-QIWN, bull standing left, two-headed horse below source
The Minoan goddess represented in seals and other remains many of whose attributes were later also absorbed by Artemis, seems to have been a mother goddess type, for in some representations she suckles the animals that she holds.
The archaic local goddess worshiped at Ephesus, whose cult statue was adorned with necklaces and stomachers hung with rounded protuberances who was later also identified by Hellenes with Artemis, was probably also a mother goddess.
The Anna Perenna Festival of the Greeks and Romans for the New Year, around March 15, near the Vernal Equinox, may have been a mother goddess festival. Since the Sun is considered the source of life and food, this festival was also equated with the Mother Goddess.
In the 1st century BC, Tacitus recorded rites amongst the Germanic tribes focused on the goddess Nerthus, whom he calls Terra Mater, ‘Mother Earth’. Prominent in these rites was the procession of the goddess in a wheeled vehicle through the countryside. Among the seven or eight tribes said to worship her, Tacitus lists theAnglii and the Longobardi.
Vedic Mittani, India
In Hinduism, Durga represents the empowering and protective nature of motherhood. From her forehead sprang Kali, who defeated Durga’s enemy, Mahishasura. The divine Mother, Devi Adi parashakti, manifests herself in various forms, representing the universal creative force. She also gives rise to Maya (the illusory world) and to prakriti, the force that galvanizes the divine ground of existence into self-projection as the cosmos. The Earth itself is manifested by Parvati, Durga’s previous incarnation. Hindu worship of the divine Mother can be traced back to early Vedic culture and Mittanian empire.
The form of Hinduism known as Shaktism is strongly associated with Vedanta, Samkhya, and Tantra Hindu philosophies and is ultimately monist. The feminine energy, Shakti, is considered to be the motive force behind all action and existence in the phenomenal cosmos. The cosmos itself is Shiva, the unchanging, infinite, immanent, and transcendent reality that is the Divine Ground of all being, the “world soul”. Masculine potential is actualized by feminine dynamism, embodied in multitudinous goddesses who are ultimately reconciled in one. Mother Maya, Shakti, herself, can free the individual from demons of ego, ignorance, and desire that bind the soul in maya (illusion).
Gaia‘s equivalent in the Roman mythology was Terra Mater or Tellus Mater, sometimes worshiped in association with Demeter‘s Roman equivalent, Ceres, goddess of grain, agriculture and fertility, and mothering.
Venus (Greek Aphrodite‘s equivalent), was mother of the Trojan Aeneas and ancestor of Romulus, Rome’s mythical founder. In effect, she was the mother of Rome itself, and various Romans, including Julius Caesar, claimed her favour. In this capacity she was given cult as Venus Genetrix (Ancestor Venus). She was eventually included among the many manifestations of a syncretised Magna Dea (Great Goddess), who could be manifested as any goddess at the head of a pantheon, such asJuno or Minerva, or a goddess worshipped monotheistically.
Yer Tanrı is the mother of Umai, also known as Ymai or Mai, the mother goddess of the Turkic Siberians. She is depicted as having sixty golden tresses, that resemble the rays of the sun. She is thought to have once been identical with Ot of the Mongols.
The Irish goddess Anu, sometimes known as Danu, has an impact as a mother goddess, judging from the Dá Chích Anann near Killarney, County Kerry. Irish literature names the last and most favored generation of deities as “the people of Danu” (Tuatha De Danann). The Welsh have a similar figure called Dôn who is often equated with Danu and identified as a mother goddess. Sources for this character date from the Christian period however so she is referred to simply as a mother of heroes in the Mabinogion. The character’s (assumed) origins as a goddess are obscured.
The Celts of Gaul worshipped a goddess known as Dea Matrona (“divine mother goddess”) who was associated with the Marne River. Similar figures known as the Matres (Latin for “mothers”) are found on altars in Celtic as well as Germanic areas of Europe.
Exploring the connection between stone symbolism and Queen Mother of the West
Reputed as the Ridge of Asia, the Kunlun Mountains originate at the Pamir Plateau and snake eastwards across the heart of the Asian continent. Ranging in altitude an average of 5,500 meters, the mountain range runs 1,800 kilometers through the Xinjiang Ugyur Autonomous Region and extends more than 1,200 kilometers into Qinghai Province.
For thousands of years, the Kunlun Mountains have remained well known in Chinese folklore and mythology. It is said the Royal Mother of the West is the immortal owner of the mountains, and the Black Sea, the headstream of the Kunlun River, is her abode named Jade Pond. The bamboo slips unearthed from an ancient tomb in Henan Province record such a legend: King Mu of the Zhou Dynasty (C. 11th-256B.C.) rides eight steeds#, which can run 10,000 miles a day, to meet the Royal Mother of the West in the Jade Pond. Before his departure for the return trip, the goddess presents him with eight carriages loaded with jade, and makes an appointment with the king to again meet three years later. [# SKYLLA One of the eight immortal horses which drew the chariot of Poseidon]
The Kunlun Mountains and the Himalayas are located at the junction of two continental plates. Due to ongoing collision between the two plates over billions of years, as well as the crushing force from magmata under the crust, a special sort of stone was formed – Kunlun jadeites.The Kunlun Mountains and the Himalayas are located at the junction of two continental plates. Due to ongoing collision between the two plates over billions of years, as well as the crushing force from magmata under the crust, a special sort of stone was formed – Kunlun jadeites. It is these hardworking miners who create the legend behind the renowned Kunlun jade. (web source): Jade Dreams Text by Wang Shengzhi Photographs by Yang Jiankun China Pictorial
See also Jade of the Kunlun Mtns
Kunlun jade is found in the Kunlun Mountains in Qinghai Province. The type of jade is now recognized as a world-class jewel. Kunlun Mountain is the birthplace of many legendary
The Kunlun Mountains (simplified Chinese: 昆仑山; traditional Chinese: 崑崙山; pinyin: Kūnlún Shān; Mongolian: Хөндлөн Уулс) is one of the longest mountain chains in Asia, extending more than 3,000 km.
From the Pamirs of Tajikistan, it runs east along the border between Xinjiang and Tibet autonomous regions to the Sino-Tibetan ranges in Qinghai province. It stretches along the southern edge of what is now called the Tarim Basin, the infamous Takla Makan or “sand-buried houses” desert, and the Gobi Desert. A number of important rivers flow from it including the Karakash River (‘Black Jade River’) and the Yurungkash River (‘White Jade River’), which flow through the Khotan Oasis into the Taklamakan Desert.
Altyn-Tagh or Altun Range is one of the chief northern ranges of the Kunlun. Nan Shan or its eastern extension Qilian is another main northern range of the Kunlun. In the south main extension is the Min Shan. Bayan Har Mountains, a southern branch of the Kunlun Mountains, forms the watershed between the catchment basins of China’s two longest rivers, the Yangtze River and the Huang He.
The highest mountain of the Kunlun Shan is the Kunlun Goddess (7,167 m) in the Keriya area. The Arka Tagh (Arch Mountain) is in the center of the Kunlun Shan; its highest point is Ulugh Muztagh. Some authorities claim that the Kunlun extends north westwards as far as Kongur Tagh (7,649 m) and the famous Muztagh Ata (7,546 m). But these mountains are physically much more closely linked to the Pamir group (ancient Mount Imeon).
Since ancient times, the magnificent Kunlun Mountains in northwestern China has remained a famous cradle of jade. At the Beijing 2008 Olympics, Kunlun jade added Chinese elegance to the Olympic medals.
Early in 1992, Darcy, a farmer from Golmud, Qinghai Province, came across several pieces of green stones near the Fairy Maiden Peak in the Kunlun Mountains. Never before seeing such a stone, he brought one home. It weighed dozens of kilograms. His interest growing, in hopes of finding more stones, he led his son and two of his friends to revisit the peak a couple of days later. When the group reached the top of the peak after three hours of tough trekking, they were shocked by an incredible scene: Countless rough jadeites in light green and white were growing on the ground. A jadeite mine formed billions of years ago was thus revealed. Source: Kunlun mountains China | <urn:uuid:ee886574-2861-47d5-abfc-d148c26633d5> | CC-MAIN-2019-47 | https://japanesemythology.wordpress.com/moon-viewing-tradition/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665767.51/warc/CC-MAIN-20191112202920-20191112230920-00177.warc.gz | en | 0.954627 | 7,976 | 3.328125 | 3 |
Researchers from the Centre for Genomic Regulation in Barcelona, Spain, have discovered an even faster and more efficient way to reprogram adult cells to make induced pluripotent stem cells (iPSCs).
This new discovery decreases the time it takes to derive iPSCs from adult cells from a few weeks to a few days. It also sheds new light on the reprogramming process and on the potential of iPSCs for regenerative medicine.
iPSCs behave similarly to embryonic stem cells, but they can be created from terminally differentiated adult cells. The problem with earlier protocols for the derivation of iPSCs is that only a very small percentage of cells (0.1%-2%) were successfully reprogrammed. The process also takes weeks and is rather hit-and-miss.
The Centre for Genomic Regulation (CRG) research team has been able to reprogram adult cells very efficiently and in a very short period of time.
“Our group was using a particular transcription factor (C/EBPalpha) to reprogram one type of blood cells into another (transdifferentiation). We have now discovered that this factor also acts as a catalyst when reprogramming adult cells into iPS,” said Thomas Graf, senior group leader at the CRG and ICREA research professor.
“The work that we’ve just published presents a detailed description of the mechanism for transforming a blood cell into an iPS. We now understand the mechanics used by the cell so we can reprogram it and make it become pluripotent again in a controlled way, successfully and in a short period of time,” said Graf.
Genetic information is compacted into the nucleus like a wadded up ball of yarn. In order to access genes for gene expression, that ball of yarn has to be unwound so that the cell can find the information it needs.
The C/EBPalpha (CCAAT/Enhancer Binding Protein alpha) protein temporarily unwinds the region of DNA that contains the genes necessary for the induction of pluripotency. Thus, when the reprogramming process begins, the right genes are activated and enable the successful reprogramming of the cells.
“We already knew that C/EBPalpha was related to cell transdifferentiation processes. We now know its role and why it serves as a catalyst in the reprogramming,” said Bruno Di Stefano, a PhD student. “Following the process described by Yamanaka, the reprogramming took weeks, had a very small success rate and, in addition, accumulated mutations and errors. If we incorporate C/EBPalpha, the same process takes only a few days, has a much higher success rate and less possibility of errors,” said Di Stefano.
This discovery provides remarkable insight into the molecular mechanisms of stem cell formation and is of great interest for studies of the earliest stages of life during embryonic development. At the same time, the work provides new clues for successfully reprogramming human cells and for advancing regenerative medicine and its clinical applications.
Research groups at the University of Manchester and University College London, UK, have developed a new technique for reprogramming adult cells into induced pluripotent stem cells that greatly reduces the risk of tumor formation.
Kostas Kostarelos, who is the principal investigator of the Nanomedicine Lab at the University of Manchester, said that he and his colleagues have discovered a safe protocol for reprogramming adult cells into induced pluripotent stem cells (iPSCs). Because of their similarities to embryonic stem cells, many scientists hope that iPSCs are a viable alternative to embryonic stem cells.
How did they do it? “We have induced somatic cells within the liver of adult mice to transiently behave as pluripotent stem cells,” said Kostarelos. “This was done by transfer of four specific genes, previously described by the Nobel Prize-winning Shinya Yamanaka, without the use of viruses but simply with plasmid DNA, a small circular, double-stranded piece of DNA used for manipulating gene expression in a cell.”
This technique does not use viruses, which were the vehicle of choice in Yamanaka’s research for getting genes into cells. Viruses of the kind used by Yamanaka can cause mutations in the cells; because Kostarelos’ technique uses no viruses, that mutagenic risk is not an issue.
Kostarelos continued, “One of the central dogmas of this emerging field is that in vivo implantation of (these stem) cells will lead to their uncontrolled differentiation and the formation of a tumor-like mass.”
However, Kostarelos and his team have determined that the technique they designed does not show this risk, unlike the virus-based methods.
“[This is the] only experimental technique to report the in vivo reprogramming of adult somatic cells to pluripotency using nonviral, transient, rapid and safe methods,” said Kostarelos.
Because this approach uses circular plasmid DNA, which is rather short-lived under these conditions, the risk of uncontrolled growth is low. Although large volumes of plasmid DNA are required to reprogram these cells, the technique appears to be safe in laboratory animals.
Also, after a burst of expression of the reprogramming factors, the expression of these genes decreased after several days. Furthermore, the cells that were reprogrammed differentiated into the surrounding tissues (in this case, liver cells). There were no signs in any of the laboratory animals of tumors or liver dysfunction.
This is a remarkable proof-of-principle experiment showing that reprogramming cells in a living body can be fast, efficient, and safe.
A great deal more work is necessary to show that such a technique can be useful for regenerative medicine, but it is certainly a glorious start.
The journal Stem Cells Translational Medicine has published a new protocol for differentiating induced pluripotent stem cells (iPSCs) into mature blood cells. This protocol uses only a small amount of the patient’s own blood and a readily available cell type. This novel method skips the generally accepted process of mixing iPSCs with either mouse or human stromal cells, thereby ensuring that no outside viruses or exogenous DNA contaminate the reprogrammed cells. Such a protocol could lead to a purer, safer therapeutic grade of stem cells for use in regenerative medicine.
The field of regenerative medicine has been greatly advanced by the discovery of iPSCs. These cells allow the production of patient-specific stem cell lines for potential autologous treatment, that is, treatment that uses the patient’s own cells. Such a strategy avoids the possibility of rejection and numerous other harmful side effects.
CD34+ cells are found in bone marrow and are involved with the production of new red and white blood cells. However, collecting enough CD34+ cells from a patient to produce enough blood for therapeutic purposes usually requires a large volume of blood. Yuet Wai Kan, M.D., FRS, and Lin Ye, Ph.D., from the Department of Medicine and Institute for Human Genetics, University of California-San Francisco, have devised a way around this impasse.
“We used Sendai viral vectors to generate iPSCs efficiently from adult mobilized CD34+ and peripheral blood mononuclear cells (MNCs),” Dr. Kan explained. “Sendai virus is an RNA virus that carries no risk of altering the host genome, so is considered an efficient solution for generating safe iPSC.”
“Just 2 milliliters of blood yielded iPS cells from which hematopoietic stem and progenitor cells could be generated. These cells could contain up to 40 percent CD34+ cells, of which approximately 25 percent were the type of precursors that could be differentiated into mature blood cells. These interesting findings reveal a protocol for the generation iPSCs using a readily available cell type,” Dr. Ye added. “We also found that MNCs can be efficiently reprogrammed into iPSCs as readily as CD34+ cells. Furthermore, these MNCs derived iPSCs can be terminally differentiated into mature blood cells.”
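As a rough, illustrative calculation (not taken from the paper itself), the percentages quoted above imply that roughly one in ten of the hematopoietic cells derived this way would be precursors capable of maturing into blood cells. The starting cell count in the sketch below is purely hypothetical; only the two percentages come from the text.

```python
# Back-of-the-envelope yield estimate based on the figures quoted above.
# The starting cell count is a made-up illustration; only the percentages
# ("up to 40 percent CD34+", "approximately 25 percent ... precursors") come from the text.
hematopoietic_cells = 1_000_000           # hypothetical cells differentiated from iPSCs
cd34_fraction = 0.40                      # up to 40% CD34+ cells
precursor_fraction_of_cd34 = 0.25         # ~25% of those are blood-cell precursors

cd34_cells = hematopoietic_cells * cd34_fraction
precursors = cd34_cells * precursor_fraction_of_cd34

print(f"CD34+ cells: {cd34_cells:,.0f}")            # 400,000
print(f"Blood-cell precursors: {precursors:,.0f}")  # 100,000 (~10% of the total)
```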
“This method, which uses only a small blood sample, may represent an option for generating iPSCs that maintains their genomic integrity,” said Anthony Atala, MD, Editor of STEM CELLS Translational Medicine and director of the Wake Forest Institute for Regenerative Medicine. “The fact that these cells were differentiated into mature blood cells suggests their use in blood diseases.”
Stem cell researchers at the University of California, San Diego have designed a simple, reproducible, RNA-based method of generating human induced pluripotent stem cells (iPSCs). This new technique has broad applications for the successful production of iPSCs for use in therapies and human stem cell studies.
Human iPSCs are made by genetically engineering adult cells to overexpress four different genes (Oct4, Klf4, Sox2, and c-Myc). This overexpression drives the cells to de-differentiate into pluripotent stem cells that have many of the same characteristics as embryonic stem cells, which are made from embryos. However, because iPSCs are made from the patient’s own cells, the chances that the immune system of the patient will reject the implanted cells are low.
The problem comes with the overexpression of these four genes. Initially, retroviruses have been used to reprogram the adult cells. Unfortunately, retroviruses plop their DNA right into the genome of the host cell, and this change is permanent. If these genes get stuck in the middle of another gene, then that cell has suffered a mutation. Secondly, if these genes are stuck near another highly-expressed gene, then they too might be highly expressed, thus driving the cells to divide uncontrollably.
Several studies have shown that these four genes only need to be overexpressed transiently in order to reprogram the cells. Therefore, laboratories have developed reprogramming methods that do not use retroviruses: plasmid-based systems, adenovirus- and Sendai virus-based systems (which do not integrate into the host cell genome), and even RNA have all been used (see Federico González, Stéphanie Boué & Juan Carlos Izpisúa Belmonte, Nature Reviews Genetics 12, 231-242).
The UC San Diego team, led by Steven Dowdy, used a Venezuelan equine encephalitis (VEE) virus engineered to express the reprogramming genes required to make iPSCs from adult cells. Because this virus does not integrate into the host genome and expresses its RNA in the host cell only transiently, it seems to be a safe and effective way to make buckets of messenger RNA over a short period of time.
The results were impressive. The use of this souped-up VEE produced good-quality iPSCs very efficiently. Furthermore, it worked on both old and young human cells, which is important, since the patients who will need regenerative medicine are more likely to be older than younger. Changing the reprogramming factors is also rather easy to do.
Regenerative medicine depends on stem cells for the promises it can potentially deliver to ailing patients. Training stem cells to repair injured tissues with custom-grown tissue substitutes and to replace dead cells are some of the goals of regenerative medicine. A major player in regenerative medicine is induced pluripotent stem cells (iPSCs), which are made from a patient’s own tissues. Because iPSCs are derived from a patient’s own cells, their chance of being rejected by the patient’s immune system is rather low. Unfortunately, Shinya Yamanaka’s formula for making iPSCs, for which he was awarded last year’s Nobel Prize, utilizes a strict recipe with a precise combination of genes, some of which increase the risk of cancer, and this restricts the cells’ full potential for clinical application.
However, the laboratory of Juan Carlos Izpisua Belmonte and his colleagues at the Salk Institute have published a paper in the journal Cell Stem Cell showing that the recipe for iPSCs is much more versatile than originally thought. For the first time, Izpisua Belmonte and his colleagues have replaced a gene that was once thought impossible to substitute in the production of iPSCs. This creates the potential for more flexible recipes that should speed the adoption of iPSCs for stem cell-based therapies.
Pluripotent stem cells come from two main sources. Embryonic stem cells (ESCs) are derived from early human blastocyst-stage embryos: the cells of the inner cell mass, immature cells that have never differentiated into specific cell types, are extracted, cultured, grown, and propagated to form an embryonic stem cell line. Secondly, induced pluripotent stem cells, or iPSCs, are derived from mature cells that have been reprogrammed back into an undifferentiated state. In 2006, Yamanaka introduced four different genes into mature cells to reprogram them to pluripotency; such a pluripotent cell can be cultured and grown into an iPSC line. Because of Yamanaka’s initial success in iPSC production, most stem cell researchers adopted his recipe, even though variations on his protocol have been examined and used.
Izpisua Belmonte and his colleagues used a fresh approach to the derivation of iPSCs. They played around with the Yamanaka protocol and in doing so discovered that pluripotency (the stem cell’s ability to differentiate into nearly any kind of adult cell) can also be programmed into adult cells by “balancing” the genes required for differentiation. What genes? Those coding for “lineage transcription factors,” proteins that direct stem cells to differentiate first into a particular cell lineage, or type, such as a blood cell versus a skin cell, and then finally into a specific cell, such as a white blood cell.
“Prior to this series of experiments, most researchers in the field started from the premise that they were trying to impose an ’embryonic-like’ state on mature cells,” says Izpisua Belmonte, who holds the Institute’s Roger Guillemin Chair. “Accordingly, major efforts had focused on the identification of factors that are typical of naturally occurring embryonic stem cells, which would allow or further enhance reprogramming.”
Despite these efforts, there seemed to be no way to determine through genetic identity alone that cells were pluripotent. Instead, pluripotency was routinely evaluated by functional assays. In other words, if it acts like a stem cell, it must be a stem cell.
That condition led the team to their key insight. “Pluripotency does not seem to represent a discrete cellular entity but rather a functional state elicited by a balance between opposite differentiation forces,” says Izpisua Belmonte.
Once they understood this, they realized the four extra genes weren’t necessary for pluripotency. Instead, adult cells could be reprogrammed by altering the balance of “lineage specifiers,” genes that were already in the cell that specified what type of adult tissue a cell might become.
“One of the implications of our findings is that stem cell identity is actually not fixed but rather an equilibrium that can be achieved by multiple different combinations of factors that are not necessarily typical of ESCs,” says Ignacio Sancho-Martinez, one of the first authors of the paper and a postdoctoral researcher in Izpisua Belmonte’s laboratory.
Izpisua Belmonte’s laboratory showed that more than seven additional genes can facilitate reprogramming adult cells to iPSCs. Most importantly, for the first time in human cells, they were able to replace a gene from the original recipe called Oct4, which had been replaced in mouse cells but was still thought indispensable for the reprogramming of human cells. Their ability to replace it, as well as SOX2, another gene once thought essential that had never been replaced in combination with Oct4, demonstrated that stem cell development must be viewed in an entirely new way. In point of fact, Belmonte’s group showed that genes that specify mesendodermal lineage can replace OCT4 in human iPSC generation, that ectodermal lineage specifiers are able to replace SOX2, and that simultaneous replacement of OCT4 and SOX2 allows human cell reprogramming to iPSCs.
“It was generally assumed that development led to cell/tissue specification by ‘opening’ certain differentiation doors,” says Emmanuel Nivet, a post-doctoral researcher in Izpisua Belmonte’s laboratory and co-first author of the paper, along with Sancho-Martinez and Nuria Montserrat of the Center for Regenerative Medicine in Barcelona, Spain.
Instead, the successful substitution of both Oct4 and SOX2 shows the opposite. “Pluripotency is like a room with all doors open, in which differentiation is accomplished by ‘closing’ doors,” Nivet says. “Inversely, reprogramming to pluripotency is accomplished by opening doors.”
This work should help to overcome one of the major hurdles in the widespread adoption of iPSC-based therapies; namely, that the original four genes used to reprogram stem cells had been implicated in cancer. “Recent studies in cancer, many of them done by my Salk colleagues, have shown molecular similarities between the proliferation of stem cells and cancer cells, so it is not surprising that oncogenes [genes linked to cancer] would be part of the iPSC recipe,” says Izpisua Belmonte.
With this new method, which allows for a customized recipe, the team hopes to push therapeutic research forward. “Since we have shown that it is possible to replace genes thought essential for reprogramming with several different genes that have not been previously involved in tumorigenesis, it is our hope that this study will enable iPSC research to more quickly translate into the clinic,” says Izpisua Belmonte.
Other researchers on the study were Tomoaki Hishida, Sachin Kumar, Yuriko Hishida, Yun Xia and Concepcion Rodriguez Esteban of the Salk Institute; Laia Miquel and Carme Cortina of the Center of Regenerative Medicine in Barcelona, Spain.
Induced pluripotent stem cells (iPSCs) come from adult cells and not embryos. By genetically engineering adult cells to express a cadre of genes that are normally found in early embryonic cells, scientists can de-differentiate the adult cells into cells that resemble embryonic stem cells in many (although not all) ways.
Generating iPSCs from human adult cells is tedious and not terribly efficient, but there are ways to increase the efficiency of iPSC generation (see here). Additionally, iPSCs can show a substantial tendency to form tumors, but this tendency is cell line-specific (see here and here), and there are ways to screen iPSC lines for tumorigenicity.
Because iPSCs are derived directly from the patient’s cells, the chances of rejection by the immune system are low (see here). Therefore, many stem cell scientists believe that iPSCs may represent one of the best future possibilities for regenerative medicine. However, a hurdle in iPSC development is the ability to generate and evaluate iPSC lines in a rapid but reliable manner. Once adult cells are induced to become iPSCs, the cultures are a mixed bag of iPSCs, undifferentiated adult cells that failed to make the transition, and partially reprogrammed cells. Selecting the iPSCs by merely eyeballing the cells through the microscope is tricky and fraught with errors; picking partially reprogrammed cells instead of true iPSCs for a toxicity study, for instance, can be fatal to the experiment.
Scientists from the New York Stem Cell Foundation (NYSCF) Research Institute have developed a protocol for iPSC generation and evaluation that is automated and efficient, and it may bring us closer to the goal of using iPSCs in the clinic some day. This protocol, the culmination of three and a half years of work, uses a technology called fluorescence-activated cell sorting (FACS) to identify fully reprogrammed cells. FACS sorts the cells according to their expression of two specific cell surface molecules and the absence of another cell surface molecule. This negative selection against a cell surface molecule found on partially reprogrammed cells but not on iPSCs is a very powerful technique for purifying iPSCs.
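Conceptually, the selection step amounts to a boolean gate over per-cell marker intensities: keep cells above threshold for the two positive-selection markers and below threshold for the marker carried by partially reprogrammed cells. The sketch below illustrates that logic only; the marker names, intensity values, and thresholds are hypothetical placeholders, not the actual molecules or gates used in the NYSCF protocol.

```python
import pandas as pd

# Hypothetical per-cell fluorescence intensities from a sorting run.
# Marker names and thresholds are placeholders for illustration only.
cells = pd.DataFrame({
    "marker_A": [850, 120, 940, 300, 1100],   # first positive-selection marker
    "marker_B": [720, 980, 880, 150, 1020],   # second positive-selection marker
    "marker_C": [40, 35, 600, 20, 25],        # marker present on partially reprogrammed cells
})

POS_THRESHOLD = 500   # arbitrary gate for "expressed"
NEG_THRESHOLD = 100   # arbitrary gate for "absent"

# Fully reprogrammed candidates: positive for A and B, negative for C.
ipsc_gate = (
    (cells["marker_A"] > POS_THRESHOLD)
    & (cells["marker_B"] > POS_THRESHOLD)
    & (cells["marker_C"] < NEG_THRESHOLD)
)

print(cells[ipsc_gate])   # in this toy data, rows 0 and 4 pass the gate
```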
David Kahler, the NYSCF director of laboratory automation, said, “To date, this protocol has enabled our group to derive (and characterize over) 228 individual iPS cell lines, representing one of the largest collections derived in a single lab.” Kahler continued: “This standardized method means that these iPS cells can be compared to one another, an essential step for the use in drug screens and the development of cell therapies.”
This particular cell selection technique provides the basis for a new technology developed by NYSCF, the Global Stem Cell Array, which is a fully automated, robotic platform to generate cell lines in parallel.
Underway at the NYSCF Laboratory, the Array reprograms thousands of adult cells from kin and blood samples taken from healthy donors and diseased patients into iPSC lines. Sorting and characterizing cells at an early stage of reprogramming allows efficient development of iPSC clones and derivation of adult cell types.
“We are excited about the promise this protocol holds to the field. As stem cells move towards the clinic, Kahler’s work is a critical step to ensure safe, effective treatments for everyone.” said Susan L. Solomon, who is the Chief Executive Officer of NYSCF.
The removal of one genetic roadblock could improve the efficiency of adult cell reprogramming by some 10 to 30 fold, according to research by stem cell scientists at the Methodist Hospital Research Institute and two other institutions.
Rongfu Wang, the principal investigator and director of the Center for Inflammation and Epigenetics, said this about his group’s findings: “The discovery six years ago that scientist can convert adult cells into inducible pluripotent stem cells, or iPSCs, bolstered the dream that a patient’s own cells might be reprogrammed to make patient-specific iPSCs for regenerative medicine, modeling human diseases in Petri dishes, and drug screening. But reprogramming efficiency has remained very low, impeding its applications in the clinic.”
Wang and his group identified a protein encoded by a gene called Jmjd3, which is known as KDM6B, acts as an impediment to the reprogramming of adult cells into iPSCs. Jmjd3 is involved in several different biological processes, including the maturation of nerve cells and immune cell differentiation (Popov N, Gil J. Epigenetics. 2010 5(8):685-90).
These findings by Wang’s team are the first time anyone has identified a role for Jmjd3 in the reprogramming process. According to Wang, fibroblasts that lack functional Jmjd3 showed greatly enhanced reprogramming efficiency.
Helen Wang, one of the co-principal authors of this study, said, “Our findings demonstrate a previously unrecognized role of Jmjd3 in cellular reprogramming and provide molecular insight into the mechanisms by which the Jmjd3-PHF20 axis controls this process.’
While teasing apart the roles of Jmjd3 in reprogramming, Wang and his colleagues discovered that this protein regulates cell growth and cellular aging. These are two previously unidentified functions of Jmjd3, and Jmjd3 appears to work primarily by inactivating the protein PHF20. PHF20 is a protein that is required for adult cell reprogramming, and cells that lack PHF20 do not undergo reprogramming to iPSCs.
Rongfu Want explained it like this: “So when it comes to increasing iPSC yields, knocking down Jmjd3 is like hitting two birds with one stone.”
Jmjd3 is almost certainly not the only genetic roadblock to stem cell conversion. Wang noted, “Removal of multiple roadblacks could further enhance the reprogramming efficiency with which researchers can efficiently generate patient-specific iPSCs for clinical applications.”
While this is certainly an exciting finding, there is almost certainly a caveat that comes with it. increased reprogramming efficiency almost certainly brings the potential for increased numbers of mutations. Other studies have shown that iPSC generation is much more efficient if the protein P53 is inhibited, but P53 is the guardian of the genome. It prevents the cell from dividing if there is substantial amounts of DNA damage. Inhibiting P53 activity allows iPSC generation even if the cells have excessive amounts of DNA damage. Therefore, inhibiting those cellular processes that are meant to guard against excessive cell proliferation and growth can lead to greater numbers of mutations. Thus, before Jmjd3 inactivation is used to generate iPSCs for clinical uses, extensive animal testing must be required to ensure that this procedure does make iPSCs even less safe than they already are. | <urn:uuid:85134b43-96ed-4088-b37e-75ed6131a8f6> | CC-MAIN-2019-47 | https://beyondthedish.wordpress.com/tag/cellular-reprogramming/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668534.60/warc/CC-MAIN-20191114182304-20191114210304-00098.warc.gz | en | 0.943249 | 5,534 | 2.6875 | 3 |
CBSE Previous Year Query Papers Class 12 History 2018
Time Allowed: three Hours
Maximum Rating: 70
- The questionnaire incorporates 26 questions.
- All questions are required
- Question no. 1 to 11 carry 1 entry each. These questions must be answered in about 10 to 20 phrases.
- Query no. 12 to 19 carry 3 characters every. These questions must be answered in about 30-50 phrases.
- Question no. 20 to 26 carry 5 characters every. These questions ought to be answered in about 75 to 100 phrases.
** This answer won’t be answered as a result of a change in current curriculum.
Describe the idea of archaeologists recognized centers of handicraft within the Harappan tradition. Answer:
The idea by which archaeologists determine centers of handicraft manufacturing is:
- Raw supplies resembling stone nodules, entire shells, copper ore, and so forth.
- Rejects and waste: It’s the greatest indicator of craft. For example, if a bark or stone is reduce to make articles, items of these materials might be disposed of as waste at the place of manufacturing.
- Finished Products: Typically giant items of waste have been used to make smaller gadgets, suggesting that in addition to small specialised centers, handicraft production was also carried out in giant cities akin to Mohenjodaro and Harappa.
Explain the sources of revenue for village panchayats in the course of the Mughal Rule in India. Reply:
The sources of revenue for Village Panchayats through the Indian Mughal Rule have been:
- The contribution of individuals to the joint economic pool.
- Agricultural Taxes.
Take a look at the impact of the "restriction laws" adopted by the British in 1859. Reply:
In 1859, the British passed the statute of limitations, stating that a loan signed between lenders and ryots would only last three years.
The consequences of the regulation have been:
- Cash lenders manipulated and forced ryots to signal a brand new bond every three years.
- Cashholders refused to problem receipts when the loans have been repaid, fictitious. bonds, acquired the peasants 'harvest at low costs, the cash holders ultimately took the peasants' property.
"There are indications in Harappan society of making and implementing complex decisions." Within the mild of this statement, explain whether there might have been rulers who would rule the Harappan society. Answer:
There are indications in Harappan society about making and implementing complicated selections.
The proof is:
- A big building present in Mohenjodaro was designated a palace of archaeologists, but no spectacular findings have been made.
- Some archaeologists consider that Harappan society had no rulers and that everybody had an equal standing.
- Others feel that there was not one ruler however several, that Mohenjodaro had a separate ruler, Harappa another and
- In line with some students, the last concept appears probably because it is unlikely that whole communities might have carried out and enforced so difficult selections.
- Harappan objects have been exceptionally uniform.
- Although the bricks clearly didn’t produce any middle, their relationship was uniform throughout the world, from Jammu to Gujarat.
- Staff have been mobilized to make bricks and to construct large walls and pallets. Designed city middle with well-equipped drainage system.
Describe the financial and social circumstances of individuals dwelling in rural areas c. 600 BC – 600 CE. [2 + 2 = 4] Answer:
The agricultural population differed from:
- In accordance with Jataka and Panchatantra, the relationship between the king and his subordinates can typically be strained – kings typically tried to fill the coffers by demanding high taxes. Peasants, particularly, thought-about the calls for as oppressive.
- Numerous methods similar to
- conversion to plow farming,
- contribution of iron plow to agricultural productiveness progress, use of irrigation by means of wells and tanks, and less usually channels have been adopted to extend production.
- Land grants provide some concept of the connection between farmers and the state.
Social State of affairs:
- The excellence between individuals working in progress grew. agriculture – landless agricultural staff, small farmers and enormous landowners.
- The good landowners and the village manager turned robust figures and sometimes dominated other farmers.
- The property was given gender-specific estimates.
- Occupations following individuals of different castes / ravens.
“Ibn Battuta found exciting opportunities in cities on the Indian mainland. "
Explain the statement by referring to the city of Delhi. Answer:
Ibn Battuta found cities in the Indian mainland full of exciting opportunities, especially the city of Delhi:
- Delhi covers a large area and has a dense population.
- a city that is not parallel. The wall has a width of eleven cubic meters and houses houses for night guards and gatekeepers.
- Within the menus there are stores for storing edible items, magazines, ammunition, ballistics and siege machines.
- The gates of twenty-eight cities, called Darwaza, where the Darwaza of Buddha is the largest.
- Gul Darwaza has an orchard. It has a fine cemetery where the tombs have bubbles and no bubbles are sure to have an arch.
“Sufism developed in response to the growing materialism of the caliphate, a religious and political institution. "Correct. Answer:
- The Sufis emphasized seeking salvation with a strong dedication and love for God.
- They sought interpretation of the Qur'an based on their personal experience and were critical of definitions and students. the Qur'anic interpretation methods accepted by theologians.
- By the eleventh century Sufism had evolved into a well-developed movement with a wealth of literature on Quranic studies and Sufi practices.
- The Sufisilsila was a kind of chain or link. between the Master and the disciple to find spiritual power and blessings.
- Special initiation rituals were developed, such as using patch cloths, shaving your head, making an open kitchen for charity.
Investigate the involvement of the Takaldarians. Awadh rebellion in 1857. Answer:
- For generations, Awadhi Talqdars had carried land and power in the countryside and maintained armed detainees built ts and enjoyed some independence as long as they accepted the superpower of the Nawabs and the the British did not want to tolerate the power of the Taluqdars. After accession, the Taluqdars of weapons were laid down and their fortifications were destroyed.
- Britain's rural income policy also hit the farm qdars. In South Awadh, Taluqdars lost more than half of the total villages they previously owned.
- The British government's income stream increased and peasant demand was not reduced, with an increase in income demand from 30 to 70 percent. Therefore, there was no reason for the Taluqdars or the peasants to be satisfied with the annexation of Awadh.
- In areas such as Awadh, where resistance in 1857 was intense and enduring, the Taluqdars and their peasants had fought. Question 9.
Explain why some hill stations were developed during the Indian colonial period. Answer:
- The cold climate of Indian hills was seen as an advantage. Especially since the British linked hot weather to epidemics.
- Hill stations were established mainly for the military. protects them from diseases such as cholera and malaria. They also became strategic locations for guarding borders and launching campaigns against enemy rulers.
- These hill stations were also developed into sanatoriums, i.e., places where soldiers could be sent to rest.
- These places are suitable for the British rulers in cold climates where new rulers and auxiliary troops could rest in the summer.
“By 1922, Gandhiji changed Indian nationality, thereby redeeming his promise to the BHU. February 1916 Speech. It was no longer a movement of professionals and intellectuals; Now hundreds of thousands of peasants, workers and craftsmen took part. Many of them respected Gandhij with reference to him as their "Mahatman". They appreciated the fact that he dressed like them, lived their way and spoke their language, unlike other leaders, he did not stand apart from the common people but felt known and even identified with them. “
In the light of the above paragraph, highlight all the four values maintained by Mahatma Gandhi. **
Traces the growth of Buddhism. Explain the most important teachings of the Buddha.
Track how the stupas were built. Explain why the Sanchi stupa survived, but not in Amravat. [4 + 4 = 8] Answer:
- Buddhism grew rapidly both during the life of the Buddha and after his death.
- It appeals to many people who are dissatisfied with current religious practices and confused by the rapid social changes around them. The importance given to use and values rather than claims of birth-based superiority, emphasis on meta (collegiality) and Karuna (compassion), especially for those who were younger and weaker themselves, were men and women for Buddhist teaching.
- Buddhism grew because of the Buddhist text – Tipitaka (Vinaya Pitaka, Sutta Pitaka, Abhidhamma Pitaka), Dipavamsa and Mahavamsa, Ashokavadana, Jatakas and Buddhist Hagiography. message.
- The world is a constant anicca.
- It is soulless (anatta) because it has nothing permanent or eternal.
- In the transient world, grief is inherent to human existence. People can rise above these verbal difficulties by following the path of moderation between serious repentance and spontaneous pampering.
- The Buddha emphasized personal help and righteousness as a means of escaping the rebirth and attaining self.
- Shut down ego and desire to end suffering periods.
Stuppas were considered sacred because they contained remnants of the Buddha, such as the remains of his body or the objects he used to bury them there. According to the Buddhist text, Ashoka Vadana, Ashoka distributed parts of the Buddha's remains to each important city and ordered the construction of stupas on them. In the second century BC, Bharhut, Sanchi and Sarnath were built.
The writings found on the railings and pillars of the stuppies are gifts and decorations of the buildings. Some donors were made by kings, such as the Hundred Witches, others by donors, such as ivory workers, who funded part of the Sanchi gates.
Amaravati did not survive because:
Maybe Amaravati was found. before the researchers understood the value of the findings and realized how critical it was to preserve the things they were found in, they thought of removing them from the scene.
The Amaravati stupas were modified and some of the tile boxes had the Amaravati stupas taken to different places, such as those in Kolkata, Chennai and London, and used in other buildings. Local boundaries also took the remains of the Amravati Stupa to build their temple.
Sanchi Stupa survived because:
It escaped from the eyes of railway contractors, builders, and seekers who could be transported to museums in Europe. Bhopal rulers Shahjehan Begum and his successor Sultan Jehan Begum offered money to preserve it. H. H. Cole opposed the looting of the original works of ancient art. Nineteenth-century Europeans were very interested in the Sanppie stupa. Therefore, it survived the test of time.
Explain why Mughal rulers in India recruited nobility from different races and religious groups.
Explain women from the imperial economy of the Mughal Empire. Answer:
- The nobility was recruited from many ethnic and religious groups, ensuring that no group was large enough to challenge the authority of the state.
- Mughalian officials were described as a bouquet, held together in loyalty to the emperor. The emperor was very respectful among religious saints and scholars.
- The Turani and Iranian nobles were the earliest in Akbar's imperial service. Akbar was a great and intelligent king and wanted talented people to join his state.
- Two ruling groups of Indian origin came to Imperial service – Rajput and Indian Muslims.
- The nobles participated in military campaigns and also served as imperial officers in their respective provinces.
- The Mansaobdars had two numerical names: Zat, which represented the position in the imperial hierarchy and sawar.
- Members of the Hindu caste who bent on education and accounting were also promoted, a famous example of Akbar's Finance Minister Raja Todar Mai, who belonged to the Khatri caste.
- After Noor Jahan, the Mughal queens and princesses began to control significant financial resources.
- Shanjahan's daughters Jahanara and Roshanara. enjoyed an annual income that was often equal to that of the Imperial Mansions.
- Resource management allowed important women in the Mughal household to comment on Ssion buildings and gardens.
- The pulsating center of Shahjahanabad was the Chandni Chowk bazaar designed by Jahanara.
- Gulbadan Begum, daughter of Babar and Humayun's sister, wrote a picture of Humayun Nama depicting the view of the domestic world. Mughals.
- Gulbadan described in detail the princess and the princess among the princes and kings.
- The common practice of the Mughal household consisted of emperor wives, wives, his close and distant relatives, and female and slaves. . Older women in the family played an important role in conflict resolution.
- Begams who got married after receiving huge amounts of money and valuables as a dower (mahr) naturally got their husband a higher status and more attention than aghas. 19659005] Communities were in the lowest position in the hierarchy. They all received monthly rewards in cash, supplemented by gifts based on their position.
“Community policy, begun in the early 20th century, largely reflected the division of land. "Investigate Statement.
" The Indian division had made nationalists fiercely opposed to the idea of separate electrics. "Study the statement. Answer:
- The colonial government created separate Muslim voters in 1909, which was expanded in 1919. It decisively shaped the nature of community politics. Separate voters meant that Muslims could now elect their own constituencies .
- There was active opposition and hostility between the communities.
- The Arya Samaj cow protection movement brought people back to the Hindus by folding those who had been converted to Islam after 19 years . Operation Cripps in 1942.
- Hindu Hate for Rapid Spread, Tabligh and Tanzim troops after 1923. Social riots deepened disagreements between communities and caused disturbing memories of violence
- Nationalists were plagued by the ongoing civil war and riots on the partition days.
- B. Pocker Bahadur strongly calls for separate voters to vote in the constituent assembly.
- The idea of separate voters caused anger and sadness among most nationalists in the Constituent Assembly.
- It was seen as a measure taken by the British to divide the Indians.
- This was a demand that moved one community against another.
- It divided people on a community level. It strained the relationship and caused blood.
- It was against the principle of democracy,
- GB Pant said it was suicide for the nation.
- Separate voters could lead to the sharing of loyalty and the difficulty of establishing a strong nation and a strong state.
- The isolation of minorities would prevent them from any effective say in government.
Read the following excerpt carefully and answer the following questions:
Here is the story of Adab Parvan of Mahabharata:
When Drona, Brahmana, who taught archery in Kuru , contacted Ekalavya, a forest dwelling nishada (hunting community). When Drona, who knew the dharma, refused to consider her a student, Ekalavya returned to the forest, took a picture of Drona from clay and processed it as his teacher, and began to practice himself. In time, he received great professional archery. One day, the Kuru princes went hunting and their dbg roaming the forest came over Ekalavya. When the dog smelled the dark nishada wrapped in black deer skin, his body dirty, it began to become brittle. The irritated Ekalavya fired seven arrows into his mouth. When the dog returned to the Pandavas, they were amazed at this excellent archery display. They tracked down Ekalavya, who introduced himself as a student of Drona.
Drona had once told her favorite student, Arjuna, that she would be unmatched among her students. Arjuna now reminded Drona of this. Drona contacted Ekalavya, who recognized and respected her as a teacher. As Drona demanded his right thumb as a reward, Ekalavya promptly cut it off and offered it. But after he shot with his remaining fingers, he was no longer as quick as before. So Drona kept his word: no one was better than Arjuna.
(14.1) Why did Drona refuse to consider Ekalavya as his student? (14.2) How did Drona keep his word to Arjuna? (14.3) Do you consider Drona's behavior with Ekalavya justified? If so, please indicate the reason. Answer:
(14.1) Drona, was a brahmana who knew dharma. He taught archery to the princes of Kuru. As Ekalavya approached him, the Nishada (hunting community) lodge teaches him archery, but Drona refused to consider Ekalavya as his student because he was of low descent.
(14.2) Drona gave her world Arjuna would be unparalleled among students. To prove this, Drona demanded Eklavya's right thumb as a reward, Ekalavya promptly cut it off and offered it to the guru, so he was no longer as quick as before.
(14.3) Drona's behavior with Ekalavya was justified because he promised Arjuna to be the best at archery, but when he saw Ekalavya, he was amazed at the higher display of his archery. Ekalavya was better than Arjuna's curve, so in order to keep his promise to Arjuna, Drona demanded Ekalavya to reward the thumb of his right hand.
Please read the following extract carefully and answer the following questions:
Born in 1754, Colin Mackenzie became famous as a designer, surveyor and cartographer; In 1815, he was appointed India's first Prime Minister, an office he held until his death in 1821. He began collecting local history and mapping historical sites to better understand India's past and facilitate settlement management. He says that “it struggled for a long time with bad management…. before the South came under the goodwill of the British Government. “By studying Vijaynagar, Mackenzie believed that an East Indian company could obtain“ a lot of useful information about many of these institutions, laws and customs, which continue to be influenced by various indigenous tribes. which constitutes the general mass of the population to this day. ”
(15.1) Who was Colin Mackenzie? (15.2) How did Mackenzie try to rediscover the Vijaynagar Empire? (15.three) How did the analysis of the Vijaynagar Empire profit East India? Answer:
(15.1) Colin Mackenzie was EIC's famous engineer, surveyor and cartographer. He mapped historical websites to raised perceive India's previous and facilitate settlement management. In 1815 he was appointed the primary Indian chief.
(15.2) He started amassing local history and mapping historic websites to raised understand the past of India, which included Vijaynagar in South India.
(15.three) By researching Vijay Nagar, Mackenzie believed that an East Indian agency might achieve useful insights into the various plant laws and practices that continue to affect the varied indigenous tribes that make up the overall inhabitants at this time.
Learn the following excerpt rigorously and reply the following questions:
"Tomorrow's violation of the Salt Tax Act."
On April 5, 1930, Mahatma Gandhi spoke in Dandhi:
When I left Sabarmat. with my partners in this coastal city of Dundee, I didn't think we would get to this place.
While I was in Sabarmat, rumor that I could be arrested. I had thought that the government might allow my party to enter Dand, but I certainly would not. If someone says this betrayed an incomplete faith for me, I will not deny the accusation. The fact that I come here, there is no way of peace and non-violence force: due to the fact that the power to feel like a general. The government can congratulate itself on acting if it wished, because it could have arrested each of us. We thank him for not having the courage to arrest this peace army. It was a shame to arrest such an army. He is a civilized man who feels ashamed to do anything his neighbors would not accept. The government deserves our congratulations for not arresting us, even if it did not merely fear world opinion.
Tomorrow's violation of the Salt Tax Act. Whether the government accepts this is another question. It may not tolerate it, but it deserves to be congratulated for the patience and patience it has shown with regard to this party … …
What if I and all the prominent leaders in Gujarat and elsewhere are arrested? This movement is based on the belief that once the entire nation is tuned in and marched, no leader is necessary.
(16.1) What kind of Mahatma Gandhi was he afraid of when he started his Dandi March? (16.2) Why did Gandhiji say the government deserves congratulations? (16.3) Why & # 39; Salt March & # 39; was very significant? Answer:
(16.1) He wasn't sure if he would be allowed to enter Dand. Gandhiji doubted that he could be arrested, as he said: "The federal government might permit my celebration to enter the Dand district, but not for me."
(16.2) In response to Gandhi, the federal government deserved congratulations for not arresting them,
- The salt march was vital because it brought Gandhi into the parking mild and attracted world attention.
- The participation of girls was very high. It made the British assume that their British Raj would not continue.
- Gandhi raised widespread dissatisfaction with the British regime.
Part E –
(17.1) Map of India, correctly search and mark the next info:
(a) Amritsar – an necessary middle of nationwide motion.
(b) Area underneath Agra – Babur.
(17.2) In the identical political outline of the map of India, the three places which might be giant Buddhist St gadgets are marked A, B and C. Determine them and write their actual names in the strains drawn close to them.
CBSE Previous Year Question Paper | <urn:uuid:c47124a4-c1d5-496e-922a-af20c6268255> | CC-MAIN-2019-47 | https://thecrockettreporter.com/cbse-previous-year-questionnaire-category-12-history-2018/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668334.27/warc/CC-MAIN-20191114081021-20191114105021-00337.warc.gz | en | 0.968277 | 4,920 | 3.28125 | 3 |
Rabbits are small mammals in the family Leporidae of the order Lagomorpha (along with the hare and the pika). Oryctolagus cuniculus includes the European rabbit species and its descendants, the world's 305 breeds of domestic rabbit. Sylvilagus includes 13 wild rabbit species, among them the 7 types of cottontail. The European rabbit, which has been introduced on every continent except Antarctica, is familiar throughout the world as a wild prey animal and as a domesticated form of livestock and pet. With its widespread effect on ecologies and cultures, the rabbit (or bunny) is, in many areas of the world, a part of daily life—as food, clothing, a companion, and as a source of artistic inspiration.
Male rabbits are called bucks; females are called does. An older term for an adult rabbit is coney (derived ultimately from the Latin cuniculus), while rabbit once referred only to the young animals. Another term for a young rabbit is bunny, though this term is often applied informally (especially by children) to rabbits generally, especially domestic ones. More recently, the term kit or kitten has been used to refer to a young rabbit.
A group of rabbits is known as a colony or nest (or, occasionally, a warren, though this more commonly refers to where the rabbits live). A group of baby rabbits produced from a single mating is referred to as a litter, and a group of domestic rabbits living together is sometimes called a herd.
Rabbits and hares were formerly classified in the order Rodentia (rodent) until 1912, when they were moved into a new order, Lagomorpha (which also includes pikas). Below are some of the genera and species of the rabbit.
Hares are precocial, born relatively mature and mobile with hair and good vision, while rabbits are altricial, born hairless and blind, and requiring closer care. Hares (and cottontail rabbits) live a relatively solitary life in a simple nest above the ground, while most rabbits live in social groups in burrows or warrens. Hares are generally larger than rabbits, with ears that are more elongated, and with hind legs that are larger and longer. Hares have not been domesticated, while descendants of the European rabbit are commonly bred as livestock and kept as pets.
Rabbits have long been domesticated. Beginning in the Middle Ages, the European rabbit has been widely kept as livestock, starting in ancient Rome. Selective breeding has generated a wide variety of rabbit breeds, many of which (since the early 19th century) are also kept as pets. Some strains of rabbit have been bred specifically as research subjects.
As livestock, rabbits are bred for their meat and fur. The earliest breeds were important sources of meat, and so became larger than wild rabbits, but domestic rabbits in modern times range in size from dwarf to giant. Rabbit fur, prized for its softness, can be found in a broad range of coat colors and patterns, as well as lengths. The Angora rabbit breed, for example, was developed for its long, silky fur, which is often hand-spun into yarn. Other domestic rabbit breeds have been developed primarily for the commercial fur trade, including the Rex, which has a short plush coat.
Because the rabbit's epiglottis is engaged over the soft palate except when swallowing, the rabbit is an obligate nasal breather. Rabbits have two sets of incisor teeth, one behind the other. This way they can be distinguished from rodents, with which they are often confused. Carl Linnaeus originally grouped rabbits and rodents under the class Glires; later, they were separated as the scientific consensus is that many of their similarities were a result of convergent evolution. However, recent DNA analysis and the discovery of a common ancestor has supported the view that they do share a common lineage, and thus rabbits and rodents are now often referred to together as members of the superorder Glires.
Since speed and agility are a rabbit's main defenses against predators (including the swift fox), rabbits have large hind leg bones and well developed musculature. Though plantigrade at rest, rabbits are on their toes while running, assuming a more digitigrade form. Rabbits use their strong claws for digging and (along with their teeth) for defense. Each front foot has four toes plus a dewclaw. Each hind foot has four toes (but no dewclaw).
Most wild rabbits (especially compared to hares) have relatively full, egg-shaped bodies. The soft coat of the wild rabbit is agouti in coloration (or, rarely, melanistic), which aids in camouflage. The tail of the rabbit (with the exception of the cottontail species) is dark on top and white below. Cottontails have white on the top of their tails.
As a result of the position of the eyes in its skull, the rabbit has a field of vision that encompasses nearly 360 degrees, with just a small blind spot at the bridge of the nose.
The anatomy of rabbits' hind limbs are structurally similar to that of other land mammals and contribute to their specialized form of locomotion. The bones of the hind limbs consist of long bones (the femur, tibia, fibula, and phalanges) as well as short bones (the tarsals). These bones are created through endochondral ossification during development. Like most land mammals, the round head of the femur articulates with the acetabulum of the ox coxae. The femur articulates with the tibia, but not the fibula, which is fused to the tibia. The tibia and fibula articulate with the tarsals of the pes, commonly called the foot. The hind limbs of the rabbit are longer than the front limbs. This allows them to produce their hopping form of locomotion. Longer hind limbs are more capable of producing faster speeds. Hares, which have longer legs than cottontail rabbits, are able to move considerably faster. Rabbits stay just on their toes when moving this is called Digitigrade locomotion. The hind feet have four long toes that allow for this and are webbed to prevent them from spreading when hopping. Rabbits do not have paw pads on their feet like most other animals that use digitigrade locomotion. Instead, they have coarse compressed hair that offers protection.
Rabbits have muscled hind legs that allow for maximum force, maneuverability, and acceleration that is divided into three main parts; foot, thigh, and leg. The hind limbs of a rabbit are an exaggerated feature, that are much longer than the forelimbs providing more force. Rabbits run on their toes to gain the optimal stride during locomotion. The force put out by the hind limbs is contributed to both the structural anatomy of the fusion tibia and fibula, and muscular features. Bone formation and removal, from a cellular standpoint, is directly correlated to hind limb muscles. Action pressure from muscles creates force that is then distributed through the skeletal structures. Rabbits that generate less force, putting less stress on bones are more prone to osteoporosis due to bone rarefaction. In rabbits, the more fibers in a muscle, the more resistant to fatigue. For example, hares have a greater resistant to fatigue than cottontails. The muscles of rabbit's hind limbs can be classified into four main categories: hamstrings, quadriceps, dorsiflexors, or plantar flexors. The quadriceps muscles are in charge of force production when jumping. Complimenting these muscles are the hamstrings which aid in short bursts of action. These muscles play off of one another in the same way as the plantar flexors and doriflexors, contributing to the generation and actions associated with force.
Within the order lagomorphs, the ears are utilized to detect and avoid predators. In the family leporidae, the ears are typically longer than they are wide. For example, in black tailed jack rabbits, their long ears cover a greater surface area relative to their body size that allow them to detect predators from far away. Contrasted to cotton tailed rabbits, their ears are smaller and shorter, requiring predators to be closer to detect them before they can flee. Evolution has favored rabbits to have shorter ears so the larger surface area does not cause them to lose heat in more temperate regions. The opposite can be seen in rabbits that live in hotter climates, mainly because they possess longer ears that have a larger surface area that help with dispersion of heat as well as the theory that sound does not travel well in more arid air, opposed to cooler air. Therefore, longer ears are meant to aid the organism in detecting predators sooner rather than later in warmer temperatures. The rabbit is characterized by its shorter ears while hares are characterized by their longer ears. Rabbits' ears are an important structure to aid thermoregulation and detect predators due to how the outer, middle, and inner ear muscles coordinate with one another. The ear muscles also aid in maintaining balance and movement when fleeing predators.
The Auricle (anatomy), also known as the pinna is a rabbit's outer ear. The rabbit's body surface is mainly taken up by the pinnae. It is theorized that the ears aid in dispersion of heat at temperatures above 30 °C with rabbits in warmer climates having longer pinnae due to this. Another theory is that the ears function as shock absorbers that could aid and stabilize rabbit's vision when fleeing predators, but this has typically only been seen in hares. The rest of the outer ear has bent canals that lead to the eardrum or tympanic membrane.
The middle ear is filled with three bones called ossicles and is separated by the outer eardrum in the back of the rabbit's skull.The three ossicles are called hammer, anvil, and stirrup and act to decrease sound before it hits the inner ear. In general, the ossicles act as a barrier to the inner ear for sound energy.
Inner ear fluid called endolymph receives the sound energy. After receiving the energy, later within the inner ear there are two parts: the cochlea that utilizes sound waves from the ossicles and the vestibular apparatus that manages the rabbit's position in regards to movement. Within the cochlea there is a basilar membrane that contains sensory hair structures utilized to send nerve signals to the brain so it can recognize different sound frequencies. Within the vestibular apparatus the rabbit possesses three semicircular canals to help detect angular motion.
Thermoregulation is the process that an organism utilizes to maintain an optimal body temperature independent of external conditions. This process is carried out by the pinnae which takes up most of the rabbit's body surface and contain a vascular network and arteriovenous shunts. In a rabbit, the optimal body temperature is around 38.5–40℃. If their body temperature exceeds or does not meet this optimal temperature, the rabbit must return to homeostasis. Homeostasis of body temperature is maintained by the use of their large, highly vascularized ears that are able to change the amount of blood flow that passes through the ears.
Constriction and dilation of blood vessels in the ears are used to control the core body temperature of a rabbit. If the core temperature exceeds its optimal temperature greatly, blood flow is constricted to limit the amount of blood going through the vessels. With this constriction, there is only a limited amount of blood that is passing through the ears where ambient heat would be able to heat the blood that is flowing through the ears and therefore, increasing the body temperature. Constriction is also used when the ambient temperature is much lower than that of the rabbit's core body temperature. When the ears are constricted it again limits blood flow through the ears to conserve the optimal body temperature of the rabbit. If the ambient temperature is either 15 degrees above or below the optimal body temperature, the blood vessels will dilate. With the blood vessels being enlarged, the blood is able to pass through the large surface area which causes it to either heat or cool down.
During the summer, the rabbit has the capability to stretch its pinnae which allows for greater surface area and increase heat dissipation. In the winter, the rabbit does the opposite and folds its ears in order to decrease its surface area to the ambient air which would decrease their body temperature.
The jackrabbit has the largest ears within the Oryctolagus cuniculus group. Their ears contribute to 17% of their total body surface area. Their large pinna were evolved to maintain homeostasis while in the extreme temperatures of the desert.
The rabbit's nasal cavity lies dorsal to the oral cavity, and the two compartments are separated by the hard and soft palate. The nasal cavity itself is separated into a left and right side by a cartilage barrier, and it is covered in fine hairs that trap dust before it can enter the respiratory tract. As the rabbit breathes, air flows in through the nostrils along the alar folds. From there, the air moves into the nasal cavity, also known as the nasopharynx, down through the trachea, through the larynx, and into the lungs. The larynx functions as the rabbit's voice box, which enables it to produce a wide variety of sounds. The trachea is a long tube embedded with cartilaginous rings that prevent the tube from collapsing as air moves in and out of the lungs. The trachea then splits into a left and right bronchus, which meet the lungs at a structure called the hilum. From there, the bronchi split into progressively more narrow and numerous branches. The bronchi branch into bronchioles, into respiratory bronchioles, and ultimately terminate at the alveolar ducts. The branching that is typically found in rabbit lungs is a clear example of monopodial branching, in which smaller branches divide out laterally from a larger central branch.
Rabbits breathe primarily through their noses due to the fact that the epiglottis is fixed to the backmost portion of the soft palate. Within the oral cavity, a layer of tissue sits over the opening of the glottis, which blocks airflow from the oral cavity to the trachea. The epiglottis functions to prevent the rabbit from aspirating on its food. Further, the presence of a soft and hard palate allow the rabbit to breathe through its nose while it feeds.
Rabbits lungs are divided into four lobes: the cranial, middle, caudal, and accessory lobes. The right lung is made up of all four lobes, while the left lung only has two: the cranial and caudal lobes. In order to provide space for the heart, the left cranial lobe of the lungs is significantly smaller than that of the right. The diaphragm is a muscular structure that lies caudal to the lungs and contracts to facilitate respiration.
Rabbits are herbivores that feed by grazing on grass, forbs, and leafy weeds. In consequence, their diet contains large amounts of cellulose, which is hard to digest. Rabbits solve this problem via a form of hindgut fermentation. They pass two distinct types of feces: hard droppings and soft black viscous pellets, the latter of which are known as caecotrophs or "night droppings" and are immediately eaten (a behaviour known as coprophagy). Rabbits reingest their own droppings (rather than chewing the cud as do cows and numerous other herbivores) to digest their food further and extract sufficient nutrients.
Rabbits graze heavily and rapidly for roughly the first half-hour of a grazing period (usually in the late afternoon), followed by about half an hour of more selective feeding. In this time, the rabbit will also excrete many hard fecal pellets, being waste pellets that will not be reingested. If the environment is relatively non-threatening, the rabbit will remain outdoors for many hours, grazing at intervals. While out of the burrow, the rabbit will occasionally reingest its soft, partially digested pellets; this is rarely observed, since the pellets are reingested as they are produced.
Hard pellets are made up of hay-like fragments of plant cuticle and stalk, being the final waste product after redigestion of soft pellets. These are only released outside the burrow and are not reingested. Soft pellets are usually produced several hours after grazing, after the hard pellets have all been excreted. They are made up of micro-organisms and undigested plant cell walls.
Rabbits are hindgut digesters. This means that most of their digestion takes place in their large intestine and cecum. In rabbits, the cecum is about 10 times bigger than the stomach and it along with the large intestine makes up roughly 40% of the rabbit's digestive tract. The unique musculature of the cecum allows the intestinal tract of the rabbit to separate fibrous material from more digestible material; the fibrous material is passed as feces, while the more nutritious material is encased in a mucous lining as a cecotrope. Cecotropes, sometimes called "night feces", are high in minerals, vitamins and proteins that are necessary to the rabbit's health. Rabbits eat these to meet their nutritional requirements; the mucous coating allows the nutrients to pass through the acidic stomach for digestion in the intestines. This process allows rabbits to extract the necessary nutrients from their food.
The chewed plant material collects in the large cecum, a secondary chamber between the large and small intestine containing large quantities of symbiotic bacteria that help with the digestion of cellulose and also produce certain B vitamins. The pellets are about 56% bacteria by dry weight, largely accounting for the pellets being 24.4% protein on average. The soft feces form here and contain up to five times the vitamins of hard feces. After being excreted, they are eaten whole by the rabbit and redigested in a special part of the stomach. The pellets remain intact for up to six hours in the stomach; the bacteria within continue to digest the plant carbohydrates. This double-digestion process enables rabbits to use nutrients that they may have missed during the first passage through the gut, as well as the nutrients formed by the microbial activity and thus ensures that maximum nutrition is derived from the food they eat. This process serves the same purpose in the rabbit as rumination does in cattle and sheep.
The adult male reproductive system forms the same as most mammals with the seminiferous tubular compartment containing the Sertoli cells and an adluminal compartment that contains the Leydig cells. The Leydig cells produce testosterone, which maintains libido and creates secondary sex characteristics such as the genital tubercle and penis. The Sertoli cells triggers the production of Anti-Müllerian duct hormone, which absorbs the Müllerian duct. In an adult male rabbit, the sheath of the penis is cylinder-like and can be extruded as early as two months of age. The scrotal sacs lay lateral to the penis and contain epididymal fat pads which protect the testes. Between 10–14 weeks, the testes descend and are able to retract into the pelvic cavity in order to thermoregulate. Furthermore, the secondary sex characteristics, such as the testes, are complex and secrete many compounds. These compounds includes fructose, citric acid, minerals, and a uniquely high amount of catalase.
The adult female reproductive tract is bipartite, which prevents an embryo from translocating between uteri. The two uterine horns communicate to two cervixes and forms one vaginal canal. Along with being bipartite, the female rabbit does not go through an estrus cycle, which causes mating induced ovulation.
The average female rabbit becomes sexually mature at 3 to 8 months of age and can conceive at any time of the year for the duration of her life. However, egg and sperm production can begin to decline after three years. During mating, the male rabbit will mount the female rabbit from behind and insert his penis into the female and make rapid pelvic hip thrusts. The encounter lasts only 20–40 seconds and after, the male will throw himself backwards off of the female.
The rabbit gestation period is short and ranges from 28 to 36 days with an average period of 31 days. A longer gestation period will generally yield a smaller litter while shorter gestation periods will give birth to a larger litter. The size of a single litter can range from four to 12 kits allowing a female to deliver up to 60 new kits a year. After birth, the female can become pregnant again as early as the next day.
The mortality rates of embryos are high in rabbits and can be due to infection, trauma, poor nutrition and environmental stress so a high fertility rate is necessary to counter this.
Rabbits may appear to be crepuscular, but their natural inclination is toward nocturnal activity. In 2011, the average sleep time of a rabbit in captivity was calculated at 8.4 hours per day. As with other prey animals, rabbits often sleep with their eyes open, so that sudden movements will awaken the rabbit to respond to potential danger.
In addition to being at risk of disease from common pathogens such as Bordetella bronchiseptica and Escherichia coli, rabbits can contract the virulent, species-specific viruses RHD ("rabbit hemorrhagic disease", a form of calicivirus) or myxomatosis. Among the parasites that infect rabbits are tapeworms (such as Taenia serialis), external parasites (including fleas and mites), coccidia species, and Toxoplasma gondii. Domesticated rabbits with a diet lacking in high fiber sources, such as hay and grass, are susceptible to potentially lethal gastrointestinal stasis. Rabbits and hares are almost never found to be infected with rabies and have not been known to transmit rabies to humans.
Encephalitozoon cuniculi, an obligate intracellular parasite is also capable of infecting many mammals including rabbits.
Rabbits are prey animals and are therefore constantly aware of their surroundings. For instance, in Mediterranean Europe, rabbits are the main prey of red foxes, badgers, and Iberian lynxes. If confronted by a potential threat, a rabbit may freeze and observe then warn others in the warren with powerful thumps on the ground. Rabbits have a remarkably wide field of vision, and a good deal of it is devoted to overhead scanning. They survive predation by burrowing, hopping away in a zig-zag motion, and, if captured, delivering powerful kicks with their hind legs. Their strong teeth allow them to eat and to bite in order to escape a struggle. The longest-lived rabbit on record, a domesticated European rabbit living in Tasmania, died at age 18. The lifespan of wild rabbits is much shorter; the average longevity of an eastern cottontail, for instance, is less than one year.
Rabbit habitats include meadows, woods, forests, grasslands, deserts and wetlands. Rabbits live in groups, and the best known species, the European rabbit, lives in burrows, or rabbit holes. A group of burrows is called a warren.
More than half the world's rabbit population resides in North America. They are also native to southwestern Europe, Southeast Asia, Sumatra, some islands of Japan, and in parts of Africa and South America. They are not naturally found in most of Eurasia, where a number of species of hares are present. Rabbits first entered South America relatively recently, as part of the Great American Interchange. Much of the continent has just one species of rabbit, the tapeti, while most of South America's southern cone is without rabbits.
The European rabbit has been introduced to many places around the world.
Rabbits have been a source of environmental problems when introduced into the wild by humans. As a result of their appetites, and the rate at which they breed, feral rabbit depredation can be problematic for agriculture. Gassing, barriers (fences), shooting, snaring, and ferreting have been used to control rabbit populations, but the most effective measures are diseases such as myxomatosis (myxo or mixi, colloquially) and calicivirus. In Europe, where rabbits are farmed on a large scale, they are protected against myxomatosis and calicivirus with a genetically modified virus. The virus was developed in Spain, and is beneficial to rabbit farmers. If it were to make its way into wild populations in areas such as Australia, it could create a population boom, as those diseases are the most serious threats to rabbit survival. Rabbits in Australia and New Zealand are considered to be such a pest that land owners are legally obliged to control them.
In some areas, wild rabbits and hares are hunted for their meat, a lean source of high quality protein. In the wild, such hunting is accomplished with the aid of trained falcons, ferrets, or dogs, as well as with snares or other traps, and rifles. A caught rabbit may be dispatched with a sharp blow to the back of its head, a practice from which the term rabbit punch is derived.
Wild leporids comprise a small portion of global rabbit-meat consumption. Domesticated descendants of the European rabbit (Oryctolagus cuniculus) that are bred and kept as livestock (a practice called cuniculture) account for the estimated 200 million tons of rabbit meat produced annually. Approximately 1.2 billion rabbits are slaughtered each year for meat wordwide. In 1994, the countries with the highest consumption per capita of rabbit meat were Malta with 8.89 kilograms (19.6 lb), Italy with 5.71 kilograms (12.6 lb), and Cyprus with 4.37 kilograms (9.6 lb), falling to 0.03 kilograms (0.066 lb) in Japan. The figure for the United States was 0.14 kilograms (0.31 lb) per capita. The largest producers of rabbit meat in 1994 were China, Russia, Italy, France, and Spain. Rabbit meat was once a common commodity in Sydney, Australia, but declined after the myxomatosis virus was intentionally introduced to control the exploding population of feral rabbits in the area.
In the United Kingdom, fresh rabbit is sold in butcher shops and markets, and some supermarkets sell frozen rabbit meat. At farmers markets there, including the famous Borough Market in London, rabbit carcasses are sometimes displayed hanging, unbutchered (in the traditional style), next to braces of pheasant or other small game. Rabbit meat is a feature of Moroccan cuisine, where it is cooked in a tajine with "raisins and grilled almonds added a few minutes before serving". In China, rabbit meat is particularly popular in Sichuan cuisine, with its stewed rabbit, spicy diced rabbit, BBQ-style rabbit, and even spicy rabbit heads, which have been compared to spicy duck neck. Rabbit meat is comparatively unpopular elsewhere in the Asia-Pacific.
An extremely rare infection associated with rabbits-as-food is tularemia (also known as rabbit fever), which may be contracted from an infected rabbit. Hunters are at higher risk for tularemia because of the potential for inhaling the bacteria during the skinning process. An even more rare condition is protein poisoning, which was first noted as a consequence of eating rabbit meat to exclusion (hence the colloquial term, "rabbit starvation"). Protein poisoning, which is associated with extreme conditions of the total absence of dietary fat and protein, was noted by Vilhjalmur Stefansson in the late 19th century and in the journals of Charles Darwin.
In addition to their meat, rabbits are used for their wool, fur, and pelts, as well as their nitrogen-rich manure and their high-protein milk. Production industries have developed domesticated rabbit breeds (such as the well-known Angora rabbit) to efficiently fill these needs.
Rabbits are often used as a symbol of fertility or rebirth, and have long been associated with spring and Easter as the Easter Bunny. The species' role as a prey animal with few defenses evokes vulnerability and innocence, and in folklore and modern children's stories, rabbits often appear as sympathetic characters, able to connect easily with youth of all kinds (for example, the Velveteen Rabbit, or Thumper in Bambi).
With its reputation as a prolific breeder, the rabbit juxtaposes sexuality with innocence, as in the Playboy Bunny. The rabbit (as a swift prey animal) is also known for its speed, agility, and endurance, symbolized (for example) by the marketing icons the Energizer Bunny and the Duracell Bunny.
The rabbit as trickster is a part of American popular culture, as Br'er Rabbit (from African-American folktales and, later, Disney animation) and Bugs Bunny (the cartoon character from Warner Bros.), for example.
Anthropomorphized rabbits have appeared in film and literature, in Alice's Adventures in Wonderland (the White Rabbit and the March Hare characters), in Watership Down (including the film and television adaptations), in Rabbit Hill (by Robert Lawson), and in the Peter Rabbit stories (by Beatrix Potter). In the 1920s, Oswald the Lucky Rabbit, was a popular cartoon character.
On the Isle of Portland in Dorset, UK, the rabbit is said to be unlucky and even speaking the creature's name can cause upset among older island residents. This is thought to date back to early times in the local quarrying industry where (to save space) extracted stones that were not fit for sale were set aside in what became tall, unstable walls. The local rabbits' tendency to burrow there would weaken the walls and their collapse resulted in injuries or even death. Thus, invoking the name of the culprit became an unlucky act to be avoided. In the local culture to this day, the rabbit (when he has to be referred to) may instead be called a “long ears” or “underground mutton”, so as not to risk bringing a downfall upon oneself. While it was true 50 years ago[when?] that a pub on the island could be emptied by calling out the word "rabbit", this has become more fable than fact in modern times.
In other parts of Britain and in North America, invoking the rabbit's name may instead bring good luck. "Rabbit rabbit rabbit" is one variant of an apotropaic or talismanic superstition that involves saying or repeating the word "rabbit" (or "rabbits" or "white rabbits" or some combination thereof) out loud upon waking on the first day of each month, because doing so will ensure good fortune for the duration of that month.
The "rabbit test" is a term, first used in 1949, for the Friedman test, an early diagnostic tool for detecting a pregnancy in humans. It is a common misconception (or perhaps an urban legend) that the test-rabbit would die if the woman was pregnant. This led to the phrase "the rabbit died" becoming a euphemism for a positive pregnancy test.
|Wikimedia Commons has media related to Rabbit.|
|Wikiquote has quotations related to: Rabbit| | <urn:uuid:1d57e3b3-148e-4323-9922-be3383a462b6> | CC-MAIN-2019-47 | https://readtiger.com/wkp/en/Rabbit | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668594.81/warc/CC-MAIN-20191115065903-20191115093903-00298.warc.gz | en | 0.953691 | 6,565 | 3.4375 | 3 |
- Open Access
Vectored immunoprophylaxis: an emerging adjunct to traditional vaccination
Tropical Diseases, Travel Medicine and Vaccines volume 3, Article number: 3 (2017)
The successful development of effective vaccines has been elusive for many of the world’s most important infectious diseases. Additionally, much of the population, such as the aged or immunocompromised, are unable to mount an effective immunologic response for existing vaccines. Vectored Immunoprophylaxis (VIP) is a novel approach designed to address these challenges. Rather than utilizing an antigen to trigger a response from the host’s immune system as is normally done with traditional vaccines, VIP genetically engineers the production of tailored antibodies from non-hematopoietic cells, bypassing the humoral immune system. Direct administration of genes encoding for neutralizing antibodies has proven to be effective in both preventing and treating several infectious diseases in animal models. While, a significant amount of work has focused on HIV, including an ongoing clinical trial, the approach has also been shown to be effective for malaria, dengue, hepatitis C, influenza, and more. In addition to presenting itself as a potentially efficient approach to solving long-standing vaccine challenges, the approach may be the best, if not only, method to vaccinate immunocompromised individuals. Many issues still need to be addressed, including which tissue(s) makes the most suitable platform, which vector(s) are most efficient at transducing the platform tissue used to secrete the antibodies, and what are the long-term effects of such a treatment. Here we provide a brief overview of this approach, and its potential application in treating some of the world’s most intractable infectious diseases.
From the early practice of scarification to prevent smallpox through the creation of targeted, recombinant vaccines, the development of effective vaccines has been one of the great achievements in public health and medicine, resulting in millions of lives saved. Modern vaccines typically protect by eliciting immunity following exposure to an inactivated or attenuated whole pathogen or recombinant components of a pathogen . This approach works well for diseases in which natural infection leads to immunity and protection against re-infection and has resulted in the eradication of smallpox and dramatic declines in such diseases as diphtheria, measles, and polio . However, it has been more challenging to develop effective vaccines against diseases for which prior infection does not offer full future protection, such as HIV, malaria, hepatitis C virus, and influenza A .
Although cellular immunity is certainly important, humoral immunity appears to play the most significant role in the protection associated with most vaccines . Passive immunization achieved through the infusion of serum has played a significant historical role in the treatment and prevention of infection [4, 5]. The recent development of hybridoma technology and humanized monoclonal antibodies have resulted in a new class of antibody-based drugs with demonstrated and potential efficacy in cancer, inflammatory diseases, addiction, and infectious diseases . Within this context, there has been an increased interest in passive immunization utilizing monoclonal antibodies produced in plants or transgenic animals for infections such as Ebola virus and MERS-CoV [7, 8]. However, logistical requirements including the need for high antibody concentrations requiring repeated injections due to the short half-life of antibodies, a cold-chain for delivery, and trained medical personnel for delivery create potential limitations to the use of this therapy, especially in low resource areas [1, 9]. The development of passive immunization by gene therapy could be a solution to some of those logistical issues and holds potential promise as either an adjunct to standard vaccination in populations who do not generate a sufficient immune response or for pathogens able to evade current vaccination strategies due to antigenic variability.
Originally proposed as a concept in 2002 , passive immunization by vector-mediated delivery of genes encoding broadly neutralizing antibodies for in vivo expression has been referred to as Immunoprophylaxis by Gene Transfer (IGT) , Vector-Mediated Antibody Gene transfer , or Vectored Immunoprophylaxis (VIP) [6, 12]; and for sake of consistency, ‘VIP’ is used here. Rather than passively transfering pre-formed antibodies, VIP is a process in which genes encoding previously characterized neutralizing antibodies are vectored into non-hematopoietic cells which then secrete the monoclonal antibodes encoded by those genes (See Fig. 1.) This vectored delivery and production of specified antibodies allows for protection without generating a standard immune response and results in endogenous antibody production that has the potential to be sustained . The approach has several benefits, including: 1) it does not require the host have the ability to respond immunologically, 2) the antibody can naturally be selected for a specific pathogen targets, as well as specific epitopes, 3) the antibody can be genetically modified to further enhance its activity, and, 4) vectors can be selected or engineered to have tropic characteristics targeting specific tissues and cells, potentially allowing either systemic or enhanced localized antibody production .
Infections for which VIP has been tested
VIP has been demonstrated to be effective in a host of animal models for the prevention of infection with several pathogens, especially those commonly afflicting travelers (see Table 1), including influenza A virus [13, 14], malaria (Plasmodium falciparum) , hepatitis C virus , respiratory syncitial virus , Bacillus anthracis , dengue virus , and chickungunya virus . In addition to the protection conferred by systemic neutralizing antibodies, protection against infection with influenza A virus has also been demonstrated following intranasal administration of vectored local antibody production .
By far, the most extensive and promising exploration of VIP for an infectious disease has been against HIV. In the initial study demonstrating the potential of VIP, a recombinant adeno-associated virus (rAAV) vector using a dual-promoter system generated both light and heavy chains of IgG1b12, one of the early broadly neutralizing antibodies described for HIV. The rAAV was injected into the quadricep muscles of immunodeficient mice and biologically active antibody was found in sera for over 6 months . This study provided the first evidence that rAAV vectors could transfer antibody genes to muscle, and muscle tissue was a suitable platform to produce and distribute the antibodies throughout the circulation . Follow-on studies used a native macaque SIV gp120-specific Fab molecule as an immunoadhesin, a chimeric, antibody-like molecules that combine the functional domain of a binding protein with immunoglobulin constant domains, which were considered to be superior to single chain (scFv) or whole antibody (IgG) molecules with respect to achievable steady-state serum concentrations . Six of nine rhesus macaques were completely protected against intravenous challenge with virulent SIV and still had stable immunoadhesin levels 6 years after injection . The three subjects not protected were found to have developed an immune response to the immunoadhesin by 3 weeks after injection .
Another group used an rAAV vector injected into the quadriceps muscle of a humanized mouse to express an array of broadly neutralizing antibodies: 2G12, IgG1b12, 2F5, 4E10 and VRC01. Though VRC01 serum levels as low as 8.3 μg/mL provided protection from an intravenous challenge with HIV, they achieved concentrations as high as 100 μg/mL for at least 12 months . They followed-up that study by optimizing the broadly neutralizing antibody, and although muscle was chosen as a platform for expression and secretion of the IgG1 isotype, antibodies were found to effectively reach the vaginal mucosa. Animals receiving VIP that expressed a modified VRC07 antibody (concentration of nearly 100 μg/ml in the serum and 1 μg/ml in vaginal wash fluid) were completely resistant to repetitive intravaginal challenge by a heterosexually transmitted founder HIV strain .
Saunders, et al., used an rAAV serotype 8 vector to produce a full length IgG of a simianized form of the broadly neutralizing antibody VRC07 in macaques which was protective against simian-human immunodeficiency virus (SHIV) infection 5.5 weeks after treatment . SHIVs are chimeric viruses constructed to express the HIV envelope glycoprotein to be used in vaccine experiments to evaluate neutralizing antibodies. The antibody reached levels up to 66 μg/ml for 16 weeks, but immune suppression with cyclosporine was needed to sustain expression due to the development of anti-idiotypic antibodies .
The approach to preventing HIV was enhanced further by fusing the immunoadhesin form of CD4-Ig with a small CCR5-mimetic sulfopeptide at the carboxy-terminus (eCD4-Ig). eCD4-Ig is more potent than the best broadly neutralizing antiody and binds avidly to the HIV-1 envelope glycoprotein. Rhesus macaques expressed 17–77 μg/mL of fully functional rhesus eCD4-Ig for more than 40 weeks after injection with a self complimentary serotype 1 AAV (scAAV1) vector and were completely protected from multiple challenges with a simian/human immunodeficiency virus, SHIV-AD8 . Of note, the rhesus eCD4-Ig was also markedly less immunogenic than rhesus forms of four well-characterized broadly neutralizing antibodies .
In addition to disease prevention as noted above, studies have also demonstrated an application for VIP in the effective treatment of previously-infected animals. Using HIV-1-infected humanized mice, Horwitz, et al., demonstrated that following initial treatment with anti-retroviral therapy (ART), a single injection of adeno-associated virus directing expression of broadly neutralizing antibody 10-1074, produced durable viremic control after the ART was stopped .
The first human trial using the VIP approach started in January 2014 and is a phase 1, randomized, blinded, dose-escalation study of an rAAV1 vector coding for PG9, a potent broadly neutralizing antibody, in high risk, healthy adult males (ClinicalTrials.gov number, NCT01937455). Another study evaluating using VIP in HIV-positive subjects is scheduled to get underway soon .
Many options exist for vectoring the transgene into the host tissue, each with distinct advantages and limitations. Naked plasmid DNA is relatively easy to use, does not elicit significant immunogenicity, and has the potential for inexpensive large-scale production [20, 27]. Recent advances in both the mechanism of delivery and optimization of plasmid and electroporation conditions have improved the concentration and duration of antibody production, but it has yet to prove as potent as viral vectoring.
Viral vectors offer the advantage of efficient, rapid delivery of the transgene into host cells and the potential for integration into the host genome, allowing for sustained expression . The life cycle of a virus consists of attachment, penetration, uncoating, replication, gene expression, assembly and budding. Replication and gene expression typically take place in the nucleus where viral genomes persist episomally or integrate into the host genome (i.e., a provirus). Vectors that persist episomally can provide sustained transgene expression in post-mitotic tissue, but since they do not alter the host genome, they may be lost if and when the cells divide. Vectors that integrate into to the host genome may provide life-long transgene expression in dividing cells but could also lead to insertional mutagenesis resulting in apoptosis or malignant transformation .
Adenoviral vectors produce rapid, but transient, gene expression that could be ideal for responding to a disease outbreak, but would have limitations for long term protection . Adenovirus serotype 5 (Ad5) has successfully transduced protective antibodies for respiratory syncytial virus (RSV), influenza A virus (IAV), and Bacillus anthracis [14, 17, 18]. The Ad5 genome is easy to engineer and remains episomal, but there is significant pre-existing immunity to Ad5, estimated at 50% of the adult population worldwide and even higher in sub-Saharan Arica, which decreases the ability to transfer the transgene. Additionally, it can result in systemic cytokine release creating a sepsis presentation and there is significant tissue tropism for the liver when delivered intravenously. Alternative adenoviral vectors are being researched .
Lentivral vectors are better suited for long term expression since they typically integrate into the genome and can transduce dividing and non-dividing cells. They have successfully been utilized to transduce hematopoietic stem cells to produce broadly neutralizing antibodies against HIV in mouse models [32, 33]. However, because they can integrate into the host genome, there is concern for mutagenesis. Newer generation lentiviral vectors contain deletions in their long-terminal repeat (LTR) and a self-inactivating (SIN) LTR, leaving them replication incompetent, which should make them much safer, but this question is not fully answered .
Although other viral vectors are being explored, rAAV vectors are currently the favored vehicle for delivering the antiody genes into the host tissue due to their efficiency in gene transfer . In contrast to other viral vectors, such as adenovirus, rAAV’s have not been associated with any human diseases and do not stimulate signficant immunologic reaction, and are therefore able to induce long-term expression of non-self-proteins . They are engineered to consist of the antibody gene expression cassette flanked by the AAV ITRs (inverted terminal repeats), which are the only part of the AAV genome present in the rAAV vector and are required for rAAV vector genome replication and packaging. Despite a relativley small packaging capacity of 5 kb, both heavy- and light-chain antibody genes can be incorporated into a single vector, either using a promoter for each gene cassette or a single promoter for expression with the heavy and light chain separated by a foot-and-mouth disease virus 2A peptide .
‘Immunization’ site selection
All studies to date have targeted skeletal muscle as the platform for transfection and antibody production. Muscle offers some significant advantages. It is easily accessible for localized vector administration, and some muscle groups can be removed in the event of mutagenesis or auto-immunity without functional consequence. However, muscle has certain disadvantages as well. It is a tissue that does not normally produce circulating proteins and therefore may not do it efficiently. It also contains antigen-presenting dendritic cells that could induce immune responses which might eliminate transduced cells or induce auto-immunity. Additionally, the removal of muscle tissue would likely have a significant effect on a subject’s lifestyle in the event of a potential unexpected VIP-induced pathology.
Other platforms have been considered. For example, some authors have suggested the liver as an alternative site . Unlike muscle, it is designed to secrete circulating proteins. It is also thought to be less immunogenic. However, transduction would require systemic administration of the vector, and there would be no simple means of eliminating expression in the event of a complication. Another potential site could be the salivary glands. While it is well-know that the salivary glands secrete proteins into the oral cavity, it may be less well appreciated that they have also been used as a platform to deliver therapeutic proteins, including the IgG Fc fragment and a host of other proteins, into the systemic circulation [36, 37]. Transgenes delivered to the salivary gland tend to favor being sorted either into the saliva or the blood, though it is currently a challenge to predict which direction a particular protein will sort . The major paired salivary glands are also easily accessible, and the parotid glands are encapsulated, which minimizes vector spillage into the general circulation. Futhermore, in the event of complications, the transfected glands could be removed without creating major disability.
Potential safety issues
Safety concerns associated with VIP include genotoxic events typically associated with any viral vector mediated gene therapy, such as inflammation, a random insertion disrupting normal genes, activation of proto-oncogenes, and insertional mutagenesis . There are many factors which affect the likelihood of developing a genotoxic event including the vector, the targeted insertion site, the transgene, the targeted cell type, and host factors including age and underlying disease . The risk of genotoxicity or carcinogenicity can potentially be decreased by selection of the promoter and the integration site, using novel techniques such as Clustered Regularly Interspaced Short Palindromic Repeat (CRISPR)-Cas9 (an RNA-guided gene-editing platform that allows for cutting of DNA in a specified gene), but much more work needs to be done to better characterize safety and efficacy of these methods .
As the purpose of VIP is to produce a monoclonal antibody, the possibility of producing a paraproteinemia similar to that caused by multiple myeloma, other hematologic malignancies, primary amyloidosis, or a monoclonal gammopathy of undertermined significance (MGUS) is a concern. The most benign of these is MGUS, but it has been increasingly recognized to have pathologic associations including is nephropathy secondary to monoclonal gammopathy of renal significance (MGRS), neuropathy, oculopathy, and dermopathy as well as possible associations with autoimmunity and coagulopathy and an epidemiologic association with early mortality from a variety of apparently unrelated causes [40–42]. Any of these conditions could result from a monoclonal gammopathy produced by VIP. However, it should be noted that MGUS is very common, occurring in 3% of the population older than 50 years old, and most of these associations remain either unclear or uncommon . However, the potential for autoimmunity should be of particular concern. It is possible that the monoclonal antibody could interact with self-antigen and either stimulate an autoimmune antibody that interacts with self-antigen or neutralizes the intended effect of the monoclonal antibody.
Vectored Immunoprophylaxis has demonstrated great promise in a variety of pre-clinical studies as a potential adjunct to vaccination in patients not able to respond effectively to immunization or as an alternative to vaccination for infectious diseases not effectively covered by current vaccines. The rapid identification of specific neutralizing antibodies is likely to increase the potential for this method. One could imagine uses for VIP such as an adjunct to vaccination for influenza in the elderly and immunocompromised, for HIV protection in high risk populations, or as part of a ring vaccination strategy in an outbreak of a disease such as Ebola. Many important questions remain, including the ability to produce equally effective clinical results in human trials, the duration of response, and the potential for side-effects. Mutagenesis at the site of transfection is a common concern, but the development of an immune response to the transgene product or the off-target binding of the antibodies are more likely scenarios, either of which could result in decreased efficacy of the procedure or a significant auto-immune reaction. Questions also remain concerning the best vector and the optimal tissue site for transfection. Despite these questions and concerns, the advantages offered in settings ranging from chronic protection of the aged or immunocompromised to rapid protection for early responders in the event of a bioterror or emerging infection event are significant and intriguing. Further pre-clinical and clinical studies are certainly warranted.
Adenovirus serotype 5
Human immunodeficiency virus
Influenza A virus
Immunoprophylaxis by gene transfer
Inverted terminal repeats
Recombinant adeno-associated virus
Respiratory syncytial virus
Single chain antibody
Simian-human immunodeficiency virus
Simian immunodeficiency virus
Deal CE, Balazs AB. Engineering humoral immunity as prophylaxis or therapy. Curr Opin Immunol. 2015;35:113–22.
Roush SW, Murphy TV, Vaccine-Preventable Disease Table Working G. Historical comparisons of morbidity and mortality for vaccine-preventable diseases in the United States. JAMA. 2007;298(18):2155–63.
Amanna IJ, Slifka MK. Contributions of humoral and cellular immunity to vaccine-induced protection in humans. Virology. 2011;411(2):206–15.
Luke TC, Casadevall A, Watowich SJ, Hoffman SL, Beigel JH, Burgess TH. Hark back: passive immunotherapy for influenza and other serious infections. Crit Care Med. 2010;38(4 Suppl):e66–73.
Luke TC, Kilbane EM, Jackson JL, Hoffman SL. Meta-analysis: convalescent blood products for Spanish influenza pneumonia: a future H5N1 treatment? Ann Intern Med. 2006;145(8):599–609.
Yang L, Wang P. Passive immunization against HIV/AIDS by antibody gene transfer. Viruses. 2014;6(2):428–47.
Luke T, Wu H, Zhao J, Channappanavar R, Coleman CM, Jiao JA, Matsushita H, Liu Y, Postnikova EN, Ork BL, et al. Human polyclonal immunoglobulin G from transchromosomic bovines inhibits MERS-CoV in vivo. Sci Transl Med. 2016;8(326):326ra321.
Rybicki EP. Plant-based vaccines against viruses. Virol J. 2014;11:205.
Loomis RJ, Johnson PR. Emerging vaccine technologies. Vaccines (Basel). 2015;3(2):429–47.
Lewis AD, Chen R, Montefiori DC, Johnson PR, Clark KR. Generation of neutralizing activity against human immunodeficiency virus type 1 in serum by antibody gene transfer. J Virol. 2002;76(17):8769–75.
Schnepp BC, Johnson PR. Vector-mediated in vivo antibody expression. Microbiol Spectr. 2014;2(4):AID-0016-2014.
Balazs AB, Chen J, Hong CM, Rao DS, Yang L, Baltimore D. Antibody-based protection against HIV infection by vectored immunoprophylaxis. Nature. 2012;481(7379):81–4.
Balazs AB, Bloom JD, Hong CM, Rao DS, Baltimore D. Broad protection against influenza infection by vectored immunoprophylaxis in mice. Nat Biotechnol. 2013;31(7):647–52.
Tutykhina IL, Sedova ES, Gribova IY, Ivanova TI, Vasilev LA, Rutovskaya MV, Lysenko AA, Shmarov MM, Logunov DY, Naroditsky BS, et al. Passive immunization with a recombinant adenovirus expressing an HA (H5)-specific single-domain antibody protects mice from lethal influenza infection. Antiviral Res. 2013;97(3):318–28.
Deal C, Balazs AB, Espinosa DA, Zavala F, Baltimore D, Ketner G. Vectored antibody gene delivery protects against Plasmodium falciparum sporozoite challenge in mice. Proc Natl Acad Sci U S A. 2014;111(34):12528–32.
de Jong YP, Dorner M, Mommersteeg MC, Xiao JW, Balazs AB, Robbins JB, Winer BY, Gerges S, Vega K, Labitt RN, et al. Broadly neutralizing antibodies abrogate established hepatitis C virus infection. Sci Transl Med. 2014;6(254):254ra129.
Skaricic D, Traube C, De B, Joh J, Boyer J, Crystal RG, Worgall S. Genetic delivery of an anti-RSV antibody to protect against pulmonary infection with RSV. Virology. 2008;378(1):79–85.
De BP, Hackett NR, Crystal RG, Boyer JL. Rapid/sustained anti-anthrax passive immunity mediated by co-administration of Ad/AAV. Mol Ther. 2008;16(1):203–9.
Flingai S, Plummer EM, Patel A, Shresta S, Mendoza JM, Broderick KE, Sardesai NY, Muthumani K, Weiner DB. Protection against dengue disease by synthetic nucleic acid antibody prophylaxis/immunotherapy. Sci Rep. 2015;5:12616.
Muthumani K, Block P, Flingai S, Muruganantham N, Chaaithanya IK, Tingey C, Wise M, Reuschel EL, Chung C, Muthumani A, et al. Rapid and long-term immunity elicited by DNA-encoded antibody prophylaxis and DNA vaccination against chikungunya virus. J Infect Dis. 2016;214(3):369–78.
Limberis MP, Adam VS, Wong G, Gren J, Kobasa D, Ross TM, Kobinger GP, Tretiakova A, Wilson JM. Intranasal antibody gene transfer in mice and ferrets elicits broad protection against pandemic influenza. Sci Transl Med. 2013;5:187ra172.
Johnson PR, Schnepp BC, Zhang J, Connell MJ, Greene SM, Yuste E, Desrosiers RC, Clark KR. Vector-mediated gene transfer engenders long-lived neutralizing activity and protection against SIV infection in monkeys. Nat Med. 2009;15(8):901–6.
Balazs AB, Ouyang Y, Hong CM, Chen J, Nguyen SM, Rao DS, An DS, Baltimore D. Vectored immunoprophylaxis protects humanized mice from mucosal HIV transmission. Nat Med. 2014;20(3):296–300.
Saunders KO, Wang L, Joyce MG, Yang ZY, Balazs AB, Cheng C, Ko SY, Kong WP, Rudicell RS, Georgiev IS, et al. Broadly neutralizing human immunodeficiency virus type 1 antibody gene transfer protects nonhuman primates from mucosal simian-human immunodeficiency virus infection. J Virol. 2015;89(16):8334–45.
Gardner MR, Kattenhorn LM, Kondur HR, von Schaewen M, Dorfman T, Chiang JJ, Haworth KG, Decker JM, Alpert MD, Bailey CC, et al. AAV-expressed eCD4-Ig provides durable protection from multiple SHIV challenges. Nature. 2015;519(7541):87–91.
Horwitz JA, Halper-Stromberg A, Mouquet H, Gitlin AD, Tretiakova A, Eisenreich TR, Malbec M, Gravemann S, Billerbeck E, Dorner M, et al. HIV-1 suppression and durable control by combining single broadly neutralizing antibodies and antiretroviral drugs in humanized mice. Proc Natl Acad Sci U S A. 2013;110(41):16538–43.
Tjelle TE, Corthay A, Lunde E, Sandlie I, Michaelsen TE, Mathiesen I, Bogen B. Monoclonal antibodies produced by muscle after plasmid injection and electroporation. Mol Ther. 2004;9(3):328–36.
Tjelle TE, Salte R, Mathiesen I, Kjeken R. A novel electroporation device for gene delivery in large animals and humans. Vaccine. 2006;24(21):4667–70.
Muthumani K, Flingai S, Wise M, Tingey C, Ugen KE, Weiner DB. Optimized and enhanced DNA plasmid vector based in vivo construction of a neutralizing anti-HIV-1 envelope glycoprotein Fab. Hum Vaccin Immunother. 2013;9(10):2253–62.
Gil-Farina I, Schmidt M. Interaction of vectors and parental viruses with the host genome. Curr Opin Virol. 2016;21:35–40.
Alonso-Padilla J, Papp T, Kajan GL, Benko M, Havenga M, Lemckert A, Harrach B, Baker AH. Development of novel adenoviral vectors to overcome challenges observed with HAdV-5-based constructs. Mol Ther. 2016;24(1):6–16.
Joseph A, Zheng JH, Chen K, Dutta M, Chen C, Stiegler G, Kunert R, Follenzi A, Goldstein H. Inhibition of in vivo HIV infection in humanized mice by gene therapy of human hematopoietic stem cells with a lentiviral vector encoding a broadly neutralizing anti-HIV antibody. J Virol. 2010;84(13):6645–53.
Luo XM, Maarschalk E, O’Connell RM, Wang P, Yang L, Baltimore D. Engineering human hematopoietic stem/progenitor cells to produce a broadly neutralizing anti-HIV antibody after in vitro maturation to human B lymphocytes. Blood. 2009;113(7):1422–31.
Nieto K, Salvetti A. AAV vectors vaccines against infectious diseases. Front Immunol. 2014;5:5.
Mellins ED, Kay MA. Viral vectors take on HIV infection. N Engl J Med. 2015;373(8):770–2.
Baum BJ, Alevizos I, Chiorini JA, Cotrim AP, Zheng C. Advances in salivary gland gene therapy - oral and systemic implications. Expert Opin Biol Ther. 2015;15(10):1443–54.
Racz GZ, Perez-Riveros P, Adriaansen J, Zheng C, Baum BJ. In vivo secretion of the mouse immunoglobulin G Fc fragment from rat submandibular glands. J Gene Med. 2009;11(7):580–7.
Baldo A, van den Akker E, Bergmans HE, Lim F, Pauwels K. General considerations on the biosafety of virus-derived vectors used in gene therapy and vaccination. Curr Gene Ther. 2013;13(6):385–94.
David RM, Doherty AT. Viral vectors: the road to reducing genotoxicity. Toxicol Sci. 2016;155 (2):315–25. doi:10.1093/toxsci/kfw220.
Glavey SV, Leung N. Monoclonal gammopathy: the good, the bad and the ugly. Blood Rev. 2016;30(3):223–31.
International Myeloma Working G. Criteria for the classification of monoclonal gammopathies, multiple myeloma and related disorders: a report of the International Myeloma Working Group. Br J Haematol. 2003;121(5):749–57.
Leung N, Bridoux F, Hutchison CA, Nasr SH, Cockwell P, Fermand JP, Dispenzieri A, Song KW, Kyle RA, International K, et al. Monoclonal gammopathy of renal significance: when MGUS is no longer undetermined or insignificant. Blood. 2012;120(22):4292–5.
Availability of data and materials
JWS and TAP contributed equally to the development of this manuscript. Both authors read and approved the final manuscript.
The authors declare that they have no competing interests.
Consent for publication
Ethics approval and consent to participate
About this article
Cite this article
Sanders, J.W., Ponzio, T.A. Vectored immunoprophylaxis: an emerging adjunct to traditional vaccination. Trop Dis Travel Med Vaccines 3, 3 (2017) doi:10.1186/s40794-017-0046-0
- Vectored immunoprophylaxis
- Immunoprophylaxis by gene transfer
- Vector-mediated antibody gene transfer
- Gene therapy
- Broadly neutralizing antibody
- Salivary gland | <urn:uuid:e71b22ec-6c1e-4320-b4e7-54a4f0d19690> | CC-MAIN-2019-47 | https://tdtmvjournal.biomedcentral.com/articles/10.1186/s40794-017-0046-0 | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669225.56/warc/CC-MAIN-20191117165616-20191117193616-00337.warc.gz | en | 0.887913 | 6,890 | 3.0625 | 3 |
New Ipswich is situated at 42° 44’ 53” N 71° 51’ 15” W in Hillsborough County in the south of the state of New Hampshire.
Population:- New Ipswich’s population, as at the 2010 census was 5,099.
How to get there:-
By road: From Manchester take State Highway 101 west, then State Highway 31 south to State Highway 123.
From Boston take US Highway 3 north to Nassau, then follow State Highway 101A west to the intersection with State Highway 101, before following directions from Manchester as above.
From the west take State Highway 101 east, then State Highway 123 on to State Highway 124.
There is no rail service to New Ipswich.
Nearest airport is Manchester-Boston Regional Airport, 35 miles from New Ipswich.Time Zone: Eastern Standard Time (GMT -5 hrs). Daylight saving time in summer + 1 hr.
Order of contents on this page: (Click on the links below)
As far so we know, no Native Americans ever made their homes in the New Ipswich area. They only came here to hunt. The tribe that probably came here most frequently would have been the Souhegan, one of the smaller tribes within the Pennacook Confederacy, which had its settlement at Amherst, 20 miles to the northeast of New Ipswich. The Souhegan River is a tributary of the Merrimack River that begins in New Ipswich. Nobody is sure what the name “Souhegan” actually means. The word itself can mean “a waiting and watching place”, which could have been applied to the fish weirs set across rapids where they waited to catch the fish. It could be a contraction of “souheganash”, meaning worn-out lands, referring to the more barren uplands around New Ipswich, or there is the word “souheganoe”, which means crooked, so the name Souhegan might mean “crooked river.”
Although they had remained neutral, the Pennacook people were attacked by British troops during King Philip’s War (1675-76), a general Indian uprising against the settlers, and the Pennacook had to abandon this area, and they took refuge in Quebec under French protection, where they became assimilated with other Abenaki exiles. (see also Ipswich, Massachusetts page for general information regarding pre-European New Hampshire & Massachusetts).
The land in the area that was to become New Ipswich was granted as a township six miles square by the General Court (Assembly) of Massachusetts on 15 January 1736 to a group of sixty people who lived in Ipswich, Massachusetts (approximately 50 miles away). They became proprietors of the land and each grantee had to arrange for the clearance and settlement of his plot within five years. In 1737 the first road through the area was constructed, which later became known as the ‘Old Country Road’. The following year, the first permanent settlement was established by Abijah Foster and his family.
Although the proprietors lived in Ipswich, Massachusetts, they did not necessarily need to settle the land themselves. In fact, only two of the original settlers came from Ipswich: Abijah Foster (1708-1759) and Henry Pudney; three-quarters of the first occupants before 1750 came from Middlesex County, Massachusetts, mostly from Concord and Littleton.
The township’s growth was limited in part by concern as to whether Massachusetts had claim to the land since this was disputed by the heirs of John Mason, the grantee of New Hampshire. It was not until 1745 that the area of New Ipswich was finally ruled to be in New Hampshire. The other factor against early settlement was the fear of Indian attacks. In 1748 all the residents, but Captain Moses Tucker, abandoned their homes when such an attack occurred and the meeting house was burnt to the ground.
The confirmation of the claim by the heirs of John Mason annulled the existing titles to the land so the proprietors had to procure another grant from the New Hampshire authorities. The first charter, known as the Masonian Charter, was issued on 17th April 1750, and clearly refers to the township of New Ipswich. The conditions of the new grant were very similar to the former grant, but now the proprietors had full title to the land.
The Masonian Charter established 30 new proprietors of whom only 13 resided in the township. Of the 30 proprietors, six were from Ipswich, Massachusetts, (Thomas Adams, Isaac Appleton, Robert Choate, Thomas Dennis, Abijah Foster and Henry Pudney) and, of these, only two (previously mentioned) actually lived at New Ipswich. There is no evidence that the others ever resided in the town, although the sons of Thomas Adams and Isaac Appleton did later come to live here.
The first written reference to “New Ipswich” is in the journal of the surveyor, Richard Hazzen, in March 1741. Although this name was in use from the 1740s, initially the town was incorporated as Ipswich on 9th September 1762. A second act of incorporation on 6th March 1766, however, reverted to the name of New Ipswich.
Although incorporation made little real difference to the existing governance of New Ipswich, it now placed it on a proper legal footing with elected officials for a fixed term of office. Hitherto, proprietors had met once a year and appointed officers to act for them “at their pleasure”. These were invariably men selected by the residents who, in the absence of the proprietors, met in committee to ensure that the public duties, such as maintenance of the highways, were conducted properly.
In 1801 the first woollen mill in New Hampshire was founded in New Ipswich, powered by the waters of the Souhegan River. Three years later the first cotton mill was established.Top of Page
The 33.1 square miles of New Ipswich also include the six villages of Bank, Davis, Gibson Four Corners, Highbridge, Smithville and Wilder. Early settlement of New Ipswich comprised scattered farmsteads three or four miles apart, but the nucleus of the community was always at the location that became Center Village (see New Ipswich Center Village Historic District, below). The town was divided into districts for “schooling” purposes in 1770. At first these were given geographic designations, such as “West”, “Northeast, “Middle”, “Southeast”, etc. The districts were frequently referred to after the owner of the most prominent house in the district, and later in the 19th century definitive names were given to these scattered communities. The derivation of these names is as follows.
Bank Village: This is just a mile east of Center Village on the west bank (hence its name) of the Souhegan River. This was an early name given in the 1750s to the houses grouped around the first bridge crossing the river on the Old Country Road.
Davis Village: This is half a mile northwest of Center Village, just south of the present Turnpike Road. It was first known as “Bakehouse Village” from 1785 when Samuel Batchelder (1755-1814) arrived from Salem and converted his premises into a bakery and store. His son, Samuel, continued the business until 1826. He then moved to Lowell, Massachusetts, where he became a prominent businessman in the State. After his departure, the name Davis Village was applied because Joseph Davis (1744-1838) used to hold weekly prayer meetings in his home, and also at the homes of other members of the Davis family; these meetings took place from 1810 to 1860.
Gibson Four Corners: Situated two miles south of Center Village on the west bank of the South Branch Souhegan River. Dr Stillman Gibson (1781-1838) moved from Ashby, Massachusetts to New Ipswich in 1812 and bought a farm in the present locality. Such was his reputation for treating sick domestic animals, that signposts were put in place directing people to his farm, thus giving rise to the name of “Gibson’s Village”. As this was near the crossroads junction of Ashby Road, running west to east, and River Road/Ashburnham Road, running north to south, the later designation of “Gibson Four Corners” came into use during the 20th century.
Highbridge: This is the community east of the present river crossing along Turnpike Road. It was first settled in 1750 by John Chandler, who entered a contract with the proprietors to build two mills there. A second bridge was built in 1752 “near the mills” just north of the original crossing point. It was constructed at a higher level to avoid the spring floods when the river was swollen from melting ice and snow. Hence it became known as “High Bridge”, as did the community that developed near to it.
Smithville: This community lies two miles southwest of Center Village. It was originally known as “Mill Village” after the mills built there in 1754 by Zachariah Adams. Jeremiah Smith (1797-1872), whose grandfather came to New Ipswich from Leominster, Massachusetts, in c.1764, bought the house of Ebenezer Fletcher, the largest in the locality, and made part of it a country store which became a postal drop. Hence, from about 1850 the community became known as “Smith Village”, and when a post office was opened there in 1892 it was officially renamed “Smithville”.
Wilder Village: This community is on Turnpike Road (Route 124) four and a half miles northwest of New Ipswich near the boundary with the town of Sharon. It takes its name from Peter Wilder whose family lived at Keene, New Hampshire. He was a chairmaker who moved here in 1810. He and his son-in-law, Abijah Wetherbee, established the Wilder Chair Factory at the junction of the Turnpike Road with Old Nashua Road. “Wilder Chairs” are now much sought after by furniture collectors. A small settlement grew up around the factory. The factory closed down in 1869 following a disastrous spring flood, but the settlement of Wilder Village survives today.
New Ipswich did not raise a town militia, as did most other towns in New England. Native Americans had never inhabited the area, so there was no deemed threat to warrant the formal establishment of a town militia. The settlers made very few preparations to meet an attack, and no public structure was ever built as a place of safety in such an eventuality. However, the New Hampshire Militia was first organised in March 1680, and after New Ipswich had been settled in 1735, men from the town are known to have served in that militia.
When news of the Battle of Lexington and Concord (19th April 1775) that started the Revolutionary War for independence reached New Hampshire, men from New Ipswich set out on 20th April along with other men from New Hampshire under Thomas Heald from New Ipswich as Captain. As volunteers, some men only signed up for a specific assignment and returned after two weeks service, whilst others enlisted for eight months. Capt. Ezra Towne of New Ipswich was given the task of forming a Company on 23rd April 1775. There were 35 men enrolled from New Ipswich and 30 others from the surrounding towns and villages; all of the officers were of the town, and it was called the “New Ipswich Company”. This New Ipswich contingent formed the 4th Company in Col. James Reed’s 3rd New Hampshire Regiment, and took part in the Battle of Bunker Hill (17th June 1775). Most men served until the departure of the British from Boston on 17th March 1776 (see Suffolk County Militia on Suffolk County, Mass. page of ).
Lieut.-Col. Thomas Heald and Capt. Ezra Towne also led men who were called up for service in campaigns along the Canadian border in February 1776, and there were further call-outs in February, May and July 1777. The general purpose of the local militia was to assist the Northern Continental Army in its defence against incursions from Canada, particularly from Fort Ticonderoga on Lake Champlain. By 1778 the urgency of the situation had eased, and there were fewer calls on the men. No battles were fought on New Hampshire soil. In all, out of a population of 1,033, New Ipswich sent about 275 men to serve, of whom one was killed in action, 8 or 10 were severely wounded, and another 20 died from sickness (mainly smallpox) while in the army.
Following the American Revolution, State law required that the town Company parade and drill annually every May on Muster Day. However, after years of successful application, Muster Day became more a day of celebration, and the old militia law was not followed with any vigour. Finally, in 1855 the need for a militia organisation was abolished and the muster was no longer required.
Added to the National Register of Historic Places in 1991 due to its architectural significance, the New Ipswich Center Village Historic District is situated around the original settlement area in New Ipswich. With examples of Georgian, Federal, Greek Revival, Gothic Revival, French Second Empire & Shingle styles of architecture, the Historic District chronicles the periods of historical development of the town from 1735 to 1930. Although the area is mainly residential, the development of the commercial, industrial, professional, religious & educational life of the town is also represented. The district consists of around 150 properties, most of which date from before 1850, although some buildings are of more recent vintage.
Situated south of Turnpike Road, the New Ipswich Center Village Historic District is located in the general vicinity of Main Street, Porter Hill Road, Manley Road, King Road & Old Country Road. Included within the area are six schools or former schools, two churches, two former post offices, town hall, a nineteenth century meeting hall, & several former hotels, taverns & shops.
Two of the earliest buildings still standing date from the 1760s: Reverend Stephen Farrar House on Turnpike Road, which was built around 1762, & Preston-King House on King Road, built in 1763-64.
The first New Ipswich Academy building, which was built in 1789, stands close to the original Meetinghouse on what was then known as Meetinghouse Hill (modern day Porter Hill Road). It was converted to a private residence in the nineteenth century. (See Appleton Academy section, below).
With the construction of the Third New Hampshire Turnpike in 1800, a number of other Federal style houses sprang up along Main Street. As well as the Barrett House (see section below) these included the Locke-Quimby House, the Matthias Wilson House, the Farwell-Fox House, the Abel Shattuck House, and the Farwell-Spaulding House, Each of these is a five-bay, two-and-a-half storey house. Others were built in the Greek Revival style, such as the Tolman-Sanderson House, the Jefts-Taylor House, the Dolly Everett House & the Shedd-Preston House (Friendship Manor).
Two school buildings also date from the first half of the nineteenth century. One, the Old Number 1 School House, was built in 1829 & now serves as the New Ipswich Historical Society’s premises. The other, a Greek Revival style building, was completed in 1842.
Another structure in the Greek Revival style is the Baptist Church at the junction of Old Country Road & Main Street, which dates from 1850. It was later bought by the Apostolic Lutheran Church, which was established in the town in 1905; a result of the influx of people of Finnish descent during the early twentieth century.
The Homestead Inn, established around 1895, on the corner of Old Country Road and Main Street, was once a charitable institution established by the Church of the Good Shepherd, Boston. It operated as such until 1915. Thereafter it was purchased by James C. Barr, who ran it as an hotel until 1929, when it burnt down. Barr & his extended family also accumulated the majority of the land along the east side of Main Street, between Old Country Road & the Turnpike. Previous generations of the Barr family had also had a substantial influence on New Ipswich since the arrival of James Barr in 1775 (great grandfather of the above).
From the late nineteenth century until around 1930, New Ipswich became a popular resort with summer visitors, & two hotels - the Appleton Arms (later known as Appleton Inn or Manor) & Clark's Hotel (later known as the 1808 House) - plus a number of boarding houses, sprang up in the district. Many houses in what would become the Historic District were, during this period, used as summer only dwellings. Most were already existing buildings converted for purpose, but a few new houses were also erected. One such that was built at this time was an elegant Colonial Revival style summer house; built for Samuel Tarbell Ames & family, & situated close to the Appleton Academy.
Situated on Main Street & also known as Forest Hall, Barrett House is a Federal style American mansion, built for Charles Barrett Jr. by his father as a wedding present around the year 1800.
Charles Barrett Sr. had arrived in New Ipswich around the year 1764 from Boston. He later invested in the first cotton mill in New Hampshire, at Bank Village, New Ipswich. Charles Jr. was also involved in the cotton manufacturing industry.
The house remained in the Barrett family until 1948, although after 1887 the house was used as a summer residence only. From 1916 onwards it remained unoccupied & boarded up, until in 1948 the house was donated to Historic New England, who undertook extensive restoration work before opening it as a museum in 1950.
Within the 75 acres of grounds, consisting mainly of woodland & meadow, can be found an 1840s gothic revival summerhouse. The grounds are also open to museum visitors.
The house was used as a location in the 1979 film ‘The Europeans’, based on the novel by Henry James & starring Lee Remick.
Have you signed the Guestbook yet?
Chartered in 1789, New Ipswich Academy was the second oldest academy in New Hampshire.
The first New Ipswich Academy building was built that same year, & was in use until 1853, when the school relocated from Meetinghouse Hill to a new building constructed in the late-Federal style, further along Main Street. This building burnt down in 1941, but was rebuilt on the same site.
According to the Academy’s Act of Incorporation, the school was to provide education “in the English, Latin and Greek languages, in Writing, Arithmetic, Music and the Art of Speaking, practical Geometry, Logic, Geography, and such other of the liberal arts and sciences or languages, as opportunity may hereafter permit.”
It was later renamed Appleton Academy after its benefactor Samuel Appleton (see below), who, amongst other things, donated a library. Other members of the Appleton family also made contributions to the school.
Appleton Academy closed in 1968.
Born in New Ipswich in 1766, merchant & philanthropist Samuel Appleton was descended from the family that left Suffolk, England in the seventeenth century & settled at Ipswich, Massachusetts (see Appleton Farms on the page). With his brother Nathan, he established S & N Appleton in 1794; an importing company based in Boston. He later opened cotton mills in Massachusetts. After visiting Europe in 1799, he spent much of the following twenty years in Britain. In 1823 he retired from business & devoted much of his wealth to charities, including the Appleton Cabinet at Amherst College, the Appleton Chapel at Harvard University & the Appleton Academy in New Ipswich (see above). He died in 1853, leaving large sums of money for ‘scientific, literary, religious & charitable purposes’.
The city of Appleton in Wisconsin is named after him, as was the 808 ton ship the Samuel Appleton built in Medford, Massachusetts in 1846.
Brother of Samuel, Nathan Appleton was born in New Ipswich in 1779. After going into business with his brother, he was instrumental in introducing the manufacture of cotton on a large scale into the United States; a factory that he helped establish at Waltham, Massachusetts in 1814 being the first to use a power loom. He was one of the founders of the city of Lowell, which grew around the mills he helped establish at Pawtucket Falls, Massachusetts.
Nathan Appleton was also a politician; being a member of the general court of Massachusetts on several occasions from 1816 onwards. He was elected to the US House of Representatives in 1831 & again in 1842. He died in Boston in 1861.
One of Nathan Appleton’s daughters, Frances (1817-1861) known as “Fanny”, married the poet Henry Wadsworth Longfellow (1807-1882), famous for “The Song of Hiawatha” & “Paul Revere’s Ride”.
Landscape artist Benjamin Champney was born in New Ipswich in November 1817. He is widely considered to be the founder of the White Mountain school of painters; a group centred around the North Conway area of New Hampshire, approximately 100 miles northeast of New Ipswich, during the second half of the nineteenth century.
After training as a lithographer in Boston, Champney went to study in Europe in 1841. He returned to America in 1848 & two years later set up a studio in the White Mountains that was to attract artists from all over the country.
Champney was a founder of the Boston Art Club in 1855; becoming its president the following year. His autobiography “Sixty Years’ Memories of Art & Artists” was published in 1900. He died in Woburn, Massachusetts in December 1907.
Born in New Ipswich in 1805, Augustus Addison Gould, graduated from Harvard in 1825 & obtained his degree as a doctor of medicine in 1830. He became president of the Massachusetts Medical Society in 1864; a position he was top hold until his death in 1866.
He is more famous, however, as a naturalist; specialising in the fields of Malacology (the study of molluscs) & Conchology (the study of mollusc shells). As well as writing prolifically for various scientific publications & journals, such as those of the Boston Society of Natural History, his most important published works are Mollusca and Shells vol. xii, 1852 of the United States Exploring Expedition (an exploring and surveying expedition of the Pacific Ocean unde’rtaken from 1838–1842 by the US Navy) & his Report on the Invertebrata published in 1841.
Within the 33.1 square miles that make up the town of New Ipswich stands the 1881 ft peak known as New Ipswich Mountain, part of the Wapack Range (also known as the Pack Monadneck Range). To the north lies Barrett Mountain & to the south Stony Top, which is part of Pratt mountain. The Wapack Trail traverses the area (see below). This is a 21 mile long hiking trail which spans the border between Massachusetts & New Hampshire.
Mostly wooded, New Ipswich Mountain has rocky outcrops near the summit & offers spectacular views of the surrounding area.
View from New Ipswich Mountain
The Wapack Trail stretches from south to north, starting at Ashburnham, Massachusetts & ending at Greenfield, New Hampshire; a distance of 21 miles. From Ashburton, it passes through Ashby, then crosses the state border into New Hampshire & passes through the towns of New Ipswich, Temple, Sharon & Peterborough, before reaching its conclusion in Greenfield. It also passes through the Wapack National Wildlife Refuge & Miller State Park. The southern part of the trail overlaps with the Midstate Trail.
The seeds of the Wapack Trail were sown in Jaffrey, New Hampshire in 1922, when Allen Chamberlain & Albert Annett had the idea of creating a trail beginning at Mount Watatic & following the ridge of what were at the time known as the Boundary Mountains, along Pratt, New Ipswich, Barrett & Temple Mountains to North Pack Monadnock.
That summer, Annett, together with Frank Robbins & Marion Buck (Davis) began cutting the trail, which opened the following year. The name Wapack was taken from the ‘wa’ at the beginning of Watatic, & ‘pack’ from North Pack Monadnock; signifying the start & finish of the trail. Since that time, the mountain range itself has become known as the Wapack Range. The trail is now maintained & overseen by the ‘Friends of the Wapack’; a non profit organisation formed in 1980.
Please sign the Guestbook
Straddling the border between the towns of New Ipswich & Rindge, the Wapack Wilderness is a 1,400 acre site featuring old-growth forest, rocky ridges & wetlands. The area is home to a wide variety of wildlife species including moose, bobcat, beaver, otter, white-tailed deer, coyote, & red fox. The land is owned by Hampshire Country School, formerly known as Cheshire Place; a private school just over the border in Rindge.
With more than 60 acres of woodland, marshland & hiking trails, the Nussdorfer Nature Reserve is situated on Route 124, one mile from the junction with Route 123. The reserve includes Hoar Pond, which has a picnic site & can be used for canoeing & fishing.
This area, within New Ipswich, was opened in 1972 & is the brainchild of Al Jenks. Centred around Barrett Mountain, the Windblown Cross Country Ski Area is traversed by the Wapack Trail & includes 25 miles of trails for cross country skiing, graded from easiest to most difficult. Some of the trails are also suitable for snowshoeing. During the off-season the trails can also be used by hikers. The landscape includes woods, fields, valleys & ponds, with scenic views to be had of Mount Monadnock.
Formed where the West Branch Souhegan & the South Branch Souhegan meet in New Ipswich, the Souhegan River flows 31 miles through the towns of Greenville, Wilton, Milford, Amherst & Merrimack, where it joins the Merrimack River. The river is popular with anglers; with native species such as brook trout, smallmouth bass, yellow perch & dace being supplemented with stocks of rainbow trout, brown trout & Atlantic salmon introduced by the New Hampshire Fish and Game Department. The lower reaches of the river, especially in the Greenville & Wilton areas, east of New Ipswich, are also renowned for white water rafting & kayaking. The Souhegan is also an important water supply for the region & a generator of hydro electric power.
New Ipswich is situated in the south west corner of Hillsborough County, which is the most populous & densely populated county of New Hampshire. Bordering Cheshire County in the west, Sullivan to the northwest, Merrimack to the north & Rockingham in the east, the southern border is with Middlesex & Worcester Counties, over the state boundary in Massachusetts.
Hillsborough was one of the five original counties of New Hampshire in 1769 & was named after Wills Hill, Viscount Hillsborough, who was British Secretary of State for the Colonies at that time. The twin county seats are Manchester & Nassau, both to the east of New Ipswich. | <urn:uuid:4707bf0e-e0dd-412a-a171-16e55c5b0cc4> | CC-MAIN-2019-47 | http://www.planetipswich.com/newipswichnh.htm | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667945.28/warc/CC-MAIN-20191114030315-20191114054315-00457.warc.gz | en | 0.978159 | 5,889 | 2.796875 | 3 |
The First Hundred Years, 1911 – 2011
1) Humble beginnings
2) The area
3) The first church building
4) The renovated church
5) The lost minute book
6) The vote for union
7) The declaration
8) Settlement of back taxes
9) Rev. D. A. Fowlie
10) Financial struggles
11) The Burn’s supper
12) The Youth Group highs
13) Communion cards
14) Death of the Clerk
15) After the war
16) The Common Era
17) The new building
18) The Rev. F. W. Metzger years
19) The illegal tenant
20) The new Christian Education Centre
21) Good news
22) The donated building
23) The call, finally
24) The St. Paul’s people
25) The Taiwanese
26) Picking a Pastor
27) A new pastor
28) User groups
29) The road ahead
The following was compiled from many pages of original notes by early writers and from the Session minutes and Board of Managers records.
The city of Vancouver was incorporated on April 6 1886 and the south boundary was approximately where 16th avenue runs today. South of 16th avenue was considered forest and likely would never be needed or considered as part of the new city. Little did our founding fathers know about the desire to live in this most beautiful area of Canada and that in a short time it would be flooded with people from all corners of the world.
Pioneers came, economic migrants and refugees came and the city grew at an exponential rate. With this influx of people came the need for infrastructure, homes, schools and churches. Some of the earlier churches in the new city were stone but in the expanding areas of the city they were hastily built wooden structures usually built by the volunteers of the congregation itself. In reality of course the congregations worshiped in someone’s house until they were big enough to undertake the challenge of building their own church.
In 1891 much like the Sky train a hundred years later, the Inter Urban railroad, on it’s way to New Westminster was the impetus for growth in many clusters along it’s way. Cedar Cottage was one of these areas that had shops and even a post office among its boundaries.
In 1911 Victoria ‘Road’ was newly developed and growing quickly for the ever-expanding City of Vancouver. The streets were of course gravel but it had wooden sidewalks, raised to keep the horses and carts from interfering with the pedestrians.
The neighbourhood housing was usually two stories on small 33’ building lots and not every lot was built on right away leaving large plots of trees and virgin forest.
Much like any other frontier church the congregation of St. Columba had its humble beginnings. In 1911 Rev. J. C. Madill of Cedar Cottage started it as a mission and they met for worship in Tecumseh school which at that time was a one room school.
A young man, S. McLean recalls being the first student there. He states in a letter to Rev. Fowlie in 1938 that “The first Sunday I went there a young lad was sitting at a small portable organ, his name was Dougald Beswerethwic. The next Sunday they also had Mr. Bray with a violin to help in leading the singing.”
Mr. McLean then states they had to leave the school (presumably because a new school was to be built) and move into a store on Victoria Road.
This must have been a larger building or stock room and there they had two worship services and Sunday school.
This venue was soon not able to accommodate the growing congregation and the next meeting place was the “Manual Training Building” at Tecumseh school. —- It appears to this writer that they moved back to the newly built Tecumseh school.
After two years St. Columba was reported as a student mission field with 39 members and it agreed to unite with Livingstone Church as a Home Mission Field. Livingstone Church was another Presbyterian church on 54th avenue and close to Kerr Street.
It is said that there were two hundred and fifty families of Scottish heritage in the area at that time. With their mostly Presbyterian heritage they provided a base to draw from and Church planting was a natural fit.
The First Church
The congregation grew quickly and soon was looking for a bigger space for worship or possibly to buy a lot on which to build a new church. A site consisting of two 33’ residential lots was obtained at 1796 east 45th avenue at Gladstone Street (which due to a later survey and realignment was renamed 2196 East 44th avenue).
With a lot of volunteer labour a small non-basement building was quickly erected and the congregation had their first worship service there on March 12th 1913.
From an article in the May 27 1915 issue of The Presbyterian I quote, “At the new year, 1913, church building operations began, and under Mr. D. Brown, and largely by volunteer labour, a neat little church was erected and opened for service on the first Sunday in March.
The opening services were conducted by Dr. E. D. McLaren and Supt. G. A. Wilson, followed on the next Sunday by Prof. Taylor and Jno A. Logan. Those in charge of this young congregation are Mr. Arch. McLean, who was missionary for a year from April 1912, to April 1913.”
It was the young newly ordained Rev. Archie McLean who had just arrived from Scotland that named the church St. Columba. St. Columba was his favourite missionary who came from nearby Ireland and whose remains are interred on the Island of Iona off the rocky west coast of Scotland.
The Second Church
About nine years later in 1922 the need to accommodate the growing Sunday school prompted the expansion of the new building.
The minister in charge was Rev A.G. McPherson who it appears came to St Columba a year or so earlier as stated supply.
A supporting quote for the expansion from the Jan 19 1923 minutes reads, “17 adherents and 5 members of other churches were admitted into the full membership of St. Columba on motion of Mr. H. Ross and seconded by Mr. Hugh McDonald”.
So the little congregation was still growing and it appears the volunteers were again mustered into a building crew.
It is not known if the new work was an extension to the original “neat little church” as the first church was described. So it’s presumed at this writing that the existing church was raised up and the basement dug out from underneath. The 1953 photo shows a large two-story building with a bell tower.
A note in the memoirs of long time member Kay Berry says that the bell tower was added to the existing church. So we have to conclude that the original church was raised and a full basement constructed underneath. The bell tower was added to the roof with the aim of installing bells at a later date, an ambitious plan that never materialized.
The Lost Minute Book.
When the renovation was completed is unknown but a clue is in the session minutes of March 14 1923. It notes “The old minute book having got lost during the alterations to the church, it was moved by Mr. McMillan seconded by Mr. Adamson, that a new book be purchased.”
The word alteration suggests that the building was extensively renovated in late 1922-3.
The Rev A. G. MacPherson was inducted as minister on Nov 27 1923 and served there until he was “transferred to St. Andrews Church, New Westminster” in the summer of 1925.
It is said that Rev A. G. was a handsome man and all the girls were at least hopeful, but were later disappointed when he and the daughter of Rev. R. G. McBeth, minister of St. Paul’s PC Vancouver announced their wedding plans for later in 1925.
The Vote For Union
1925 saw another challenge to the congregation when news came down from the top to hold a vote on whether to join the Methodists and the Congregationalists in forming the United Church in Canada.
The minutes of June 30 1924 notes that 250 ballots were mailed out to the members and adherents and of those that were returned a majority were in the negative.
For some reason this method of voting was not acceptable and at a congregational meeting on January 5 1925 the membership list was certified and of the 78 ballots cast 68 were against the union 9 in favour and 1 spoiled ballot.
The Declaration Jan 19 1925
“It was then moved by Mr. McDonald seconded by Mrs. Johnstone that this church goes on record as opposed to entering the United Church of (sic) Canada, but remains with the Continuing Presbyterian Church in Canada”.
Mr. McDonald, obviously a student of prudent church decorum, then declared that he was opposed to the enthusiasm that greeted the result.
Some of those in favour of the union left to attend Wilson Heights United, which up until that time was a Methodist church.
Settlement of Back Taxes
It seems that life in the late 20s and early 30s would not be considered as part of “the good old days” as finances were so tight there was a continual struggle to pay bills.
At a Board of Managers meeting Oct 7 1931 there was a motion “That taxes for one year be paid on the church property in order to save it from the tax sale”
Rev D. A. Fowlie
In 1938 Rev. D. A. Fowlie wrote to the board of managers stating that he was being short changed on his stipend by $40 and asked the board to look into this item “of arrears”
He stated that his rent was $25 per month and when he received just $40 as stipend the margin was very small.
The boards chairman Mr. A. Hurley in answering stated “the standing agreement was for all plate collections up to $50 be forwarded to the minister and that if $50 was not received in any one month the check (sic) would be for the amount taken only”. It seems that July and August were slow months even back then.
The next meeting records that Rev. D.A. Fowlie was transferred to Buchannan Church.
Other austerity measures saw the insurance valuation reduced from $4,000 to $3,000, which realized a saving of $3.50.
There was what seemed like a perpetual motion to buy a half ton of coal and a load of wood for the heating.
Without stereotyping their ancestry I can only say that the congregation was very frugal and always had a difficult time finding enough money to pay the minister’s stipend. This became more evident during the late thirties when the “Great Depression” hit. The ministers came for a short time and left and sometimes students from “Westminster Hall” filled in for years at a time. Westminster Hall was a newly created Presbyterian ministerial college at the University of British Columbia. In some circles the graduates from there were considered deficient, not having been educated at the Presbyterian Knox College in Toronto.
In Dec 1938 a letter from the PCC Toronto citing the indebtedness of the congregation to the above fund. “The loan has been outstanding for a long time and funds are greatly needed to assist struggling congregations.”
(I think someone was missing the point here.) The total of the loan was $1,350 and no interest or principal had been paid.
Another query from the Finance Committee of Presbytery states that “Your allocation to Presbytery and Synod Fund for the current year was $5 on which nothing has been paid.
Your attention to this matter at an early date will be greatly appreciated”.
Another interesting statement found was that “owing to low attendance there was no Sunday School in July and August” 1938.
The congregation continued to struggle through the highs and lows of church financing, which resulted in a lot of pulpit supply and very few years of having a full time minister.
While the total receipts for the church in 1938 was only $776.58 the activities held there were quite surprising.
The group headings on the annual report show there was a report from the; Board of Managers, Women’s Guild, Young Peoples, Sunday School, Young Women’s Society, C.G.I.T., Women’s Missionary Society, Boy’s Brigade, The Life Boys, and Mission Band.
A by product of this energy leads to a complaint by the Board of Managers that some doors in the basement were ripped off their hinges. It seems like the cliché “Boys will be Boys” was well founded back then as well.
The youth group highs
The 1930s were tough years but the people pulled together, it seems, for the sake of their kids. While the records show 20 to 50 at the worship services there were about 100 kids in Sunday School. Today we would say that the parents were all working, but we must remember that in the 1930s nobody worked on Sunday. So we have to look at possibly large families or while the records show a small number of communicants there was likely a large number of adherents that were not allowed or didn’t want to partake of communion.
The thirties and forties were good years for the youth groups.
The late forties were uneventful apart from the Sunday school being between 45 to 77 kids. Quite low considering the highs of 150 – 175 in the 1920s
The Burns Supper
One highlight on the social calendar for the Scottish people was the Burns suppers and it seems like this event was big at St Columba. The ladies were quite proficient at preparing this meal and at one time there were over 300 guests for dinner making three sittings necessary to accommodate the crowd. For Jan 1935 dishes were borrowed from St Thomas Anglican on 41st avenue. Of course an evening of Scottish entertainment followed and this social evening became big news around the city.
One item that never escapes being noticed in the session meeting minutes is the distribution of the communion cards. These cards were a tangible sign that the holders were in good standing in the church, and as such were allowed to join in the communion worship service. The cards were distributed before the service and collected at the door as the members came in. Another item that doesn’t go unnoticed is, a few days before every communion service a preparatory Service” is held for the members and those preparing to become members.
Before the use of cards, the pass was a metal token and it has since become a passion for some members of the P. C. to collect these metal communion tokens.
Death of the Clerk and S. S. Superintendent
Mr. R. McKillop, was a respected member of session, who served as clerk from 1931 – 1945. Citing ill health he asked several times in his later years, to be relieved of his work as Sunday School superintendent. Both times after some discussion he consented to continue for another while. A few months after his last request a notice of his death on July 12 1945 is noted in the minutes.
This reminds me of the tombstone epitaph that says, “I told you I was sick”.
From studying the handwriting it appears that the minister Rev. J. C. McLean-Bell acted as moderator and clerk of session for a year before Mr. McKillop’s passing and that Mr. McKillop just signed as clerk.
After the War
The war years took their toll on the welfare of the church, draining it of manpower and finances to the point that to make session work assessor elders had to be brought in from neighbouring congregations. The late forties were fairly stable with Rev. J. C. McLean-Bell as Interim Moderator of Session until 1949.
It was this Rev. McLean-Bell that was quite bitter that St. Columba was not able to pay off the Presbytery loan they made for rebuilding the church in 1923. He was a strong advocate for closing the church but the stubborn members clung to the idea of maintaining their church where it was.
Rev. J. E. Sutherland was placed as Interim Moderator in the summer of 1949 and stayed until he accepted a call to Ontario in Dec 1950.
The Common Era
Moving into the common era, or the one some of us remember, is like, yeah, I remember that.
On September 10 1951 a student minister called Calvin Chambers preached the service. Rev. Chambers is now retired and is on the Appendix to the Constituent Roll of Presbytery.
Rev. McLean-Bell came back to moderate the session again and then Rev. Thomas Murphy was placed by order of Presbytery.
The fifties continued with interim moderators and pulpit supply and it seemed like the congregation was always on life support. Then the building began to crumble and repair became too much to handle. Eventually the city condemned the building so it had to go.
The New Building
On April 10 1957 comments were made about raising $60,000 to re build the church. But on Sept 1 1960 a letter was sent to Toronto (Admin. Office of the PCC) regarding a mortgage for $1,350 plus $100. This figure is too small for the present financing scheme so it must be an outstanding mortgage from the 1923 renovations.
With a concerted push by the Honorable Judge A. Manson financing was arranged with loans from various boards and the sod was turned on May 10 1961.
With construction underway worship services were held in the local community hall on 43rd avenue.
In the fall of 1961 the building was finished and the dedication service was held on the 28th Oct 1961
Usually there is a called minister in the congregation when a building program is undertook but St Columba was surviving on pulpit supply for many years before and after construction.
The Metzger Era
Dec 2 1962 is the first session where Rev. F. W. Metzger appears as interim moderator and on Feb 3 1963 there is discussion about asking the mission board about appointing him as minister of St. Columba and the nearby St Matthews.
Soon the session minutes became long on the things that needed to be done. A nursery was to be provided, infant and adult baptisms were quite common, delegates were appointed for leadership training and elder’s districts were established. A new high of 60 members took communion on Oct 2 1966.
On Jan 24 1967 after the AGM the session was informed that Rev. Metzger was now available for appointment. The congregation unanimously agreed to ask the Mission board to appoint Rev. Metzger as their ordained missionary. On June 1 1967 session was advised of Rev. Metzger’s appointment as minister of St. Columba and St. Matthews.
At a congregational meeting held after the worship service on Nov 19 1967 motions passed were;
That we respectfully request the Presbytery of Westminster to authorize and approve the speedy amalgamation of St. Matthews and St. Columba, not later than Dec 31 1967.
That the congregation petition Presbytery for permission to build a new St. Matthews Christian Education Centre on the adjoining property recently purchased by St. Matthews.
That the amalgamated congregations petition the Women’s Missionary Society to relocate it’s Christian Education to St. Columba and finance the building expenses of a new St Matthews Centre in full or in part from the sale of it’s present property at Nanaimo and Newport avenue.
At the Jan 2 1968 meeting of Presbytery they responded to St. Columba and approved the amalgamation of St. Columba and St. Matthews creating a congregation of 90 members.
After the morning service on March 16 1968 a request was made for $14,500 from the Women’s Missionary Society Legacy fund for the St. Columba church extension fund.
The Illegal Tenant
March 1969, Elders reported that someone was given a key given in error by a congregant and illegally occupied the house immediately west of the church recently purchased by St Matthew’s congregation. This person moved out when asked but was asked for a donation to offset the property tax he triggered by his occupation of the property.
A month later it was actually moved and seconded to sell the property but at the next meeting it was agreed to demolish the building as it was encroaching on the neighbour’s side yard.
Not long after that the building was demolished and the next round of appeals for loans and grants pursued.
The New Christian Education Centre
The continual push to build up the work at St. Columba got another boost when Presbytery struck a committee to work on the extension of St. Columba. The committee reported back on April 13 1972 and suggested that the church get building plans drawn up and ask for quotes from two or three contractors and asked also as to how the new building would be used. Some other suggestions they had were, to “ask for a Government Grant of approximately $10,000 with no strings attached” and approach the Vancouver Foundation, the Spencer Foundation and the Block Bros. Foundation. No word of these requests were ever recorded, so we have to assume they were fruitless.
The building fund grew slowly and it was decided to pay off the loans from the Board of World Missions. Even though this would set the fund back a bit it would look better to not have any outstanding debts. Finances seemed under control but then along came a request to join the campaign to raise $100,000 by Camp Douglas for their new extension work. Another hit came when the Board of World Missions informed the session to raise the stipend and travel allowance of their minister. The lack of finances dictated the speed of the developments so it wasn’t until 1974 that there was a break in the stalemate.
The moderator reported that the old mortgage loans had been paid off and the discharge papers arrived from the church head office in Toronto. Rev. J. C. McLean Bell could now rest easy as St Columba no longer owed presbytery anything, a position they proudly claim today.
The next piece of good news was a donation from a member of the congregation for $3,000 to be used as needed. It was reported that this amount tripled the balance in the church account.
The Donated Building
Not mentioned previously was a request to the City of Vancouver to donate a fire-damaged portable building that was one block away in the nearby Orchard Park housing complex. So it was with glee that on Dec 11 1974 the Moderator read a letter from the City Mayor Mr. Art Phillips that the building would be donated to the congregation of St. Columba. All they had to do was be responsible for removing it to the church property. To keep everything in proper Presbyterian order Presbytery was asked if the congregation had their permission to accept the building as a gift from the city.
The building committee was activated, permits pursued, the foundations formed and poured and then came the big move as reported on April 17 1975 by Nickel Bros. house moving company.
With grants to hire local craftsmen and donations of material and some labour everyone’s attention was then focused on this building to get it from a burned out shell to a useable state.
Like as never before the need for finances became more acute and requests for funds were plentiful. Four applications were put forward for a total of over $25,000, but no mention is made of how many were successful.
There was however some good new in this area. A $5,000 offer from the Westminster Foundation of Religion and Mental Health and a $1,000 offer from The Biblical Museum of Canada were made to the church. Both these entities belonged to Rev. Metzger and there were strings attached. Those being the Foundation could use the building for their counselling activities and the minister could house his museum artifacts in the building.
The work continued and eventually the “lounge” was turned into a fascinating display of tangible items that bolstered the gospel we had heard so many times but always had to imagine what the scenes were like.
The Call, Finally
Rev. Metzger had served in St Columba for many years, pushed the congregation to higher performance and had orchestrated this latest piece of building expansion, all this while being a supply from the Home Mission Board.
So at a specially called congregational meeting on Aug 24 1975 it was approved unanimously to extend a call to the Rev. F. W. Metzger to be their minister. His induction to the congregation of St. Columba took place on Feb 26 1976
Steady growth pursued these events and on Sunday evenings Christian movies and educational films and talks, took the place of the usual services. Numerous Bible Studies occurred during this time. The major study was the Bethel Programme. This was an intense two year study of the Bible. Nine leaders from the congregation went to Madison Wisconsin for one week of intensive training and for two years almost everyone in the congregation was involved in the programme.
The St. Paul’s people
The arson destruction of St. Paul’s Presbyterian church at 18th ave and Glen Drive in 1976 saw about 19 adults and children transfer to St. Columba. Of those, Cameron Hart, June McNeil and Ross and Heather McClelland still remain.
In 1977 the preaching platform was raised a step and the front of the sanctuary got a facelift with new wood paneling decorations and sound equipment. A new carpet was also installed. The names of Ron Morey and Ross and Harry McClelland were mentioned in a vote of thanks.
Artifacts that came to St. Columba then were the Burning Bush pulpit fall placed and dedicated in the sanctuary on March 8 1981. The beautiful baptismal font then stood as a model for elder Ross McClelland to design and manufacture in 1986 a matching communion table and chair for use in St. Columba.
Part of the financial assets from the dissolved St. Paul’s congregation came to St. Columba and was used to purchase the 33’ lot on 45th avenue now used as a parking lot. The rest went to Langley PC and Coquitlam PC to help them buy property.
It could be said that the early seventies saw the beginning of a demographic shift in the makeup of Vancouver. This caused a high demand for property in the city, which made it difficult to go out and build a church like it happened in 1912. So at the regular session meeting on May 20 1979 a request was presented by Rev. Wen Yen Peng to use the lounge for a Mandarin language worship service for the Taiwanese people in the city.
This was eventually approved. The descendants of this group now form the ever growing VTPC and the BTP.
The demographic shift and increasing property values in the city contributed to the difficulty Vancouver’s younger generation had when they tried to live close to their home congregation. The flight to the suburbs saw an increase in the congregations there but sounded the death knell for some of the city’s congregations.
Growth became static but the congregation grew spiritually even though there were resignations from the session and the various boards of the church.
In 1993 Rev. Metzger retired from active ministry and the search began to find another pastor for the flock.
Picking a pastor
One of the most difficult areas of congregational life is finding and picking a new Pastor. Electronics were not used as much back in the nineties as they are today. We can now use U Tube, Face Book, Skype, DVD, video links and many other electronic possibilities to help you get to know the person who will be to you, like, “closer than a brother”. A search committee was struck and did their due diligence which resulted in Murat Kuntel being nominated as our next minister.
On August 9 1994 Murat Halil Kuntel was ordained and installed as the Minister of Word and Sacraments in the congregation of St. Columba.
During Murat’s ministry we were all challenged to take the Alpha Course, and almost everyone did. This was such a success, the course was offered a second time. In the Fall of 2000, we adopted the Lighthouse of Prayer programme and used it a number of times in the first decade of the new century.
For a number of years some of the young people joined Murat in a contemporary music group that sometimes led the worship.
Henry Blackaby’s studies, Experiencing God and Your Church Experiencing God Together, were widely used after the Alpha series.
In 1998 a Korean outreach ministry was started under the leadership of Charles Ahn. The ministry grew for the next three years and left our premises to start its own congregation in August 2001.
In the spring of 2005 Pastor Joseph Qian approached the session with a request to start a Mandarin speaking congregation at St. Columba. He asked to use the premises for Sunday afternoon worship services and some mid week evening studies. The session was pleased to grant his request and hence the Waters of Elim Christian Church was born. We are happy to report that as of this writing they have a growing ministry and are still a vital part of the outreach in this community.
In December of 2006, we were approached by another group wanting to use our facilities, this time for a youth outreach programs. Youth Church started in January 2007 under the leadership of Pastor Rick Ellis. The group held vibrant worship/praise services for the next one and a half years. They grew rapidly and had to vacate our premises in March 2008 for larger facilities. We are pleased to know that they are now worshiping in a much larger facility in Surrey.
By the grace of God in early 2010 another group asked for the use of our building and Agape Renewal Ministry set up the first Canadian school for leaders and pastors. There aim is to train pastors for the vast mission field in their China homeland. They meet for worship at 6:30 am four days a week and study until mid afternoon.
The Road Ahead
Is again tenuous as the congregation survives on Pulpit Supply.
It is supposed that a part time minister is just as suitable for St Columba now as any time in the past. However, what once was a traditional English community is now mostly Cantonese with a varied mix of other languages. Yes it is a great mission field but it is thought that a bi-lingual minister would be most suitable in trying to reach the present neighbourhood.
Which ever way it pulls for survival, the congregation of St. Columba will have to rely on the grace of God, as did their pioneering forefathers a hundred years ago.
2 Thessalonians 3:5 | <urn:uuid:c997c9fb-f1af-43f2-981e-4b2c82f14d45> | CC-MAIN-2019-47 | http://stcolumba-vancouver.ca/congregational-history-2/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671548.98/warc/CC-MAIN-20191122194802-20191122223802-00458.warc.gz | en | 0.984124 | 6,440 | 3.078125 | 3 |
Thermal spraying techniques are coating processes in which melted (or heated) materials are sprayed onto a surface. The "feedstock" (coating precursor) is heated by electrical (plasma or arc) or chemical means (combustion flame).
Thermal spraying can provide thick coatings (approx. thickness range is 20 microns to several mm, depending on the process and feedstock), over a large area at high deposition rate as compared to other coating processes such as electroplating, physical and chemical vapor deposition. Coating materials available for thermal spraying include metals, alloys, ceramics, plastics and composites. They are fed in powder or wire form, heated to a molten or semimolten state and accelerated towards substrates in the form of micrometer-size particles. Combustion or electrical arc discharge is usually used as the source of energy for thermal spraying. Resulting coatings are made by the accumulation of numerous sprayed particles. The surface may not heat up significantly, allowing the coating of flammable substances.
Coating quality is usually assessed by measuring its porosity, oxide content, macro and micro-hardness, bond strength and surface roughness. Generally, the coating quality increases with increasing particle velocities.
- 1 Variations
- 2 System overview
- 3 Detonation thermal spraying process
- 4 Plasma spraying
- 5 Wire arc spray
- 6 High velocity oxygen fuel spraying (HVOF)
- 7 Cold spraying
- 8 Applications
- 9 Limitations
- 10 Safety
- 11 See also
- 12 References
Several variations of thermal spraying are distinguished:
- Plasma spraying
- Detonation spraying
- Wire arc spraying
- Flame spraying
- High velocity oxy-fuel coating spraying (HVOF)
- High velocity air fuel (HVAF)
- Warm spraying
- Cold spraying
In classical (developed between 1910 and 1920) but still widely used processes such as flame spraying and wire arc spraying, the particle velocities are generally low (< 150 m/s), and raw materials must be molten to be deposited. Plasma spraying, developed in the 1970s, uses a high-temperature plasma jet generated by arc discharge with typical temperatures >15,000 K, which makes it possible to spray refractory materials such as oxides, molybdenum, etc.
A typical thermal spray system consists of the following:
- Spray torch (or spray gun) – the core device performing the melting and acceleration of the particles to be deposited
- Feeder – for supplying the powder, wire or liquid to the torch through tubes.
- Media supply – gases or liquids for the generation of the flame or plasma jet, gases for carrying the powder, etc.
- Robot – for manipulating the torch or the substrates to be coated
- Power supply – often standalone for the torch
- Control console(s) – either integrated or individual for all of the above
Detonation thermal spraying processEdit
The detonation gun consists of a long water-cooled barrel with inlet valves for gases and powder. Oxygen and fuel (acetylene most common) are fed into the barrel along with a charge of powder. A spark is used to ignite the gas mixture, and the resulting detonation heats and accelerates the powder to supersonic velocity through the barrel. A pulse of nitrogen is used to purge the barrel after each detonation. This process is repeated many times a second. The high kinetic energy of the hot powder particles on impact with the substrate results in a buildup of a very dense and strong coating.
In plasma spraying process, the material to be deposited (feedstock) — typically as a powder, sometimes as a liquid, suspension or wire — is introduced into the plasma jet, emanating from a plasma torch. In the jet, where the temperature is on the order of 10,000 K, the material is melted and propelled towards a substrate. There, the molten droplets flatten, rapidly solidify and form a deposit. Commonly, the deposits remain adherent to the substrate as coatings; free-standing parts can also be produced by removing the substrate. There are a large number of technological parameters that influence the interaction of the particles with the plasma jet and the substrate and therefore the deposit properties. These parameters include feedstock type, plasma gas composition and flow rate, energy input, torch offset distance, substrate cooling, etc.
The deposits consist of a multitude of pancake-like 'splats' called lamellae, formed by flattening of the liquid droplets. As the feedstock powders typically have sizes from micrometers to above 100 micrometers, the lamellae have thickness in the micrometer range and lateral dimension from several to hundreds of micrometers. Between these lamellae, there are small voids, such as pores, cracks and regions of incomplete bonding. As a result of this unique structure, the deposits can have properties significantly different from bulk materials. These are generally mechanical properties, such as lower strength and modulus, higher strain tolerance, and lower thermal and electrical conductivity. Also, due to the rapid solidification, metastable phases can be present in the deposits.
This technique is mostly used to produce coatings on structural materials. Such coatings provide protection against high temperatures (for example thermal barrier coatings for exhaust heat management), corrosion, erosion, wear; they can also change the appearance, electrical or tribological properties of the surface, replace worn material, etc. When sprayed on substrates of various shapes and removed, free-standing parts in the form of plates, tubes, shells, etc. can be produced. It can also be used for powder processing (spheroidization, homogenization, modification of chemistry, etc.). In this case, the substrate for deposition is absent and the particles solidify during flight or in a controlled environment (e.g., water). This technique with variation may also be used to create porous structures, suitable for bone ingrowth, as a coating for medical implants. A polymer dispersion aerosol can be injected into the plasma discharge in order to create a grafting of this polymer on to a substrate surface. This application is mainly used to modify the surface chemistry of polymers.
Plasma spraying systems can be categorized by several criteria.
Plasma jet generation:
- direct current (DC plasma), where the energy is transferred to the plasma jet by a direct current, high-power electric arc
- induction plasma or RF plasma, where the energy is transferred by induction from a coil around the plasma jet, through which an alternating, radio-frequency current passes
- gas-stabilized plasma (GSP), where the plasma forms from a gas; typically argon, hydrogen, helium or their mixtures
- water-stabilized plasma (WSP), where plasma forms from water (through evaporation, dissociation and ionization) or other suitable liquid
- hybrid plasma – with combined gas and liquid stabilization, typically argon and water
- atmospheric plasma spraying (APS), performed in ambient air
- controlled atmosphere plasma spraying (CAPS), usually performed in a closed chamber, either filled with inert gas or evacuated
- variations of CAPS: high-pressure plasma spraying (HPPS), low-pressure plasma spraying (LPPS), the extreme case of which is vacuum plasma spraying (VPS, see below)
- underwater plasma spraying
Another variation consists of having a liquid feedstock instead of a solid powder for melting, this technique is known as Solution precursor plasma spray
Vacuum plasma sprayingEdit
Vacuum plasma spraying (VPS) is a technology for etching and surface modification to create porous layers with high reproducibility and for cleaning and surface engineering of plastics, rubbers and natural fibers as well as for replacing CFCs for cleaning metal components. This surface engineering can improve properties such as frictional behavior, heat resistance, surface electrical conductivity, lubricity, cohesive strength of films, or dielectric constant, or it can make materials hydrophilic or hydrophobic.
The process typically operates at 39–120 °C to avoid thermal damage. It can induce non-thermally activated surface reactions, causing surface changes which cannot occur with molecular chemistries at atmospheric pressure. Plasma processing is done in a controlled environment inside a sealed chamber at a medium vacuum, around 13–65 Pa. The gas or mixture of gases is energized by an electrical field from DC to microwave frequencies, typically 1–500 W at 50 V. The treated components are usually electrically isolated. The volatile plasma by-products are evacuated from the chamber by the vacuum pump, and if necessary can be neutralized in an exhaust scrubber.
In contrast to molecular chemistry, plasmas employ:
- Molecular, atomic, metastable and free radical species for chemical effects.
- Positive ions and electrons for kinetic effects.
Plasma also generates electromagnetic radiation in the form of vacuum UV photons to penetrate bulk polymers to a depth of about 10 μm. This can cause chain scissions and cross-linking.
Plasmas affect materials at an atomic level. Techniques like X-ray photoelectron spectroscopy and scanning electron microscopy are used for surface analysis to identify the processes required and to judge their effects. As a simple indication of surface energy, and hence adhesion or wettability, often a water droplet contact angle test is used. The lower the contact angle, the higher the surface energy and more hydrophilic the material is.
Changing effects with plasmaEdit
At higher energies ionization tends to occur more than chemical dissociations. In a typical reactive gas, 1 in 100 molecules form free radicals whereas only 1 in 106 ionizes. The predominant effect here is the forming of free radicals. Ionic effects can predominate with selection of process parameters and if necessary the use of noble gases.
Wire arc sprayEdit
Wire arc spray is a form of thermal spraying where two consumable metal wires are fed independently into the spray gun. These wires are then charged and an arc is generated between them. The heat from this arc melts the incoming wire, which is then entrained in an air jet from the gun. This entrained molten feedstock is then deposited onto a substrate with the help of compressed air. This process is commonly used for metallic, heavy coatings.
Plasma transferred wire arcEdit
Plasma transferred wire arc (PTWA) is another form of wire arc spray which deposits a coating on the internal surface of a cylinder, or on the external surface of a part of any geometry. It is predominantly known for its use in coating the cylinder bores of an engine, enabling the use of Aluminum engine blocks without the need for heavy cast iron sleeves. A single conductive wire is used as "feedstock" for the system. A supersonic plasma jet melts the wire, atomizes it and propels it onto the substrate. The plasma jet is formed by a transferred arc between a non-consumable cathode and the type of a wire. After atomization, forced air transports the stream of molten droplets onto the bore wall. The particles flatten when they impinge on the surface of the substrate, due to the high kinetic energy. The particles rapidly solidify upon contact. The stacked particles make up a high wear resistant coating. The PTWA thermal spray process utilizes a single wire as the feedstock material. All conductive wires up to and including 0.0625" (1.6mm) can be used as feedstock material, including "cored" wires. PTWA can be used to apply a coating to the wear surface of engine or transmission components to replace a bushing or bearing. For example, using PTWA to coat the bearing surface of a connecting rod offers a number of benefits including reductions in weight, cost, friction potential, and stress in the connecting rod.
High velocity oxygen fuel spraying (HVOF)Edit
During the 1980s, a class of thermal spray processes called high velocity oxy-fuel spraying was developed. A mixture of gaseous or liquid fuel and oxygen is fed into a combustion chamber, where they are ignited and combusted continuously. The resultant hot gas at a pressure close to 1 MPa emanates through a converging–diverging nozzle and travels through a straight section. The fuels can be gases (hydrogen, methane, propane, propylene, acetylene, natural gas, etc.) or liquids (kerosene, etc.). The jet velocity at the exit of the barrel (>1000 m/s) exceeds the speed of sound. A powder feed stock is injected into the gas stream, which accelerates the powder up to 800 m/s. The stream of hot gas and powder is directed towards the surface to be coated. The powder partially melts in the stream, and deposits upon the substrate. The resulting coating has low porosity and high bond strength.
HVOF coatings may be as thick as 12 mm (1/2"). It is typically used to deposit wear and corrosion resistant coatings on materials, such as ceramic and metallic layers. Common powders include WC-Co, chromium carbide, MCrAlY, and alumina. The process has been most successful for depositing cermet materials (WC–Co, etc.) and other corrosion-resistant alloys (stainless steels, nickel-based alloys, aluminium, hydroxyapatite for medical implants, etc.).
Cold spraying (or gas dynamic cold spraying) was introduced to the market in the 1990s. The method was originally developed in the Soviet Union – while experimenting with the erosion of the target, which was exposed to a two-phase high-velocity flow of fine powder in a wind tunnel, scientists observed accidental rapid formation of coatings.
In cold spraying, particles are accelerated to very high speeds by the carrier gas forced through a converging–diverging de Laval type nozzle. Upon impact, solid particles with sufficient kinetic energy deform plastically and bond mechanically to the substrate to form a coating. The critical velocity needed to form bonding depends on the material's properties, powder size and temperature. Metals, polymers, ceramics, composite materials and nanocrystalline powders can be deposited using cold spraying. Soft metals such as Cu and Al are best suited for cold spraying, but coating of other materials (W, Ta, Ti, MCrAlY, WC–Co, etc.) by cold spraying has been reported.
The deposition efficiency is typically low for alloy powders, and the window of process parameters and suitable powder sizes is narrow. To accelerate powders to higher velocity, finer powders (<20 micrometers) are used. It is possible to accelerate powder particles to much higher velocity using a processing gas having high speed of sound (helium instead of nitrogen). However, helium is costly and its flow rate, and thus consumption, is higher. To improve acceleration capability, nitrogen gas is heated up to about 900 °C. As a result, deposition efficiency and tensile strength of deposits increase.
Warm spraying is a novel modification of high velocity oxy-fuel spraying, in which the temperature of combustion gas is lowered by mixing nitrogen with the combustion gas, thus bringing the process closer to the cold spraying. The resulting gas contains much water vapor, unreacted hydrocarbons and oxygen, and thus is dirtier than the cold spraying. However, the coating efficiency is higher. On the other hand, lower temperatures of warm spraying reduce melting and chemical reactions of the feed powder, as compared to HVOF. These advantages are especially important for such coating materials as Ti, plastics, and metallic glasses, which rapidly oxidize or deteriorate at high temperatures.
- Crankshaft reconditioning or conditioning
- Corrosion protection
- Fouling protection
- Altering thermal conductivity or electrical conductivity
- Wear control: either hardfacing (wear-resistant) or abradable coating
- Repairing damaged surfaces
- Temperature/oxidation protection (thermal barrier coatings)
- Medical implants
- Production of functionally graded materials (for any of the above applications)
Thermal spraying is a line of sight process and the bond mechanism is primarily mechanical. Thermal spray application is not compatible with the substrate if the area to which it is applied is complex or blocked by other bodies.
Thermal spraying need not be a dangerous process, if the equipment is treated with care, and correct spraying practices are followed. As with any industrial process, there are a number of hazards, of which the operator should be aware, and against which specific precautions should be taken. Ideally, equipment should be operated automatically, in enclosures specially designed to extract fumes, reduce noise levels, and prevent direct viewing of the spraying head. Such techniques will also produce coatings that are more consistent. There are occasions when the type of components being treated, or their low production levels, requires manual equipment operation. Under these conditions, a number of hazards, peculiar to thermal spraying, are experienced, in addition to those commonly encountered in production or processing industries.
Metal spraying equipment uses compressed gases, which create noise. Sound levels vary with the type of spraying equipment, the material being sprayed, and the operating parameters. Typical sound pressure levels are measured at 1 meter behind the arc.
Combustion spraying equipment produces an intense flame, which may have a peak temperature more than 3,100 °C, and is very bright. Electric arc spraying produces ultra-violet light, which may damage delicate body tissues.Plasma also generates quite a lot of UV radiation, easily burning exposed skin and can also cause "flash burn" to the eyes. Spray booths, and enclosures, should be fitted with ultra-violet absorbent dark glass. Where this is not possible, operators, and others in the vicinity should wear protective goggles containing BS grade 6 green glass. Opaque screens should be placed around spraying areas. The nozzle of an arc pistol should never be viewed directly, unless it is certain that no power is available to the equipment.
Dust and fumesEdit
The atomization of molten materials produces a large amount of dust and fumes made up of very fine particles (ca. 80–95% of the particles by number <100 nm). Proper extraction facilities are vital, not only for personal safety, but to minimize entrapment of re-frozen particles in the sprayed coatings. The use of respirators, fitted with suitable filters, is strongly recommended, where equipment cannot be isolated. Certain materials offer specific known hazards:
- Finely divided metal particles are potentially pyrophoric and harmful when accumulated in the body.
- Certain materials e.g. aluminum, zinc and other base metals may react with water to evolve hydrogen. This is potentially explosive and special precautions are necessary in fume extraction equipment.
- Fumes of certain materials, notably zinc and copper alloys, have a disagreeable odour and may cause a fever-type reaction in certain individuals (known as metal fume fever). This may occur some time after spraying and usually subsides rapidly. If it does not, medical advice must be sought.
- Fumes of reactive compounds can dissociate and create harmful gasses. Respirators should be worn in these areas and gas meters should be used to monitor the air before respirators are removed.
Combustion spraying guns use oxygen and fuel gases. The fuel gases are potentially explosive. In particular, acetylene may only be used under approved conditions. Oxygen, while not explosive, will sustain combustion, and many materials will spontaneously ignite, if excessive oxygen levels are present. Care must be taken to avoid leakage, and to isolate oxygen and fuel gas supplies, when not in use.
Electric arc guns operate at low voltages (below 45 V dc), but at relatively high currents. They may be safely hand-held. The power supply units are connected to 440 V AC sources, and must be treated with caution.
- Kuroda, Seiji; Kawakita, Jin; Watanabe, Makoto; Katanoda, Hiroshi (2008). "Warm spraying—a novel coating process based on high-velocity impact of solid particles". Sci. Technol. Adv. Mater. 9 (3): 033002. doi:10.1088/1468-6996/9/3/033002. PMC 5099653. PMID 27877996.
- Paulussen, S; Rego, R; Goossens, O; Vangeneugden, D; Rose, K (2005). "Plasma polymerization of hybrid organic–inorganic monomers in an atmospheric pressure dielectric barrier discharge". Surface and Coatings Technology. 200 (1–4): 672–675. doi:10.1016/j.surfcoat.2005.02.134.
- Leroux, F; Campagne, C; Perwuelz, A; Gengembre, L (2008). "Fluorocarbon nano-coating of polyester fabrics by atmospheric air plasma with aerosol". Applied Surface Science. 254 (13): 3902. Bibcode:2008ApSS..254.3902L. doi:10.1016/j.apsusc.2007.12.037.
- Moridi, A.; Hassani-Gangaraj, S. M.; Guagliano, M.; Dao, M. (2014). "Cold spray coating: review of material systems and future perspectives". Surface Engineering. 30 (6): 369–395. doi:10.1179/1743294414Y.0000000270.
- Degitz, Todd; Dobler, Klaus (November 2002). "Thermal Spray Basics". Welding Journal.
- Blunt, Jane and Balchin, N. C. (2001). Health and safety in welding and allied processes. Woodhead Publishing. pp. 190–205. ISBN 978-1-85573-538-5.CS1 maint: multiple names: authors list (link)
- Suryanarayanan, R. (1993). Plasma Spraying: Theory and Applications. World Scientific Pub Co Inc. p. 211. Bibcode:1993psta.book.....S. ISBN 978-981-02-1363-3.
- Bemer, D.; Regnier, R.; Subra, I.; Sutter, B.; Lecler, M. T.; Morele, Y. (2010). "Ultrafine Particles Emitted by Flame and Electric Arc Guns for Thermal Spraying of Metals". Annals of Occupational Hygiene. 54 (6): 607–14. doi:10.1093/annhyg/meq052. PMID 20685717. | <urn:uuid:6e272e6f-0846-43a2-922b-0d655cc67faf> | CC-MAIN-2019-47 | https://en.m.wikipedia.org/wiki/Thermal_spraying | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668644.10/warc/CC-MAIN-20191115120854-20191115144854-00180.warc.gz | en | 0.891743 | 4,710 | 3.671875 | 4 |
. . . though that cause was, I believe, one of the worst for which a people ever fought, and one for which there was the least excuse. I do not question, however, the sincerity of the great mass of those who were opposed to us.
— U.S. Grant, writing, years later, about the Confederate surrender at Appomattox
Ellos con duros estatutos fieros
y con su extraña condición avara
pusieron tan gran yugo a nuestros cuellos
que forzados salimos de él y de ellos
By harsh law and regulation,
By such an alien greed driven;
On our necks the hard yoke was laid;
Orders and outrage we betrayed.
— A Numantine ambassador to the Roman general in a version of Cervantes’ “Numancia”
The following, discommodious piece shall move from the Roman general Scipio’s siege of Numantia, in northern Spain, to Grant’s siege of Vicksburg, to twenty-first-century Moslem terrorists. We may, with help from Cervantes, see connections, while seeking to avoid endorsing war, terrorism, or imperialism (or slavery).
Over the centuries, the expression es defensa numantina has come in Spain to refer to any desperate, suicidal, last-ditch resistance to invading forces. A pause to note the Jewish “Numantia”: Masada, the plateau-top fortification where, besieged by the Romans, the Sicarii are said to have committed suicide with their families. Sicarii, from the Latin secare (to slice). In the decades before the destruction of Jerusalem (70 CE), the Sicarii opposed the Roman occupation of Judea. At public gatherings, they pulled small daggers (sicae) out of their cloaks, stabbing both Romans and Jewish Roman sympathizers, before slipping away through the crowd.
In his Roman History, Appian (c. CE 95 – c. CE 165) tells how, some centuries earlier, the Celtiberian people of Numantia, a fort on a hill along the Douro river, held off the legions for close to a decade. Among the problems for the Romans in Lusitania (the northern Iberian peninsula): never-ending war provided their military leaders and soldiers opportunities for advancement. (One day, in a history of the United States’ post-World-War-II military engagements, this may be reprised. Frequent conflicts and the trumpeting of threats to national security may be described as providing defense contractors opportunities for making lots of money. And this phenomenon may be seen as having hamstrung US government efforts to advance the international interests of a broader section of the population or of other major corporations.)
As regards the invasion of Lusitania, rumors of the incessant battles and heavy Roman losses, among other things, led to young Roman men avoiding enrollment as soldiers and potential officers not volunteering. Some in Rome began urging peace. In 134 BCE, however, the Senate named to lead the campaign Scipio, who in his youth had taken and destroyed Carthage, and who was, additionally, a statesman and patron of Roman writers and thinkers, as well as being one of the promoters of the Numantine war. (A bust of Scipio appears below.)
While not overlooking Scipio’s ruthlessness and his and his colleagues’ imperial ambitions, I find it hard to read of him, either in the histories or in Cervantes’ play Numancia, without admiring his tactics, discipline, and effectiveness. His appointment to lead the Roman invaders rallied the troops, and Scipio whipped them into fighting form. Prostitutes and traders were driven from the Romans’ camp. Beds were forbidden, with Scipio himself sleeping on straw. There were all-day marches through lands in which there was little potable water. Scipio had the soldiers build and then destroy fortifications, just for the exercise. While marching, the soldiers were not allowed to shift from their assigned places in the ranks. Etc.
We may think of the Roman histories less as a description of realities than as mythology, a way of transmitting to the citizenry the regime’s values, to include ideas about how to triumph and how to surrender, should it come to that. Of course the same may be said of many plays, of the writings of many ethicists, and of histories of the United States and its leaders. In Appian’s History, Scipio says, “‘He must be considered a reckless general who would fight before there is any need, while a good one takes risks only in cases of necessity.’ He added by way of simile that physicians do not cut and burn their patients till they have first tried drugs.”
In a version of Cervantes’ verse play, Scipio tells his troops:
No quiero yo que sangre de romanos
colore más el suelo de esta tierra;
basta la que han vertido estos hispanos
en tan larga reñida y cruda guerra.
I don’t want more Roman blood coloring the surface of this land. In crude, undisciplined warfare, these Spaniards have spilled enough.
(As regards Vicksburg, Grant’s way of putting this, in his Memoirs, was: “I now determined upon a regular siege . . . and to incur no more losses.”)
On his way to Numantia, Scipio took a detour, going through the land that was supplying food to the Numantines. Along the way, the Numantines tried to ambush his troops, but the Romans fought the Numantines off. Scipio got wind that young men from a rich town not far away wanted to come to Numantia’s defense. So—this being well before the invention of drones—Scipio surrounded that town, demanded that the young men be turned over to him, cut off their hands.
Ready though the Numantines were to die, the historian Florus writes, they were given no opportunity to fight. Around the town the Romans built a four-mile-long wall, ten feet high and eight feet wide. To keep the Numantines from escaping down the river, Scipio had his men build towers on either side, to which were attached timbers with ropes full of knives and spear heads, these blades kept constantly in motion by the current.
I do not know if Grant (pictured at right) ever read of Scipio’s Numantia campaign or appreciated the parallels. Vicksburg, Mississippi—a city of a hundred hills, it has been called—is on a high bluff overlooking the Mississippi River from the east. From this vantage, the Confederate army, like the Numantines long before, were able to fend off a series of attacks, from Grant’s predecessors and from Grant himself. Finally, Grant decided to send his troops through the swamp land, across the Mississippi, and all the way around Vicksburg to its south. In what was, before the invasion of Normandy, the largest amphibious operation in American history, Grant’s forces—guided, it is said, by a former slave—were able to re-cross the river and establish a beachhead forty miles south of Vicksburg. They did not proceed directly north to Vicksburg, but fought their way northeast to Jackson, the state capital and railhead, and from there west.
In this way, Grant cut off the Confederates’ supplies and was able to starve them into submission. And more than that. It was during the Vicksburg campaign that Grant—and one of his generals, William Sherman—learned how a large army might feed itself off the farm animals and produce of the local people. In his Memoirs Grant reports that he told the people living in this part of Mississippi that he had sent troops and wagons to collect all the food and forage for fifteen miles on either side of their route.
What are we to eat? the people asked him. Grant recalls:
My response was that we had endeavored to feed ourselves from our own northern resources while visiting them; but their friends in gray had been uncivil enough to destroy what we had brought along, and it could not be expected that men, with arms in their hands, would starve in the midst of plenty. I advised them to emigrate east, or west, fifteen miles and assist in eating up what we left.
The federal troops constructed elaborate entrenchments that surrounded the city and moved closer and closer to the Confederate fortifications. With their backs against the Mississippi and Union gunboats firing from the river, Confederate soldiers and citizens alike were more or less trapped. On July 3, 1863, Dora Miller, a Vicksburg resident (and Union supporter), entered in her diary:
Provisions so nearly gone, except the hogshead of sugar, that a few more days will bring us to starvation indeed. Martha says rats are hanging dressed in the market for sale with mule meat, — there is nothing else. The officer at the battery told me he had eaten one yesterday.
The day after, the Confederate general surrendered himself and his 30,000 famished men. Mrs. Miller wrote that toward 5 that afternoon, she was told to keep on the lookout; the “army of occupation” was coming along. And—
in a few minutes the head of the column appeared. What a contrast to the suffering creatures we had seen so long were these stalwart, well-fed men, so splendidly set up and accountered. Sleek horses, polished arms, bright plumes, — this was the pride and panoply of war. Civilization, discipline, and order seemed to enter with the measured tramp of those marching columns; and the heart turned with throbs of added pity to the worn men in gray, who were being blindly dashed against this embodiment of modern power.
Cervantes wrote his play in the late sixteenth century. Some scholars have labelled Numancia the first real stage tragedy to appear in Europe since the demise of classical Greece and Rome. Indeed, the style of the play suggests that Cervantes picked up where Aeschylus left off. As the play’s first English translator, James Y. Gibson, put it, “Each speech is uttered as it were to the beat of the drum, or to the prolonged wailings of the Dead March.” The speakers—who, in addition to Scipio and his lieutenants and several Numantines, include Spain, War, Hunger, and Fame—talk us through the event rather than catching us up in a plot.
But what plot can there be? Scipio, the Roman legions, history long ago sealed the Numantines’ fate. They are not to be defeated, like the Trojans, in heroic hand-to-hand combat or by trickery, but by discipline, engineering, starvation, overwhelming force. Against the 30-60,000 Roman soldiers, it is thought the Numantines were able to muster 8,000 fighters, if not fewer.
What human beings can escape the machine of history (or fate) or the machinations and brute forces that rule their age? Not the powerless, nor the powerful. Witnessing the first atomic bomb explosion, Robert Oppenheimer, a director of the Manhattan Project, famously recalled a line from the Bhagavad-Gita. “Now, I am become Death, the destroyer of worlds” is the Oppenheimer rendition. Standing over the ruins of Numantia, Scipio might have said as much, and Sherman, too, on his March to the Sea, and even if the slave-holding world he was destroying had come to seem to many—to the slaves first and foremost!—as worse than cruel and unjust. In the Bhagavad-Gita passage, Krishna is urging war on Arjuna. Whether they are killed in battle or not, the people will be overwhelmed by the force of time (the destroyer of worlds).
A question then becomes—or seems to become? perhaps they’re a diversion, such questions?—how to respond to overwhelmng force, to time and morality, to the force of history and the force of the Romans or other imperialist powers. “A people which was supported by the resources of the whole world” is a phrase used by the historian Florus. In our twenty-first century, an unemployed Arab youth might feel similarly about the United States and its allies.
From Florus we learn:
Eventually, as [the Numantines’] hunger increased, envoys were sent to Scipio, asking if they would be treated with moderation if they surrendered, pleading that they had fought for their women and children, and the freedom of their country. But Scipio would accept only deditio [complete surrender].
Soon after this, all their eatables being consumed, having neither grain, nor flocks, nor grass, they began, as is frequently necessary in wars, to lick boiled hides. When these also failed, they boiled and ate the bodies of human beings, first of those who had died a natural death, chopping them in small bits for cooking. Afterwards being nauseated by the flesh of the sick, the stronger laid violent hands upon the weaker. No form of misery was absent. They were rendered savage in mind by their food, and their bodies were reduced to the semblance of wild beasts by famine, plague, long hair, and neglect. . . . [T]here was something fearful to the beholders in the expression of their eyes—an expression of anger, grief, toil, and the consciousness of having eaten human flesh. Having reserved fifty of them for his triumph, Scipio sold the rest and razed the city to the ground.
As Tacitus writes of the Romans in ancient Britain: “They rob, kill, and rape, and this they call Roman rule. They make a desert and call it peace.” This was well before carpet bombing, though not before violent battling related to scarce water supplies.
In Cervantes’ play, and in the version of this play offered this past April by el Teatro Español in Madrid, in honor of the four hundredth anniversary of Cervantes’ death, the people of Numantia are presented as triumphing. Imprisoned in their own homes, lacking food and water, the men of Numantia opted for what we might call an ancient version of suicide bombing. They would have rushed at the Roman palisades and knives, perhaps killing not just themselves but a few Romans, too, and at least earning the honor of having been killed in battle, rather than the eternal shame of surrender.
The Numantine women begged them not to go. The women did not want to be left, the spoils of war, the Romans’ chattel. It becomes, in Cervantes’ play, the young men’s task to kill their lovers, the parents’ task to kill their children.
Hijo: Madre, ¿por qué lloráis? ¿Adónde vamos?
Teneos, que andar no puedo de cansado.
Mejor será, mi madre, que comamos,
que el hambre me tiene fatigado.
Mujer: Ven en mis brazos, hijo de mi vida,
do te daré la muerte por comida.
From Gordon’s translation:
Son: Why weepest, mother? Whither do we go?
Stay, stay, I am so faint, I have no breath!
My mother, let us eat, ’tis better so,
For me this bitter hunger wearyeth.
Mother: Come to my arms, my darling sweet and good,
And I to thee will give thy death for food!
Being free does not involve being able to do what one wants, Sartre proposed, but wanting to do what one can. I wonder if he ever reflected on what a gruesome proposition this can be. One of Cervantes’ Numantines asks a compatriot: “¿Qué nuevo modo de morir procuras?” (What new way of dying do you seek?)
But in his program notes, Juan Carlos Pérez de la Fuente, director of el Teatro Español, wrote that Cervantes had dramatized how people in power, especially political power: “podrán arrebatarnos casi todo: el pan, la casa, el trabajo, la libertad . . . incluso la vida; pero la dignidad, nunca.” The powerful can take most everything from us: our bread, our homes, our work, our freedom, even our lives; but our dignity, never.
Fame—or history, we might call it—gets the last word in the Cervantes’ text.
La fuerza no vencida, el valor tanto,
digno de prosa y verso celebrarse;
mas, pues de esto se encarga la memoria,
demos feliz remate a nuestra historia.
Such awesome courage, no force can defeat;
It finds its proper place in myths and verse.
And since of this feat our words resound,
In Numancia, a happy end is found.
In the introduction to his translation Gibson presents a view of the play that seems yet milder still. And, if we bring this view to the actions of the Moslem terrorists of our times, it will seem too mild. Gibson writes:
To do what the enthusiasm of the soul prompts and compels; to do it with single-hearted unselfishness; without regard to the adequacy or inadequacy of means; without regard even to eventual success or non-success, but with simple regard to the inspired voice of duty within, come what may: that is Quixotism in supreme degree.
Concluding, I come back to a point made earlier. What human beings can escape the machine of history (or fate) or the machinations and brute forces that rule their age? This fact alone, or the possibility of its truth, can make human beings crazed. (And perhaps our crazed behaviors include such things as devoting a life to trying to build an Internet superstore or to writing intellectual essays. This is hardly the first time I have admired Pascal’s note about how, driven by their predicament, some human beings turn to games and gambling, some to trying to solve mathematics problems. Some run great risks in pursuit of celebrity, and some beat their brains out describing and analyzing human behavior such as this. Of course most of us would elevate all such activities above violent attack and seeking to enslave others, and this even as we are willing to share in the spoils of war and of wage slavery.)
As regards the Moslem terrorists, Cervantes’ play has helped me appreciate the desperate position in which they find themselves. I have in mind not those who, whether they are aware of their role or not, might better be described as soldiers for this or that regime. I have in mind those who are besieged by one or more regimes, and who, desperate to find some measure of dignity and autonomy, have chosen what may seem to them like the only option, or the only honorable one.
— Wm. Eaton
William Eaton is the Editor of Zeteo. A collection of his essays, Surviving the Twenty-First Century, was published by Serving House Books in 2015. For more, see Surviving the website. The present text is one in an emerging series of postmodern juxtapositions. In this regard, see from the Literary Explorer Paris, Madrid, Florence, New York—Novel Collage and from Zeteo O que é felicidade (Corcovado, Kalamazoo).
Notes on Images
Celtiberian ruins, of Numantia perhaps, appear just above. The photograph at the very top of this piece was released by French police in November 2015. It is credited to AFP and Getty Images. The person pictured, who the police were seeking to identify, was said to be the third suicide bomber behind the Stade de France blasts. From a Daily Mirror news story, 22 November 2015. A copy of the police Appel à Témoins (call for information) appears at the very end of the present post.
Ulysses S. Grant, Memoirs, edited with notes by E.B. Long (De Capo Press, 1982). Photograph at right is of a Union army encampment just below Vicksburg, during the siege.
History has left two distinct versions of Cervantes’ play, and neither of these is considered to be close to the original. There are also several titles, including El cerco de Numancia (The Siege of Numantia), La destruición de Numancia, and simply Numancia.
In an effort to keep this epigraph to four lines and to have it represent Numancia, Cervantes’ play, as a whole, I have inserted into my translation information that comes from earlier and later in a Cervantes’ text. Below the entire stanza from the Cervantes text and from James Young Gibson’s translation (which scrupulously duplicates in English Cervantes’ Spanish verse forms).
Dice que nunca de la ley y fueros
del senado romano se apartara
si el insufrible mando y desafueros
de un cónsul y otro no le fatigara.
Ellos con duros estatutos fieros
y con su extraña condición avara
pusieron tan gran yugo a nuestros cuellos
que forzados salimos de él y de ellos
She [Numancia] says, that from the Roman Senate’s law
And rule she never would have turned aside,
Had not some brutal Consuls, with their raw
And ruthless hands, done outrage to her pride.
With fiercer statutes than the world e’er saw,
With greedy lust, extending far and wide,
They placed upon our necks such grievous yoke,
As might the meekest citizens provoke.
Copies of both Gibson’s translation and this version of Cervantes’ text, in Spanish, have been available online. I have been working on cleaning up the translation text available online and adding to it a few footnotes giving key passages from the Spanish text. This work is incomplete, but available here.
A translation by Horace White of Appian’s writing on the Spanish wars may be found online.
For an overview, one might see Michael B. Ballard, Vicksburg, The Campaign that Opened the Mississippi (University of North Carolina Press, 2004).
The Confederate general, John C. Pemberton, might have tried to fight his way out—the city was not as thoroughly encircled as Numantia had been. But Pemberton was hoping for relief, which never came, from General Joseph E. Johnston and his sizable army, and he did not want to give up the city, which was key to the Confederates maintaining control of the lower Mississippi and to the eastern Confederate states not being cut off from those west of the river and from those states’ horses, cattle, and potential soldiers.
A Confederate version of the story might stress that Pemberton was from Pennsylvania; there were many at the time who thought he was a saboteur, sent, as if by the devil, to lose the war for the South. Johnston was almost equally vilified, for his cautiousness.
Dora Miller, A Woman’s Diary of the Siege of Vicksburg, originally published in The Century, Illustrated Monthly Magazine, Vol. XXX, May 1885 to October 1885. In the first entry of the diary as a whole—an entry written in New Orleans on December 1, 1860—Miller (picture at right) identifies her allegiance and begins:
I understand it now. Keeping journals is for those who can not, or dare not, speak out. So I shall set up a journal, being only a rather lonely young girl in a very small and hated minority. . . . Surely no native-born woman loves her country better than I love America. The blood of one of its revolutionary patriots flows in my veins, and it is the Union for which he pledged his “life, fortune, and sacred honor” that I love, not any divided or special section of it. Living from birth in slave countries, both foreign and American, and passing through one slave insurrection in early childhood, the saddest and also the pleasantest features of slavery have been familiar. If the South goes to war for slavery, slavery is doomed in this country. To say so is like opposing one drop to a roaring torrent. This is a good time to follow St. Paul’s advice that women should refrain from speaking, but they are speaking more than usual and forcing others to speak against their will.
Grant’s victory, together with the defeat of General Robert E. Lee at Gettysburg the day before, has traditionally marked the turning point in the Civil War, the moment, as Bob Zeller has put it, when “the Confederate States of America went from a viable political enterprise to a lost cause.” And, as such, the Confederacy lives on as a mindset or collection of mindsets.
Zeller has been, inter alia, cofounder and president of The Center for Civil War Photography. Quotation is from The Long, Gruesome Fight to Capture Vicksburg (from Hallowed Ground Magazine, Summer 2013).
Again, history has left two distinct versions of Cervantes’ play, neither considered to be close to the original. Picture at right reproduces Numancia, by Alejo Vera y Estaca, 1880; in the collection of el Museo del Prado, Madrid.
Gibson also quotes from a translation of the German scholar Augustus W. Schlegel’s “History of Dramatic Literature”:
The Destruction of Numantia has altogether the elevation of the tragical cothurnus; and, from its unconscious and unlaboured approximation to antique grandeur and purity, forms a remarkable phenomenon in the history of modern poetry . . . . There is, if I may so speak, a sort of Spartan pathos in the piece; every single and personal consideration is swallowed up in the feeling of patriotism, . . .
It is said that Scipio, when he looked Carthage as it was in the last throes of its complete destruction (by the forces under his command), wept openly for his enemies and recited (not in English, of course) a couplet from The Iliad:
A day will come when sacred Troy shall perish
And Priam and his people shall be slain.
We enter here upon quite another history—of how generals (and after them scientists and athletes) have learned, over time, to speak publicly of their triumphs. (The lines from The Iliad are translated from Latin, from Plutarch and Polybius.)
General Grant first became famous throughout the North and South for the victory of his Union armies at Fort Donelson, Tennessee, in 1862, and for his reply to his Confederate counterpart when this latter wrote him to explore the possibility of a surrender. Grant’s reply: “No terms except unconditional and immediate surrender can be accepted.”
It has been proposed that, to steel their courage or dull their taste, the Numantines drugged themselves with a liquor called Celia, which was a kind of beer, I believe.
Tacitus, De vita et moribus Iulii Agricolae (On the life and character of Julius Agricola).
« Etre libre, ce n’est pas pouvoir faire ce que l’on veut, mais c’est vouloir ce que l’on peut. » Sartre, Situations I.
This from a footnote in published editions of Les Pensées: The original French for the full passage:
L’homme est si malheureux qu’il s’ennuierait même sans aucune cause d’ennui par l’état propre de sa complexion. Et il est si vain qu’étant plein de mille causes essentielles d’ennui, la moindre chose comme un billard et une balle qu’il pousse suffisent pour le divertir.
Mais, direz-vous, quel objet a-t-il en tout cela? Celui de se vanter demain entre ses amis de ce qu’il a mieux joué qu’un autre. Ainsi les autres suent dans leur cabinet pour montrer aux savants qu’ils ont résolu une question d’algèbre qu’on aurait pu trouver jusqu’ici, et tant d’autres s’exposent aux dernier périls pour se vanter ensuite d’une place qu’ils auront prise aussi sottement à mon gré. Et enfin les autres se tuent pour remarquer toutes ces choses, non point pour en devenir plus sages, mais seulement pour montrer qu’ils les savent. Et ceux-là sont les plus sots de la bande, puisqu’ils le sont avec connaissance, au lieu qu’on peut penser des autres qu’ils ne le seraient plus s’ils avaient cette connaissance.
Click for pdf | <urn:uuid:6bbd10f3-a5ce-47ff-b48c-4b6d072f9872> | CC-MAIN-2019-47 | https://zeteojournal.com/2016/06/02/numantia-cervantes-vicksburg-terrorists/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670448.67/warc/CC-MAIN-20191120033221-20191120061221-00459.warc.gz | en | 0.931402 | 6,419 | 2.53125 | 3 |
Contraceptive Methods, Sociocultural And Historical Aspects
CONTRACEPTIVE METHODS Stephanie B. Teal
SOCIOCULTURAL AND HISTORICAL ASPECTS Vern L. Bullough
Hormones are the chemical messengers the body uses to control and coordinate various physical processes. The major hormones influencing the female reproductive organs are estrogen and progesterone. Manipulation of these hormones may disrupt the normal processes required for fertility, such as ovulation, transport of egg and sperm in the Fallopian tubes, thinning of cervical mucus, and preparation of the uterine lining (endometrium) for implantation. Hormonal methods of contraception must affect these processes enough to prevent fertility, without causing too many other bothersome side effects or risks.
Combination oral contraceptive pills. The combination oral contraceptive (COC) pill is a highly effective, reversible female contraceptive. It contains both estrogen and progestin (a compound that mimics natural progesterone). Taken every day for three out of four weeks, it prevents ovulation by inhibiting the secretion of two regulatory hormones from the brain's pituitary gland. The estrogen suppresses follicle stimulating hormone (FSH) and thus prevents preparation of an egg for ovulation. The main contraceptive effect, however, is from the progestin, which suppresses luteinizing hormone (LH). The lack of the LH surge prevents ovulation. The progestin also has effects on the endometrium and cervical mucus. The endometrium becomes much less favorable to implantation due to thinning. Meanwhile, the cervical mucus becomes thick, limiting sperm penetration and transport into the uterine cavity. Even if ovulation occasionally occurs, these other effects contribute to the overall high contraceptive efficacy of 98 percent (Trussell and Vaughan 1999).
The COC pill has significant noncontraceptive benefits, including reduction of menstrual blood loss, reduction of cramps, and improved regularity of the menstrual cycle. It also significantly reduces the risks of ovarian and endometrial cancer, pelvic inflammatory disease, breast cysts, and endometriosis. Both acne and excessive hair growth are improved by COC pill use.
Although the COC pill has many contraceptive and noncontraceptive benefits, it is not appropriate for everyone. Contraindications include breast cancer, severe liver disease, and uncontrolled hypertension. Blood clots in the deep veins are a rare but sometimes serious risk associated with the pill. Women who smoke are already at higher risk of blood clots and heart attack due to their cigarette usage, and smokers are discouraged from COC use. In nonsmokers, however, the pill is safe to use through the age of menopause.
Depo-Provera. Depo-Provera (depot medroxyprogesterone acetate) is a long-acting, reversible injectable contraceptive available in many countries since the late 1970s and in the United States since 1992. It results in initially high progestin levels which taper off over the following weeks. It is given as an injection every twelve to thirteen weeks. The progestin dose results in thickening of cervical mucus and thinning of the endometrium, but also is high enough to suppress ovulation, leading to a high efficacy rate of 99 percent (Trussell and Vaughan 1999). Because of the lack of estrogen with this method, a common side effect is unscheduled irregular bleeding. This usually resolves over several months, and 50 percent of women have no bleeding at all after one year of use (Kaunitz 2001). In fact, this method may be beneficial to women who are troubled by heavy, prolonged menstrual periods. Depo-Provera is also an excellent contraceptive for those who cannot use estrogen, want a private method whose timing is not related to intercourse, or do not want to take a pill every day. Because it can have a prolonged effect on a woman's return to fertility, Depo-Provera is not a good option for women planning pregnancy within the next year. It is still controversial whether it promotes weight gain: this effect has only been noted in U.S. trials of this internationally popular method (Kaunitz 2001).
Lunelle. Lunelle, an injectable monthly contraceptive, contains one-sixth the dose of medroxyprogesterone acetate as Depo-Provera, and also contains estrogen. Lunelle is given by injection every twenty-three to thirty-three days. Like Depo-Provera, the progestin in Lunelle inhibits the secretion of the hormone LH, preventing ovulation. Because of the estrogen the bothersome unscheduled bleeding of Depo-Provera is much improved. In the first ninety days of use, 57 percent of Lunelle users report variations in their bleeding patterns, compared with 91 percent of Depo-Provera users (Hall 1998). However, long-term Lunelle users tend to see normalization of their bleeding patterns, and after a year, 70 percent report normal monthly bleeding. Lunelle is highly effective. In studies conducted by the World Health Organization, over 12,000 women in nine countries were followed for a total of 100,000 woman-months use: five pregnancies occurred (Hall 1998). The formulation in Lunelle has been used in some countries for twenty years prior to FDA (Food and Drug Administration) approval in the United States.
Implantables. Several sustained-release progestin-only contraceptives have been developed to reduce the frequency of administration and decrease the high progestin levels associated with Depo-Provera. Norplant consists of six capsules filled with the progestin levonorgestrel that are placed under the skin of the upper arm. The capsules release the hormone at a constant low rate, resulting in a daily dose about 25 to 50 percent that of low-dose COCs. Unscheduled bleeding does occur, especially during the first year, but women often return to a normal menstrual pattern thereafter. Norplant may be used for up to five years.
Implanon. A single capsule system which is effective for three years, Implanon's major benefit over Norplant is the ease of insertion and removal, which can be difficult if the capsules are placed too deeply or irregularly. One of the most obvious benefits of these implants is the low demand on the contraceptive user, especially as compared to daily pill use. Efficacy is also extremely high, with a failure rate of less than 1 percent per year.
Progestin Intrauterine Device. Widely used in Europe, the progestin intrauterine device (IUD) is a low-maintenance method that has high efficacy, rapid reversibility, and reduction of menstrual blood loss. The Mirena progestin IUD is a small, T-shaped flexible plastic device that slowly releases levonorgestrel contained in the long stem of the T. The contraceptive effect is primarily from the thickening of cervical mucus and alteration of sperm motility and function. Although ovulation is not usually inhibited, the failure rate is only 0.14 percent. After placement, the progestin IUD may be left in place up to five years, or removed when pregnancy is desired.
Nonhormonal methods rely on prevention of contact of the egg and sperm. Many nonhormonal methods require implementation around the time of intercourse, or place restrictions on when or how intercourse may occur, whereas others require little maintenance. Because of this, these methods have a much wider range of contraceptive failure than the hormonal methods, ranging from as high as 25 percent for withdrawal and natural family planning, to as low as 0.5 to 1 percent for the IUD and sterilization.
Intrauterine Device. The intrauterine device is a highly effective, reversible, long-acting, nonhormonal method of contraception. It is popular in Europe, Asia, and South America. Nonhormonal IUDs come in many different forms, but the most common type in the United States is the TCu-380A, also known as Paraguard. The Paraguard IUD is a small plastic "T" wrapped with copper. It exerts its effect through several mechanisms: first, the copper significantly decreases sperm motility and lifespan, second, the IUD produces changes in the endometrium that are hostile to sperm. The IUD does not affect ovulation, nor does it cause abortions. The overall failure rate of the IUD is less than 1 percent per year, which is comparable to female sterilization (Meirik et al. 2001). After removal, a woman can become pregnant immediately. Despite its benefits, its popularity in the United States waned in the mid-1970s due to a rash of litigation related to reports of increased pelvic infection and infertility related to its use. Later studies largely refuted these concerns, but the bad publicity has lingered (Hubacher et al. 2001). Although slowly increasing, U.S. use rate of the IUD still lags far behind the rest of the world.
Condom: male and female. The male condom is a sheath of latex or polyurethane that is placed over the penis prior to intercourse as a barrier to sperm. It is inexpensive, readily available, and has the added health benefit of providing protection against sexually transmitted diseases, including HIV. Condoms may also be lubricated with a spermicide.
The female condom is a polyurethane sheath with two rings attached, which is placed in the vagina prior to intercourse. In clinical trials it has had high patient acceptance, and has the benefit of being a woman-controlled method of sexually transmitted disease protection. Couples should not use both a male and a female condom during an act of intercourse, as this increases the risk of breakage. The failure rate of condoms is 12 to 20 percent (Fu et al. 1999).
Diaphragm. The diaphragm is a rubber cupshaped device which is filled with spermicide and inserted into the vagina, creating a barrier in front of the cervix. Like the condom, the efficacy rate of the diaphragm is dependent on the user, but ranges from 80 to 90 percent. The diaphragm does provide some protection against gonorrhea and pelvic inflammatory disease, but has not been shown to reduce transmission of HIV or other viral sexually transmitted infections. Although it must be obtained by prescription, a diaphragm is relatively inexpensive, and with proper care lasts for several years. It may be combined with condom use for greater contraceptive efficacy and disease prevention.
Withdrawal. Also known as coitus interruptus, withdrawal requires the male partner to remove his penis from the woman's vagina prior to ejaculation. Although theoretically sperm should not enter the vagina and fertilization should be prevented, this method has a failure rate of up to 25 percent in typical use (Trussell and Vaughan 1999). Withdrawal is probably most useful as a back-up method for couples using, for example, periodic abstinence.
Natural family planning. Periodic abstinence, also known as natural family planning, depends on determining safe periods when conception is less likely, and using this information to avoid pregnancy. The various methods of natural family planning include the calendar, thermal shift, symptothermal, and cervical mucus methods. All of these methods require training in the recognition of the fertile phase of the menstrual cycle, as well as a mature commitment by both partners to abstain from intercourse during this time. If the woman does not have a predictable menstrual cycle, some of these methods are more difficult to use effectively. Although with perfect use the failure rate could be as low as 5 percent, actual failure rates are closer to 25 percent and above (Fu et al. 1999; Trussell and Vaughan 1999).
Female sterilization. Female sterilization is the most common method of birth control for married couples in the United States. The technique is performed surgically, through one or two incisions in the abdomen. The Fallopian tubes may be tied, cut, burnt, banded with rings, or blocked with clips. Sterilization should be considered final and irreversible, although expensive microsurgery can sometimes repair the tube enough to allow pregnancy. Some couples assume that because this method is irreversible, it has a perfect efficacy rate, but this is not true. Each method has a slightly different rate of failure or complication, but the overall failure rate for female sterilization is about 1 percent (Peterson et al. 1996). The failure rate of sterilization is also dependent on the age of the patient, with younger patients more likely to experience an unplanned pregnancy up to ten years after the procedure. Younger patients are also more likely to experience regret in the years following sterilization.
Male sterilization. Male sterilization (vasectomy) is also a highly effective, permanent method of contraception. It is accomplished by making a small hole on either side of the scrotum and tying off the spermatic cord which transports sperm into the semen just prior to ejaculation. Compared to female sterilization, it is less expensive, more effective, easier to do with less surgical risk, and is easier to reverse if necessary. Vasectomy has no effect on male sexual function, including erectile function, ejaculation, volume of semen, or sexual pleasure. However, vasectomy rates consistently lag far behind those of female sterilization in all parts of the world, due mainly to cultural factors.
Emergency contraception, also known as post-coital contraception, includes any method that acts after intercourse to prevent pregnancy. The Yuzpe method uses COC pills to deliver two large doses of hormones, twelve hours apart. These must be taken within seventy-two hours of the unprotected intercourse to be effective. A prepackaged emergency contraceptive kit called Preven is also available. The kit contains a pregnancy test, instructions, and two pills with the appropriate doses of estrogen and progestin. Studies show a pregnancy rate of 3.2 percent for the cycle in which the woman took the emergency contraception, which is a 75 percent reduction of the 8 percent expected pregnancy rate per unprotected cycle (Ho 2000). The main side effects are nausea and possibly vomiting from the high dose of estrogen. Emergency contraception using a special progestin-only pill containing levonorgestrel avoids this side effect. It is marketed as Plan B. A study of 967 women using Plan B showed a pregnancy rate of 1.1 percent, or an 85 percent reduction. Both methods cause a 95 percent reduction in the risk of pregnancy if taken within the first twelve hours after unprotected intercourse (Nelson et al. 2000). The mechanism of action of the hormonal pills is probably the prevention of ovulation, with some contribution of changes in the endometrium. They do not cause abortion.
Control of family size is an important consideration for all adults, in every country. Many different contraceptive methods exist, and no single method is appropriate for all couples. When choosing a contraceptive method, factors such as effectiveness, reversibility, side effects, privacy, cost, and cultural preferences should be considered.
See also: ABORTION; ABSTINENCE; ASSISTED REPRODUCTIVE TECHNOLOGIES; BIRTH CONTROL: SOCIOCULTURAL AND HISTORICAL ASPECTS; CHILDLESSNESS; FAMILY LIFE EDUCATION; FAMILY PLANNING; FERTILITY; INFANTICIDE; SEXUALITY EDUCATION
Alan Guttmacher Institute. (1999). "Sharing Responsibility: Women, Society and Abortion Worldwide." New York: Author.
Fu, H.; Darroch, J. E.; Haas, T.; Ranjit, N. (1999). "Contraceptive Failure Rates: New Estimates from the 1995 National Survey of Family Growth." Family Planning Perspectives 31(2):56–63.
Hall, P. E. (1998). "New Once-a-Month Injectable Contraceptives, with Particular Reference to Cyclofem/Cyclo-Provera." International Journal of Gynaecology and Obstetrics 62:S43–S56.
Hatcher, R. A.; Trussel, J.; Stewart, F., and Cates, W. (1998). Contraceptive Technology, 17th edition. New York: Irvington.
Ho, P. C. (2000). "Emergency Contraception: Methods and Efficacy." Current Opinion in Obstetrics and Gynecology. 12(3):175–179.
Hubacher, D.; Lara-Ricalde, R.; Taylor, D. J.; Guerra- Infante, F.; and Guzman-Rodriguez, R. (2001). "Use of Copper Intrauterine Devices and the Risk of Tubal Infertility among Nulligravid Women." New England Journal of Medicine 345(8):561–567.
Kaunitz, A. M. (2001). "Injectable Long-Acting Contraceptives." Clinical Obstetrics and Gynecology 44(1):73–91.
Meirik, O.; Farley, T. M. M.; and Sivin, I. (2001). "Safety and Efficacy of Levonorgestrel Implant, Intrauterine Device, and Sterilization" Obstetrics and Gynecology 97(4):539–547.
Nelson, A. L.; Hatcher, R. A.; Zieman, M.; Watt, A.; Darney, P. D., Creinin, M. D. (2000). Managing Contraception, 3rd edition. Tiger, GA: Bridging the Gap Foundation.
Peterson, H. B.; Xia, Z.; Hughes, J. M.; Wilcox, L. S.; Tylor, L. R.; and Trussell, J. (1996). "The Risk of Pregnancy after Tubal Sterilization: Findings from the U.S. Collaborative Review of Sterilization." American Journal of Obstetrics and Gynecology 174:1161–1170.
Riddle, J. M. (1992). Contraception and Abortion from the Ancient World to the Renaissance. Cambridge, MA: Harvard University Press.
Senanayake, P., and Potts, M. (1995). An Atlas of Contraception. Pearl River, NY: Parthenon.
Trussell, J., and Vaughan, B. (1999) "Contraceptive Failure, Method-Related Discontinuation and Resumption of Use: Results from the 1995 National Survey of Family Growth." Family Planning Perspectives 31(2):64–72.
Alan Guttmacher Institute. (2000). "Contraceptive Use." Available from www.agi-usa.org/pubs/fb_contr_use.html.
STEPHANIE B. TEAL
Widespread Public Discussion
Key to the emerging public discussion about birth control was concern with overpopulation, and only later did the feminist issue of right to plan families emerge. The population issue was first put before the public by the Reverend Thomas Robert Malthus (1766–1834) in his Essay on the Principle of Population (1708). The first edition was published anonymously, but Malthus signed his name to the second, expanded edition published in 1803. Malthus believed that human beings were possessed by a sexual urge that led them to multiply faster than their food supply, and unless some checks could somehow be applied, the inevitable results of such unlimited procreation were misery, war, and vice. Population, he argued, increased geometrically (1, 2, 4, 8, 16, 32 . . .) whereas food supply only increased arithmetically (1, 2, 3, 4, 5, 6, . . .) Malthus's only solution was to urge humans to exercise control over their sexual instincts (i.e., to abstain from sex except within marriage) and to marry as late as possible. Sexually, Malthus was an extreme conservative who went so far as to classify as vice all promiscuous intercourse, "unnatural" passions, violations of the marriage bed, use of mechanical contraceptives, and irregular sexual liaisons.
Many of those who agreed with Malthus about the threat of overpopulation disagreed with him on the solutions and instead advocated the use of contraceptives. Those who did so came to be known as neo-Malthusians. Much of the debate over birth control, however, came to be centered on attitudes toward sexuality. Malthus recognized the need of sexual activity for procreation but not for pleasure. The neo-Malthusians held that continence or abstinence was no solution because sex urges were too powerful and nonprocreative sex was as pleasurable as procreative sex.
To overcome the lack of public information about contraception, the neo-Malthusians felt it was essential to spread information about the methods of contraception. The person in the English speaking world generally given credit for first doing so was the English tailor, Francis Place (1771–1854). Place was concerned with the widespread poverty of his time, a poverty accentuated by the growth of industrialization and urbanization as well as the breakdown of the traditional village economy. Large families, he felt, were more likely to live in poverty than smaller ones, and to help overcome this state affairs, Place published in 1882 his Illustrations and Proofs of the Principle of Population. He urged married couples (not unmarried lovers) to use "precautionary" means to plan their families better, but he did not go into detail. To remedy this lack of instruction, he printed hand-bills in 1823 addressed simply To the Married of Both Sexes. In it he advocated the use of a dampened sponge which was to be inserted in the vagina with a string attached to it prior to "coition" as an effective method of birth control. Later pamphlets by Place and those who followed him added other methods, all involving the female. Pamphlets of the time, by Place and others, were never subject to any legal interference, although they were brought to the attention of the attorney general who did not take any action. Place ultimately turned to other issues, but his disciples, notably Richard Carlile (1790–1843), took up the cause. It became an increasingly controversial subject in part because Place and Carlile were social reformers as well as advocates of birth control. Carlile was the first man in England to put his name to a book devoted to the subject, Every Woman's Book (1826).
Early U.S. Birth Control Movement
In the United States, the movement for birth control may be said to have begun in 1831 with publication by Robert Dale Owen (1801–1877) of the booklet Moral Physiology. Following the model of Carlile, Owen advocated three methods of birth control, with coitus interruptus being his first choice. His second alternative was the vaginal sponge, and the third the condom. Ultimately far more influential was a Massachusetts physician, Charles Knowlton (1800–1850) who published his Fruits of Philosophy in 1832. In his first edition, Knowlton advocated a policy of douching, a not particularly effective contraceptive, but it was the controversy the book caused rather than its recommendation for which it is remembered. As he lectured on the topic through Massachusetts, he was jailed in Cambridge, fined in Taunton, and twice acquitted in trials in Greenfield. These actions increased public interest in contraception, and Knowlton had sold some 10,000 copies of his book by 1839. In subsequent editions of his book, Knowlton added other more reliable methods of contraception.
Once the barriers to publications describing methods of contraception had fallen, a number of other books appeared throughout the English-speaking world. The most widely read material was probably the brief descriptions included in Elements of Social Science (1854), a sex education book written by George Drysdale (1825–1901). Drysdale was convinced that the only cause of poverty was overpopulation, a concept that his more radical freethinking rivals did not fully accept. They were more interested in reforming society by eliminating the grosser inequities, and for them contraception was just one among many changes for which they campaigned.
Influence of Eugenics
Giving a further impetus to the more conservative voices in the birth control movement was the growth of the eugenics movement. The eugenicists, while concerned with the high birthrates among the poor and the illiterate, emphasized the problem of low birthrates among the more "intellectual" upper classes. Eugenics came to be defined as an applied biological science concerned with increasing the proportion of persons of better than average intellectual endowment in succeeding generations. The eugenicists threw themselves into the campaign for birth control among the poor and illiterate, while urging the "gifted" to produce more. The word eugenics had been coined by Francis Galton (1822–1911), a great believer in heredity, who also had many of the prejudices of an upper-class English gentleman in regard to social class and race. Galton's hypotheses were given further "academic" respectability by Karl Pearson (1857–1936), the first holder of the Galton endowed chair of eugenics at the University of London. Pearson believed that the high birthrate of the poor was a threat to civilization, and if members of the "higher" races did not make it their duty to reproduce, they would be supplanted in time by the members of the "lower races."
When put in this harsh light, eugenics gave "scientific" support to those who believed in racial and class superiority. It was just such ideas that Adolph Hitler attempted to implement in his "solution" to the "racial problem." Although Pearson's views were eventually opposed by the English Eugenics Society, the U.S. eugenics movement, founded in 1905, adopted his view. Inevitably, a large component of the organized family planning movement in the United States was made up of eugenicists. The fact that the Pearson-oriented eugenicists also advocated such beliefs as enforced sterilization of the "undesirables" inevitably tainted the group in which they were active even when they were not the dominant voices.
Dissemination of Information and Censorship
Population studies indicate that at least among the upper-classes in the United States and Britain, some form of population limitation was being practiced. Those active in the birth control movement, however, found it difficult to contact the people they most wanted to reach, namely the poor, overburdened mothers who did not want more children or who, in more affirmative terms, wanted to plan and space their children. The matter was complicated by the enactment of anti-pornography and anti-obscenity legislation which classed birth control information as obscene. In England, with the passage of the first laws on the subject in 1853, contraception was interpreted to be pornographic since of necessity it included discussion of sex. Books on contraception that earlier had been widely sold and distributed were seized and condemned. Such seizures were challenged in England in 1877 by Charles Bradlaugh (1833–1891) and Annie Besant (1847–1933). Bradlaugh and Besant were convicted by a jury that really wanted to acquit them, but the judgement was overturned on a technicality. In the aftermath, information on contraception circulated widely in Great Britain and its colonies.
In the United States, however, where similar legislation was enacted by various states and by the federal government, materials that contained information about birth control and that were distributed through the postal system or entered the country through customs ran into the censoring activities of Anthony Comstock (1844–1915) who had been appointed as a special postal agent in 1873. One of his first successful prosecutions was against a pamphlet on contraception by Edward Bliss Foote (1829–1906). As a result, information about contraceptives was driven underground, although since state regulations varied some states were more receptive to information about birth control. Only those people who went to Europe regularly kept up with contemporary developments such as the diaphragm, which began to be prescribed in Dutch clinics at the end of the nineteenth century. The few physicians who did keep current in the field tended to restrict their services to upper-class groups. The dominant voice of the physicians in the increasingly powerful American Medical Association was opposed to the use of contraceptives and considered them immoral. That this situation changed is generally credited to Sanger, a nurse.
In 1914, Sanger, then an active socialist, began to publish The Woman Rebel, a magazine designed to stimulate working women to think for themselves and to free themselves from bearing unwanted children. To educate women about the possibilities of birth control, Sanger decided to defy the laws pertaining to the dissemination of contraceptive information by publishing a small pamphlet, Family Limitation (1914), for which she was arrested. Before her formal trial, she fled to England, where she spent much of her time learning about European contraceptive methods, including the diaphragm. While she was absent her husband, William Sanger (1873–1961), who had little to do with his wife's publishing activities, was tricked into giving a copy of the pamphlet to a Comstock agent, and for this was arrested and convicted, an act that led to the almost immediate return of his wife. Before she was brought to trial, however, Comstock died. The zealousness of his methods had so alienated many prominent people that the government—without Comstock pushing for a conviction—simply decided not to prosecute Sanger, a decision which received widespread public support.
In part through her efforts, by 1917 another element had been added to the forces campaigning for more effective birth control information, namely the woman's movement (or at least certain segments of it). Women soon became the most vocal advocates and campaigners for effective birth control, joining "radical" reformers and eugenicists in an uneasy coalition.
Sanger, though relieved at being freed from prosecution, was still anxious to spread the message of birth control to the working women of New York. To reach them, she opened the first U.S. birth control clinic, which was patterned after the Dutch model. Since no physician would participate with her, she opened it with two other women, Ethel Byrne, her sister and also a nurse, and Fania Mindell, a social worker. The well-publicized opening attracted long lines of interested women—as well as several vice officers—and after some ten days of disseminating information and devices, Sanger and her two colleagues were arrested. Byrne, who was tried first and sentenced to thirty days in jail, promptly went on a hunger strike, attracting so much national attention that after eleven days she was pardoned by the governor of New York. Mindell, who was also convicted, was only fined $50. By the time of Sanger's trial, the prosecution was willing to drop charges provided she would agree not to open another clinic, a request she refused. She was sentenced to thirty days in jail and immediately appealed her conviction. The New York Court of Appeals rendered a rather ambiguous decision in acquitting her, holding that it was legal to disseminate contraceptive information for the "cure and prevention of disease," although they failed to specify the disease. Sanger, interpreting unwanted pregnancy as a disease, used this legal loophole and continued her campaign unchallenged.
New York, however, was just one state; there were many state laws to be overcome before information about contraceptives could be widely disseminated. Even after the legal barriers began to fall, the policies of many agencies made it difficult to distribute information. Volunteer birth control clinics were often prevented from publicly advertising their existence. It was not until 1965 that the U.S. Supreme Court, in Griswold v. Connecticut, removed the obstacle to the dissemination of contraceptive information to married women. It took several more years before dissemination of information to unmarried women was legal in every state.
In Europe, the battle, led by the Netherlands, for the dissemination of information about birth control methods took place during the first half of the twentieth century. It was not until after World War II when, under Sanger's leadership, the International Federation for Planned Parenthood was organized, that a worldwide campaign to spread the message took place. At the beginning of the twenty-first century two major countries, Japan and Russia, still used abortion as a major means of family planning. In many countries, more than 60 percent of women of childbearing age are using modern contraceptives, including Argentina, Australia, Austria, the Bahamas, Belgium, Brazil, Canada, China, Costa Rica, Cuba, Denmark, Finland, France, Hungary, Italy, Jamaica, Korea, New Zealand, Netherlands, Norway, Spain, Sweden, Switzerland, Singapore, Thailand, the United Kingdom, and the United States. Many other nations are approaching this rate of success, but much lower rates exist throughout Africa (where Tunisia seems to the highest at 49 percent), in most of the former areas of the Soviet Union and the eastern block countries, and in much of Asia and Latin America. The International Planned Parenthood Federation does periodic surveys of much of the world which are regularly updated on its website (see also Bullough 2001).
Teenagers and Birth Control
With legal obstacles for adults removed, and a variety of new contraceptives available, the remaining problems are to disseminate information and encourage people to use contraceptives for effective family planning. One of the more difficult audiences to reach has been teenagers. Many socalled family life or sex education programs refuse to deal with the issue of contraceptives and instead emphasize abstinence from sex until married. Unfortunately, abstinence—or continence as it is sometimes called—has the highest failure rate of any of the possible means of birth control since there is no protection against pregnancy if the will power for abstinence fails. The result was a significant increase in the 1990s of unmarried teenage mothers, although not of teenage mothers in general. The highest percentage of teenage mothers in the years the United States has been keeping statistics on such matters came in 1957, but the overwhelming majority of these were married women. Although the number of all teenage mothers has been declining ever since, reaching new lows in 1999–2000, an increased percentage of them are unmarried. In fact, it is the change in marriage patterns and in adoption patterns, more than the sexual activity of teenagers, that led to public concern over unmarried teenage mothers. Since societal belief patterns have increasingly frowned upon what might be called "forced marriages" of pregnant teenagers, and the welfare system itself was modified to offer support to single mothers, at least within certain limits, teenagers who earlier might have given up their children for adoption decided to keep them.
Many programs have been introduced since the federal government in 1997 created the abstinence-only-until-marriage program to teach those teenagers most at-risk to be more sexually responsible. Only a few of the programs included a component about contraceptives since the federally funded programs do not provide for it, and only a few states such as California have provided funds to do so. Most of the programs emphasize self-esteem, the need for adult responsibility, and the importance of continence, all important for teenage development, but almost all the research on the topic, summaries of which are regularly carried in issues of SIECUS Report, has found that the lack of specific mention of birth control methods has handicapped their effectiveness in curtailing teenage pregnancy. This deficiency has been somewhat compensated for by the development of more efficient and easy-to-use contraceptives and availability of information about them from other sources.
Still, although contraception and family planning increasingly have come to be part of the belief structure of the U.S. family, large segments of the population remain frightened by, unaware of, or unconvinced by discussion about birth control. Unfortunately, because much of public education about birth control for much of the twentieth century was aimed at the poor and minorities, some feel that birth control is a form of racial suicide. It takes a lot of time and much education to erase such fears and success can only come when such anxieties can be put to rest.
See also: ABORTION; ABSTINENCE; ADOLESCENT PARENTHOOD; ASSISTED REPRODUCTIVE TECHNOLOGIES; BIRTH CONTROL: CONTRACEPTIVE METHODS; CHILDLESSNESS; FAMILY LIFE EDUCATION; FAMILY PLANNING; FERTILITY; INFANTICIDE; SEXUALITY EDUCATION; WOMEN'S MOVEMENTS
Bullough, V. L., and Bullough, B. Contraception. (1997) Buffalo, NY: Prometheus.
Bullough, V. L. (2001). Encyclopedia of Birth Control. Santa Barbara, CA: ABC-Clio.
Chandrasekhar, S. (1981). A Dirty, Filthy Book: The Writings of Charles Knowlton and Annie Besant on Reproductive Physiology and Birth Contol and An Account of the Bradlaugh-Besant Trial. Berkeley and Los Angeles: University of California Press.
Fryer, P. (1965). The Birth Controllers. London: Secker & Warburg.
Grossman, Atina. (1995). Reforming Sex: The German Movement for Birth Control and Abortion Reform. New York: Oxford University Press.
McLaren, Angus. (1990). A History of Contraception. London: Blackwell.
New York University. Margaret Sanger Papers Project. New York: New York University Department of History.
Population Information Program. Population Reports. Baltimore, MD: Johns Hopkins University School of Public Health.
Reed, J. (1978). From Private Vice to Public Virtue: The Birth Control Movement and American Society Since 1830. New York: Basic Books.
Riddle, John M. (1997). Eve's Herbs: A History of Contraception and Abortion in the West. Cambridge, MA: Harvard University Press.
Solway, R. A. (1982). Birth Control and the Population Question in England, 1877–1930. Chapel Hill: University of North Carolina Press.
Griswold v. Connecticut, 381 U.S. 479, 85 S.Ct. 1678, 14 L.Ed.2d 510 (1965).
International Planned Parenthood Federation. "Country Profiles." Available from http://www.ippf.org/regions/country.
VERN L. BULLOUGH
- Motherhood - Transition To Motherhood, Maternal Role In Childrearing, Extent And Effects Of Maternal Employment, Motherhood And Marital Quality
- Interparental Violence—Effects on Children - The Impact Of Exposure, Effects On Parent-child Relationships, Longer-term Effects, Cultural Diversity
- Birth Control - Contraceptive Methods
- Birth Control - Sociocultural And Historical Aspects
- Other Free Encyclopedias | <urn:uuid:84b0bb14-9723-4701-95d0-e40c8a5f18d1> | CC-MAIN-2019-47 | https://family.jrank.org/pages/163/Birth-Control.html | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670729.90/warc/CC-MAIN-20191121023525-20191121051525-00419.warc.gz | en | 0.951403 | 8,094 | 2.96875 | 3 |
In higher plants, ethylene is produced from L-methionine via the intermediates, S-adenosyl-L-methionine (SAM) and 1-aminocyclopropane-1-carboxylic acid (ACC) . The enzymes involved in this metabolic sequence are SAM synthetase, which catalyzes the conversion of methionine to SAM ; ACC synthase, which is responsible for the hydrolysis of SAM to ACC and 5-methylthioadenosine ; and ACC oxidase, which metabolizes ACC to ethylene, carbon dioxide and cyanide .
In 1978, an enzyme capable of degrading ACC was isolated from Pseudomonas sp. strain ACP . Since then, ACC deaminase has been detected in several fungi and yeasts [17, 18] as well as in bacterial strains [1, 2, 19-26] This enzyme cleaves the plant ethylene precursor, ACC, to produce ammonia and a-ketobutyrate. It has been proposed that microorganisms that contain the enzyme ACC deaminase can all promote plant growth since they act as a sink for ACC and thereby lower ethylene levels in a developing or stressed plant.
Ethylene is important for normal development in plants as well as for their response to stress . Ethylene is important during the early phase of plant growth; it is required by many plant species for seed germination, and the rate of ethylene production increases during germination and seedling growth . Ethylene also induces some plant defences including induced systemic resistance . However, high levels of ethylene can lead to inhibition of root elongation and the onset of senescence.
Strains of ACC deaminase-containing plant growth-promoting bacteria can reduce the amount of ACC within plant tissues that is detectable by HPLC, and hence the ethylene levels in plants are also lowered [5, 30]. As a consequence of this activity, ACC deaminase-containing plant growth-promoting bacteria promote root elongation in a variety of (ethylene sensitive) plants . In addition to lowering ethylene levels during plant development, ACC deaminase-containing plant growth-promoting bacteria decrease the levels of "stress ethylene" — the accelerated biosynthesis of ethylene associated with biological and environmental stresses and pathogen attack . Thus, the deleterious effects of flooding, high salt or drought on tomato plants [26, 33, 34] were decreased and the shelf life of the petals of ethylene sensitive cut flowers was prolonged, following treatment with ACC deaminase-containing plant growth promoting bacteria . Moreover, biocontrol strains of bacteria carrying ACC deaminase genes were better able to protect plants against various phytopathogens . In addition, canola seedlings grown in the presence of high levels of nickel, produced much less ethylene when the seeds were inoculated with an ACC deaminase-containing nickel-resistant plant growth-promoting strain that also produced indoleacetic acid and high levels of siderophores . In each of these situations, the ''stress ethylene'' produced and the damage caused by it, was reduced by the activity of ACC deaminase.
2. Plant growth-promoting bacteria and phytoremediation
While plants grown on metal contaminated soils might be able to withstand some of the inhibitory effects of high concentrations of metals within a plant, two features of most plants could result in a decrease in plant growth and viability. In the presence of plant inhibitory levels of metals, most plants (i) synthesize stress ethylene and (ii) become severely depleted in the amount of iron that they contain. Fortunately, ACC deaminase-containing plant growth-promoting bacteria may be used to relieve some of the toxicity of metals to plants. This can occur in two different ways (i) a decrease in the level of stress ethylene in plants growing in metal-contaminated soil and (ii) utilization by plants of complexes between bacterial siderophores and iron. Plant siderophores bind to iron with a much lower affinity than bacterial siderophores so that in metal-contaminated soils a plant is generally unable to accumulate a sufficient amount of iron (and often becomes chlorotic) unless bacterial siderophores are present.
In one study, in an effort to overcome the inhibition of plant growth by nickel, a bacterium was isolated from a nickel contaminated soil sample; the bacterium was (i) nickel-resistant, (ii) capable of synthesizing the auxin indoleacetic acid, (iii) able to grow at the cold temperatures (i.e., 5-10°C) that one expects to find in nickel contaminated soil environments in northern climes such as Canada, and (iv) an active producer of ACC deaminase . In order to isolate plant growth-promoting bacteria, all of the nickel-resistant bacterial isolates from a nickel-contaminated rhizosphere soil sample were tested for the ability to grow on minimal medium with ACC as the sole source of nitrogen . Nickel-resistant bacterial strains that were also able to grow on ACC were tested for the ability to produce siderophores and grow in cold temperatures. It was ascertained in laboratory tests, that the selected bacterium could promote plant growth (both roots and shoots) in the presence of high levels (1-6 mM) of nickel [22, 38].
Subsequently, a spontaneous siderophore overproducing mutant of this bacterium was selected. When the wild-type bacterium and the siderophore overproducing mutant were tested in the laboratory, both of them were observed to promote the growth of tomato, canola and Indian mustard plants in soil that contained otherwise inhibitory levels of nickel, lead or zinc. In addition, the siderophore overproducing mutant decreased the inhibitory effect of the added metal on plant growth significantly more than the wild-type bacterium. Metal contamination of soils is often associated with iron-deficiency of the plants grown in these soils . The low iron content of plants that are grown in the presence of high levels of metals generally results in these plants becoming chlorotic, since iron deficiency inhibits both chloroplast development and chlorophyll biosynthesis . Moreover, iron deficiency is a stress that causes the plant to synthesize stress ethylene. However, once they have bound iron, bacterial iron-siderophore complexes can be taken up by plants and thereby serve as an iron source for plants .
Thus, there is (at least) a dual role for bacteria that facilitate plant growth in metal-contaminated soils. On the one hand, the bacteria lower the level of stress ethylene in the plant thereby allowing it to develop longer roots and thus better establish itself during early stages of growth [11, 22]. On the other hand, the bacterium helps the plant to acquire sufficient iron for optimal plant growth, in the presence of levels of metals that might otherwise make the acquisition of iron difficult . When the siderophore overproducing mutant was tested in the field with nickel-contaminated soil, it was observed that both the number of seeds that germinated, and the size that the plants were able to attain was increased by 50-100% by the presence of the bacterium.
In another study, the common reed, Phragmites australis, a plant that has often been suggested for use in the phytoremediation of wetlands, was grown from seed in the laboratory in copper-contaminated soil. It was observed that the addition of a copper-resistant strain of Pseudomonas asplenii that had been genetically transformed to express a bacterial ACC deaminase gene significantly stimulated seed germination in the presence of high levels of copper where the native form of this bacterium had no stimulatory effect on seed germination (M.L.E Reed, B. Warner and B.R. Glick, submitted for publication). This is consistent with the notion that one reason that plant germination is often inhibited by the presence of high levels of soil contaminants is that a high level of ethylene is produced in seeds as a response to the contaminant. In this case, lowering seed ethylene levels so that they are no longer inhibitory should promote seed germination in the presence of a range of contaminants. Moreover, in these experiments, both the native and the transformed strain of Pseudomonas asplenii had a small but reproducible stimulatory effect on Phragmites australis root and shoot growth. This indicates that, at least for Phragmites australis, growth inhibition by copper is not solely a consequence of stress ethylene synthesis but rather likely mainly reflects copper inhibition of plant metabolic processes.
In a separate but similar study, the growth of canola roots and shoots in copper-contaminated soil was stimulated (significantly) but to the same extent by both native and ACC deaminase-transformed Pseudomonas asplenii (M.L.E. Reed and B.R. Glick, submitted for publication). In this case, the promotion of plant growth by both native and transformed Pseudomonas asplenii was attributed to the production of indoleactic acid by the added bacteria since this strain does not produce siderophores (and therefore could not be involved in providing iron to the plant). Only the transformed strain has ACC deaminase activity so that, as with Phragmites australis, decreasing ethylene levels is not a factor in growth promotion, and bacterial indoleacetic acid has previously been shown to be capable of directly promoting plant growth .
Given the extreme toxicity of arsenate to most plants and bacteria, the development of a phytoremediation scheme for the detoxification of arsenate-contaminated soils is not a simple matter. For example, unlike what has been observed with nickel, lead, copper and zinc, arsenate-resistant plant growth-promoting bacteria do not significantly protect plants from arsenate inhibition.
Polycyclic aromatic hydrocarbons (PAHs) are a particularly recalcitrant group of contaminants and are known to be highly persistent in the environment. In situ microbial remediation (i.e., bioremediation) has been attempted, but it is difficult to generate sufficient biomass in natural soils to achieve an acceptable rate of movement of hydrophobic PAHs (which are often tightly bound to soil particles) to the microbes where they can be degraded. In addition, relatively few microorganisms can use high molecular weight PAHs as a sole carbon source. More recently, there have been some improvements in the strategies for bacterial remediation of contaminated soil, including inoculation with bacteria that were selected from PAH contaminated sites, or supplementing contaminated soils with nutrients . However, there has only been limited success with these techniques. For bioremediation to be effective, the overall rate of PAH removal and degradation must be accelerated above current levels. One way to achieve this is to increase the amount of biomass in the contaminated soil. For this reason, the use of phytoremediation has received considerable attention [45-47].
Although using plants for remediation of persistent contaminants may have advantages over other methods, many limitations exist for the large-scale application of this technology. For example, many plant species are sensitive to contaminants including PAHs so that they grow slowly, and it is time consuming to establish sufficient biomass for meaningful soil remediation. In addition, in most contaminated soils, the number of microorganisms is depressed so that there are not enough bacteria either to facilitate contaminant degradation or to support plant growth. To remedy this situation, both degradative and plant growth-promoting bacteria may be added to the plant rhizosphere. Phytoremediation (where contaminant degradation is dependent solely on plants) is not significantly faster than bioremediation (where biodegradation of the organics is by microorganisms independent of plants) for removal of PAHs or TPHs (Total Petroleum Hydrocarbons) [48-50]. However, cultivating plants together with plant growth-promoting bacteria allows the plants to germinate in the presence of soil contaminants to a much greater extent than they would otherwise, and then to grow well under stressful conditions and accumulate a larger amount of biomass than plants grown in the absence of plant growth-promoting bacteria. In addition, the plant growth-promoting bacteria in these experiments significantly increase the amount of PAH or TPH that is removed from the soil. In heavily contaminated soils, plant growth-promoting bacteria increased seed germination and plant survival, increased the plant water content, helped plants to maintain their chlorophyll contents and chlorophyll a/b ratio, and promoted plant root and shoot growth. In the case of PAHs, this is most likely due to a combination of the direct promotion of plant growth by bacterial indoleacetic acid and a lowering of the concentration of stress ethylene by bacterial ACC deaminase (MLE Reed and BR Glick, submitted for publication). As a consequence of the treatment of plants with plant growth-promoting bacteria, the plants provide a greater sink for the contaminants since they are better able to survive and proliferate.
3. Phytoremediation with plants engineered to produce less ethylene
If ACC deaminase-containing plant growth-promoting bacteria, bound to plant roots, can act as a sink for some of the excess ACC produced as a consequence of environmental stress, then transgenic plants expressing a bacterial ACC deaminase gene should behave similarly and have a level of stress ethylene lower than non-transformed plants and consequently be less susceptible to the deleterious effects of the stress. In fact, in two separate studies transgenic plants expressing ACC deaminase were shown to proliferate to a much greater extent than the comparable non-transformed plants in the presence of metals [51, 52]. In one study, transgenic tomato plants expressing a bacterial ACC deaminase gene under the transcriptional control of two tandem 35S cauliflower mosaic virus promoters (constitutive expression), the rolD promoter from Agrobacterium rhizogenes (root specific expression) or the pathogenesis related PRB-1b promoter from tobacco, were compared to non-transgenic tomato plants in their ability to grow in the presence of cadmium, cobalt, copper, magnesium, nickel, lead or zinc and to accumulate these metals . These transgenic tomato plants acquired a greater amount of metal within the plant tissues, and were less subject to the inhibitory effects of the metals on plant growth than were non-transformed plants. Moreover, plants in which the ACC deaminase gene was under the transcriptional control of the rolD promoter were more resistant to the various metals than were the other transgenic plants.
Of course, there is no expectation that transgenic tomato plants will ever become part of a phytoremediation strategy. Nevertheless, the results that were obtained with tomato plants were intriguing and served as a starting point for the development of other transgenic plants with lowered ethylene concentrations that could be used as a component of a phytoremediation scheme. Thus, both transgenic tobacco (because of its potentially large leaf biomass) and canola (because of its previously demonstrated ability to be a moderate accumulator of numerous metals) were transformed with bacterial ACC deaminase genes under the transcriptional control of either the 35S or rolD promoters (Li, Q, Shah, S, Saleh-Lakh, S and GLick, BR, submitted for publication; Stearns, JC, Shah, S, Dixon, DG, Greenberg BM and Glick, BR, submitted for publication). When they were tested, in laboratory and greenhouse experiments, the transgenic tobacco and canola plants responded similarly to the presence of nickel in the soil to the previously constructed transgenic tomatoes. In all instances, transgenic plants in which the exogenous ACC deaminase gene was controlled by the rolD promoter demonstrated the highest level of resistance to growth inhibition by nickel. Moreover, rolD canola plants were also resistant to growth inhibition by high levels of salt in the soil (Sergeeva, E, Shah, S and Glick, BR, submitted for publication). Reminiscent of the protection from salt stress that is afforded by a salt-resistant plant growth-promoting bacterium . From these and other data, it appears that the behaviour of plants to a variety of stresses (metals, salt, flooding and pathogens), transformed with an exogenous ACC deaminase gene controlled by the rolD promoter, is similar to the way in which these plants respond when ACC deaminase-containing plant growth-promoting bacteria have colonized the plant roots. In both cases, root-associated ACC deaminase acts as a sink for ACC and thereby prevents the formation of growth inhibitory levels of stress ethylene. The major difference between these two scenarios is that, in addition to lowering ethylene levels, the bacteria can directly promote plant growth by providing the hormone indoleacetic acid or siderophores that help the plant to obtain a sufficient amount of iron. In fact, in laboratory and greenhouse experiments, ACC deaminase-containing plant growth-promoting bacteria generally are a greater stimulus to plant growth under a range of stressful and potentially inhibitory conditions than are ACC deaminase transgenes expressed exclusively in the roots. Unfortunately, as a consequence of a number of environmental factors (such as weather and the presence of predators in the soil) plant growth-promoting bacteria may not always be as persistent in field conditions as they are in the greenhouse. One way around this problem may be to select or engineer endophytic bacterial strains that promote plant growth by employing some of the above mentioned bacterial mechanisms [53-55]. Finally, it should also be noted that plant ethylene levels may be decreased through a variety of genetic manipulations (e.g., the use of antisense versions of ACC oxidase) other than ACC deaminase. .
In another study, the growth of canola plants expressing ACC deaminase under the control of two tandem 35S cauliflower mosaic virus promoters in the presence of arsenate was monitored . About 70-80% of the transgenic plants germinated while a maximum of 25-30% of the non-transformed plants germinated. Although a small ethylene pulse is important in breaking seed dormancy in many plants, too much ethylene can inhibit plant seed germination . In the presence of arsenate, ACC deaminase may enhance the process of germination by hydrolyzing any excess ACC that forms as a consequence of the stress, hence lowering the inhibitory level of ethylene in seeds. Transgenic canola also had much higher fresh and dry weights of roots and shoots, and higher leaf chlorophyll contents, than non-transformed canola grown in the presence of arsenate. Moreover, the addition of plant growth-promoting bacteria to the roots of transgenic canola plants grown in arsenate-contaminated soils helped the plants to grow to a slightly larger size. In this case, growth promotion is probably attributable to the bacterial indoleacetic acid. When biomass and rate of seed germination are considered in calculating arsenate accumulation, for each seed planted, transgenic canola expressing ACC deaminase takes up approximately eight times as much arsenate as non-transformed canola. This notwithstanding, considerable work remains to be done before a practical system for the phytoremediation of arsenic can be implemented.
Microbial activities exerted in the rhizosphere can influence plant growth, development and metabolism at both the root and the shoot levels, and can reduce the effects of various stresses. More specifically, traits that directly contribute to the promotion of plant growth and stress reduction include the synthesis of indoleacetic acid, siderophores and the enzyme ACC deaminase. Several strains of plant growth-promoting bacteria with different properties are already commercially available and are being used to increase crop yields.
Given the current reluctance on the part of many consumers worldwide to embrace the use of foods derived from genetically modified plants, it may be advantageous to use either natural or genetically engineered plant growth-promoting bacteria as a means to promote growth or reduce disease through induction of resistance, rather than genetically modifying the plant itself to the same end. Moreover, given the large number of different plants, the various cultivars of those plants and the multiplicity of genes that would need to be engineered into plants, it is not feasible to genetically engineer all plants to be resistant to all pathogens and environmental stresses. Rather, it seems more logical to engineer plant growth-promoting bacteria to do this job; the first step in this direction could well be the introduction of appropriately regulated ACC deaminase genes. While ethylene signalling is required for the induction of systemic resistance elicited by rhizobacteria, a significant increase in the level of ethylene is not. Hence, lowering of ethylene levels by bacterial ACC deaminase is not incompatible with the induction of systemic resistance. Indeed, some bacterial strains possessing ACC deaminase also induce systemic resistance.
Work from the author's laboratory was supported by grants from the Natural Science and Engineering Research Council, CRESTech (a province of Ontario Centre of Excellence), Ontario Hydro, and Inco. The following individuals contributed to the work reviewed here: Genrich Burd, George Dixon, XiaoDong Huang, Sibdas Ghosh, Bruce Greenberg, Varvara Grichko, Jiping Li, Qiaosi Li, Wenbo Ma, Shimon Mayak, Barbara Moffatt, Lin Nie, Cheryl Patten, Donna Penrose, Lucy Reed, Saleema Saleh-Lakha, Elena Sergeeva, Saleh Shah, Jennifer Stearns, Tsipi Tirosh, Chunxia Wang and Barry Warner.
Glick, BR (1995) The enhancement of plant growth by free-living bacteria. Can J Microbiol 41: 109-117.
Glick, BR; Patten, CL; Holguin, G and Penrose, DM (1999) Biochemical and genetic mechanisms used by plant growth promoting bacteria. Imperial College Press, London, UK, ISBN 1-86094-152-4.
Whipp, JM (1990) Carbon utilization, in Lynch JM, Ed. The rhizosphere. pp 59-97, John Wiley, Chichester, UK, ISBN 0471925489.
Bayliss, C; Bent, E; Culham, DE; MacLellan, S; Clarke, AJ; Brown, GL and Wood, J (1997) Bacterial genetic loci implicated in the Pseudomonas putida GR12-2R3-canola mutualism: identification of an exudate-inducible sugar transporter. Can J Microbiol 43: 809-18.
Penrose, DM and Glick, BR (2001) Levels of 1-aminocyclopropane-1-carboxylic acid (ACC) in exudates and extracts of canola seeds treated with plant growth-promoting bacteria. Can J Microbiol 47: 368-72.
Brown, ME (1974) Seed and root bacterization. Annu Rev Phytopathol 12: 181-97.
Davison, J (1988) Plant beneficial bacteria. Bio/technology 6: 282-286.
Kloepper, JW; Lifshitz, R and Zablotowicz, RM (1989) Free-living bacterial inocula for enhancing crop productivity. Trends Biotechnol 7: 39-43.
Lambert, B and Joos, H (1989) Fundamental aspects of rhizobacterial plant growth promotion research. Trends Biotechnol 7: 215-9.
Patten, CL and Glick, BR (1996) Bacterial biosynthesis of indole-3-acetic acid. Can J Microbiol 42: 207-20.
Glick, BR; Penrose, DM and Li, J (1998) A model for the lowering of plant ethylene concentrations by plant growth promoting bacteria. J Theor Biol 190: 63-8.
Yang, SF and Hoffman, NE (1984) Ethylene biosynthesis and its regulation in higher plants. Annu Rev Plant Physiol 35: 155-89.
Giovanelli, J; Mudd, SH and Datko, AH (1980) Sulphur amino acids in plants, in: Miflin, BJ, Ed, Amino acids and derivatives. The biochemistry of plants: a comprehensive treatise, Vol. 5, pp 435-505, Academic Press, New York, USA, ISBN 0-12-675416-0.
Kende, H (1989) Enzymes of ethylene biosynthesis. Plant Physiol 91: 1-4.
John, P (1991) How plant molecular biologists revealed a surprising relationship between two enzymes, which took an enzyme out of a membrane where it was not located, and put it into the soluble phase where it could be studied. Plant Mol Biol Rep 9: 192-4.
Honma, M and Shimomura, T (1978) Metabolism of 1-aminocyclopropane-1-carboxylic acid. Agric Biol Chem 42: 1825-31.
Honma, M (1993) Stereospecific reaction of 1-aminocyclopropane-1-carboxylate deaminase, in: Pech, JC; Latche, A and Balague, C, Eds. Cellular and molecular aspects of the plant hormone ethylene. pp 111-6 Kluwer Academic Publishers, Dordrecht, The Netherlands, ISBN 0-7923-2169-3.
Minami, R; Uchiyama, K; Murakami, T; Kawai, J; Mikami, K; Yamada, T; Yokoi, D; Ito, H; Matsui, H and Honma, M (1998) Properties, sequence, and synthesis in Escherichia coli of 1-aminocyclopropane-1-carboxylate deaminase from Hansenula saturnus. J Biochem 123: 1112-8.
Klee, HJ and Kishore, GM (1992) Control of Fruit Ripening and Senescence in Plants. United States Patent Number: 5,702,933.
Jacobson, CB; Pasternak, JJ and Glick, BR (1994) Partial purification and characterization of 1-aminocyclopropane-1-carboxylate deaminase from the plant growth promoting rhizobacterium Pseudomonas putida GR12-2. Can J Microbiol 40: 1019-25.
Campbell, BG and Thomson, JA (1996) 1-Aminocyclopropane-1-carboxylate deaminase genes from Pseudomonas strains. FEMS Microbiol Lett 138: 207-10.
Burd, GI; Dixon, DG and Glick, BR (1998) A plant growth promoting bacterium that decreases nickel toxicity in plant seedlings. Appl Environ Microbiol 64: 3663-8.
Belimov, AA; Safronova, VI; Sergeyeva, TA; Egorova, TN; Matveyeva, VA; Tsyganov VE, Borisov, AY; Tikhonovich, IA; Kluge, C; Preisfeld, A; Dietz, KJ and Stepanok VV (2001) Characterization of plant growth promoting rhizobacteria isolated from polluted soils and containing 1-aminocyclopropane-1-carboxylate deaminase. Can J Microbiol 47: 642-52.
Ghosh, S; Penterman, JN; Little, RD; Chavez, R and Glick, BR (2003) Three newly isolated plant growth-promoting bacilli facilitate the growth of canola seedlings. Plant Physiol Biochem 41: 277-81.
Ma, W; Sebestianova, S; Sebestian, J; Burd, GI; Guinel, F and Glick, BR (2003) Prevalence of 1-aminocyclopropane-1-carboxylate deaminase in Rhizobia spp. Antoine van Leeuwenhoek 83: 285-91.
Mayak, S; Tirosh, T and Glick, BR (2004) Plant growth-promoting bacteria that confer resistance to water stress in tomato and pepper. Plant Sci 166: 525-530.
Deikman, J (1997) Molecular mechanisms of ethylene regulation of gene transcription. Physiol Plant 100: 561-6.
Abeles, FB; Morgan, PW and Saltveit, Jr, ME (1992) Ethylene in plant biology. 2nd ed.: Academic Press New York, USA, ISBN 0-12-041451-1.
Stearns, J and Glick, BR (2003) Transgenic plants with altered ethylene biosynthesis or perception. Biotechnol Adv 21: 193-210.
Penrose, DM; Moffatt, BA and Glick BR (2001) Determination of 1-aminocyclopropane-1-carboxylic acid (ACC) to assess the effects of ACC deaminase-containing bacteria on roots of canola seedlings. Can J Microbiol 47: 77-80.
Hall, JA; Peirson, D; Ghosh, S and Glick BR (1996) Root elongation in various agronomic crops by the plant growth promoting rhizobacterium Pseudomonas putida GR12-2. Isr J Plant Sci 44: 37-42.
Morgan, PW and Drew, CD (1997) Ethylene and plant responses to stress. Physiol Plant 100: 620-30.
Grichko, VP and Glick, BR (2001) Amelioration of flooding stress by ACC deaminase-containing plant growth-promoting bacteria. Plant Physiol Biochem 39: 11-7.
Mayak, S; Tirosh, T and Glick, BR (2004) Plant growth-promoting bacteria that confer resistance in tomato to salt stress. Plant Physiol Biochem. 42: 565-572.
Nayani, S; Mayak, S and Glick, BR (1998) The effect of plant growth promoting rhizobacteria on the senescence of flower petals. Ind J Exp Biol 36: 836-9.
Wang, C; Knill, E; Glick, BR and Defago, G (2000) Effect of transferring 1-aminocyclopropane-1-carboxylic acid (ACC) deaminase genes into Pseudomonas fluorescens strain CHA0 and its gacA derivative CHA96 on their growth promoting and disease-suppressive capacities. Can J Microbiol 46: 898-907.
Penrose, DM and Glick, BR (2003) Methods for isolating and characterizing ACC deaminase-containing plant growth-promoting rhizobacteria. Physiol Plant 118: 10-15.
Ma, W; Zalec, K and Glick, BR (2001) Effects of the bioluminescence-labelling of the soil bacterium Kluyvera ascorbata SUD165/26. FEMS Microbiol Ecol 35: 137-44.
Mishra, D and Kar, M (1974) Nickel in plant growth and metabolism. Bot Rev 40: 395-452.
Imsande, J (1998) Iron, sulfur, and chlorophyll deficiencies: a need for an integrative approach in plant physiology. Physiol Plant 103: 139-44.
Bar-Ness, E; Chen, Y; Hadar, Y; Marschner, H and Romheld V (1991) Siderophores of Pseudomonas putida as an iron source for dicot and monocot plants. Plant Soil 130: 231-41.
Burd, GI; Dixon, DG and Glick, BR (2000) Plant growth-promoting bacteria that decrease heavy metal toxicity in plants. Can J Microbiol 46: 237-45.
Patten, CL and Glick BR (2002) The role of bacterial indoleacetic acid in the development of the host plant root system. Appl Environ Microbiol 68: 3795-801.
Suthersan, SS (2002) Natural and enhanced remediation systems. pp 239-267, CRC Press, Boca Raton, USA, ISBN 1566702828.
Cunningham, SD and Berti, WR (1993) Remediation of contaminated soils with green plants: an overview. In Vitro Cell Dev Biol 29P: 207-12.
Cunningham, SD; Berti, WR and Huang, JW (1995) Phytoremediation of contaminated soils. Trends Biotechnol 13: 393-7.
Cunningham, SD and Ow, DW (1996) Promises and prospects of phytoremediation. Plant Physiol 110: 715-9.
Huang, X-D; El-Alawi, Y; Penrose, DM; Glick, BR and Greenberg, BM (2004) Responses of plants to creosote during phytoremediation and their significance for remediation processes. Environ Pollut 130: 453-463.
Huang, X-D; El-Alawi, Y; Penrose, DM; Glick, BR and Greenberg, BM (2004) Multi-process phytoremediation system for removal of polycyclic aromatic hydrocarbons from contaminated soils. Environ Pollut 130: 465-476.
Huang, X-D; El-Alawai, Y; Gurska, J; Glick, BR and Greenberg, BM (2004) A multi-process phytoremediation system for decontamination of Persistent Total Petroleum Hydrocarbons (TPHs) from soils. Microchem J, in press
Grichko, VP; Filby, B and Glick, BR (2000) Increased ability of transgenic plants expressing the bacterial enzyme ACC deaminase to accumulate Cd, Co, Cu, Ni, Pb, and Zn. J Biotechnol 81: 45-53.
Nie, L; Shah, S; Burd, GI; Dixon, DG and Glick, BR (2002) Phytoremediation of arsenate contaminated soil by transgenic canola and the plant growth-promoting bacterium Enterobacter cloacae CAL2. Plant Physiol Biochem 40: 355-61.
Glick, BR (2004) Teamwork in phytoremediation. Nature Biotechnol 22: 526-527.
Barac, T; Taghavi, S; Borremans, B; Provoost, A; Oeyen, L; Colpaert, JV; Vangronsveld, J and van der Lelie, D (2004) Engineered endophytic bacteria improve phytoremediation of water-soluble, volatile, organic pollutants. Nature Biotechnol 22: 583-588.
Sessitsch, A; Coenye, T; Sturz, AV; Vandamme, P; Ait Barka, E; Wang-Pruski, G; Faure, D; Reiter, B; Glick, BR and Nowak, J (2005) Burkholderia phytofirmins sp. Nov., a novel plant-associated bacterium with plant beneficial properties. Int J Syst Evol Microbiol, in press.
Bewley, JD and Black, M (1985) Dormancy and the control of germination, in Seeds: physiology of development and germination, pp. 175-235, Plenum Press, New York, USA, ISBN 0-30-641687-5.
Was this article helpful?
What exactly is a detox routine? Basically a detox routine is an all-natural method of cleansing yourbr body by giving it the time and conditions it needs to rebuild and heal from the damages of daily life and the foods you eat and other substances you intake. There are many different types of known detox routines. | <urn:uuid:53c4a0bf-63fd-4ceb-ae4b-5059a06636f9> | CC-MAIN-2019-47 | https://www.europeanmedical.info/plant-growth/acc-deaminase-and-the-reduction-of-plant-ethylene.html | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670448.67/warc/CC-MAIN-20191120033221-20191120061221-00461.warc.gz | en | 0.910965 | 7,607 | 2.953125 | 3 |
Jonestown was a Marxist settlement in northwestern Guyana founded by Jim Jones of the Peoples Temple, mostly comprised of emigres from the Unites States. It gained lasting international notoriety in 1978, when nearly its whole population died in a mass suicide orchestrated by its founder.
Named after Jones, the settlement was founded in 1974, on his initiative about seven miles (11 km) southwest of the small town of Port Kaituma. It had a population of almost a thousand at its height, with most residents having lived there less than one year. There, Jones established what he described as a "socialist paradise," but reports soon reached the United States of harsh conditions, abuse, armed guards, and people being forced to remain in Jonestown against their will.
In November 1978, United States Congressman Leo Ryan and a group of reporters and relatives of Jones' followers visited Jonestown to investigate the alleged abuses. On November 18, while attempting to fly out, Ryan and four others were killed at an airstrip by members of the Peoples Temple. That evening, Jones led his followers in their mass murder-and-suicide. Over 900 men, women, and children perished, Jones among them.
After a period of abandonment, the Guyanese government allowed Hmong refugees from Laos to re-occupy the settlement for a brief period in the early 1980s, but after that it was deserted. It was mostly destroyed by a fire in the mid-1980s, and afterward left to decay and be reclaimed by the jungle.
The Peoples Temple was formed in Indianapolis, Indiana, during the mid-1950s and later became affiliated with the Disciples of Christ under Jones' leadership. Beginning in 1965, Jones and about 80 followers moved to Redwood Valley in Mendocino County, California, where he taught a blend of Christianity, hippie philosophy, and Marxist liberation theology.
In 1972, Jones moved his congregation to San Francisco and opened another church in Los Angeles, California. In San Francisco, Jones vocally supported prominent liberal-left political candidates. He was appointed to city commissions and was a frequent guest at political events. He also supported charity efforts and recruited new members from the ranks of the poor into his interracial and intercultural congregation.
Soon, scandals regarding tax evasion, drug use, and abuse of his members convinced Jones that the capitalist "establishment" was inevitably turning against him, and he began planning a relocation of the Temple outside the U.S. In 1974, he leased over 3,800 acres (15.4 km²) of jungle land from the Guyanese government. Jones encouraged all of his followers to move to Jonestown, also called the "Peoples Temple Agricultural Project," in 1977. Jonestown's population increased from 50 members in 1977 to more than 900 at its peak in 1978.
Many of the Peoples Temple members believed that Guyana would be, as Jones promised, a "socialist paradise." However, the life they found there was anything but ideal. Work was performed six days a week, from seven in the morning to six in the evening, with humid temperatures that often reached over 100 degrees Fahrenheit (38 degrees Celsius).
According to some, meals for the members often consisted of nothing more than rice and beans. As with other communist agricultural projects, children were raised communally and both children and adults also taught to address Jones as "Father" or "Dad." Up to $65,000 in monthly U.S. welfare payments to Jonestown residents was allegedly appropriated by Jones. Local Guyanese related stories about harsh beatings and a well into which Jones had misbehaving children thrown in the middle of the night.
Jones kept in communication with left-wing leaders and governments, and during a 1977 custody battle with the parents of an underage Jonestown resident, University of California radicals Angela Davis and Huey Newton communicated via radio-telephone to the Jonestown crowd, urging them to hold strong against the "conspiracy." Jones made radio broadcasts stating "we will die unless we are granted freedom from harassment and asylum." Guyana Deputy Minister Ptolemy Reid finally assured Jones' wife Marceline that Guyanese Defense Forces would not invade Jonestown.
Medical problems such as severe diarrhea and high fevers struck half the community in February 1978. According to the New York Times, copious amounts of drugs such as Thorazine, sodium pentathol, chloral hydrate, Demerol, and Valium were administered to Jonestown residents, with detailed records being kept of each person’s drug regimen.
Various forms of punishment were used against members considered to be serious disciplinary problems, and some members who attempted to run away were allegedly drugged to the point of incapacitation. Increasingly alienated from the U.S. and looking to nations like Cambodia, North Korea, and the Soviet Union as models, Jones reportedly had armed guards patrolling the compound day and night both to protect the compound from the CIA and to prevent unauthorized travel by Jonestown's own residents.
Jones' recorded readings of the news were part of the constant broadcasts over Jonestown's tower speakers. Jones' news readings usually portrayed the United States as a "capitalist" and "imperialist" villain, while casting "socialist" leaders, such as former North Korean dictator Kim Il-sung and Joseph Stalin in a positive light.
On October 2, 1978, Feodor Timofeyev from the Soviet Union embassy in Guyana visited Jonestown for two days and gave a speech. Jones stated before the speech that "For many years, we have let our sympathies be quite publicly known, that the United States government was not our mother, but that the Soviet Union was our spiritual motherland."
Convinced that the U.S. and the capitalist world might attempt to destroy his socialist experiment, Jones preached an increasingly apocalyptic vision and began rehearsing for a mass suicide in case of a CIA attack. According to former Jonestown member Deborah Layton:
Everyone, including the children, was told to line up. As we passed through the line, we were given a small glass of red liquid to drink. We were told that the liquid contained poison and that we would die within 45 minutes. We all did as we were told. When the time came when we should have dropped dead, Rev. Jones explained that the poison was not real and that we had just been through a loyalty test. He warned us that the time was not far off when it would become necessary for us to die by our own hands.
Reports of these and other abuses began reaching the U.S. through relatives and Peoples Temple members who succeeded in leaving Jonestown. Charges included human rights violations, false imprisonment, the confiscation of money and passports, mass suicide rehearsals, and the murder of seven attempted defectors. Relatives became increasingly concerned that members were being held against their will or had been brainwashed or drugged into submission by an increasingly unstable Jones.
On Tuesday November 14, 1978, Congressman Leo Ryan, a Democrat from San Francisco, flew to Guyana along with a team of 18 people consisting of government officials, media representatives, and members of the anti-Jones group "Concerned Relatives of Peoples Temple Members." The group also included Richard Dwyer, Deputy Chief of Mission of the U.S. Embassy to Guyana at Georgetown, believed by some to have been a CIA officer.
After the delegation's arrival in Guyana, Jones' lawyers in Georgetown, Mark Lane and Charles Garry, refused to allow Ryan's party access to Jonestown. Ryan had previously visited the Temple office in the suburb of Lamaha Gardens, but his request to speak to Jones by radio was denied. On Friday, November 17, Ryan informed Lane and Garry that he would leave for Jonestown at 2:30 p.m., regardless of Jones' schedule or willingness. Accompanied by Lane and Garry, Ryan flew to Port Kaituma airstrip, six miles (10 km) from Jonestown. Only Ryan and three others were initially accepted into Jonestown, but the rest of Ryan's group was allowed in after sunset.
At first the visit was cordial. Jones organized a reception and concert for the Ryan delegation, and its members were given guided tours around the community. Some of the residents were reportedly angry with the visitors, seeing Ryan as a hostile investigator in cahoots with the CIA and resenting the presence of reporters and relatives who were perceived as hostile to the community. Jones reportedly commented that he felt like a dying man and ranted about government conspiracies and martyrdom. At some point in the evening, two Peoples Temple members, Vernon Gosney and Monica Bagby, passed a note to addressed to Ryan, reading "Please help us get out of Jonestown."
That night the primary Ryan delegation (Ryan, his legal adviser Jackie Speier, U.S. embassy official Dwyer, and Guyanese official Neville Annibourne) stayed in Jonestown. Members of the press corps and the "Concerned Relatives" went to Port Kaituma and stayed at a small café. Meanwhile, back in Jonestown, feelings of an adversarial confrontation were rising, and in the early morning of November 18, more than a dozen Temple members walked out of the colony in the opposite direction from Port Kaituma.
When the reporters and the Concerned Relatives group arrived back at Jonestown, Jones' wife Marceline gave a tour of the settlement for the reporters. However, a dispute arose when the reporters insisted on entering the home of an elderly black woman, and other residents accused the press of being racist for trying to invade her privacy.
Jim Jones, who was reportedly severely addicted to drugs, woke late on the morning of November 18, and the NBC crew confronted him with Vernon Gosney's note. Jones angrily declared that those who wanted to leave the community would lie and would attempt to "destroy Jonestown." Then two more families stepped forward and asked to be escorted out of Jonestown by the Ryan delegation. Jones reportedly remained calm and gave them permission to leave, along with some money and their passports, telling them they would be welcome to come back at any time. That afternoon Jones was informed that two other families had defected on foot.
While negotiations proceeded, emotional scenes developed, as some family members wished to leave and others, determined to stay, accused them of betrayal. Al Simon, an Amerindian member of the Peoples Temple, walked toward Ryan with two of his small children in his arms and asked to go back with them to the U.S., but his wife Bonnie denounced her husband over Jonestown's loudspeaker system. Meanwhile, enough people had expressed a desire to leave on Ryan's chartered plane that there would not be room for them in one trip.
Ryan attempted to placate Jones by informing Jones' attorney that he would issue a basically positive report, noting that none of the people targeted by the Concerned Parents group wanted to leave Jonestown. Jones, however, reportedly had grown despondent, declaring that "all is lost."
Ryan planned on sending a group back to the capital of Georgetown and staying behind with the rest until another flight could be scheduled. Then Temple member Don Sly attacked Ryan with a knife, allegedly on Jones' orders. Although the congressman was not seriously hurt in the attack, he and Dwyer realized that both the visiting party and the defectors were in danger. Shortly before departure, Jones loyalist Larry Layton asked to join the group that was leaving, but other defectors voiced their suspicions about his motives, which Ryan and Speier disregarded.
Ryan's party and 16 ex-Temple members left Jonestown and reached the nearby Port Kaituma airstrip at 4:30 p.m., where they planned to use two planes (a six-passenger Cessna and a slightly larger Twin Otter) to fly to Georgetown. Before the Cessna took off, Layton produced a gun he had hidden under his poncho and started shooting at the passengers. He wounded Monica Bagby and Vernon Gosney, and was finally disarmed after wounding Dale Parks.
About this time, a tractor appeared at the airstrip, driven by members of Jones' armed guards. Jones loyalists opened fire while circling the plane on foot. Ryan was shot dead along with four journalists. A few seconds of the shooting were captured on camera by NBC cameraman Bob Brown, whose camera kept rolling even as he was shot dead. Ryan, three news team members, and 44 year old Jonestown defector Patricia Parks were killed in the few minutes of shooting. Jackie Speier was injured by five bullets. Steve Sung and Anthony Katsaris also were badly wounded. The Cessna was able to take off and fly to Georgetown, leaving behind the damaged Otter, whose pilot and co-pilot also flew out in the Cessna. The Jonestown gunmen, meanwhile, returned to the settlement.
Journalist Tim Reiterman, who had stayed at the airstrip, photographed the aftermath of the violence. Dwyer assumed leadership at the scene, and at his recommendation, Layton was arrested by Guyanese state police. The ten wounded and others in their party gathered themselves together and spent the night in a café, with the more seriously wounded cared for in a small tent on the airfield. A Guyanese government plane came to evacuate the wounded the following morning.
Six teenage defectors attempted to hide in the adjacent jungle until help arrived and their safety was assured, but became lost for three days and nearly died, until they were found by Guyanese soldiers.
A great deal remains either unknown or controversial concerning what happened in Jonestown on the evening of November 18, 1978. What is known for certain is that 909 people died in Jonestown that night, including 287 children. Most of the dead apparently died from ingesting grape-flavored Flavor Aid, poisoned with Valium, chloral hydrate, Penegram, and presumably (probably) cyanide.
About 45 minutes after the Port Kaituma shootings, the airstrip shooters, numbering about nine, arrived back in Jonestown. Their identities are not all certainly known, but most sources agree that Joe Wilson (Jones’ head of security), Thomas Kice Sr., and Albert Touchette were among them.
In the early evening, Jones called a meeting under the Jonestown pavilion. A tape recording found at the scene recorded about 43 minutes of Jonestown's end. When the community gathered, Jones told the assembly: "They'll torture our children, they'll torture some of our people here, they'll torture our seniors. We cannot have this." He then put into effect the mass suicide plan the group had previous rehearsed, saying: "All it is, is taking a drink to take… to go to sleep. That's what death is, sleep." Several community members also made statements that hostile forces would convert captured children to fascism and supported the decision to commit "revolutionary suicide." Jones argued with one Temple member who actively resisted the decision for the whole congregation to die: Christine Miller is heard objecting to mass death and calling for an airlift to Russia. After several exchanges, in which Ryan explained that "the Congressman is dead," she backed down, apparently after being shouted down by the crowd.
The children were poisoned first, sometimes accompanied by their parents. The poisoned drink was squirted into children's mouths with plastic syringes. Survivor Stanley Clayton, who was assisting already-poisoned children, reports that some children resisted and were physically forced to swallow by guards and nurses. According to Clayton, the poison caused death within about five minutes. After consuming the drink, people were escorted away and told to lie down along walkways and areas out of view of the people who were still being dosed.
In response to reactions of seeing the poison take effect, Jones commanded: "Stop this hysterics. This is not the way for people who are socialists or Communists to die. No way for us to die. We must die with some dignity."
Four people who were intended to be poisoned managed to survive. They were:
Three more survivors were brothers Tim and Mike Carter (30 and 20), and and Mike Prokes (31) who were given luggage containing $500,000 U.S. currency and documents, which they were told to deliver to Guyana’s Soviet Embassy, in Georgetown. They soon ditched most of the money and were apprehended heading for the Temple boat at Kaituma. One document read: "The following is a letter of instructions regarding all of our assets (balances totaling in excess of $7.3 million) that we want to leave to the Communist Party of the Union of Soviet Socialist Republics."
Before the killing began, Jones' two lawyers, Charles Garry and Mark Lane, talked their way past Jonestown's armed guards and made it to the jungle, eventually arriving in Port Kaituma. While in the jungle near the settlement, they heard cheering, then gunshots. This observation concurs with the testimony of Clayton, who heard the same sounds as he was sneaking back into Jonestown to retrieve his passport.
According to Guyanese police, Jones and his immediate staff, after having successfully carried out the "revolutionary suicide," came together and killed themselves and each other with handguns, after giving a final cheer. However, only two people were reported to have gunshot wounds: Jim Jones and Annie Moore—one wound each.
The first headlines reporting the event claimed that 407 Temple members had been killed and that the remainder had fled into the jungle. This death count was revised several times over the next week until the final total of 909 was reached.
The sheer scale of the killings, as well as Jones' socialist leanings, led some to suggest CIA involvement. In 1980, the House Permanent Select Committee on Intelligence investigated the Jonestown mass suicide and announced that there was no evidence of CIA involvement at Jonestown. Most government documents relating to Jonestown, however, remain classified.
Guyanese Chief Medical Examiner Dr. Leslie Mootoo and his assistants examined 137 bodies soon after the tragedy. He concluded that all but two or three of these bodies were victims of murder. However, no determination was made as to whether those injections initiated the introduction of poison or whether they were so-called "relief" injections to quicken death and reduce suffering from convulsions from those who had previously taken poison orally. Mootoo and American pathologist Dr. Lynn Crook determined that cyanide was present in some of the bodies, while analysis of the contents of the vat revealed tranquilizers and two poisons: potassium cyanide and potassium chloride. He also reported that many needles and syringes were found on tables and on the ground around the area, many with bent or broken needles, suggesting struggles among unwilling adults. Plastic cups, Flavor-Aid packets and syringes, some with needles and some without, littered the area where the bodies were found.
However, only seven bodies of 913 were autopsied, including Jim Jones, Annie Moore, and Dr. Lawrence Schact. Annie Moore left a note which in part stated: "We died because you would not let us live in peace." Marceline Jones left a note indicating that she wished to "leave all bank accounts in my name to the Communist Party of the USSR. I especially request that none of these are allowed to get into the hands of my adopted daughter, Suzanne Jones Cartmell."
A number of inconsistencies in the testimony and evidence of the Jonestown tragedy have raised various suspicions and conspiracy theories:
Larry Layton was found not guilty of murder by a Guyanese court, employing the defense that he was "brainwashed." He was later extradited to the U.S. and put in prison on lesser charges. He is the only person ever to have been held responsible for the events at Jonestown. He was paroled 24 years later, in 2002.
The area formerly known as Jonestown was at first tended by the Guyanese government, which allowed its re-occupation by Hmong refugees from Laos, for a few years in the early 1980s, but it has since been altogether deserted. It was mostly destroyed by a fire in the mid-1980s, after which the ruins were left to decay. The buildings and grounds were not taken over by local Guyanese people because of the social stigma associated with the murders and suicides.
The Jonestown tragedy created a wave of fear about "cults." As a result, several new religious movements with no history of violence reported increased persecution, anti-cult movements received thousands of inquiries from concerned relatives, and a new wave of illegal "deprogramming" attempts were directed at NRM members in an effort to "save" them from the dangers of alleged brainwashing and possible mass suicide.
All links retrieved June 5, 2018.
New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license that can reference both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation. To cite this article click here for a list of acceptable citing formats.The history of earlier contributions by wikipedians is accessible to researchers here:
The history of this article since it was imported to New World Encyclopedia: | <urn:uuid:ad4374cf-2e21-4991-974c-22fc379872d3> | CC-MAIN-2019-47 | https://www.newworldencyclopedia.org/entry/Jonestown | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668539.45/warc/CC-MAIN-20191114205415-20191114233415-00540.warc.gz | en | 0.9827 | 4,460 | 2.609375 | 3 |
When patients are more involved in their own healthcare, including the management of their personal health data, they become active and more responsible participants in their care. Providing a platform for personal health records may yet become the foundation of an effective and efficient healthcare system that is oriented more toward the welfare of the client. The Philippine National Health Information Infrastructure plays a crucial role in making personal health records happen in the Philippines. Personal health record systems may be the impetus that spurs facilities to digitize their operations and allow clients to have electronic copies of their health records.
The advent of Internet technology has made it easier for consumers to participate in the management of their own health records. Add to this the proliferation of person-based devices that allow consumers to hold electronic copies of their health data (e.g., on their cellphones, PDAs, and flash drives), and the situation seems full of possibilities. Traditionally, health records have been maintained in hospitals, and patients could access them only on a per-request basis (often in the form of a photocopy of the original).
But the new and emerging technologies (the Internet, the eXtensible Markup Language or XML, flash memory, mobile computing, etc.) are already enabling consumers to store copies of their own CT/MRI scans and laboratory results on their personal storage devices, which they can bring with them as they transfer from one facility to another. As more modalities digitize health data and make it portable, we foresee consumers increasingly demanding access to their personal records and asserting their right to manage their own data. We examine these developments further in the following sections.
A Present with Possibilities
The twenty-first century has been called the information age, and it promises to revolutionize the health sector. With the lowering cost of hardware and the advent of open systems interfaces, it is now possible for individuals to participate more actively in transactions that concern their private information. In no other domain is this participation more relevant than in health, where intimate and personal data need to be exchanged with providers and healthcare facilities, to effect the most cost-effective approach to the solution of any health issue.
Even today, conventional health record management in the Philippines mostly follows a facility-centric or provider-centric model, where the data are stored in paper records in the clinic or in the hospital. This system essentially constrains patients into returning to the same provider every time, because their data is most complete in that facility. If they decide to transfer to another facility, they have to request for a photocopy of their record from the provider. This photocopy is often incomplete; and for health data that are not text-based such as radiology and pathology images, the second facility usually requests for repeat examinations, so they can have a copy of their own. This results in greater medical expense for the patient.
However, recent developments in health information technology are changing all these. All of them increase the ability of patients/consumers to participate in the management of their health data. First is the wide availability of person-based computing devices, such as cellphones or any device, with flash-based memory. These devices have matured to the point that their storage and computing power give them as much capability as desktop computers. Aside from cellphones, there are flash-based memory devices (such as Universal serial bus [USB] thumb drives), which can store volumes of digital information. The USB port specification has opened numerous possibilities for health data capture. The USB drives enable patients to have portable copies of their own health data, text-based or multimedia-based, that they can bring to any healthcare provider or facility.
Second is the increasing digitization of health data. With the advent of electronic medical records, automated laboratory machines, and digital radiology equipment, much of what used to be written only on paper or printed in imaging plates, are now also being made available in digital format. These digital files can be transferred from one facility to the other, without any degradation of quality. With such data, it is possible to have an examination done in one healthcare facility and viewed in another, regardless of the second healthcare facility’s infrastructure.
Third is the increasing availability of Internet connectivity. The Internet allows for the low-cost exchange of data between facilities, and between patients and facilities in a seamless manner. It is essentially possible for patient data to be in various places of the Internet and to be consolidated into one comprehensive record in a provider’s clinic.
The Issues to Reflect Upon
Certainly, there are numerous issues that accompany such a dramatic shift from the conventional healthcare, which we must dwell in, before going in raptures over these innovations. Foremost would be the security and integrity of the data as they are stored in media, that are prone to tampering. Another would be the issue of identification and authentication, both of the patient and of the provider, who would be accessing the data. Resistance from providers is also expected as personal health records would require them to invest heavily on technology infrastructure. Last but not the least, integration of the health records, parts of which may be in different facilities aside from the patient, remain a serious challenge to personal health record systems.
Here in this article we propose a framework for the design and implementation of a state-of-the-art personal health record systems in the Philippines. This framework allows policy makers to manage the transition from conventional to electronic health records in a secure yet cost-effective way; using a patient-centric approach. If the bottomline is greater participation of patients in their own care, a thorough analysis of the strengths and weaknesses of personal health records must be made to facilitate the inevitable shift from provider- and facility-centric care to one that revolves around the one who truly counts …. the patient.
However, before we analyze further, we must clearly understand what a personal health record entails. Contrary to common belief, the PHR does not contain all of a patient’s health data. Rather it is a subset of data that can give healthcare providers a more comprehensive and longitudinal perspective of the patient’s care. The ASTM International (formerly American Society for Testing and Materials) has published Standard E 2369 Continuity of Care Record (CCR), which might throw some light in this context. It reads, “The standard provides a core data set of the most relevant administrative, demographic, and clinical information facts about a patient’s healthcare, covering one or more healthcare encounters. The CCR data set includes a summary of the patient’s health status (e.g., problems, medications, allergies) and basic information about insurance, advance directives, care documentation, and the patient’s care plan.”
In Search of a Comprehensive PHR for the Philippines
The Electronic Commerce Act of 2000 provides much of the policy framework for electronic-based transactions in the Philippines and would include electronic health records. From the recent Electronic Health Records Philippines 2006 conference, we can find several applications which had some components of an electronic health record. The Community Health Information Tracking System (CHITS) is a Philippines government health center-based information system designed to manage the administrative and clinical tasks of a local health center. The Integrated Surgical Information System (ISIS) is a hospital-based patient registry that manages data about surgical patients at the Philippine General Hospital. The Blood Bank Information Management Package (BLIMP) manages the donor information system and transfusion services at the UP-PGH Blood Bank. At the Riverside Medical Center in Bacolod, Philippines, a pharmacy information system called HYSYPTO has been deployed, where prescriptions are filled up by the pharmacy right after doctor’s orders are scanned on the floor. However, none of these systems may be called personal health records since all of these are facility-based and do not share any information to the patients. Based on the author’s review of healthcare literature and of the local environment, there is no end-to-end model right now for personal health records in the Philippines. An end-to-end model allows for the transferring and viewing of health data seamlessly and securely, from one facility to another, with patients serving as the bearers of the data.
In radiology, a few imaging facilities have procured DICOM -compliant equipment, which could output digital data. Upon request, these facilities have provided their patients CD-ROMs of their images, which patients can view on a personal computer.
Laboratories on the other hand often give out test results in paper printout format. Having no actual system to receive electronic data from laboratories, only a few facilities provide electronic data to their patients in the Philippines. For the few who are able to output electronic data, they often deliver results to providers via fax or via e-mail, in portable document format (PDF). Others allow patients to print out their lab results through a web interface. Presently, there is no laboratory facility that offers electronic data to patients in the raw. The issue foremost among the operators of lab facilities is the integrity of the data once they are in the hands of patients. And there is no agreed upon way of assuring that the laboratory data would not be tampered, after transferring it to the patient.
As a matter of conventional practice, health facilities, especially providers’ clinics, do not give electronic data, in whole or in part, to their patients. Most of the time, the data are in paper format and they comply with a certain template such as clinical abstracts or medical certificates. Rarely will a provider supply a full medical record to a patient. More comprehensive and complex documents such as operative records and surgical techniques can only be obtained from hospitals, where the procedures have been performed.
At the policy side, a partnership between the Department of Health, Department of Science and Technology, Philippine Health Insurance Corporation, University of the Philippines Manila, and the Philippine Medical Informatics Society was formalized on 10 October 2005. The partnership, called the Philippine National Health Information Infrastructure or PNHII, aimed to consolidate the standards for health information in the Philippines. The PNHII focuses on four key areas: capability-building, standards and interoperability, connectivity, and test beds.
Succinctly, we can assert that the technological infrastructure to support personal health records seem to be in place already in the Philippines, but the awareness of consumers and openness of providers to the portability of data leaves much to be desired. The issues involved are multi-faceted and needs multi-stakeholder involvement.
The Role of Stakeholders
(i) The Health Consumers
Currently, most Filipino patients are not aware that it is their right to have copies of their health data and that they own the data even if they are in paper format, and even if the paper is being managed by hospitals or clinics. This knowledge is crucial in involving patients/consumers in the care of their own health data. Without an acknowledgement of this right, patients will default the care of their health records to health facilities, who serve as caretakers of their personal health data. Unless consumers realize that they can manage their own health data, the concept of personal health record systems will not prosper in the Philippines.
(ii) Providers and Facilities
The healthcare providers and facilities must accept that the patient owns the digital data, but at the same time they are compelled to retain and manage copies of the data internally. They should also accept that the data must be supplied to the patient when demanded. In effect the provider/facility manages the data, but it is the patient who owns the data within. These concepts must be made clear to all parties for personal health records, to be accepted by all involved stakeholders.
However, current local practice puts the control and power over health records to healthcare providers and facilities. Shifting the current systems from manual to electronic within a facility itself is formidable; stiffer resistance is expected for transformation to electronic personal health records. Most of the resistance will be encountered from healthcare providers who will require additional equipment to view data from personal health records. On the other hand, it is also possible for a consumer-led trend towards personal health records to push the health facilities to invest on infrastructure. It will depend on generating a critical mass of end users and the establishment of a sustainable ecosystem to make the transition as seamless as possible.
Since personal health records entail strict privacy due to the potentially sensitive nature of their data, there are substantial security issues that need to be addressed in order to have successful personal health record systems in the country. If stakeholders do not trust the integrity of a personal health record, its utility and value decrease. There are four security components that must be addressed by a trustworthy personal health record system. First is the clear identification of the stakeholders (that the system can identify external actors [persons, other systems] before interaction). All parties involved in the accrual of digital health data must be unambiguously identified � the patient, the facility, the technician, the examining physician, the requesting physician, to name a few. This means a persistent (central or distributed) mechanism for storing authoritative identifiable data must be kept in an accessible place.
A second security component dependent on unambiguous identification is authentication. Authentication is the process by which a previously identified entity is validated to be who the identified person really is. In conventional health record systems, the authenticating process is performed by medical records staff who keep the paper-based records. These personnel are presently in charge of identifying and authenticating their patients correctly.
A partnership between the Department of Health, Department of Science and Technology, Philippine Health Insurance Corporation, University of the Philippines Manila, and the Philippine Medical Informatics Society was formalized on 10 October 2005. The partnership, called the Philippine National Health Information Infrastructure or PNHII, aimed to consolidate the standards for health information in the Philippines.
A third security feature is non-repudiation (extent by which an application makes it impossible for an actor to deny that a transaction has taken place). This follows the principle that disallows an entity that has previously participated in an electronic transaction, to refute the transaction. Current advanced devices have already integrated this into their system by using Write Once Read Many (WORM) hard drives. Software based non-repudiation techniques are also available and may be the most practical solutions when data needs to be transferred from one facility to the next. Authorization refers to the access and user privileges for authenticated users/applications.
This security component ascertains whether an entity has the privilege to view the electronic health record. Assigning authority is primarily a social issue but once established, it can be implemented into security systems.
A Use Case Scenario for a Personal Health Record Here I have conceived of a proposed case scenario for the use of a personal health record: Upon the request of his family physician, a patient goes to a radiology facility to have his CT scan taken. After the examination, the patient requests for a copy of his CT.
Consent and waiver forms are signed and the patient’s USB drive is loaded into the healthcare facility’s personal computer. The patient’s digital CT scans are signed by the facility and transferred to the USB drive. Once the data is in the USB drive, any alterations of the file will be detected, and any authorized user will be informed of such.
Upon reporting to the healthcare provider’s clinic, the patient supplies the USB drive to the provider. Using a PNHII supplied software or proprietary viewing software compliant with PNHII specifications, the provider is able to view the patient’s CT scans with full view of the digital signature of the facility from where the scans were made. Any alterations in the scans will be detected and the provider will be informed accordingly.
Electronic documents such as clinical abstracts or medical certificates may also be transferred to the patient’s USB drive as long as the security process is followed. Doubts about file integrity can be resolved by sending the file’s fingerprint to a central indexing system or to the originating facility, where it can be compared to a previous fingerprint in file.
Based on the current healthcare practices and available technology and capacity, I have recommended the following framework for personal health records development in the Philippines. All of the components of the framework should be overseen by the PNHII.
At the semantic level, all messages exchanged between facilities must be consistent across systems. This means the ‘standards and interoperability’ component of the PNHII must take the lead in determining the vocabulary for the messages as well as the syntax. The messages must comply with the 3S, as listed in the figure below:
For syntax, the eXtensible Markup Language (XML) has become the lingua franca. It is platform-independent and has established itself as a neutral data format that is acceptable to many participating systems.
Once the messages can be constructed in a semantically and syntactically consistent manner, it must be wrapped within a security layer (similar to a secure envelope) prior to the transfer. This is where a certification authority using the public key infrastructure will play a role. The public key infrastructure employs a two-step authentication process which assures that messages exchanged between two trusting entities are protected from alteration. In addition, it also provides a framework for identifying and authenticating the sender and recipients of secure messages. Stakeholders involved in the transfer of the message (including the patient) will require a public and private key that they will use to sign the data. Since most of the transfers will be from facility (radiology, laboratory or clinic) to patient, these healthcare entities are the ones who should obtain a key.
What is crucial at the facility level is the identification and authentication of the patient and the signing of the unaltered health data (CT, MRI, lab, etc) with the facility’s private key and the patient’s public key. Identifying and authenticating persons at the facility level may be done on a federated, distributed basis, by employing a web of trust. Central to the adoption of personal health records will be the ease of viewing the data using freely available software. The test beds component of the PNHII will make sure that free reference implementations are available for end users and developers.
Education and awareness campaigns must be undertaken to ensure smooth implementation. The shift from conventional health record systems to personal health records is a giant leap from current reality and is prone to failure unless deliberate attempts to bridge the gap slowly and in measured steps are made. An effective way to overcome resistance from the healthcare providers and facilities is to offer the benefits of automation in lowering the cost of operating the clinic or the facility. Focusing on patient empowerment also helps in convincing providers to make the necessary investments.
When patients are more involved in their own healthcare, including the management of their personal health data, they become active participants in their healthcare and become more responsible. Providing a platform for personal health records may yet become the foundations for an effective and efficient healthcare system, that is oriented more to the welfare of the clients.
The Philippine National Health Information Infrastructure plays a crucial role in making personal health records happen in the Philippines. Personal health record systems may be the impetus that will spur facilities to digitize their operations, and allow clients to have electronic copies of their health records. Â
CancerGrid Project: Using ICT to Tackle Cancer
According to the WHO estimates, Cancer affects 13 percent of people across the globe. It is one of the menacing threats afflicting the humanity. The recently launched CancerGrid project can provide a solution to this growing health concern.
The project is perhaps the first ever large scale application of computer grid technology for finding and developing new anti- cancer agents. It is a concerted effort by academia and industry to tackle one of the pressing medical challenges of our times.
The multidisciplinary research project is funded by the EU and is comprised of a 10-member, SME-led consortium. The partners of this ambitious project are AMRI Hungary, Inte: Ligand, Tallinn University of Technology from Estonia, GKI Economic Research Co. from Hungary,Computer and Automation Research Inst., Hungarian Academy of Sciences, University of Jerusalem from Israel, DAC from Italy, University of Bari from Italy and University Pompeu Fabra from Spain.
Here it deserves a mention that in the human genome, there is an estimated subset of approximately 3000 genes which encode proteins, including novel cancer-related targets, which could be regulated with drug-like molecules. The partners in the project will work towards developing specific chemical compound collections, which are also called chemical libraries, that will interact with these cancer proteins.
The project endeavours to develop and refine methods for the enrichment of molecular libraries for facilitating the discovery of potential anti-cancer agents. It will strive to amalgamate new technologies with biology to enrich molecular libraries and increase the likelihood of discovering potential cancer-curing drugs. The project will use the resources of grid computing to enable the researchers to tap into a potent network of interconnected workstations, which are able to process large chunks of data and reduce computational time.
Using grid-aided computer technology, the likelihood of finding innovative anti-cancer leads will substantially increase the translation of basic knowledge to application stage.
In particular, through the interaction with novel technologies and biology, the R&D consortium aims at developing focused libraries with a high content of anti-cancer leads; building models for prediction of disease-related cytotoxicity and of kinase/ HDAC/MMP and other enzyme (i.e. HSP90) inhibition or receptor antagonism using HTS results; developing a computer system based on grid technology, which helps to accelerate and automate the in silico design of libraries for drug discovery processes, and which is also suitable for future design of libraries for drug-discovery processes that have different biological targets (the result is a new marketable technology). | <urn:uuid:14924a27-58aa-45b9-b953-97ac21f79573> | CC-MAIN-2019-47 | https://ehealth.eletsonline.com/2007/06/a-framework-for-the-philippines-phr-dr-alvin-b-marcelo/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665985.40/warc/CC-MAIN-20191113035916-20191113063916-00497.warc.gz | en | 0.944556 | 4,489 | 2.765625 | 3 |
The job of political cartoonists is to push the envelope. But what happens when the size and shape of the envelope changes? That, in effect, is what has been happening ever since Hosni Mubarak was removed from power by the combination of a mass uprising and a military coup. Since then, the contours of permissible speech have been shifting constantly. This has led as much to confusion as it has to creativity.
Cartooning has long been one of the pillars of the public discourse in Egypt. Especially during authoritarian times, cartooning has often been where political critique is loudest, or most daring. But the inherently ironic logic of cartooning means that the volume and barb of this critique is never straightforward. In fact, its very meaning derives from the fact that it tacks closely—and ambiguously—to red lines. The political value of cartooning, it might be said, depends on the existence of these red lines. This leads to a paradox: rather than impeding creative cartooning, censorship and the suppression of free speech sometimes enable it.
In this sense, political cartoons serve as useful survey instruments for mapping the permissible speech of a given moment in Egypt. The borders of speech are in part drawn by laws, in part by taboos that are more implicit. A cartoonist who goes too far can trigger legal action by state or private actors. This, of course, exerts some pressure on cartoonists, but so too do more informal or internalized forms of censorship. Furthermore, the media platform matters quite a bit: print media has greater leeway than film and television. Despite the laws on the books, it is most accurate to that that the limits are fluid, defined only insofar as cartoonists’ challenge and trespass restrictions around them. This is a routine rather than exceptional part of the work that Egyptian cartoonists do.
“Insulting the President” and “Blasphemy”
To trace how artists work, we could recall an example from the Mubarak era. A long-standing prohibition on drawing the president’s face (Article 178 of the Penal Code) was consistently followed by newsmen. But Amro Selim began to cross this line in 2005 by drawing the president from behind: “Bit-by-bit we turned him around, until making a cartoon of him became the norm.” Al-Dostour’s chief editor, Ibrahim Eissa, offered Selim a platform to draw freely, and helped him break the rules. “Back then… everybody asked why we were not detained,” said Selim. “It was because we were daring, hitting the red line and going up against it.” Thus Selim and Eissa managed to give the president a funny face, in spite of occasional legal trouble. Together they mentored a new generation of cartoonists and provoked the caricature of future presidents.
The most significant change in Egyptian caricature since 2011 is the implicit permissibility of satirizing the president. Nevertheless, during President Mohamed Morsi’s year in office, the same penal code article maintained that “whoever insults the president… shall be imprisoned.” Yet, according to Judge Yussef Auf, it does not clearly stipulate what insulting the president means or what the precise penalty should be. Additionally, nearly seventy other articles limit freedom of expression. These range from prohibitions against “insults” to the parliament, army, courts, and other public authorities, to injunctions against the reporting of false news. Nonetheless, mocking these institutions became a core part of cartooning even in government-run newspapers, in spite of—or because of—these regulations.
[“Youth: OK, do you know the answer to this riddle? An elected president, responsible for killing peaceful protesters… He and his gang monopolized the government, and his party corrupted political life. The first letter of his name is "M"! President Morsi: M..m…Mubarak?!” Newspaper headline: "Martyrs in Suez and Alexandria on the Revolution`s Anniversary." Source: Amro Selim, Al-Shorouk, 27 January 2013.]
As the enforcement of the laws became increasingly arbitrary, cartoonists navigated tricky waters. In Morsi’s first six months, at least twenty-four lawsuits were filed against media actors for “insulting the president,” breaking a record set in 1909. Most of these were brought forward by individuals rather than state officials.
There is no exact science to determine the permissibility of a given cartoon. For instance, the popular right wing and anti-Morsi daily al-Watan came under legal and extra-legal attacks. The paper was subject to myriad suits charging insult to the president, including one related to a set of Morsi caricatures. There was also a concerted campaign of intimidation as thugs burned their offices and assaulted editors. At the same time, independent dailies like al-Shorouk and al-Tahrir published anti-Morsi cartoons on their front pages.
Morsi supporters also targeted cartoonists by filing blasphemy cases. Article 98 (Section F) of the Penal Code defines the violation as “any use of religion to promote or advocate extremist ideologies… with a view toward stirring up sedition, disparaging or showing contempt” toward one of the Abrahamic faiths. Yet, like Article 178, it is vaguely defined.
By the time Morsi took office, the number and intensity of blasphemy charges had escalated. Morsi’s supporters also blurred the red lines surrounding insults to the presidency and Islam. For many in the Muslim Brotherhood leadership and rank-and-file, it was simply not acceptable for Morsi to subjected to ridicule, first because he was the leading member of the organization, second, because he was head of state. “The president is a symbol of religion and state,” said Muhamed Muhamed Shaker, a member of the Lawyers Syndicate’s Human Rights Committee and one of the plaintiffs who sued comedian Bassem Youssef for insulting the president. Cartoonists responded by skewering Morsi and his colleagues for politicizing religion. Al-Masry al-Youm cartoonist Doaa Eladl drew a biblical prophet in December 2012, a jab at Morsi’s religious overtones in hammering through a constitutional referendum. A Salafi-affiliated NGO brought a lawsuit against her. The case has since been dropped, according to Eladl and the plaintiff (the latter conceded that its basis wasn’t particularly robust).
Amro Selim, who mentored Eladl at al-Dostour nearly a decade earlier, received death threats for his attacks on Morsi. Selim summed up the shift as follows:
Before the revolution, there were no religious prohibitions. You could rarely be sued for insulting religion. But now, this is the most frequent accusation. This is the difference. Under Mubarak, I was sued for insulting the president in the pages of Al-Masry Al-Youm. I was subjected to interrogation, but the case did not go forward. Now, I am accused of insulting religion. Before the revolution I was attacked for being a dissident. After the revolution, for being an infidel.
Religion was also a taboo during Mubarak’s time, but during Morsi’s year the battle grew more contentious. The Lawyers Syndicate essentially issued gag orders, supported by the Prosecutor General and other pro-Morsi forces. Morsi’s supporters used lawsuits to intimidate. Importantly, there was no chilling effect on the output on cartoonists. On the contrary, these attacks triggered a small boom.
["Closed for Prayer." Source: Andeel, Al-Masry Al-Youm, 26 January 2013]
Throughout Morsi’s year, artists performed new acrobatic feats along the red lines. Operating within (and around) explicit and implicit regulations, each artist took his or her own approach toward challenging the rules of the game. This involved learning the new red lines and how to work with them, publishing controversial cartoons even if they crossed these lines, and finally, developing other media platforms for cartooning.
For instance, controversial images could often be published within the system. In some ways, subtlety was no longer even necessary for cartooning in independent newspapers. Consider these cartoons from al-Masry al-Youm: Morsi as a general’s lap dog (Doaa Eladl, 26 June 2012); Morsi splattered with Blood (Anwar, 27 January 2013) Morsi as a sheep (Makhlouf, February to June, 2013). In al-Siyasi, Ahmad Nady drew Morsi on the toilet. Each of these cartoons crossed over into legal contentious realms. But the fact they circulated suggested a revolution was taking place in the media.
Other cartoons were censored. One key example took place in the context of the bloody clashes in which more than thirty civilians were killed in the Suez zone late January 2013. In the wake of this, Morsi declared a state of emergency. Four days later, a group of leading politicians, convened by al-Azhar, signed a ten-point memorandum renouncing the violence. The gesture was failure, since it did not condemn police or military abuses, nor demand that they be held accountable. In the days that followed, riot police stripped a civilian, Hamada Saber, and dragged him through a street near the presidential palace. Egyptian state TV broadcast it live: a horrific crime occurring in prime time.
["Document Renouncing Violence," reads the headline of the censored edition. Source: Ahmad Nady, Al-Siyasiyy, 6 February 2013]
In response, the cartoonist Ahmad Nady drew a cover for al-Siyasi magazine with these same Egyptian leaders naked. Among the lot were former presidential candidates Amr Moussa and Hamdeen Sabahi, as well as Mohamed Elbaradei and Ayman Nour, the Muslim Brotherhood’s Saad al-Katatni. The youth of the revolution were implicated in the illustration, too, beside the Grand Sheikh of al-Azhar and the Coptic Pope. Nady drew them toasting martini glasses filled with blood. Behind them stands President Morsi, who is clothed, and a Central Security goon. Morsi looks down at the political elites and says, “Take your time. We removed your political cover from the people, so do as you like.” The last clause literally translates as, “their blood is halal,” the Arabic word for “pure.” Morsi had given the leaders permission to imbibe the people’s carnage—on the cover of a weekly magazine named Political. “So I drew this, and [the editor] choked,” Nady explained. But his arrangement with the publication included total editorial freedom. Said Nady, “I don`t have red lines.” After the print run, the publisher pulled the issue and apologized. “They even took the issues that we had in our bags and counted the issues that were printed to make sure nothing was outside,” said managing editor Farah Yousry.
Nady’s violation was both ambiguous and self-evident. He had trespassed red lines beyond insulting the president and religious figures. However, it was the editorial leadership of the magazine, and not the state, who made the decision to censor his work. Abdel Monem Said Aly, chairman of the Al-Masry Al-Youm Corporation, only vaguely recalled the “offensive” cover. He said that if the issue were distributed people would burn it, and Al-Masry Al-Youm Corporation’s reputation would be scarred. According to the artist and the magazine’s managing editor, it was the Coptic Church who shouted loudest for censoring for the cover. There is an “aura around the church, and they are a minority,” said Yousry, the editor, and readers might perceive the cover as hate speech or profanity. TV producers started calling Nady, assuming that the Muslim Brotherhood had applied pressure to pull the cover. Nady told them that it had been the Church that applied the pressure. No one called him back.
There was nothing subtle about Nady’s work. His project is an overt challenge to authority and an interpolation of red lines. “I see the Church like the Salafis. They are the same for me… I don’t accept any of them,” said Nady. “So I am the first one in Egypt who draws against the Church and the Pope [in cartoons]. You can see it on Facebook.”
Nady posted it on Facebook, where his page has over fourteen thousand “likes,” far beyond the circulation of al-Siyasi. Similarly, Hany Shams, the caricature head of al-Akhbar, said that editors “prevented” or asked him to alter about dozen cartoons in 2011-2012. Some insulted Morsi and his policies, others the military. He made them all available to the public on Facebook. While bloggers have been put on trial for insulting religion online, cartoonists who post on social media have not faced legal action.
Beyond new digital platforms for publishing work, there are also other venues. For instance, the graphic novel Metro, which was banned in early 2008, was illegally reprinted in Cairo in 2012. Likewise, following the blasphemy case against Eladl, her work was showcased in an outdoor art festival, al-Fan al-Midan, in Cairo’s Abdeen Square. The work was open to the public. Beside the exhibition, her colleagues Anwar and Makhlouf improvised a political mural on a nine-foot white board: police and thugs battling young activists, drawn with sharpies. Eighty-some bystanders laughed and applauded. On the loudspeaker, the master of ceremonies delivered a passionate speech about young martyrs killed by the authorities under Morsi—a block from a presidential palace. Nothing was censored.
[Makhlouf participating in a draw off with Anwar in El-Fan El-Midan, Abdeen Square. January 2013. Photo by Jonathan Guyer]
“Why Do You Criticize the Military Council?”
Since the military overthrow of Morsi in July 2013, familiar restrictions have surfaced. In comparison to Morsi, who preferred “clumsy litigation” and “tacitly approving” anti-media campaigns as his primary means of censoring, journalist Sarah Carr has noted the reemergence of more repressive tactics to silence reporters. The current junta is coercing journalists to follow the official script. The crime of lampooning the president endures, even as the government has removed jail terms for “insulting the president.” Meanwhile, Egyptians await a new constitution, which may expand or erase these restrictions on speech. The limits are as gray as ever.
Cartoonists have experience working in a repressive environment. Following the 2011 uprising, throughout the eighteen months the military ruled Egypt, authorities quietly threatened cartoonists. For instance, an officer called Abdallah when he drew a cartoon about the military in al-Masry al-Youm.“Why do you criticize the Military Council? We are good, and we love the country,” Abdallah remembered the official saying. “[The officer] said it gently though.” Furthermore, in November 2011, editor Magdy el-Gallad told the same paper’s caricature department to cut a popular protest chant, ‘Down with Military Rule,’ from copy. “It’s direct orders, and don`t talk about this,” was how Andeel recalls the instruction. So the cartoonist and his colleagues launched a Tumblr site called, “Thrr,” or revolt. Comics with that chant, along with portraits of the Armed Forces’ chairman unpublished elsewhere, found a home online. For the print edition of Al-Masry Al-Youm, the cartoonists each drew cartoons featuring a boot, an act of defiance that fell within acceptable speech.
[The president and the chairman of the Armed Forces watch a video of kidnapped soldiers in Sinai. Dialogue reads: Morsi: “Look—they mentioned you.” Al-Sisi: “Yeah, they also mentioned you.” Source: Andeel, Al-Masry Al-Youm, 20 May 2013]
By 2013, it seemed that all of the red lines have been broken. Presidents have been insulted, leaders drawn naked, and religion laughed at. But just as Morsi’s cohort attempted to censor satire of religion, so too can we expect the new government to attempt to create new boundaries. Working around red lines, “trains the cartoonist to make up his mind in different ways so the cartoon becomes smarter,” said Abdallah. “And it is also more interesting for the readers.”
For cartoonists, the red lines have less to do with laws and more to do with the considerations that presuppose drawing, the artist’s deliberations and reservations while at the drawing board. What if Selim had quivered before the penal code and never caricatured Mubarak? Ultimately, the cartoonists themselves determine what is illustratable, and by extension the margins of acceptable speech. Self-censorship “is the most dangerous thing,” said Selim. “I always tell cartoonists not to censor themselves. If they attach brakes to themselves, they will not draw at all.”
For instance, in 1989 Bahgat Osman published, Dictatorship for Beginners: Bahgatos, President of Greater Bahgatia, though it was not widely available. The book unabashedly mocks the military. Likewise, caricaturist Moustafa Hussein and satirist Ahmad Rajab have lampooned the prime minister in al-Akhbar since 1974, even as other publications shied away from ridicule of officials.
Interview with Amro Selim, al-Shorouk office, Cairo, 3 April 2013. Maha El-Kady translated the recording from Arabic to English.
Eissa was the only journalist to be indicted under Mubarak, on separate case. In June 2005, he and two of his colleagues were convicted for “insulting the president” and “spreading false or tendentious rumors,” related to a report they published about a court case against the president. They were released on bail and ultimately pardoned. See Samia Mehrez Egypt’s Culture Wars (Cairo: American University in Cairo Press 2010): pg. 286.
Interview with Yusuf Auf by telephone, 30 May 2013.
Interview with Walid Taher, Cairo, 28 May 2013. Ahmed Shawkat translated the recording from Arabic to English.
For details on specific cases, see: Policing Belief: The Impact of Blasphemy Laws on Human Rights. Freedom House. (2010).
See, for instance: Ben Hubbard and Mayy El Sheikh, “Islamists Press Blasphemy Cases in a New Egypt. New York Times,” 18 June 2013; Kristen Chick, “In Brotherhood`s Egypt, blasphemy charges against Christians surge ahead,” Christian Science Monitor, 22 May 2013; “Factbox: What counts as blasphemy in Egyptian law?” Egypt Independent, 11 June 11 2013. None of the aforementioned reports mention litigation against cartoonists.
Interview with Muhamed Muhamed Shaker, Lawyers Syndicate, Cairo, (May 27, 2013).
Interview with Doaa Eladl, Al-Masry Al-Youm office, Cairo, 18 May 2013; Interview with Khaled El-Masry, Lawyers Syndicate, Cairo, 27 May 2013.
Selim, 2013.
Interview with Ahmad Nady, Cairo, 8 May 2013.
Interview with Farah Yousry by telephone, 30 May 30 2013.
Interview with Abdel Monem Said Aly, Giza, 25 June 2013.
Interview with Abdallah, al-Masry al-Youm office, 17 May 2013.
Interview with Andeel, Cairo, 12 June 2013.
Abdallah, 2013.
Selim, 2013. | <urn:uuid:bd229723-ff1a-4752-a1e7-70b6cdddad00> | CC-MAIN-2019-47 | https://www.jadaliyya.com/Details/29558/Under-Morsi,-Red-Lines-Gone-Gray | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670601.75/warc/CC-MAIN-20191120185646-20191120213646-00099.warc.gz | en | 0.961921 | 4,216 | 2.53125 | 3 |
Inorganic (selenite and selenate) and selenium yeast (Se-yeast) are the only approved sources of supplemental Se in the United States. The predominant form of Se in Se-yeast is seleno-methionine (Se-met). The mechanisms of intestinal absorption are completely different for inorganic Se and Se-met; therefore, factors that reduce absorption of inorganic Se are unlikely to influence absorption of Se-met. The metabolism of inorganic Se and Se-met within a cell also differs. Inorganic Se is used almost exclusively in the synthesis of seleno-specific enzymes, whereas Se-met can be used in the synthesis of those enzymes and can also be incorporated into any protein that contains methionine. Clinical data comparing health effects of inorganic Se and Se-yeast are lacking, but cattle fed Se-yeast have higher concentrations of Se in whole blood (average = 20%) and milk (90%) and higher activity of glutathione peroxidase (16%) than cattle fed inorganic Se. Feeding Se-yeast during late gestation also greatly increases the Se concentration in tissues of the newborn calf. Based on available data, the bioactivity of Se from Se-yeast is probably about 20% higher than that of inorganic Se, but the difference could be greater when absorption of inorganic Se is reduced because of antagonists.
Almost 50 years ago, selenium (Se) was shown to be an essential nutrient for mammals (Schwarz and Foltz, 1957) and that a Se deficiency led to white muscle disease in ruminants (Muth et al., 1958). Over time, research identified several beneficial effects when Se intake by domestic animals was increased; however, it was not until 1979 that the U.S. government permitted supplemental Se to be added to diets of domestic animals. Both the concentration (0.1 ppm at that time) and the source (sodium selenite or selenate) of supplemental Se were regulated. The regulation was amended in 1987 and allowed 0.3 ppm of supplemental Se to be added to ruminant diets, but the allowed sources (sodium selenite and selenate) did not change. In September 2003 (FDA, 2003), the regulation was amended again to allow the use of selenium yeast (Se-yeast) in diets for dairy and beef animals based on data from cattle fed Selplex (Alltech Inc., Nicholasville, Ky.). The maximum allowed supplementation rate was maintained at 0.3 ppm of Se. The approval of Se-yeast for dairy cattle greatly expanded the Se supplementation options available to nutritionists, but it also made Se supplementation a more complicated matter.
Selenium Yeast ― What Is It?
The definition of Se-yeast according to the FDA (2003) is “a dried, nonviable yeast (Saccharomyces cerevisiae) cultivated in fed-batch fermentation which provides incremental amounts of cane molasses and selenium salts . . . and allows for optimal incorporation of inorganic selenium into cellular organic matter. Residual inorganic selenium . . . must not exceed 2% of the total selenium content in the final selenium yeast product.” During fermentation, the yeast consume Se and incorporate it into various organic compounds. The most prevalent Se end product is seleno-methionine (Se-met). Although differences are likely among commercial sources of Se-yeast, on average approximately 90% of the Se is Se-met (Schrauzer, 2003). Seleno-cysteine (Se-cys) is produced in much lesser amounts. Those two seleno-amino acids are identical to the regular amino acids, methionine (met) and cysteine, except that Se replaces the sulfur atom (Figure 1). The predominant chemical form of Se in Se-yeast makes organic Se different from all other organic trace minerals. All other organic trace minerals are complexes or chelates. The metal is “associated” with an organic compound, but it is not part of the compound’s molecular structure. The Se in Se-met and Se-cys is part of the molecule; the Se cannot be removed without breaking covalent bonds.
Numerous other Se-compounds are produced by yeast, but identifying and quantifying all the different Se compounds found in Se-yeast is extremely difficult and requires very sophisticated techniques and instruments. Although the concentrations of these other Se compounds will be quite low, they may be important biologically. Some of these “minor” selenium compounds have been shown to have potent anti-carcinogenic properties in laboratory animals, and clinical data are accumulating showing similar effects in humans, especially with respect to prostate cancer (Combs et al., 2001). Essentially nothing is known regarding biological activity of these minor Se compounds in cattle. Therefore, the rest of this paper will consider only the Se provided by Se-met and Se-cys.
The most prevalent forms of Se consumed by dairy cows in the United States are selenate and selenite (from inorganic Se supplements), and Se-met and Se-cys (from Se-yeast and basal feedstuffs). Ruminal metabolism and intestinal absorption of these Se compounds differ. Most of the selenate (SeO4) consumed by a cow is reduced to selenite (SeO3) in the rumen, but some of the selenate leaves the rumen and is absorbed as selenate in the small intestine. Based on studies with rats, intestinal absorption of selenate is probably via an active (energy-requiring) transport system. Absorption of selenate by ligated intestinal loops of rats was about 80% (Vendeland et al., 1992). In the rumen, selenite (either that consumed in the diet or produced from selenate) can be converted to low molecular weight insoluble forms of selenium. These compounds have not been chemically identified but most likely are not well absorbed or utilized by the host. Some of the selenite is used to synthesize seleno-amino acids (predominantly Se-cys) that are incorporated into microbial protein. The remaining selenite leaves the rumen and reaches the small intestine where it is absorbed probably via a passive mechanism. Intestinal absorption of selenite was about 35% using ligated rat intestines (Vendeland et al., 1992). Because it is so difficult to quantify the various Se compounds, reliable data on distribution of Se in ruminal contents are limited. Reasonable estimates when selenite is fed are that 30 to 40% is converted to insoluble forms, 10 to 15% is found in microbial protein, and 40 to 60% remains as selenite (Serra et al., 1994). I could not find any information regarding ruminal metabolism of Se from Se-yeast. An in vitro experiment found that about 60% of Se-met (not Se-yeast) was incorporated directly into bacterial protein as Se-met (Paulson et al., 1968). Although data are limited, a much higher percentage of Se leaving the rumen is in the form of seleno-amino acids (predominantly Se-met) when cows are fed Se-yeast than when fed selenite or selenate. Seleno-methionine is absorbed from the intestine by the same mechanism as methionine and is quite efficient (>80%). However, because Se-met and met use the same intestinal absorption system, increasing the intestinal flow of met will decrease absorption of Se-met because of competition.
True absorption of Se from diets containing supplemental inorganic Se calculated from Se balance studies averages about 50% in dairy cows, goats, and sheep (Harrison and Conrad, 1984, Aspila, 1988, Koenig et al., 1997, Ivancic, 1999). True absorption will be lower if the diet contains appreciable quantities of antagonists to Se absorption (discussed below). Data on the true digestibility of Se from Se-met or Se-yeast are very limited and variable. True digestibility of Se from Se-met (measured in goats) was 65% (Aspila, 1988), and calculated true digestibility of Se from Se-yeast (measured in sheep) averaged about 44% (Koenig et al., 1997). Because of the method used to produce the Se-yeast in the sheep study, the proportion of Se that was inorganic was probably greater than that found in currently available Se-yeast products. Even though data are very limited, based on known absorption mechanisms, Se from Se-yeast is probably absorbed with greater efficiency than Se from selenite. Assuming Se-met from Se-yeast has an escape value of 60% (based on in vitro studies with Se-met) and that 90% of the Se in Se-yeast is Se-met, approximately 55% of the Se from Se-yeast that leaves the rumen is in the form of Se-met. Assuming the digestibility of the Se from Se-met is 80% (average digestibility of ruminal microbial protein) and the digestibility of the 45% of total Se that is not Se-met is the same as for selenite (50%), the true digestibility of Se from Se-yeast would be about 66%. This is about 30% higher than the true digestibility of Se from selenite.
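The arithmetic behind that estimate can be laid out explicitly. The following sketch simply reproduces the assumptions stated above (90% of Se-yeast Se as Se-met, 60% of Se-met reaching the intestine as Se-met, 80% digestibility of Se-met, and 50% digestibility of the remainder); the values are illustrative assumptions, not measured data.

```python
# Illustrative back-calculation of true Se digestibility from Se-yeast,
# using only the assumptions stated in the text (not measured values).

se_met_fraction = 0.90   # proportion of Se in Se-yeast present as Se-met
ruminal_escape = 0.60    # proportion of Se-met reaching the intestine as Se-met
dig_se_met = 0.80        # true digestibility of Se-met (≈ microbial protein)
dig_other = 0.50         # digestibility of remaining Se (assumed equal to selenite)
dig_selenite = 0.50      # average true absorption of Se from selenite

se_met_at_intestine = se_met_fraction * ruminal_escape   # ≈ 0.54 (~55% as in the text)
dig_se_yeast = (se_met_at_intestine * dig_se_met
                + (1 - se_met_at_intestine) * dig_other)  # ≈ 0.66

relative_advantage = dig_se_yeast / dig_selenite - 1      # ≈ 0.3, i.e., ~30% higher
print(f"Se-yeast true digestibility ≈ {dig_se_yeast:.0%}, "
      f"about {relative_advantage:.0%} higher than selenite")
```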
Se is an essential nutrient for animals because certain enzymes (selenoenzymes) must contain a Se-cys residue in their active sites. The most familiar selenoenzyme with respect to dairy cattle nutrition is glutathione peroxidase (GSH-px), which is an important component of cellular antioxidant systems. Cells have developed a simple but elegant method of ensuring that Se-cys is inserted into the proper location in these enzymes (Figure 2). Selenite that is absorbed goes to cells where it is reduced to selenide, and the selenide is then used to synthesize Se-cys from a serine molecule that is linked to a specific tRNA (UGA codon). The synthesized Se-cys-tRNA(UGA) complex is then inserted at the correct position during protein synthesis. If Se-cys from the diet is absorbed, it cannot be inserted directly into the active site of the enzyme during protein synthesis because it is not attached to the correct tRNA. Dietary Se-cys must be catabolized; the released Se can then be reduced to selenide and used to synthesize Se-cys-tRNA(UGA). Absorbed dietary Se-met can be used in place of met in protein synthesis. Cells do not appear to be able to differentiate between regular met and Se-met. Therefore, Se-met can be found in all proteins in the body in direct proportion to the amount of met found in the protein and the relative pool sizes of regular met and Se-met. Se-met can also be catabolized, and its Se can be converted to selenide and then incorporated into Se-cys-tRNA(UGA). The bottom-line difference between inorganic Se (selenite) and organic Se (Se-met) is that inorganic Se is used almost exclusively to produce selenoenzymes, whereas organic Se can be used to produce selenoenzymes and also results in general labeling of all methionine-containing proteins. This difference has implications when interpreting Se concentration data.
Se-Yeast vs. Selenite: The Data
When comparing sources of nutrients, the most important question is: Which source will result in the greatest net return? To answer this question, you need to know the cost of the supplement (per unit of nutrient) and the value of the response. For Se, the response is usually health-related. Numerous studies have shown that supplemental Se (usually from inorganic sources) improves immune function and mammary gland health and reduces the prevalence of retained fetal membranes (Weiss, 2003; Weiss and Spears, 2005). Therefore, the best method to compare Se sources is with clinical trials that measure prevalence and severity of certain diseases when cows are fed different sources of Se. I could find only one study (Malbe et al., 1995) in which selenite and Se-yeast were fed and clinical measures were taken; because of the experimental design, the effects of Se source could not be compared statistically. Cows were fed diets with 0.2 ppm Se from selenite or Se-yeast, and milk somatic cell count (SCC) and prevalence of infected quarters were measured. Following eight weeks of supplementation, infected quarters decreased 60% for cows fed selenite and 43% for cows fed Se-yeast compared with day 0 values. The SCC decreased 37% and 30%, and NAGase activity in milk (a measure of inflammation) decreased 21% and 45%, respectively, for cows fed selenite and Se-yeast. All measures of mammary gland health were improved in Se-supplemented cows, whereas no changes occurred in cows not fed supplemental Se. Based on these data, source of Se did not appear to have a large effect.
The effects of Se source (inorganic vs. Se-yeast) on concentrations of Se in blood and milk and activity of GSH-px have been compared in numerous experiments (Table 1, Figures 3, 4, and 5). The median increase in whole blood Se when Se-yeast was fed was 20% (Figure 3). Whole blood GSH-px activity was numerically higher in all studies when Se-yeast was fed, but only two studies reported statistically higher values (Figure 4). The median increase in activity was 16%. The relative response in GSH-px activity when Se-yeast is compared with selenite might be a function of Se intake. The two studies with the greatest difference between Se-yeast and selenite in GSH-px activity fed the lowest concentration of supplemental Se (approximately 0.1 ppm Se). Knowles (1999) reported no difference in GSH-px activity between cows fed selenite and Se-yeast when cows consumed 4 mg of supplemental Se/d (approximately 0.2 ppm), but when cows were fed 2 mg of Se/d (approximately 0.1 ppm), GSH-px activity was about 50% higher when Se-yeast provided the supplemental Se.
The median increase in milk Se was 90% when Se-yeast was fed (Figure 5). The vast majority of the Se in milk when Se-yeast is fed is in the form of Se-met. Milk Se concentrations increase linearly as intake of Se from Se-yeast or from feeds that are high in Se increases, but milk Se does not change greatly as intake of selenite increases (Figure 6). One factor considered by the FDA during the Se-yeast approval process was the concentration of Se in milk and meat. Based on human health concerns, the FDA set the maximum allowable concentration of Se in milk at 0.14 mg/L. Based on the equation in Figure 6, an intake of approximately 25 mg of Se/day from Se-yeast and basal ingredients (roughly 3.5 times the intake allowed under the legal supplementation limit for lactating cows) will produce milk that exceeds the legal limit for Se concentration.
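To put the 25 mg/day figure in perspective, the comparison with the 0.3 ppm supplementation limit can be worked out as follows; the dry matter intake of roughly 24 kg/day is my assumption for a typical lactating cow and is not stated in the text.

```python
# Rough comparison of the milk-Se threshold intake with the legal supplementation limit.
# The dry matter intake (DMI) value is an illustrative assumption, not from the text.

dmi_kg = 24.0               # assumed dry matter intake, kg/day
legal_limit_ppm = 0.3       # maximum allowed supplemental Se, mg per kg of diet DM
threshold_intake_mg = 25.0  # Se intake reported to push milk above 0.14 mg/L

allowed_intake_mg = legal_limit_ppm * dmi_kg     # ≈ 7.2 mg supplemental Se/day
ratio = threshold_intake_mg / allowed_intake_mg  # ≈ 3.5

print(f"Allowed supplemental Se ≈ {allowed_intake_mg:.1f} mg/day; "
      f"the threshold intake is ≈ {ratio:.1f} times that amount")
```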
Selenium is transferred to the fetus in utero. The concentration of Se in plasma of newborn Holstein calves was 42% higher when cows were fed Se-yeast during the last 60 days of gestation compared with cows fed selenite (Weiss, unpublished). In studies with beef cows, whole blood Se of newborn (or very young) calves was 35% (Pehrson et al., 1999) and 42% (Gunter et al., 2003) higher, and GSH-px activity in the calves was 32% and 75% higher, when dams were fed Se-yeast. Awadeh et al. (1998) reported only an 18% increase in whole blood Se and no effect on GSH-px in newborn beef calves when dams were fed Se-yeast.
Based on blood concentrations and GSH-px, Se-yeast is about 1.2 times better than selenite, and based on milk concentrations, it is 1.9 times better. The relative response in milk Se concentration is much higher than the response in blood because milk protein contains about twice as much methionine as proteins in whole blood; therefore, Se-met is about twice as likely to be incorporated into milk protein as into blood protein. Milk protein is synthesized constantly and removed from the cow two or three times a day; consequently, Se-met concentrations in milk reach steady state within a few days after Se-yeast supplementation has begun. Once a red blood cell is made, it does not synthesize protein, and red cells live 100 to 130 days. Therefore, it would take three or four months of supplementation for whole blood concentrations to reach steady state. Many of the experiments that measured whole blood Se did not last that long, so the measured difference was probably less than the maximal difference. Lastly, a substantial portion of the Se in whole blood is in selenoenzymes which, based on GSH-px, are less responsive to source of supplemental Se than other proteins. This would dilute the response in whole blood Se concentrations when Se-yeast is fed. Good clinical data are needed to determine the true difference in bioactivity of Se from selenite and Se-yeast. In lieu of those data, the best currently available estimate of the relative difference between selenite and Se-yeast is GSH-px activity, because it reflects biological activity of Se rather than availability of met. Based on those data, Se from Se-yeast, on average, is about 1.2 times more bioactive than Se from selenite.
Factors to Consider When Choosing a Se Source
Antagonists to Se Absorption
Selenite and Se-met are absorbed from the intestine by completely different mechanisms. Factors that antagonize absorption of selenite are not likely to have the same effect on absorption of Se-met. Diets with 0.2% added sulfate-sulfur reduced true absorption of Se from selenate by 20% (Ivancic and Weiss, 2001). When sulfate is present, Se from Se-yeast would be about 50% more available than Se from inorganic sources (compared with about 30% when sulfate is not excessive). Sulfate is unlikely to have an effect on Se-met absorption. Although this is not a likely problem, diets that provide high concentrations of digestible met will reduce availability of Se from Se-yeast because of competition for absorption sites in the intestine.
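The widening of the gap when an antagonist is present can be illustrated with back-of-the-envelope arithmetic (this is my illustration of the logic, not the author's calculation): if the inorganic source loses about 20% of its absorption while Se-yeast is unaffected, the relative advantage of Se-yeast is inflated by roughly the same proportion.

```python
# Back-of-the-envelope illustration of why an absorption antagonist widens the
# gap between Se-yeast and inorganic Se. Baseline advantages come from the text.
inorganic_absorption_retained = 0.80   # sulfate reduced selenate absorption by ~20%
for baseline in (1.2, 1.3):            # Se-yeast is ~20-30% more available normally
    with_sulfate = baseline / inorganic_absorption_retained
    print(f"baseline {baseline:.1f}x -> {with_sulfate:.2f}x with sulfate "
          f"(~{(with_sulfate - 1) * 100:.0f}% advantage)")
# roughly 1.5-1.6x, consistent with the ~50% advantage quoted above
```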
Body Retention of Se
Cows fed Se-yeast have higher concentrations of Se in almost all tissues than do cows fed selenite. Much of this Se is in proteins as Se-met. As proteins in the body are turned over, Se-met is released and, if broken down, can provide Se for selenoenzyme synthesis. Cows fed selenite have a much lower body reserve of Se than cows fed Se-yeast. The larger reserve could be beneficial in periods of high Se demand and in unexpected periods of low Se supply. Increased body reserves may be especially beneficial for newborn calves. Calves born to cows fed Se-yeast have higher concentrations of Se in tissues and often much higher GSH-px activity than calves from cows fed inorganic Se. In addition, colostrum from cows fed Se-yeast contains more Se than colostrum from cows fed selenite, thereby increasing the difference in Se status of the calves. Feeding cows some Se-yeast during the last 60 days of gestation may have beneficial effects on calf health by improving the Se status of the calf.
Costs of Supplement
Diets with 0.3 ppm of supplemental Se provided by Se-yeast will cost about 5 cents/day more per lactating cow than diets with selenite, and 2 or 3 cents more per day for dry cows (approximately $17 annually for each cow, assuming a 305-day lactation). If the supplementation rate were reduced by 20% to account for the higher bioactivity of Se-yeast, the annual cost would be about $14. The cost of an ingredient should not be the primary concern; return on investment is what matters. Unfortunately, data are not available to determine whether return on investment (via improved health) differs between inorganic Se and Se-yeast.
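The annual figures can be checked with simple arithmetic. The per-day premiums come from the text; the 60-day dry period in the sketch below is my assumption for illustration.

```python
# Rough check of the annual cost difference quoted above.
LACTATION_DAYS = 305
DRY_DAYS = 60                      # assumed length of the dry period (not stated in the text)
PREMIUM_LACTATING = 0.05           # $/day extra for Se-yeast, lactating cows
PREMIUM_DRY = 0.025                # $/day extra for Se-yeast, dry cows (2-3 cents)

annual_extra = LACTATION_DAYS * PREMIUM_LACTATING + DRY_DAYS * PREMIUM_DRY
print(f"Extra cost of Se-yeast: about ${annual_extra:.2f} per cow per year")   # about $17
print(f"With a 20% lower feeding rate: about ${annual_extra * 0.8:.2f}")        # about $14
```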
Recommendations and Conclusions
The benefits and disadvantages of each type of Se supplement are summarized in Table 2. Se-yeast has numerous advantages over selenite, but the question remains: is it more profitable to use Se-yeast? In situations where antagonists are not a concern, inorganic Se is probably the most cost-effective option for lactating cows. If antagonists are present, some or all of the Se should be provided by Se-yeast. To ensure adequate Se status of calves, providing a portion of the supplemental Se as Se-yeast in dry cow diets is a good idea. Current regulations permit using a combination of Se sources as long as the total supplemental Se does not exceed 0.3 ppm in the total diet. Usually, using a combination of nutrient sources is better than relying on a single ingredient. Some data with other trace minerals show benefits when a combination of inorganic and organic sources is used compared with either all organic or all inorganic. The same may be true for Se. In my opinion, if antagonists are not present in feed or water, lactating cows should be supplemented with Se that is predominantly from inorganic sources. If antagonists are present, the predominant Se source should be Se-yeast. Because of potential benefits to the newborn calf, a larger proportion of Se (maybe 50%) in dry cows' diets should come from Se-yeast even when antagonists are not present.
Aspila, P. 1988. Metabolism of selenite, selenomethionine, and feed-incorporated selenium in lactating goats and dairy cows. J. Agric. Sci. Finland.
Awadeh, F. T., M. M. Abdelrahman, R. L. Kincaid, and J. W. Finley. 1998. Effect of selenium supplements on the distribution of selenium among serum proteins in cattle. J. Dairy Sci. 81:1089-1094.
Combs, G. F., Jr., C. L. Clark, and B. W. Turnbull. 2001. An analysis of cancer prevention by selenium. Biofactors 14:153-159.
Food and Drug Administration. 2003. Food additives permitted in feed and drinking water of animals: Selenium yeast. Federal Register 68(170):52339-52340 (September 3).
Fisher, D. D., S. W. Saxton, R. D. Elliott, and J. M. Beatty. 1995. Effects of selenium sources on Se status of lactating cows. Vet. Clin. Nutr. 2:68-74.
Gunter, S. A., P. A. Beck, and J. M. Phillips. 2003. Effects of supplementary selenium source on the performance and blood measurements in beef cows and their calves. J. Anim. Sci. 81:856-864.
Harrison, J. H. and H. R. Conrad. 1984. Effect of calcium on selenium absorption by the nonlactating dairy cow. J. Dairy Sci. 67:1860-1864.
Ivancic, J. and W. P. Weiss. 2001. Effect of dietary sulfur and selenium concentrations on selenium balance of lactating Holstein cows. J. Dairy Sci. 84:225-232.
Knowles, S. O., N. D. Grace, K. Wurms, and J. Lee. 1999. Significance of amount and form of dietary selenium on blood, milk, and casein selenium concentrations in grazing cows. J. Dairy Sci. 82(2):429-437.
Koenig, K. M., L. M. Rode, L. M. Cohen, and W. T. Buckley. 1997. Effects of diet and chemical form of selenium on selenium metabolism in sheep. J. Anim. Sci. 75:817-827.
Malbe, M., M. Klaassen, W. Fang, V. Myllys, M. Vikerpuur, K. Nyholm, W. Sankari, K. Suoranta, and M. Sandholm. 1995. Comparisons of selenite and selenium yeast feed supplements on Se-incorporation, mastitis, and leukocyte function in Se-deficient dairy cows. J. Vet. Med. (Ser. A). 42(2):111-121.
Muth, O. H., J. E. Oldfield, L. F. Remmert, and J. R. Schubert. 1958. Effects of selenium and vitamin E on white muscle disease. Science 128:1090-1092.
Paulson, G. D., C. A. Bauman, and A. L. Pope. 1968. Metabolism of 75Se-selenite, 75Se-selenate, and 75Se-selenomethionine and 35S-sulfate by rumen microorganisms in vitro. J. Anim. Sci. 27:497-504.
Nicholson, J. W., R. S. Bush, and J. G. Allen. 1993. Antibody responses of growing beef cattle fed silage diets with and without selenium supplementation. Can. J. Anim. Sci. 73:355-365.
Nicholson, J. W., R. E. McQueen, and R. S. Bush. 1991. Response of growing cattle to supplementation with organically bound or inorganic sources of selenium or yeast cultures. Can. J. Anim. Sci. 71:803-811.
Ortman, K., R. Andersson, and H. Holst. 1999. The influence of supplements of selenite, selenate, and selenium yeast on the selenium status of dairy heifers. Acta Vet. Scand. 40:23-34.
Ortman, K., and B. Pehrson. 1997. Selenite and selenium yeast as feed supplements for dairy cows. J. Vet. Med. (Ser. A) 44:373-380.
Ortman, K., and B. Pehrson. 1999. Effect of selenate as a feed supplement in dairy cows in comparison to selenite and selenium yeast. J. Anim. Sci. 77:3365-3370
Pehrson, B., M. Knutsson, and M. Gyllensward. 1989. Glutathione peroxidase activity in heifers fed diets supplemented with organic and inorganic selenium compounds. Swed. J. Agr. Res. 19:53-56.
Pehrson, B., K. Ortman, N. Madjid, and U. Trafikowska. 1999. The influence of dietary selenium as selenium yeast or sodium selenite on the concentration of selenium in the milk of suckler cows and on selenium status of their calves. J. Anim. Sci. 77:3371-3376.
Schrauzer, G. N. 2003. The nutritional significance, metabolism, and toxicology of selenomethionine. Adv. Food Nutr. Res. 47:73-112.
Schwarz, K. and C. M. Foltz. 1957. Selenium as an integral part of Factor 3 against dietary necrotic liver degeneration. J. Am. Chem. Soc. 78:3292-3302.
Serra, A. B., K. Nakamura, T. Matsui, T. Harumoto, and T. Fujihara. 1994. Inorganic selenium for sheep. 1. Selenium balance and selenium levels in the different ruminal fluid fractions. Asian-Australian J. Anim. Sci. 7:83-89.
Vendeland, S. C., J. A. Butler, and P. D. Whanger. 1992. Intestinal absorption of selenite, selenate, and selenomethionine in the rat. J. Nutr. Biochem. 3:359-365.
Weiss, W. P. 2003. Selenium nutrition of dairy cows: Comparing responses to organic and inorganic selenium forms. Pp. 333-343 in Nutritional Biotechnology in the Feed and Food Industries, Alltech Inc., Lexington, KY.
Weiss, W. P. and J. W. Spears. 2005. Vitamin and trace mineral effects on immune function of ruminants. In: Proc. 10th International Symp. on Ruminant Physiology, Copenhagen, Denmark (in press).
Table 1. Experiments compared in Figures 3, 4, and 5 (experiment code on each figure, animal type, and citation).

| Figure 3 | Figure 4 | Figure 5 | Animal type | Citation |
|----------|----------|----------|-------------|----------|
| A | A | A | Beef cows | Awadeh et al. (1998) |
| B | … | B | Dairy cows | Fisher et al. (1995) |
| C | C | C | Dairy cows | Knowles et al. (1999) (2 mg) |
| D | D | D | Dairy cows | Knowles et al. (1999) (4 mg) |
| E | E | E | Dairy cows | Malbe et al. (1995) |
| F | … | … | Beef heifers + steers | Nicholson et al. (1991) |
| G | … | … | Dairy heifers | Nicholson et al. (1991) |
| … | H | … | Combined | Nicholson et al. (1991) |
| I | I | … | Growing beef | Nicholson et al. (1993) |
| J | J | J | Dairy cows | Ortman and Pehrson (1997) |
| K | K | K | Dairy cows | Ortman and Pehrson (1999) |
| L | L | … | Dairy heifers | Ortman et al. (1999) |
| M | M | M | Beef cows | Pehrson et al. (1999) |
| N | N | N | Beef cows | Gunter et al. (2003) |
| … | O | … | Dairy heifers | Pehrson et al. (1989) |
| … | … | P | Dairy cows | Weiss (unpublished) |
Table 2. Benefits and disadvantages of inorganic Se (selenite) and Se-yeast.

| Source | Advantages | Disadvantages |
|--------|------------|---------------|
| Selenite (inorganic) | Cheap; provides adequate Se in many situations | Absorption can be affected by antagonists; provides limited body reserves of Se |
| Se-yeast | Probably 20 to 30% more available; builds up body reserves of Se; increases milk Se (human health benefit); increases colostrum Se (calf health benefit); increased transfer of Se to fetus; not affected greatly by absorption antagonists | More expensive |
William P. Weiss
Ohio Agricultural Research and Development Center
The Ohio State University
Wooster, OH 44691
(330) 263-3622, Fax (330) 263-3949 | <urn:uuid:3fe10e03-c419-4068-b1b7-758529c1a998> | CC-MAIN-2019-47 | https://dairy-cattle.extension.org/selenium-sources-for-dairy-cattle/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670448.67/warc/CC-MAIN-20191120033221-20191120061221-00461.warc.gz | en | 0.91056 | 6,621 | 2.734375 | 3 |
Although westerners often think of this traditional Chinese treatment modality as a “new” form of alternative medicine, acupuncture is so ancient in China that its origins are unclear. According to Huangfu Mi (c. 215-282 AD), author of The Systematic Classic of Acupuncture and Moxibustion, needling therapy was first used during China’s Bronze Age, over five thousand years ago. He attributes its invention to either Fu Xi or Huang Di (the Yellow Emperor), two legendary figures of the Five Emperors Period (c. 3000-2070 BC). Modern scholars generally believe that acupuncture is much older, originating more than ten thousand years ago during China’s Neolithic Age (c. 8000-3500 BC).
In actuality, acupuncture may not be as ancient as has generally been assumed. A reconsideration of all extant documents and recent archaeological finds indicates that acupuncture may date back a mere 2100 to 2300 years, first appearing during China’s Warring States Period (475-221 BC) and rapidly maturing during the Western Han Dynasty (206 BC-24 AD).
Questioning the generally accepted origins theory.
The currently accepted theory concerning the Neolithic origins of acupuncture is based on two premises. The first holds that bian shi, specialized sharp-edged stone tools that appeared during China’s Neolithic Age, were used for an early form of needling therapy, prior to the invention of metal smelting. It is known that bian shi stone tools were utilized for a number of early medical procedures, starting during the Neolithic Age and continuing through the Western Han Dynasty (206 BC-24 AD). A number of descriptions of bian shi stone therapy appear in one of China’s earliest medical works, The Yellow Emperor’s Inner Classic of Medicine (Huang Di Neijing, hereafter referred to as the Neijing) (c. 104-32 BC). It has been thought that these Neolithic stone medical instruments were precursors of the metal acupuncture needles that came into use during China’s Iron Age.
However, historical documents and new archaeological evidence clearly indicate that bian shi stone tools were flat and knife-like in form, used primarily to incise abscesses to discharge pus, or to draw blood (1). They were applied as surgical scalpels to cut, rather than as needles to puncture, and had nothing to do with needling therapy. According to the Code of Hammurabi, the ancient inhabitants of Mesopotamia used similarly shaped bronze knives to incise abscesses over 4000 years ago.
Prehistoric Chinese people possessed needles made of various materials, ranging from crude thorns and quills to bone, bamboo, pottery, and stone. But just as the history of the knife is not the history of surgery, the invention of needles and the invention of acupuncture are two entirely different things. Needles have historically been among the most commonly used tools of daily life for making garments all over the world. Medically, needles are used to suture incisions, much as darning needles are used to mend clothes, and hollow syringe needles (as distinct from the solid needles used in acupuncture) are used to inject fluids into the body or to draw them out; but pricking the body with a solid needle to treat illness seems strange and enigmatic. In English, "to give somebody the needle" means to displease or irritate someone. Most people prefer not to be punctured with needles, and associate needling with pain and injury. Many plants and animals have evolved thorns or quills as powerful weapons for protection or attack. Needles were even used for punishment in ancient China. By trial and error, healers throughout the world have independently found treatments for pain and other diseases, such as herbs, roots, wraps, rubs, blood-letting, and surgery, but acupuncture is unique to China. Considering this unique Chinese origin, it is reasonable to assume that the invention of acupuncture was not related to the availability of either sewing needles or bian shi stone scalpels during China's Neolithic Age.
The second premise supporting the theory of the Neolithic origins of acupuncture holds that acupuncture evolved as a natural outgrowth of daily life in prehistoric times. It is thought that through a process of fortuitous accident and repeated empirical experience, it was discovered that needling various points on the body could effectively treat various conditions. However, this assumption lacks both basic historical evidence and a logical foundation.
It is known that ancient people were aware of situations in which physical problems were relieved following unrelated injury. Such a case was reported by Zhang Zihe (c. 1156-1228 AD), one of the four eminent physicians of the Jin and Yuan Dynasties (1115-1368 AD) and a specialist in blood-letting therapy: “Bachelor Zhao Zhongwen developed an acute eye problem during his participation in the imperial examination. His eyes became red and swollen, accompanied by blurred vision and severe pain. The pain was so unbearable that he contemplated death. One day, Zhao was in a teahouse with a friend. Suddenly, a stovepipe fell and hit him on the forehead, causing a wound about 3-4 cun in length and letting copious amounts of dark purple blood. When the bleeding stopped, a miracle had occurred. Zhao’s eyes stopped hurting; he could see the road and was able to go home by himself. The next day he could make out the ridge of his roof. Within several days, he was completely recovered. This case was cured with no intentional treatment but only accidental trauma (2).”
If acupuncture did, in fact, gradually develop as the result of such fortuitous accidents, China’s four thousand years of recorded history should include numerous similar accounts concerning the discovery of the acupoints and their properties. But my extensive search of the immense Chinese medical canon and other literature has yielded only this single case. Actually, this story offers at most an example of blood-letting therapy, which differs in some essential regards from acupuncture. The point of blood-letting therapy is to remove a certain amount of blood. But when puncturing the body with solid needles, nothing is added to or subtracted from the body.
Blood-letting therapy is universal. Throughout recorded history, people around the world have had similar experiences with the beneficial results of accidental injury, and have developed healing methods based on the principle that injuring and inducing bleeding in one part of the body can relieve problems in another area. The ancient Greeks and Romans developed venesection and cupping based on the discovery that bleeding is beneficial in cases such as fever, headache, and disordered menstruation. Europeans during the Middle Ages used blood-letting as a panacea for the prevention and treatment of disease. Detailed directions were given concerning the most favorable days and hours for blood-letting, the correct veins to be tapped, the amount of blood to be taken, and the number of bleedings. Blood was usually taken by opening a vein with a lancet, but sometimes by blood-sucking leeches or with the use of cupping vessels. Blood-letting using leeches is still practiced in some areas of Europe and the Middle East. However, nowhere did these blood-letting methods develop into a detailed and comprehensive system comparable to that of acupuncture. If acupuncture did indeed arise from repeated empirical experience of accidental injury, it should have developed all over the world, rather than just in China.
Both historical evidence and logic indicate that there is no causal relation between the development of materials and techniques for making needles and the invention of acupuncture. It is also clear that repeated experience of fortuitous accidental injury was not a primary factor in the development of acupuncture. Therefore, the generally accepted theory concerning the Neolithic origins of acupuncture, based as it is upon such faulty premises, must be incorrect. It is now necessary to reconsider when acupuncture did, in fact, first appear and subsequently mature.
Reconsidering the evidence
If acupuncture did indeed originate during China’s Neolithic Age, references to it should appear throughout China’s earliest written records and archaeological relics. However, this is not the case.
Early cultures believed the world to be filled with the supernatural, and developed various methods of divination. During China’s Shang Dynasty (c. 1500-1000 BC), divination was practiced by burning animal bones and tortoise shells with moxa or other materials. Oracular pronouncements were then inscribed on the bone or shell, based on the resulting crackles. These inscriptions have survived as the earliest examples of written Chinese characters. Among the hundreds of thousands of inscribed oracle bones and shells found to date, 323 contain predictions concerning over twenty different diseases and disorders. However, none of these inscriptions mention acupuncture, or any other form of treatment for that matter.
Rites of the Zhou Dynasty (Zhou Li), written during the Warring States Period (475-221 BC), records in detail the official rituals and regulations of the Zhou Dynasty (c. 1000-256 BC), including those concerning medicine. Royal doctors at that time were divided into four categories: dieticians, who were responsible for the rulers’ food and drink; doctors of internal medicine, who treated diseases and disorders with grains and herbs; surgeons, or yang yi, who treated problems such as abscesses, open sores, wounds, and fractures using zhuyou (incantation), medication, and debridement (using stone or metal knives to scrape and remove pus and necrotic tissue); and veterinarians, who treated animals. But this document as well contains no references to acupuncture.
Neijing (c. 104-32 BC) is the first known work concerning acupuncture. The classic consists of two parts: Suwen – Simple Questions, and Lingshu – the Spiritual Pivot, also known as The Classic of Acupuncture (Zhen Jing). Both are concerned primarily with the theory and practice of acupuncture and moxibustion. Although authorship of the Neijing is attributed to Huang Di, the legendary Yellow Emperor (c. 2650 BC), most scholars consider that this master work, which contains excerpts from more than twenty pre-existing medical treatises, was actually compiled between 104 BC and 32 BC, during the latter part of the Western Han dynasty (206 BC-24 AD). The comprehensive and highly developed nature of the medical system presented in the Neijing has led scholars to believe that needling therapy has an extremely long history, probably reaching back to prehistoric times. The original versions of the ancient texts used in the compilation of the Neijing have been lost, and with them the opportunity to further illuminate the question of when acupuncture actually first appeared. However, startling new archaeological evidence, unearthed in China in the early 1970s and 1980s, reveals the true state of Chinese medicine prior to the Neijing, and challenges existing assumptions concerning the Neolithic origins of acupuncture.
In late 1973, fourteen medical documents, known as the Ancient Medical Relics of Mawangdui, were excavated from Grave No. 3 at Mawangdui, Changsha, Hunan Province. Ten of the documents were hand-copied on silk, and four were written on bamboo slips. The exact age of the Ancient Medical Relics of Mawangdui has not been determined. However, a wooden tablet found in the grave states that the deceased was the son of Prime Minister Li Chang of the state of Changsha, and that he was buried on February 24, 168 BC. The unsystematic and empirical nature of the material contained in the documents indicates that they were written well before their interment in 168 BC, probably around the middle of the Warring States Period (475-221 BC). In any event, it is certain that these medical documents pre-date the Neijing (compiled c. 104-32 BC), making them the oldest known medical documents in existence. These documents were probably lost sometime during the Eastern Han Dynasty (25-220 AD), since no mention of them has been found from this time until their rediscovery in 1973.
Another valuable medical find, The Book of the Meridians (Mai Shu), was excavated from two ancient tombs at Zhangjiashan in Jiangling County, Hubei Province in 1983. These ancient texts, written on bamboo slips and quite well preserved, were probably buried between 187 and 179 BC, around the same time as the Mawangdui relics. There are five documents in all, three of which (The Classic of Moxibustion with Eleven Yin-Yang Meridians, Methods of Pulse Examination and Bian Stone, and Indications of Death on the Yin-Yang Meridians) are identical to the texts found at Mawangdui.
There is abundant evidence to show that the authors of the Neijing used the earlier medical texts from Mawangdui and Zhangjiashan as primary references, further indicating the antiquity of these relics. For example, Chapter 10 of the Lingshu section of the Neijing contains a discussion of the meridians and their disorders that is very similar, in both form and content, to that found in the Classic of Moxibustion with Eleven Yin-Yang Meridians, one of the documents found at both Mawangdui and Zhangjiashan.
Of course, the Neijing did not simply reproduce these earlier documents, but rather refined and developed them, and introduced new therapeutic methods. The earlier Classic of Moxibustion with Eleven Yin-Yang Meridians is limited to moxibustion, while Chapter 10 of the Lingshu section of the Neijing mentions needling therapy, or acupuncture, for the first time. Although the medical texts preceding the Neijing discuss a wide variety of healing techniques, including herbal medicine, moxibustion, fomentation, medicinal bathing, bian stone therapy, massage, daoyin (physical exercises), xingqi (breathing exercises), zhuyou (incantation), and even surgery, these earlier documents contain no mention of acupuncture.
If needling therapy did indeed originate much earlier than the Neijing (c. 104-32 BC), the medical documents unearthed from Mawangdui and Zhangjiashan, very probably used as primary references by the Neijing’s authors, should also contain extensive discussions of acupuncture. However, they do not. This clearly indicates that acupuncture was not yet in use at the time that the Mawangdui and Zhangjiashan documents were compiled. Of course, it is not possible to draw a detailed picture of the state of acupuncture early in the Western Han Dynasty (206 BC-24 AD) based solely on the medical relics from Mawangdui and Zhangjiashan. But the fact that these documents were considered valuable enough to be buried with the deceased indicates that they do reflect general medical practice at the time.
The Historical Records (Shi Ji) (c. 104-91 BC) by Sima Qian contains evidence that acupuncture was first used approximately one hundred years prior to the compilation of the Neijing (c. 104-32 BC). The Historical Records, China’s first comprehensive history, consists of a series of biographies reaching from the time of the legendary Yellow Emperor (c. 2650 BC) to Emperor Wudi (156-87 BC) of the Western Han Dynasty. Among these are biographies of China’s two earliest medical practitioners, Bian Que and Cang Gong. Bian Que’s given name was Qin Yueren. It is known that he lived from 407-310 BC, during the late Warring States Period (475-221 BC), and was a contemporary of Hippocrates (c. 460-377 BC), the father of Western medicine. Bian Que’s life was surrounded by an aura of mystery which makes it difficult to separate fact from legend. His name means Wayfaring Magpie – a bird which symbolizes good fortune. It is said that an old man gave Bian Que a number of esoteric medical texts and an herbal prescription, and then disappeared. Bian Que took the medicine according to the mysterious visitor’s instructions. Thirty days later, he could see through walls. Thereafter, whenever he diagnosed disease, he could clearly see the internal organs of his patients’ bodies. Like the centaur Chiron, son of Apollo, who is sometimes regarded as the god of surgery in the West, Bian Que is considered to be a supernatural figure, and the god of healing. A stone relief, unearthed from a tomb dating back to the Han Dynasty (206 BC-220 AD), depicts him with a human head on a bird’s body (3). The Historical Records states that Bian Que successfully resuscitated the prince of the State of Guo using a combination of acupuncture, fomentation, and herbal medicine. Bian Que is thus considered to be the founder of acupuncture, and to have made the first recorded use of acupuncture during the Warring States Period (475-221 BC).
More solid evidence connects the birth of acupuncture with the famous ancient physician Chunyu Yi (c. 215-140 BC), popularly known as Cang Gong. Cang Gong’s life and work are described in detail in the Historical Records. The Historical Records state that in 180 BC, Cang Gong’s teacher gave him a number of precious medical texts that had escaped the book-burnings of the last days of the Great Qin Empire (221-207 BC). At that time, adherents of all opposing schools of thought were executed or exiled, and almost all books not conforming to the rigid Legalist doctrines that dominated the Qin Dynasty were burned. Although medical texts escaped the disaster, their owners still feared persecution. The banned books that Cang Gong received might have included a number whose titles appear in the Ancient Medical Relics of Mawangdui, such as the Classic of Moxibustion with Eleven Yin-Yang Meridians, Classic of Moxibustion with Eleven Foot-Arm Meridians, Method of Pulse Examination and Bian Stone, Therapeutic Methods for 52 Diseases, Miscellaneous Forbidden Methods, and The Book of Sex.
Cang Gong’s biography in the Historical Records discusses twenty-five of his cases, dating from approximately 186 BC to 154 BC. These case studies, the earliest in recorded Chinese history, give a clear picture of how disease was treated over 2100 years ago. Of the twenty-five cases, ten were diagnosed as incurable and the patients died as predicted. Of the fifteen that were cured, eleven were treated with herbal medicine, two with moxibustion in combination with herbal medicine, one with needling, and one with needling in combination with pouring cold water on the patient’s head. It can be seen from this material that Cang Gong used herbal medicine as his primary treatment, and acupuncture and moxibustion only secondarily. His use of moxibustion adheres strictly to the doctrines recorded in the medical relics from Mawangdui and Zhangjiashan. Although only two of Cang Gong’s moxibustion cases are recorded in the Historical Records, it is known that he was an expert in its use, and that he wrote a book called Cang Gong’s Moxibustion. Unfortunately, this book has been lost. In comparison with his wide-ranging utilization of herbal medicine and moxibustion, Cang Gong applied needling therapy very sparingly. Neither of Cang Gong’s two recorded acupuncture cases mentions specific acupoints or how the needles were manipulated, indicating that needling therapy at the time was still in its initial stage.
Although acupuncture was not in common use during Cang Gong’s day, his two recorded acupuncture patients were cured with only one treatment, indicating the efficacy of the nascent therapy. The rapid development of acupuncture was soon to follow. By the time the Neijing was compiled (c. 104-32 BC), approximately one hundred years after the time of Cang Gong, acupuncture had supplanted herbs and moxibustion as the treatment of choice. Only thirteen herbal prescriptions are recorded in the Neijing, compared with hundreds utilizing acupuncture.
Archaeological excavations of Western Han Dynasty (206 BC-24 AD) tombs have yielded a number of important medical relics related to acupuncture, in addition to the Neijing and Historical Records. In July of 1968, nine metal needles were excavated at Mancheng, Hebei Province from the tomb of Prince Liu Sheng (?-113 BC) of Zhongshan, elder brother of Emperor Wu Di (156-87 BC) of the Western Han Dynasty (206 BC-24 AD). Four of the needles are gold and quite well preserved, while five are silver and decayed to the extent that it was not possible to restore them completely. The number and shapes of the excavated needles indicate that they may have been an exhibit of the nine types of acupuncture needles described in the Neijing. This possibility is supported by the fact that a number of additional medical instruments were found in the tomb. These included a bronze yigong (practitioner’s basin) used for decocting medicinal herbs or making pills, a bronze sieve used to filter herbal decoctions, and a silver utensil used to pour medicine (4). Although many prehistoric bone needles have been unearthed, the fact that they have eyes indicates that they were used for sewing. Some scholars have inferred that prehistoric Chinese people may have used bone needles found with no eyes or with points on both ends for medical purposes. However, I believe that it is rash to draw such a conclusion based solely on relics that have lain buried for thousands of years. Rather, it is likely that the eyes of these needles have simply decayed over the millennia.
A thorough reevaluation of all extant literature, as well as documents and archaeological relics unearthed since the 1960s, confirms that acupuncture is not as ancient as has generally been assumed, and that it did not, in fact, appear and gradually develop during China’s Neolithic Age (c. 8000-3500 BC). Rather, this great invention arose quite suddenly and rapidly developed approximately two millennia ago. All evidence indicates that acupuncture first appeared during the Warring States Period (475-221 BC), during the time of Bian Que, developed during the early Western Han Dynasty (206 BC-24 AD), during the time of Cang Gong, and had fully matured by the latter part of the Western Han Dynasty, at the time of the compilation of the Neijing (c. 104-32 BC).
The Western Han Dynasty (206 BC-24 AD) provided fertile ground for the rapid growth and maturation of acupuncture as a comprehensive medical system. The previous centuries had seen the blossoming of Chinese culture during the intellectual give-and-take of the Spring and Autumn (770-476 BC) and Warring States (475-221 BC) periods. The subsequent territorial unification of China by the Qin Dynasty (221-207 BC) laid a foundation for the cultural integration of the diverse states. Taken in the context of China’s four thousand years of recorded history, the Western Han Dynasty was a period of intensive social and cultural advancement. Acupuncture is unique, and its invention in China at this time was the result of the development and unique convergence of several aspects of Chinese culture, including natural science, social structure and human relations, and most importantly, holistic philosophy.
References and notes:
1. Bai Xinghua, et al., Acupuncture: Visible Holism. Oxford: Butterworth-Heinemann, 2001, pp. 15-20.
2. Zhang Zhihe (1156-1228 AD), Confucians’ Duties to Their Parents (Rumen Shiqin). Quoted in Selection and Annotation of Medical Cases Treated by Past Dynasties’ Eminent Acupuncturists (Lidai Zhenjiu Mingjia Yian Xuanzhu), ed. Li Fufeng. Harbin: Heilongjiang Science and Technology Publishing House, 1985, p. 143.
3. Liu Dunyuan. Stone Relief Showing Practice of Acupuncture and Moxibustion from the Eastern Han Dynasty. Archaeology, 1972; (6): 47-51.
4. Zhong Yiyan, Medical Instruments Unearthed from the Western Han Dynasty Tomb of Liu Sheng. Archaeology, 1972, (3): pp. 49-53. | <urn:uuid:2902d38f-43f9-4eb7-9839-d2d2a33f615a> | CC-MAIN-2019-47 | https://cmp.herbalremediesforeveryone.info/how-old-is-acupuncture-challenging-the-neolithic-origins-theory/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668772.53/warc/CC-MAIN-20191116231644-20191117015644-00260.warc.gz | en | 0.963196 | 5,079 | 3.53125 | 4 |
What influences 11-year-olds to drink? Findings from the Millennium Cohort Study
BMC Public Health volume 16, Article number: 169 (2016)
The Erratum to this article has been published in BMC Public Health 2016 16:826
Drinking in youth is linked to other risky behaviours, educational failure and premature death. Prior research has examined drinking among mid and late teenagers, but little is known about the factors that influence drinking at the beginning of adolescence. Objectives were: 1) to assess associations of parental and friends’ drinking with reported drinking among 11 year olds; and 2) to investigate the roles of perceptions of harm, expectancies towards alcohol, parental supervision and family relationships on reported drinking among 11 year olds.
Analysis of data from the UK Millennium Cohort Study on 10498 11-year-olds. The outcome measure was having drunk an alcoholic drink, self-reported by cohort members.
13.6 % of 11 year olds reported having drunk. Estimates reported are odds ratios and 95 % confidence intervals. Cohort members whose mothers drank were more likely to drink (light/moderate = 1.6, 1.3 to 2.0, heavy/binge = 1.8, 1.4 to 2.3). Cohort members whose fathers drank were also more likely to drink but these estimates lost statistical significance when covariates were adjusted for (light/moderate = 1.3, 0.9 to 1.9, heavy/binge = 1.3, 0.9 to 1.9). Having friends who drank was strongly associated with cohort member drinking (4.8, 3.9 to 5.9). Associated with reduced odds of cohort member drinking were: heightened perception of harm from 1–2 drinks daily (some = 0.9, 0.7 to 1.1, great = 0.6, 0.5 to 0.7); and negative expectancies towards alcohol (0.5, 0.4 to 0.7). Associated with increased odds of cohort member drinking were: positive expectancies towards alcohol (1.9, 1.4 to 2.5); not being supervised on weekends and weekdays (often = 1.2, 1.0 to 1.4); frequent battles of will (1.3, 1.1 to 1.5); and not being happy with family (1.2, 1.0 to 1.5).
Examining drinking at this point in the lifecourse has potentially important public health implications as around one in seven 11 year olds have drunk alcohol, although the vast majority are yet to explore it. Findings support interventions working at multiple levels that incorporate family and peer factors to help shape choices around risky behaviours including drinking.
Regular heavy and binge drinking are recognised as major public health problems in terms of mortality, morbidity and wider social and economic consequences [1, 2], and regular and heavy drinking in youth are related to risky behaviours, educational failure and to the leading causes of death in adolescence [3–5]. Among the vast majority of people who consume alcohol, initiation of drinking takes place during adolescence. The question remains open as to whether early initiation of drinking causes problematic alcohol use later in life with recent review articles reaching opposing conclusions [7, 8]. However, the importance of adolescent drinking is likely shaped by the timing and pattern of drinking as well as the broader social context. Research from Italy and Finland suggests the significance of context-specific alcohol socialisation processes in relation to adolescent drinking. Over the last decade there has been a decline in the prevalence of drinking among adolescents in the UK, however consumption levels among UK youth remain higher than the European average. Among UK adolescent drinkers there is no evidence of a reduction in the quantity of alcohol consumed, and hospital admissions due to alcohol among the under-18s remain a concern.
Adolescence is a time of dramatic change that influences young people’s sense of autonomy and their exploration of risky behaviours. Factors shown to influence young people’s drinking include parent and peer drinking behaviours, perceptions of risk, expectancies towards alcohol and supportive family relationships [13–17]. Most prior studies have focused on drinking behaviours in mid and late teenage years [13, 17–19] and as highlighted in recent reviews [4, 20] less is known about influences on drinking among pre-teens. Improving our understanding of factors that influence drinking initiation at the beginning of adolescence could help develop policies and effective alcohol harm reduction strategies.
Given the paucity of work on the initiation of drinking in very early adolescence, in this paper we address two research objectives: 1) to assess associations of parental and friends’ drinking with reported drinking among 11 year olds; and 2) to investigate the roles of perceptions of harm, expectancies towards alcohol, parental supervision and family relationships on drinking among 11 year olds. To do this we analysed data from the large contemporary population-based Millennium Cohort Study.
The Millennium Cohort Study (MCS) is a UK nationally representative prospective cohort study of children born into 19244 families between September 2000 and January 2002. Participating families were selected from a random sample of electoral wards with a stratified sampling design to ensure adequate representation of all four UK countries, disadvantaged and ethnically diverse areas. The first sweep of data was collected when cohort members were around 9 months and the subsequent four sweeps of data were collected at ages 3, 5, 7, and 11 years. At the 11 year sweep, interviews were conducted during home visits with cohort members and their carers, and questions asked about alcohol consumption, socioeconomic circumstances and family relationships. Cohort members filled out a self-completion booklet in a private place within the home. Interview data were available for 69 % of families when cohort members were aged 11.
Drinking at age 11
In a question developed for the MCS survey, cohort members were asked “Have you ever had an alcoholic drink? That is more than a few sips?” (yes/no).
Parent and friends’ drinking
Parents were asked about the frequency and amount of alcohol they drank. “How often do you have a drink that contains alcohol?” (4 or more times a week, 2–3 times a week, 2–4 times per month, Monthly or less, Never). “How many standard alcoholic drinks do you have on a typical occasion?” Response options on frequency and quantity of alcohol consumed meant it was only possible to approximate drinking categories as set out in guidelines by the UK Department of Health. The same categories were used for mothers and fathers as follows: None; Light/moderate - those who drank but were not heavy/binge drinkers; Heavy/binge - 4 or more times a week and drinks a minimum of 3–4 drinks per drinking occasion, or a minimum of 5–6 drinks per occasion. Separate categories were created for cohort members where information on parents’ drinking behaviour was missing and when the father was absent from the household.
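To make the categorisation concrete, a minimal sketch of the rule described above is shown below. The quantity response options ("3-4", "5-6", and higher) are my paraphrase of the questionnaire rather than the exact MCS codings, so treat the function as illustrative only.

```python
def parental_drinking_category(freq: str, drinks_per_occasion: str) -> str:
    """Approximate the parental drinking categories described in the text.

    Heavy/binge = drinks 4 or more times a week with at least 3-4 drinks per
    occasion, or drinks at least 5-6 drinks per occasion regardless of frequency.
    """
    if freq == "Never":
        return "None"
    heavy_frequency = freq == "4 or more times a week"
    heavy_quantity = drinks_per_occasion in ("3-4", "5-6", "7-9", "10 or more")
    binge_quantity = drinks_per_occasion in ("5-6", "7-9", "10 or more")
    if (heavy_frequency and heavy_quantity) or binge_quantity:
        return "Heavy/binge"
    return "Light/moderate"

# Example: drinks 2-3 times a week, 5-6 drinks per occasion -> Heavy/binge
print(parental_drinking_category("2-3 times a week", "5-6"))
```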
Friends’ drinking was assessed by asking cohort members “How many of your friends drink alcohol?” Response categories were recoded: None of them as No; Some/Most/All of them as Yes; don’t know was retained as a separate category.
Cohort member and family characteristics
Gender; puberty, assessed from responses by the mother to questions (for girls: hair on body, breast growth, menstruation; for boys: hair on body, voice change, facial hair); birth order (first vs subsequent); current socioemotional difficulties (normal vs high score); antisocial behaviours (“Have you ever … been noisy or rude in a public place so that people complained or got you into trouble? … taken something from a shop without paying for it? … written things or sprayed paint on a building, fence or train or anywhere else where you shouldn’t have? … on purpose damaged anything in a public place that didn’t belong to you, for example by burning, smashing or breaking things like cars, bus shelters and rubbish bins?” categorised 0, 1, 2 or more); truancy (yes/no); cigarette smoking (yes/no); quintiles of equivalised family income; religious affiliation (none vs any of Christian, Muslim, Hindu, Jewish, other).
Potential moderating variables
Perception of risk due to alcohol was assessed by the question “How much do you think people risk harming themselves if they drink one or two alcoholic drinks nearly every day?” (no/slight risk, some risk, great risk). Positive expectancies towards alcohol were assessed by the following questions: “Drinking beer, wine, or spirits is a way to make friends with other people”; “It is easier to open up and talk about one's feelings after a few drinks of alcohol”; “Drinking alcohol makes people …worry less; …happier with themselves”. Negative expectancies were assessed using questions: “Drinking alcohol … gets in the way of school work; … makes it hard to get along with friends”; “If I drank alcohol without my parents’ permission I would be caught and punished”. Items were summed and used as two separate scales.
Parental supervision was assessed by questions about the weekday and weekend frequency of cohort member spending unsupervised time with friends (playing in the park, going to the shops or just ‘hanging out’). Items were combined into a three category variable: rarely/never (at most occasionally at weekends/on weekdays), sometimes, often (unsupervised most weekends and at least one day per week).
Markers of family relationships were: frequent battles of will with cohort member (yes/no); mother-cohort member closeness (extremely/very close vs fairly/not very close); cohort member happiness with their family (“On a scale of 1 to 7 where ‘1’ means completely happy and ‘7’ means not at all happy, how do you feel about your family?” Responses in the top decile of the distribution, i.e. the least happy responses, were taken to indicate not being happy with family).
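A minimal sketch of how these derived measures could be constructed is shown below (using pandas). The column names, response codings, and the direction of the happiness cut are illustrative assumptions, not the MCS variable definitions.

```python
import pandas as pd

# Illustrative column names and codings; the actual MCS variables differ.
df = pd.DataFrame({
    "pos_exp_1": [1, 0, 1], "pos_exp_2": [0, 0, 1],   # positive expectancy items (0/1)
    "neg_exp_1": [1, 1, 0], "neg_exp_2": [1, 0, 0],   # negative expectancy items (0/1)
    "unsupervised_weekday": [0, 1, 2],                 # 0 never, 1 sometimes, 2 often
    "unsupervised_weekend": [0, 2, 2],
    "happy_with_family": [1, 3, 7],                    # 1 completely happy ... 7 not at all happy
})

# Expectancy items summed into two separate scales
df["positive_expectancies"] = df[["pos_exp_1", "pos_exp_2"]].sum(axis=1)
df["negative_expectancies"] = df[["neg_exp_1", "neg_exp_2"]].sum(axis=1)

# Weekday and weekend supervision combined into rarely/never, sometimes, often
combined = df["unsupervised_weekday"] + df["unsupervised_weekend"]
df["supervision"] = pd.cut(combined, bins=[-1, 0, 2, 4],
                           labels=["rarely/never", "sometimes", "often"])

# Flag the top decile of the (un)happiness score; coding direction is an assumption
cutoff = df["happy_with_family"].quantile(0.9)
df["not_happy_with_family"] = (df["happy_with_family"] >= cutoff).astype(int)
print(df[["positive_expectancies", "negative_expectancies",
          "supervision", "not_happy_with_family"]])
```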
Data on cohort member drinking were available for 12644 participants. Missing data reduced the sample to 10498 (83.0 %), as follows: friends drinking = 56; puberty = 1010; socioemotional difficulties = 475; religious affiliation = 34; antisocial behaviours = 27; perception of harm = 337; positive expectancies = 189; negative expectancies = 315; parental supervision = 87; frequent battles = 1106; relationship between mother and child = 762; happy with family = 91.
To estimate the association between our exposures of primary interest – mother’s, father’s or friends’ drinking with cohort member drinking, we ran three sets of logistic regression models adding covariates in stages. Boys were more likely to report drinking compared with girls (15.7 vs. 11.3 %), but as there were no gender differences in observed associations between parent and friends’ drinking with cohort member drinking, we present analyses for boys and girls combined, and all models adjust for gender.
Model 0 is the baseline model which includes the primary independent variable (mother’s, father’s or friends’ drinking) and gender.
Model 1 additionally adjusts for control variables: puberty, birth order, socioemotional difficulties, antisocial behaviours, truancy, smoking, income, religion and for other alcohol exposure variables e.g. when mother’s drinking is the primary exposure, we add father’s and friends’ drinking to this step of the analysis.
Model 2 is fully adjusted adding in potential moderator and mediator variables, perception of harm due to alcohol, positive and negative expectancies, parental supervision and family relationships (battles, closeness, happiness with family).
All analysis was carried out using Stata version 13.1 (Stata Corp).
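The staged modelling strategy can be expressed compactly in code. The sketch below uses Python and statsmodels rather than Stata, and the variable names are illustrative placeholders rather than the MCS dataset's names; it is intended only to show how Models 0-2 nest within one another.

```python
import statsmodels.formula.api as smf

# Illustrative variable names; the MCS dataset uses different names.
outcome = "cm_drank"                                  # cohort member ever drank (0/1)
exposure = "mother_drinking"                          # primary exposure (categorical)
controls = ["gender", "puberty", "birth_order", "sed_difficulties",
            "antisocial", "truancy", "smoking", "income_quintile",
            "religion", "father_drinking", "friends_drink"]
moderators = ["perceived_harm", "pos_expectancies", "neg_expectancies",
              "supervision", "battles_of_will", "mother_closeness", "happy_family"]

def fit_model(df, extra_terms):
    """Fit a logistic regression of the outcome on the exposure, gender, and extra terms."""
    formula = f"{outcome} ~ C({exposure}) + gender"
    if extra_terms:
        formula += " + " + " + ".join(extra_terms)
    return smf.logit(formula, data=df).fit(disp=False)

# Model 0: exposure + gender; Model 1: + controls; Model 2: fully adjusted.
# model0 = fit_model(mcs_df, [])
# model1 = fit_model(mcs_df, controls[1:])               # gender already included
# model2 = fit_model(mcs_df, controls[1:] + moderators)
# Odds ratios and 95% CIs: np.exp(model2.params), np.exp(model2.conf_int())
```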
Who drinks by age 11?
Overall 13.6 % of cohort members reported having drunk more than a few sips of an alcoholic drink. Cohort members who reported drinking were more likely to be boys (15.7 % vs 11.3 %, p < 0.001), to have started puberty (14.3 % vs 13.2 %), to be a second or later born child (14.0 % vs 12.9 %), to have socioemotional difficulties (18.7 % vs 12.8 %, p < 0.001), to report antisocial behaviours (none = 10.1 %, 1 = 20.7 %, 2 or more = 42.0 %, p < 0.001), report truancy (24.8 % vs 13.2 %, p < 0.001), smoke cigarettes (50.9 % vs 12.4 %, p < 0.001), to be from poorer families (15.4 % in the poorest quintile vs 11.5 % in richest quintile, p < 0.01) and not have any religious affiliation (15.7 % vs 11.6 %, p < 0.001). Table 1 shows the distribution of covariates by cohort member drinking.
Does parental or friends’ drinking matter?
Cohort members whose mothers drank were more likely to drink and these estimates changed little on adjustment for covariates (fully adjusted OR – light/moderate = 1.6, 1.3 to 2.0, heavy/binge = 1.8, 1.4 to 2.3 compared to those with non-drinking mothers). Cohort members for whom data on mother’s drinking was missing were also more likely to drink (fully adjusted OR = 2.0, 1.2 to 3.4). Cohort members whose fathers drank were also more likely to drink but these estimates lost statistical significance when covariates were taken into account (fully adjusted OR – light/moderate = 1.3, 0.9 to 1.9, heavy/binge = 1.3, 0.9 to 1.9). Having friends who drank was associated with more than 7 times the odds of cohort member drinking, and twice the odds when cohort members reported not knowing whether their friends drank. These estimates changed on adjustment for covariates but remained highly statistically significant (fully adjusted ORs 4.8, 3.9 to 5.9 and 1.8, 1.4 to 2.2 respectively) (Table 2).
What is the role of perceptions of harm, expectancies towards alcohol, parental supervision, and family relationships?
Perceptions of harm, expectancies towards alcohol, parental supervision, and family relationships were associated with the likelihood of cohort member drinking in the expected direction (Appendix Table 3). Associated with the reduced likelihood of cohort member drinking were: heightened perception of harm from drinking 1–2 drinks daily (OR - some risk = 0.9, 0.7 to 1.1, great risk = 0.6, 0.5 to 0.7); and negative expectancies towards alcohol (OR = 0.5, 0.4 to 0.7). Associated with an increased risk of cohort member drinking were: positive expectancies towards alcohol (OR = 1.9, 1.4 to 2.5); not being supervised by parents on weekends and weekdays (for often OR = 1.2, 1.0 to 1.4); frequent battles of will (OR = 1.3, 1.1 to 1.5); and not being happy with family (OR = 1.2, 1.0 to 1.5).
Our results suggest that nearly 14 % of 11 year olds in the UK have had an alcoholic drink. The odds of drinking were greater when their friends drank compared to when their parents drank: boys and girls who reported having friends who drank were five times more likely to report drinking themselves compared to those who reported having friends who did not drink. Having a mother who drank heavily was associated with an 80 % increased odds of drinking, however, fathers’ drinking was not independently associated with children’s drinking. Our results suggest that 11 year olds’ perceptions of risk, their expectancies towards alcohol and relationships with their families were independently related to the likelihood of drinking.
Distinct strengths of this work are that we used data from a large sample representative of 11 year olds in the UK; we simultaneously examined relationships with parents’ and friends’ drinking; and we were able to take into account rich contextual information about young people’s understanding of the risk of drinking alcohol, their expectancies, positive and negative, towards alcohol and family relationships. On the other hand, there are several limitations to acknowledge, including that the analyses were cross-sectional as information on cohort member and friends' drinking, perceptions of harm and expectancies around drinking are only available from one wave of data collection, thus causal inference cannot be drawn; the data on cohort member and friends’ drinking were developed for the MCS survey, making it difficult to compare prevalence rates with other studies, although closed questions as used in this study have been shown to be valid markers of alcohol consumption in adolescents; the data on cohort member and friends’ drinking were reported by the cohort member and thus may be prone to under or over estimation, with one prior contemporary study suggesting a lower prevalence of drinking among British 11 year olds, although this may be due to different survey questions; we were not able to distinguish those who had just tried one or two drinks ever from cohort members who are regularly drinking; also there were no data available on the context of cohort member drinking and so it was not possible to assess the circumstances in which, or with whom, 11 year olds drank.
Prior work has charted the prevalence of drinking among 11 year olds in the UK and elsewhere [26, 27]. To our knowledge this is the first UK study in this young age group to attempt a detailed exploration of family and peer influences, along with the young person’s views about alcohol, on the likelihood of drinking. Moreover, most prior work has been set in the US and it may be that associations vary across contexts. We examined associations between parent and friends’ drinking and family relationships at the very start of the adolescent period, whereas prior studies have looked at these associations among older adolescents. For instance, Cable and Sacker’s examination of 16 year olds from the 1970 Birth cohort suggests that negative expectancies are not protective. However, we might expect to see the same pattern of association as adolescence proceeds, with peer influences and associated social norms having a more profound effect on alcohol use in later than early adolescence [13, 15–17].
A recent Cochrane review concluded there was limited evidence that school/education based intervention programmes were effective, and where they did work the focus was more holistic, not solely on alcohol. In keeping with this we found markers of other risky behaviours, including smoking and antisocial behaviours to be strongly independently related to drinking at age 11. Clearly, there are opportunities to intervene and help shape choices around risky behaviours including drinking. Our findings support policies working at multiple levels that incorporate family and peer factors. For example: compared with mother’s drinking, father’s drinking was not as strongly related to drinking in their 11 year olds but this may be because fathers are more likely to drink in settings other than the home. Our observations that greater awareness of the harms from alcohol and negative expectancies are associated with reduced odds of 11 year olds drinking support strategies to empower young people to say no to alcohol. This is particularly important, as undoubtedly, peer influences become stronger in shaping young people’s behaviours as adolescence proceeds.
Our study was not able to examine contexts around drinking occasions among 11 year olds – who do they drink with? Where, when and what do they drink? How do they acquire alcohol and what are the broader social norms around drinking? One study that compared young people’s drinking in Italy and Finland showed that Italian youth were more likely to drink with meals under family supervision, whereas Finnish youth were more likely to drink in settings that led to drunkenness. Being able to investigate context in more detail would help inform alcohol harm prevention strategies. Longitudinal studies looking at changes in expectancies towards alcohol and how these relate to changes in young people’s behaviours, including potential clustering with other risky behaviours, are important areas for future study.
Examining drinking at this point in the lifecourse has potentially important public health implications as around one in seven 11 year olds have drunk alcohol, although the vast majority are yet to explore it. Even though the links between early drinking and later life drinking problems remain unclear, we need to further improve our understanding of the relative importance and meaning of drinking in early adolescence as regular and heavy drinking among young people is linked to harmful behaviours and premature death. However, apparent culturally specific differences in the meaning of drinking underscore the importance of identifying factors that shape early drinking experiences across settings. Improving our understanding of context-specific drivers of early drinking presents golden opportunities to develop effective policy and prevention strategies.
Ethical approval was not required for secondary analysis of publically available archived data.
MCS: Millennium Cohort Study
Room R, Babor T, Rehm J. Alcohol and public health. Lancet. 2005;365(9458):519–30.
Parry CD, Patra J, Rehm J. Alcohol consumption and non-communicable diseases: epidemiology and policy implications. Addiction. 2011;106(10):1718–24.
Gore FM, Bloem PJ, Patton GC, Ferguson J, Joseph V, Coffey C, Sawyer SM, Mathers CD. Global burden of disease in young people aged 10–24 years: a systematic analysis. Lancet. 2011;377(9783):2093–102.
Davies S. Annual Report of the Chief Medical Officer 2012. Our Children Deserve Better: Prevention Pays. UK Department of Health; 2013. https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/255237/2901304_CMO_complete_low_res_accessible.pdf
World Health Organization. Global status report on alcohol and health. [http://www.who.int/substance_abuse/publications/global_alcohol_report/en/]. 2014.
Degenhardt L, Chiu WT, Sampson N, Kessler RC, Anthony JC, Angermeyer M, Bruffaerts R, de Girolamo G, Gureje O, Huang Y et al. Toward a global view of alcohol, tobacco, cannabis, and cocaine use: findings from the WHO World Mental Health Surveys. PLoS Med. 2008;5(7), e141.
McCambridge J, McAlaney J, Rowe R. Adult Consequences of Late Adolescent Alcohol Consumption: A Systematic Review of Cohort Studies. PLoS Med. 2011;8(2), e1000413.
Maimaris W, McCambridge J. Age of first drinking and adult alcohol problems: systematic review of prospective cohort studies. J Epidemiol Community Health. 2014;68(3):268–74.
Rolando S, Beccaria F, Tigerstedt C, Torronen J. First drink: What does it mean? The alcohol socialization process in different drinking cultures. Drug-Educ Prev Polic. 2012;19(3):201–12.
Fuller E, Hawkins V. Health and Social Care Information Centre, Smoking, drinking and drug use among young people in England in 2013. London: NatCen Social Research; 2014.
The 2011 ESPAD Report. Substance Use Among Students in 36 European Countries.[http://www.espad.org/uploads/espad_reports/2011/the_2011_espad_report_full_2012_10_29.pdf]. 2012.
Ellison J, Abbott D. Alcoholic Drinks: Children:Written question - 213700. In. Edited by Health Do. http://www.parliament.uk/business/publications/written-questions-answers-statements/written-question/Commons/2014-11-06/213700/; 2014.
Cable N, Sacker A. Typologies of alcohol consumption in adolescence: Predictors and adult outcomes. Alcohol Alcohol. 2008;43(1):81–90.
Kuther TL. Rational decision perspectives on alcohol consumption by youth: Revising the theory of planned behavior. Addict Behav. 2002;27(1):35–47.
Nash SG, McQueen A, Bray JH. Pathways to adolescent alcohol use: family environment, peer influence, and parental expectations. J Adolesc Health. 2005;37(1):19–28.
Gardner M, Steinberg L. Peer influence on risk taking, risk preference, and risky decision making in adolescence and adulthood: an experimental study. Dev Psychol. 2005;41(4):625–35.
Bremner P, Burnett J, Nunney F, Ravat M, Mistral W. Young people, alcohol and influences. York; Joseph Roundtree Foundation. 2011.
Melotti R, Heron J, Hickman M, Macleod J, Araya R, Lewis G, Cohort AB. Adolescent alcohol and tobacco use and early socioeconomic position: the ALSPAC birth cohort. Pediatrics. 2011;127(4):e948–55.
MacArthur GJ, Smith MC, Melotti R, Heron J, Macleod J, Hickman M, Kipping RR, Campbell R, Lewis G. Patterns of alcohol use and multiple risk behaviour by gender during early and late adolescence: the ALSPAC cohort. J Public Health. 2012;34 Suppl 1:i20–30.
Marshall EJ. Adolescent Alcohol Use: Risks and Consequences. Alcohol and Alcoholism. 2014; 49: 160–164.
Millennium Cohort Study, A Guide to the Datasets (Eighth Edition) First, Second, Third, Fourth and Fifth Surveys. [http://www.cls.ioe.ac.uk/shared/get-file.ashx?id=1806&itemtype=document]
Information for researchers and professionals about the Strengths & Difficulties Questionnaires. [http://www.sdqinfo.org/]
Guo J, Hawkins JD, Hill KG, Abbott RD. Childhood and adolescent predictors of alcohol abuse and dependence in young adulthood. J Stud Alcohol. 2001;62(6):754–62.
Booker CL, Skew AJ, Kelly YJ, Sacker A. Media Use, Sports Participation, and Well-Being in Adolescence: Cross-Sectional Findings From the UK Household Longitudinal Study. Am J Public Health. 2015; 105:173-179.
Lintonen T, Ahlström S, Metso L. The Reliability of Self-Reported Drinking in Adolescence. Alcohol & Alcoholism. 2004; 39(4):362–368.
Donovan JE, Molina BS. Types of alcohol use experience from childhood through adolescence. J Adolesc Health. 2013;53(4):453–9.
Duncan SC, Duncan TE, Strycker LA. Alcohol use from ages 9 to 16: A cohort-sequential latent growth model. Drug Alcohol Depend. 2006;81(1):71–81.
Donovan JE. Adolescent alcohol initiation: A review of psychosocial risk factors. J Adolesc Health. 2004;35(6):529.e527–18.
Foxcroft DR, Tsertsvadze A. Universal alcohol misuse prevention programmes for children and adolescents: Cochrane systematic reviews. Perspect Public Health. 2012;132(3):128–34.
We would like to thank the Millennium Cohort Study families for their time and cooperation, as well as the Millennium Cohort Study Team at the Institute of Education. The Millennium Cohort Study is funded by ESRC grants.
This work was supported by a grant from the Economic and Social Research Council ES/J019119/1. The funders had no role in the interpretation of these data or in the writing of this paper.
The authors declare that they have no competing interests.
YK designed the study, and drafted the manuscript. AG provided input on analytical strategy, analysed the data and commented on drafts of the manuscript. AS provided analytical support and commented on drafts of the paper. NC, RW & AB contributed to the analytical strategy and commented on drafts of the manuscript. All authors read and approved the final manuscript. YK will act as guarantor for the manuscript.
About this article
Cite this article
Kelly, Y., Goisis, A., Sacker, A. et al. What influences 11-year-olds to drink? Findings from the Millennium Cohort Study. BMC Public Health 16, 169 (2016) doi:10.1186/s12889-016-2847-x
- Early Adolescence
- Millennium Cohort Study
- Parent Drinking
- Peer Drinking | <urn:uuid:ce925a50-9575-4645-bbe9-5d05850a9aa1> | CC-MAIN-2019-47 | https://bmcpublichealth.biomedcentral.com/articles/10.1186/s12889-016-2847-x | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669967.80/warc/CC-MAIN-20191119015704-20191119043704-00060.warc.gz | en | 0.938 | 6,047 | 2.984375 | 3 |
Infection by dandy fever viruses is accounted as one of the major public wellness jobs of more than 100 states in tropical and semitropical countries. Each twelvemonth, an estimated 50 million of septic instances and over 500,000 instances of dandy fever hemorrhagic febrility and dandy fever daze syndrome or DHF/DSS were reported ( 1 ) . Dengue febrility and dandy fever hemorrhagic febrility are caused by dandy fever virus. Aedes aegypti and Aedes albopictus are used as a mosquito vectors. Dengue virus is a positive sense individual stranded RNA and a member of the genus Flavivirus in the household Flaviviridae. Its genome is about 11 Kb in length. The mature virion consists of three structural proteins and seven nonstructural proteins ( 2 ) . Dengue viruses are comprised of four serotypes ( DEN-1, 2, 3, and 4 ) which all four serotypes can do terrible disease ( 3 ) . Clinical manifestation of dengue infection is runing from symptomless to diagnostic infection. The diagnostic infection may come on into feverish unwellness to dengue febrility or to dangerous disease ( DHF/DSS ) ( 4 ) .
It is widely accepted that badness of dengue infection is determined by both viral factors and host factors. In the instance of viral factors, the different strains of dandy fever virus were reported to do the different clinical result. The well known grounds is studied in American genotype and Southeast Asian genotype of DEN-2 in which Southeast Asian genotype virus is found to tie in with terrible disease ( DHF/DSS ) while the American genotype virus can do merely mild disease ( 5, 6, 7 ) . However, the result of dandy fever virus infection besides depends on host factors due to the fact that terrible signifier of dandy fever virus infection occurs in dandy fever immune person more than in dengue non-immune person. This grounds indicates that host unsusceptibility is really of import. Several hypothesizes were used as the theoretical account to explicate functions of host unsusceptibility on the terrible disease in dengue infection and one of those hypothesis is antibody dependent sweetening ( ADE ) . ADE infection is the phenomenon in which preexistent antibody heighten viral reproduction instead than neutralizes the viruses ( 8 ) . ADE occurs when virus-antibody complex binds to FcR on the FcR-bearing cells ensuing in the addition of viral entry and viral production ( 9 ) . Several viral pathogens can use ADE to ease the infection and to increase virus production. In Ross River virus ( RRV ) , infection with subneutralizing antibody to RRV can stamp down antiviral cistron look which help virus to retroflex freely in macrophage cells, in vitro ( 10 ) . Therefore, ADE phenomenon becomes one of the jobs for dengue vaccinum development because cross-reactive antibody may heighten the viral burden ensuing in the addition of disease patterned advance ( 11 ) .
Disease badness in dandy fever virus infection positively correlates to the high viral burden. Then, the inquiry is, does ADE infection facilitate virus production. Halstead and co-worker observed that most badness instances in dengue infection normally occur with the patient sing dengue virus infection ( 12 ) . Furthermore, they farther demonstrated that infection with composites between dandy fever virus and subneutralizing antibody from terrible disease patients could heighten viremia in Rhesus monkeys ( 13 ) . A similar phenomenon has been demonstrated in an in vitro assay utilizing FcR bearing cells ( 15,16,17,18 ) . The heightening mechanism is found to originate at the interaction between Fc?RI or Fc?RII and virus-IgG composite ( 19, 20 ) . This interaction stimulates negative regulators of intracellular innate immune response, therefore, the first line of intracellular defence are suppressed ( ) . This mechanism creates an appropriate biological environment for dandy fever reproduction ensuing in increasing viral production. Recently, computational theoretical account based on epidemic theory suggested that ADE helps the dandy fever viruses spread faster than other co-circulating dandy fever viruses that did non see sweetening ( 24 ) . This grounds implies to us that ADE infection at least inpart may increase dandy fever virus fittingness. However, the word picture of dandy fever virus that released from ADE infection has non been good studied. We hypothesize that infection via dengue virus-antibody composites increase dandy fever virus fittingness. Therefore, several factors that determine viral fittingness were compared between viruses produced from DENV-ADE infection and DENV non-immune serum infection.
The chief aim of this thesis is to look into that heightening antibody could increase fittingness of dandy fever viruses. Therefore, this experiment is divided into two sub-objective which are to analyze the efficiency of reproduction of dandy fever viruses in ADE infection in comparing with those in DENV infection and to look into the feature of dandy fever viruses produced from ADE infection versus viruses from DENV infection.
Dengue virus is the infective agent which causes dandy fever febrility and dandy fever hemorrhagic febrility. Ades aegypti and Aedes albopictus mosquitoes are used as vectors in the transmittal to human. Dengue virus has spherical virion which contains a positive sense single-stranded RNA about 11 Kb in length. The genome organisation of dandy fever virus is 5′-UTR-C-prM-E-NS1-NS2A-NS2B-NS3-NS4A-NS4B-NS5-UTR-3 ‘ . A structural protein in mature dandy fever virus consists of envelope glycoprotein ( E ) , membrane protein ( M ) and capsid protein ( C ) . The envelop glycoprotein and membrane protein are located on the outer surface of virion. The envelope glycoprotein comprises of three spheres which are sphere I, domain II and domain III. The sphere III of dandy fever virus plays function in the binding to receptor on permissive cells and induces humeral immune response during dengue infection ( ) . Mutant in sphere III of E protein is involved in the virulency of dandy fever virus. In vitro, aminic acerb permutation in at E 390 in sphere III part was showed to diminish the replicative efficiency of dandy fever virus in monocyte-derived macrophage ( ) . Similarly, aminic acerb permutation from Asp to His at E 390 was showed to increase neurovirulence in mice ( ) . In add-on to the envelope glycoprotein, dandy fever virus surface is besides composed of membrane protein. In the immature atom, membrane proteins are present as PrM protein and so it is cleaved by host peptidase furin. The map of PrM protein is to protect E protein to blend when immature atoms are transported through the acidic environment of the trans-Golgi web in secretory tract ( ) . The mirid bug protein is indispensable for the ripening of viral atom and nucleocapsid formation which consists of the multiples transcripts of C protein environing a individual viral RNA genome ( ) . Furthermore, dandy fever virus contains seven non-structural proteins which are NS1, NS2A, NS2B, NS3, NS4A, NS4B and NS5 that involved in the dandy fever reproduction ( ) . NS1 protein is non found in the viral atom alternatively it is released into extracellular during infection. NS1 protein has been reported to be cofactor for viral RNA reproduction and colocalization of double-strand RNA replicative signifier ( 6, 7 ) . NS3 protein is indispensable for viral polyprotein processing, RNA reproduction, and capping of viral genomic RNA. The N-terminal part contains viral peptidase that requires the NS2B protein for peptidase activity to treat the viral polyprotein. The C-terminal part service as the RNA helicase/NTPase which responsible for wind offing a double-stranded RNA replicative signifier and RNA replicative intermediate during viral reproduction ( 17,18 ) . The 5 ‘ triphosphatase activity of NS3 protein and NS5 methytranferase are involved in the capping of viral RNA ( ) . NS5 protein has three spheres which are the N-terminal S-adenosyl methionine methyltransferase ( MTase ) sphere, atomic localisation sequence ( NLS ) , and RNA dependent RNA polymerase sphere ( 20 ) . The N-terminal S-adenosyl methionine methyltransferase ( MTase ) sphere is responsible for both guanine N-7 and ribose 2’-O methylations which are required for formation of viral RNA cap construction ( 21, 22 ) . The atomic localisation sequences ( NLS ) is recognized by cellular factors to let protein to transport into the karyon ( 23 ) . RNA dependent RNA polymerase is located in The C-terminal sphere. 
The minus-strand RNA serves as a templet for the RNA dependant RNA polymerase to synthesise plus-strand genomic RNA ( 24, 25 ) . The precise map of NS2A, NS2B, NS4A and NS4B remains ill-defined. NS2A and NS4A are believed to tie in in viral reproduction. NS2A was reported to adhere the 3 ‘ untranslated part ( UTR ) of viral RNA and to the other constituent of reproduction composite ( 26 ) . NS2B is necessary for NS3 protein to exhibit its proteolytic activity ( ) .
Dengue virus reproduction
In order to reproduction, dengue virus can come in to host cell by receptor-mediated endocytosis. The acidification of the endosome allows the merger of viral membrane to vesicular membrane ensuing in releasing of viral genomic RNA into the cytol ( ) . The viral protein is so translated straight from the positive-sense RNA as a individual polyprotein that is treating by viral and host peptidases ( ) . The negative-strand viral RNA is synthesized from positive-strand viral RNA and used as the templet for the production of the viral genomic RNA ( ) . The viral formation occurs in the endoplasmic Reticulum to organize the immature virion. Then, it is transported into trans-Golgi web ( TGN ) where it is cleaved by host peptidase furin to bring forth the mature and released out of the host cell by host secretory tract ( )
The diverseness of dandy fever virus
Dengue virus is RNA virus which is the member of flavivirus genus of Flaviviridae household. There are four serotypes of dandy fever virus which are DEN-1, DEN-2, DEN-3, and DEN-4. All four serotypes of dandy fever virus can do terrible disease ( 3 ) . Dengue virus most likely arises from sylvatic strain. The rhythm of transmittal exists in the wood of Asia and Africa between non-human Primatess and Aedes mosquitoes. Cross-species transmittal from non-human Primatess to worlds may happen due to the rapid addition in human population, the widespread urbanisation, and the modern transit. However, the mechanism of cross-species transmittal remains unknown ( 4 ) . The development of dandy fever virus started when DENV-4 was the first to diverge, followed by DEN-2, and the concluding spilt between DEN-1 and DEN-3. Splitting of DENV into 4 serotypes may be due to geographic divider or ecological divider in different archpriest populations so that the four serotypes evolved independently ( 5 ) . In add-on to split dandy fever virus into four serotypes, each serotype of dandy fever virus can be branched in to several genotypes. Based on nucleotide sequences of the envelope ( E ) cistron, DEN-1 viruses have been divided into two genotypes, DEN-3 viruses into five genotypes, DEN-4 viruses into one genotype and DEN-2 viruses into six genotypes ( ) . However, the apprehension of the planetary scattering and evolutionary history of those genotypes remain uncomplete ( ) . Then, DEN-2 is best studied for familial diverseness. DEN-2 can be classified into 6 genotype that are American genotype, American/Asian genotype, Asiatic 1 genotype, Asiatic 2 genotype, and Cosmopolitan genotype. These genotypes frequently have different geographical distributions. For illustration, Cosmopolitan genotype has a distribution covering the tropical universe, Asiatic 1 and Asiatic 2 genotypes are merely found in Asiatic population while the American genotype disperses chiefly in the Americas ( ) . Two major factors are associated with familial diverseness of dandy fever viruses that are mutant and intra-serotypic recombination. Mutant occurs because viral RNA-dependent RNA polymerase deficiencies of proofreading mechanism and creates dynamic distributions of non-identical but closely related mutant genome or quasispecies ( ) . Clonal sequencing of dandy fever viruses from plasma of patients reveal that dandy fever virus exist as a quesispecies in vivo ( ) . All four serotypes of dandy fever virus have a average permutation rate about 10-3 nucleotide permutations per site per twelvemonth ( ) . Intra-serotypic recombination besides arises because the polymerase enzyme switches between parental viral molecules during reproduction ( ) . Analysis of dandy fever virus cistron sequences of samples from patients was identified as recombination bespeaking that there were recombination within DEN-1 strain in natural population of dandy fever virus ( ) . Recombination has been demonstrated in the other member of the Flavividae such as hepatitis C virus and pestiviruses ( ) .
Clinical manifestation of dandy fever virus infection
Clinical manifestation of dandy fever virus infection can change from symptomless to terrible infection with hemorrhage and daze ( DHF/DSS ) . The manifestation of diagnostic signifier of dengue infection can run from uniform febrility, dandy fever febrility, dandy fever hemorrhagic febrility and dandy fever daze syndrome. Undifferentiated fever normally occurs in primary infection but may follow in secondary infection. Clinically, it is non difference from other viral infection. Dengue febrility ( DF ) follows either primary or secondary infection. It is characterized by utmost febrility, concern, retro-orbital hurting, articulation and muscular hurting. A roseola may besides happen about three to four yearss after oncoming of the febrility. Hemorrhagic manifestation is uncommon in dandy fever febrility ( 1 ) . However, bleeding can besides sometime occur in dandy fever febrility patient ( 2 ) . Dengue hemorrhagic febrility ( DHF ) is a terrible signifier of dandy fever virus infection. It normally follows in secondary dandy fever infection but sometimes occur in primary infections. The unwellness of DHF usually starts with suddenly high febrility accompanied by terrible concerns aspecially in retro-orbital country, anorexia, facial flushing, acute abdominal hurting, emesis and other symptom similar to those DF. DHF is characterized by plasma escape due to increasing of vascular permeableness, thrombopenia, bleeding, and in the most terrible instance, daze. Dengue hemorrhagic febrility is divided into four classs of badness. Grade I, haematological manifestation is merely positive tourniquet trial. Grade II, it has self-generated shed blooding in the tegument or from mucosal surfaces in add-on to the manifestation of class I. Dengue daze syndrome ( DSS ) refers to Rate III and Grade IV of DHF which daze is present. DHF with circulative failure seen as pulse force per unit area and hypotension is DHF class III. In DHF class IV, there is profound circulatory prostration with an undetectable blood force per unit area and pulsation. When the circulatory prostration is present, DHF is associated with high mortality ( 3, 4,5 ) .
The epidemiology of dandy fever
Dengue is public wellness job in many states worldwide with more than 50 million instances of dengue infection and over 500,000 instances of DHF were reported ( 1 ) . It is believed that a pandemic of dandy fever started from Southeast Asia after World War II. The motion of military personnels makes the ecology changed which contributes the distribution of the mosquito vector ( 2 ) . The first eruption of DHF occurred in Manila Philippines in 1954 and so distribute to several states in the Region. After that, Southeast Asian became the hyperendemic countries because there is co-circulating of four serotypes of dandy fever viruses ( 3 ) . At present, the epidemic of dandy fever is reported in many states such as Thailand, Indonesia, Sri Lanka, Vietnam, Singapore, India, and Myanmar ( ) . Thailand is hyperendemic country of dandy fever viruses ( 4, 5, 6 ) . DHF were foremost reported in 1958 which over 200 deceases were reported. After that, the big epidemic occurred in 1987 with 174,285cases and about 1,000 deceases. Currently ( 7 ) , DF and DHF have been reported at least 10,000 instances per twelvemonth and it has been taking cause of hospitalization of kids in Thailand. Furthermore, four serotypes of dandy fever viruses have been isolated in DHF instances ( 8 ) . In America, DHF instances were foremost reported in America part during the dengue epidemic in Cuba, 1981 which an estimated 350,000 instances of DHF and 150 deceases were reported ( 9 ) . This eruption caused by the debut of Southeast Asian genotype of DEN-2 ( 10 ) . After that, DHF were reported in many states in Americas. In 1989, the 2nd eruption of DHF occurred in Venezuela with 3,108 instances of DHF and 73 deceases ( 11 ) . The go arounding serotypes of dandy fever virus in this eruption were DEN-1, 2 and 4 ( 12 ) . In 1994-1997, the big eruption of dandy fever occurred in America parts. There were reported of DHF instances in many states of America after the re-introduction of new strain of DEN-3 which caused of DHF epidemics in Sri Lanka and India in the 1980 ( 13, 14, 15, 16 ) . At present, dengue epidemic state of affairs in America parts is non different from those in Asia ( 17 ) . Recently, a big eruption occurred in Brazil with 120,570 instances and 647 of DHF were reported. The dominant circulating serotype is DEN-3 and DEN-2 ( 18 ) . However, all four serotype of dandy fever virus have been reported to do DHF worldwide. The incidence of DHF is more often found in patients with secondary infection than primary infection ( 19, 20 ) . The increasing DHF/DSS instances have been reported in every twelvemonth and no effectual vaccinums are available. Therefore, dengue vector control grogram and the pathogenesis surveies are the pressing demand to restrict the dandy fever disease.
The clinical result of dandy fever virus infection depends on both host factors and viral factors. Several hypotheses have been proposed to explicate the pathogenesis of DHF/DSS in dandy fever virus infection such as the virulency strains of dandy fever virus or the host unsusceptibility.
Differences strains of dandy fever virus were reported to do differences clinical result of dandy fever virus infection. The virulency of dandy fever virus was observed in both epidemiological surveies and molecular surveies. The well known grounds was studied in American genotype viruses and Southeast Asian genotype viruses of dandy fever virus serotype 2. American genotype viruses are referred to low virulency while Southeast Asian genotype viruses associate high virulency. Before 1981, there was no study of DHF instances in America parts. In 1981, there was introduced of Southeast Asia genotype viruses which result in the DHF instances were foremost reported in Cuba ( ) . Then, Halstead et Al. studied the serum of samples from patient before and after the epidemic in Peru, 1995. They found that in secondary infection no instances of DHF were found with American genotype virus infection. This suggests that the American genotype did non do dandy fever hemorrhagic febrility and dandy fever daze syndrome ( ) . Molecular surveies besides supported low virulency of American genotype viruses. Pryor et al investigated amino acid difference between American genotype and Southeast Asian genotype, they found that replicate efficiency of dandy fever virus was decreased in monocyte-derived macrophages when amino acid permutation occurs at E-390 ( ) . Similarly, mutant at E-390, the 3 ‘ NTRs and 5 ‘ NTRs sequence of American genotype were introduced into of the Southeat Asian genotype virus ensuing in diminishing virus end product in cell civilization ( ) .
Several hypothesizes were used as the theoretical account to explicate functions of host unsusceptibility on the terrible disease in dengue infection and one of those hypothesis is antibody dependent sweetening ( ADE ) . Antibody dependent sweetening ( ADE ) is formulated to explicate the determination that terrible manifestation of DHF/DSS occurs in patient in secondary dandy fever virus infection that has different serotype from the old 1. During heterotypic secondary dandy fever infection, preexisting antibody recognizes the infecting virus and signifiers an antigen-antibody composite, which is so bound to and internalized by Ig Fc receptor on bearing cells. This mechanism consequences in heightening the entry of virus into the host cells and increase viral reproduction ( ) . Several viral pathogens can use ADE to ease the infection and to increase virus production. Ross River virus infection ( RRV ) , subneutralizing of anti-RRV IgG has been shown to heighten the infection of RRV monocyte and macrophage cells ( ) . This sweetening is associated by Fc receptor. Based on molecular studied found that infection with subneutralizing antibody to RRV can stamp down antiviral cistron look which contribute virus to retroflex freely ( ) . In HIV infection, heightening antibody has been reported to increase HIV infection. The sweetening of HIV infection was demonstrated by both complement receptor and Fc receptor ( ) . In add-on heightening antibody is besides the major obstruction in vaccinum development. In respiratory syncytial virus ( RSV ) vaccinum was reported to increase when infection with heightening antibody. In vitro studied demonstrated that the figure of septic cells was increased when subneutralizing antibodies from carnal immunized with the formalin-inactivated ( FI ) respiratory syncytial virus ( RSV ) vaccinum were co-infected with RSV in monocyte cell lines ( ) . In dengue virus infection, ADE was foremost described by Halstead and co-worker when they found that most of terrible instances in dengue infection normally occur in heterotypic secondary infection. They farther demonstrated that infection with composites between dandy fever virus and subneutralizing antibody from terrible disease patients could heighten viremia in Rhesus monkeys. ADE of dandy fever infection was besides investigated in mice when infection with subneutralizing serotype-specific antibody or subneutralizing serotype-cross-reactive antibody to mice. They found that both subneutralizing serotype-specific antibody and subneutralizing serotype-cross-reactive antibody can do deadly disease in mice. This suggests that dandy fever virus can utilize subneutralizing antibody to heighten infection. In vitro, dengue infection with diluted serum and monoclonal antibody has been demonstrated to increase viral production in several cell type such as monocytes, macrophages and dendritic cells ( ) . Littaua R et Al found that Fc?RII was used as the go-between in ADE of dandy fever infection ( ) . Similarly, Fc?RI was besides reported to intercede sweetening of dengue infection in monocyte cells. This consequence can propose that The ADE mechanism of dengue infection is mediate by the interaction between FcR and virus-IgG composite. Furthermore, ADE of dandy fever virus infection has affected to stamp down innate immune which is the first line of intracellular defence mechanism. 
Based on molecular studied revealed that the interaction between FcR and virus-IgG composite have affected to excite DAK and Atg5-Atg12 that down-regulate of MDA-5 and RIG-1 activation ensuing in suppressing the type I IFN production ( ) .This mechanism creates an appropriate biological environment for dandy fever reproduction ensuing in increasing viral production. Recently, Cummings et Al used computational theoretical account studied the impact of heightening antibody in epidemiology of dengue viral serotypes. They found that ADE helps the dandy fever viruses spread faster than other co-circulating serotypes that did non see sweetening. In add-on to heightening antibody, several host factors have been reported to find the pathogenesis of dandy fever virus infection such as the memory T-cell response and storm of cytokines. In secondary infection, the enlargement cross-reactive memory T cells which have high eagerness to old infection but low eagerness to current infection consequence in delayed viral clearance ( ) .The consequence of antibody dependent sweetening leads to high viral burden and increases antigen presentation. The interaction of antigen-resenting cell with memory T-cell induced proliferation and the production of proimflamatory cytokines such as IFN? and TNF? . These cytokines can hold direct effects to vascular endothelial cell. Furthermore, it has been reported that the storm of cytokines in dengue infection could lend of vascular escape in DHF/DSS patients ( ) . Several cytokines have been observed to increase in patient with DHF such as IL-10, IL-6, IL-8, TNF-? and IFN-? . For illustrations, TNF-? and IL-6 could consequence to increase the vascular permeableness ( ) . IL-10 degrees has been correlated with thrombocyte decay in dandy fever virus infection and may be down modulating lymph cell and thrombocyte map ( ) .
Fitness is the parametric quantity to specify the replicative version of being in its environment ( ) . In viral development, fittingness is a parametric quantity to order the endurance of viruses when its environment is changed. Viruss gain fittingness when a mutation can increase the chance to last in the altered environment while some mutation can non last ensuing in fittingness loss ( ) .Several viral pathogens are RNA virus which has high mutant rate in their genome ( ) . High mutant rate of RNA virus is caused by the deficiency of proofreading mechanism of RNA dependent RNA polymerase. This mechanism creates the heterogenous population which called quasispecies and each population in quasispecies has different fittingness ( ) . Furthermore, the quasispecies of viral pathogens are significantly job in the medical intervention. In biological environment, host immune system and antiviral drug therapy are the of import selective force per unit area to drive the development of virus in order to last in its environment ( ) . The quasispecies in HIV create antiviral drug opposition which has higher fittingness than wild type ensuing in job in vaccinum development and drug therapy ( ) . Antigenic impetus and antigenic displacement in grippe A viruses can do virus less susceptible to immune response. The amino acid alteration in glycoprotein hemagglutinin ( HA ) part which is the mark of neutralizing antibodies make virus hedging from host immune system ( ) . Recently, neuraminidase opposition mutations of grippe A viruses were observed in oseltamivir-treated patients which may increase the chance in the transmittal ( ) . High familial diverseness of Hepatitis C virus creates the job in vaccinum development. In chronic HCV infection, the familial fluctuation of HCV is higher than those in non-chronic infection ( ) . Familial fluctuation of Hepatitis C virus besides contributes virus to get away from host immune acknowledgment. The hypervariable part HVR1 within E cistron that is epitope site for neutralizing antibody has been reported to tie in in result of infection. Based on studied in patients, high fluctuation in HVR1 part is found in patient with come oning to chronic infection while non-chronic infection is significantly stable. In the instance of dandy fever virus, indirect grounds indicates that the familial fluctuation contribute to increase fittingness and to bring forth more deadly strain of dandy fever virus. Each viral strain could of course differ in virulency. In the hereafter, we might be exposed to viruses with an expanded scope of infective belongingss. | <urn:uuid:7f114bdc-291c-462d-8c5e-72897cc74f1a> | CC-MAIN-2019-47 | https://annaforoklahoma.org/infection-dengue-viruses-one-major-public-health-problems-biology-essay/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670559.66/warc/CC-MAIN-20191120134617-20191120162617-00220.warc.gz | en | 0.926772 | 6,108 | 2.9375 | 3 |
The state's geologic history can be divided into three periods. The first period was a lengthy period of geologic instability from the origin of the planet until roughly 1,100 million years ago. During this time, the state's Precambrian bedrock was formed by volcanism and the deposition of sedimentary rock and then modified by processes such as faulting, folding and erosion. In the second period, many layers of sedimentary rock were formed by deposition and lithification of successive layers of sediment from runoff and repeated incursions of the sea. In the third and most recent period starting about 1.8 million years ago, glaciation eroded previous rock formations and deposited deep layers of glacial till over most of the state, and created the beds and valleys of modern lakes and rivers.
Minnesota's geologic resources have been the historical foundation of the state's economy. Precambrian bedrock has been mined for metallic minerals, including iron ore, on which the economy of Northeast Minnesota was built. Archaen granites and gneisses, and later limestones and sandstones, are quarried for structural stone and monuments. Glacial deposits are mined for aggregates, glacial till and lacustrine deposits formed the parent soil for the state's farmlands, and glacial lakes are the backbone of Minnesota's tourist industry. These economic assets have in turn dictated the state's history and settlement patterns, and the trade and supply routes along the waterways, valleys and plains have become the state's state's transportation corridors.
Minnesota contains some of the oldest rocks on Earth, granitic gneisses that formed some 3,600 mya (million years ago) -- roughly 80% the age of the planet. About 2,700 mya, the first volcanic rocks that would later underlie Minnesota began to rise up out of an ancient ocean, forming the Superior craton. This craton later assembled into the Canadian shield, which became part of the North American craton. Much of the underlying gneiss rock of today's state had already formed nearly a billion years earlier, but lay underneath the sea. Except for an area where islands appeared in what is now the northern part of the state, most of the region remained underwater.
In Middle Precambrian time, about 2,000 mya, the land rose above the water. Heavy mineral deposits containing iron had collected on the shores of the receding sea to form the Mesabi, Cuyuna, Vermilion, and Gunflint iron ranges from the center of the state north into Northwestern Ontario, Canada. These regions also showed the first signs of life as algae grew in the shallow waters.
Over 1,100 mya, a rift formed and lava emerged from cracks along the edges of the rift valley. This Midcontinent Rift System extended from the lower peninsula of Michigan north to the current Lake Superior, southwest through the lake to the Duluth area, and south through eastern Minnesota down into what is now Kansas. The rifting stopped before the land could become two separate continents. About 100 million years later, the last volcano went quiet.
The mountain-building and rifting events left areas of high relief above the low basin of the Midcontinent rift. Over the next 1,100 million years, the uplands were worn down and the rift filled with sediments, forming rock ranging in thickness from several hundred meters near Lake Superior to thousands of meters further south. While the crustal tectonic plates continued their slow drift over the surface of the planet, meeting and separating in the successive collision and rifting of continents, the North American craton remained stable. Although now free of folding and faulting caused by plate tectonics, the region continued to experience gradual subsidence and uplift.
Five hundred fifty million years ago, the state was repeatedly inundated with water of a shallow sea that grew and receded through several cycles. The land mass of what is now North America ran along the equator, and Minnesota had a tropical climate. Small marine creatures such as trilobites, coral, and snails lived in the sea. The shells of the tiny animals sank to the bottom, and are preserved in limestones, sandstones, and shales from this era. Later, creatures resembling crocodiles and sharks slid through the seas, and fossil shark teeth have been found on the uplands of the Mesabi Range. During the Mesozoic and Cenozoic other land animals followed as the dinosaurs disappeared, but much of the physical evidence from this era has been scraped away or buried by recent glaciation. The rock units that remain in Minnesota from this time period are of Cambrian and Ordovician age, from the Mount Simon Sandstone at the bottom of the sequence of sedimentary rocks to the Maquoketa Group at the top.
In the Quaternary Period starting about two million years ago, glaciers expanded and retreated across the region. The ice retreated for the last time about 12,500 years ago. Melting glaciers formed many of the state's lakes and etched its river valleys. They also formed a number of proglacial lakes, which contributed to the state's topography and soils. Principal among these lakes was Lake Agassiz, a massive lake with a volume rivaling that of all the present Great Lakes combined. Dammed by the northern ice sheet, this lake's immense flow found an outlet in glacial River Warren, which drained south across the Traverse Gap through the valleys now occupied by the Minnesota and Mississippi Rivers. Eventually, the ice sheet melted, and the Red River gave Lake Agassiz a northern outlet toward Hudson Bay. As the lake drained away its bed became peatlands and a fertile plain. Similarly, Glacial Lake Duluth, in the basin of Lake Superior, was dammed by a glacier; it drained down the ancient course of the Midcontinental Rift to the St. Croix River and the Mississippi. When the glaciers receded, the lake was able to drain through the Great Lakes to the Saint Lawrence River.
Giant animals roamed the area. Beavers were the size of bears, and mammoths were 14 feet (4.3 m) high at the shoulder and weighed 10 tons. Even buffalo were much larger than today. The glaciers continued to retreat and the climate became warmer over the next few millennia; the giant creatures died out about 9,000 years ago.
This glaciation has drastically remodeled most of Minnesota, and all but two of the state's regions are covered with deep layers of glacial till. The driftless area of Southeastern Minnesota was untouched by the most recent glaciation. In the absence of glacial scouring and drift, this region presents a widespread highly dissected aspect absent from other parts of the state. Northeastern Minnesota was subject to glaciation and its effects, but its hard Archaen and Proterozoic rocks were more resistant to glacial erosion than the sedimentary bedrock first encountered in many other regions, and glacial till is relatively sparse. While the effects of glacial erosion are clearly present and there are some areas of glacial till, older rocks and landforms remain unburied and exposed across much of the region.
Contemporary Minnesota is much quieter geologically than in the past. Outcroppings of lava flows and magma intrusions are the only remaining traces of the volcanism that ended over 1,100 mya. Landlocked within the continent, the state is a long distance from the seas that once covered it, and the continental glacier has receded entirely from North America. Minnesota's landscape is a relatively flat peneplain; its highest and lowest points are separated by only 518 metres (1,699 ft) of elevation.
While the state no longer has true mountain ranges or oceans, there is a fair amount of regional diversity in landforms and geological history, which in turn has affected Minnesota's settlement patterns, human history, and economic development. These diverse geological regions can be classified several ways. The classification used below principally derives from Sansome's Minnesota Underfoot - A Field Guide to Minnesota's Geology, but is also influenced by Minnesota's Geology by Ojakangas and Matsch. These authorities generally agree on areal borders, but the regions as defined by Ojakangas and Matsch are more geographical in their approximations of areas of similar geology, while Sansome's divisions are more irregular in shape in order to include within a region all areas of similar geology, with particular emphasis on the effects of recent glaciation. As glaciation and its residue has largely dictated regional surface geology and topography, Sansome's divisions are often coextensive with ecological provinces, sections, and subsections.
Northeastern Minnesota is an irregularly-shaped region composed of the northeasternmost part of the state north of Lake Superior, the area around Jay Cooke State Park and the Nemadji River basin southwest of Duluth, and much of the area east of U.S. Highway 53 that runs between Duluth and International Falls. Excluded are parts of the beds of glacial lakes Agassiz and Upham, the latter now occupied by the upper valley of the Saint Louis River and its tributary the Cloquet. This area is coextensive with the Northern Superior Uplands Section of the Laurentian Mixed Forest.
Known as the Arrowhead for its shape, this region shows the most visible evidence of the state's violent past. There are surface exposures of rocks first formed in volcanic activity some 2,700 mya during construction of the Archaen-Superior province, including Ely greenstone, metamorphosed and highly folded volcanics once thought to be the oldest exposed rock on earth;Proterozoic formations created about 1,900 mya that gave the area most of its mineral riches; and more recent intrusive gabbro and extrusive basalts and rhyolites of the Duluth Complex and North Shore Volcanic Group, created by magma and lava which upwelled and hardened about 1,100 mya during the Midcontinent Rift. The Precambrian bedrock formed by this activity has been eroded but remains at or close to the surface over much of the area.
The entire area is the raw southern edge of the Canadian Shield. Topsoils are thin and poor and their parent soils derived from the rock beneath or nearby rather than from glacial till, which is sparse. Many of this region's lakes are located in depressions formed by the differential erosion of tilted layers of bedded rock of the Canadian Shield; the crevasses thereby formed have filled with water to create many of the thousands of lakes and swamps of the Superior National Forest.
In post-glacial times Northeastern Minnesota was covered by forest broken only by these interconnected lakes and wetlands. Much of the area has been little changed by human activity, as there are substantial forest and wilderness preserves, most notably the Boundary Waters Canoe Area and Voyageurs National Park. In the remainder of the region, lakes provide recreation, forests are managed for pulpwood, and the underlying bedrock is mined for valuable ores deposited in Precambrian times. While copper and nickel ores have been mined, the principal metallic mineral is iron. Three of Minnesota's four iron ranges are in the region, including the Mesabi Range, which has supplied over 90% of the state's historic output, including most of the natural ores pure enough to be fed directly into furnaces. The state's iron mines have produced over three and a half billion metric tons of ore. While high-grade ores have now been exhausted, lower-grade taconite continues to supply a large proportion of the nation's needs.
Northwestern Minnesota is a vast plain in the bed of Glacial Lake Agassiz. This plain extends north and northwest from the Big Stone Moraine, beyond Minnesota's borders into Canada and North Dakota. In the northeast, the Glacial Lake Agassiz plain transitions into the forests of the Arrowhead. The region includes the lowland portions of the Red River watershed and the western half of the Rainy River watershed within the state, at approximately the level of Lake Agassiz' Herman Beach. In ecological terms, it includes the Northern Minnesota Peatlands of the Laurentian Mixed Forest, the Tallgrass Aspen Parklands, and the Red River Valley Section of the Prairie Parklands.
Bedrock in this region is mainly Archean, with small areas of Lower Paleozoic and Upper Mesozoic sedimentary rocks along the western border. By late Wisconsinan times this bedrock had been covered by clayey glacial drift scoured and transported from sedimentary rocks of Manitoba. The bottomland is undissected and essentially flat, but imperceptibly declines from about 400 meters at the southern beaches of Lake Agassiz to 335 meters along the Rainy River. There is almost no relief, except for benches or beaches where Glacial Lake Agassiz stabilized for a time before it receded to a lower level. In contrast to the lakebed, these beaches rise from the south to the north and east at a gradient of approximately 1:5000; this rise resulted from the isostatic rebound of the land after recession of the last ice sheet. In the western part of the region in the Red River Valley, fine-grained glacial lake deposits and decayed organic materials up to 50 meters in depth form rich, well-textured, and moisture-retentive, yet well-drained soils (mollisols), which are ideal for agriculture. To the north and east, much of the land is poorly drained peat, often organized in rare and distinctive patterns known as patterned peatland. At marginally higher elevations within these wetlands are areas of black spruce, tamarack, and other water-tolerant species.
Southwestern Minnesota is in the watersheds of the Minnesota River, the Missouri River, and the Des Moines River. The Minnesota River lies in the bed of the glacial River Warren, a much larger torrent that drained Lake Agassiz while outlets to the north were blocked by glaciers. The Coteau des Prairies divides the Minnesota and Missouri River valleys, and is a striking landform created by the bifurcation of different lobes of glacial advance. On the Minnesota side of the coteau is a feature known as Buffalo Ridge, where wind speeds average 16 mph (26 km/h). This windy plateau is being developed for commercial wind power, contributing to the state's ranking as third in the nation for wind-generated electricity.
Between the river and the plateau are flat prairies atop varying depths of glacial till. In the extreme southwest portion of the state, bedrock outcroppings of Sioux Quartzite are common, with less common interbedded outcrops of an associated metamorphosed mudstone named catlinite. Pipestone, Minnesota is the site of historic Native American quarries of catlinite, which is more commonly known as "pipestone". Another notable outcrop in the region is the Jeffers Petroglyphs, a Sioux Quartzite outcropping with numerous petroglyphs which may be up to 7000-9000 years old.
Drier than most of the rest of the state, the region is a transition zone between the prairies and the Great Plains. Once rich in wetlands known as prairie potholes, 90%, or some three million acres (12,000 km²), have been drained for agriculture in the Minnesota River basin. Most of the prairies are now farm fields. Due to the quaternary and bedrock geology of the region, as well as the reduced precipitation in the region, groundwater resources are neither plentiful, nor widely distributed, unlike most other areas of the state. Given these constraints, this rural area hosts a vast network of water pipelines which transports groundwater from the few localized areas with productive groundwater wells to much of the region's population.
Southeastern Minnesota is separated from Southwestern Minnesota by the Owatonna Moraine, the eastern branch of the Bemis Moraine, a terminal moraine of the Des Moines lobe from the last Wisconsin glaciation. Ojakangas and Matsch extend the region west past the moraine to a line running north from the Iowa border between Mankato and New Ulm to the latitude of the Twin Cities, then encompassing the latter metropolis with a broad arc east to the St. Croix River. This moraine runs south from the Twin Cities in the general area of Minnesota State Highway 13 and Interstate 35. Sansome attaches this moraine to her description of West-Central Minnesota, given its similarity in glacial features to that region. Under Sansome's classification (followed here), Southeastern Minnesota is generally coterminous with the Paleozoic Plateau Section of the Eastern Broadleaf Forest Province.
The bedrock here is lower Paleozoic sedimentary rocks, with limestone and dolomite especially prevalent near the surface. It is highly dissected, and local tributaries of the Mississippi have cut deep valleys into the bedrock. It is an area of karst topography, with thin topsoils lying atop porous limestones, leading to formation of caverns and sinkholes. The last glaciation did not cover this region (halting at the Des Moines terminal lobe mentioned above), so there is no glacial drift to form subsoils, giving the region the name of the Driftless area. As the topsoils are shallower and poorer than those to the west, dairy farming rather than cash crops is the principal agricultural activity.
Central Minnesota is composed of (1) the drainage basin of the St. Croix River (2) the basin of the Mississippi River above its confluence with the Minnesota, (3) those parts of the Minnesota and Red River basins on the glacial uplands forming the divides of those two basins with that of the Mississippi, (4) the Owatonna Moraine atop a strip of land running from western Hennepin County south to the Iowa border, and (5) the upper valley of the Saint Louis River and the valley of its principal tributary the Cloquet River which once drained to the Mississippi before they were captured by stream piracy and their waters were redirected through the lower Saint Louis River to Lake Superior.Glacial landforms are the common characteristics of this gerrymander-like region.
The bedrock ranges in age from Archean granites to Upper Mesozoic Cretaceous sediments, and underlying the eastern part of the region (and the southerly extension to Iowa) are the Late Precambrian Keweenawan volcanics of the Midcontinent Rift, overlaid by thousands of meters of sedimentary rocks.
At the surface, the entire region is "Moraine terrain", with the glacial landforms of moraines, drumlins, eskers, kames, outwash plains and till plains, all relics from recent glaciation. In the multitude of glacier-formed depressions are wetlands and many of the state's "10,000 lakes", which make the area prime vacation territory. The glacial deposits are a source of aggregate, and underneath the glacial till are high-quality granites which are quarried for buildings and monuments.
The subregion of East Central Minnesota is that part of Central Minnesota near the junction of three of the state's great rivers. Included are Dakota County, eastern Hennepin County, and the region north of the Mississippi but south of an east-west line from Saint Cloud to the St. Croix River on the Wisconsin border. It includes much of the Twin Cities metropolitan area. The region has the same types of glacial landforms as the remainder of Central Minnesota, but is distinguished by its bedrock valleys, both active and buried.
The valleys now hold three of Minnesota's largest rivers, which join here. The St. Croix joins the Mississippi at Prescott, Wisconsin. Upstream, the Mississippi is joined by the Minnesota River at historic Fort Snelling. When River Warren Falls receded past the confluence of the much smaller Upper Mississippi River, a new waterfall was created where that river entered the much-lower River Warren. The new falls receded upstream on the Mississippi, migrating eight miles (13 km) over 9600 years to where Louis Hennepin first saw it and named St. Anthony Falls in 1680. Due to its value as a power source, this waterfall determined the location of Minneapolis. One tributary of the river coming from the west, Minnehaha Creek, receded only a few hundred yards from one of the channels of the Mississippi. Minnehaha Falls remains as a picturesque and informative relic of River Warren Falls, and the limestone-over-sandstone construction is readily apparent in its small gorge. At St. Anthony Falls, the Mississippi dropped 50 feet (15 m) over a limestone ledge; these waterfalls were used to drive the flour mills that were the foundation for the city's 19th century growth.
Other bedrock tunnel valleys lie deep beneath till deposited by the glaciers which created them, but can be traced in many places by the Chain of Lakes in Minneapolis and lakes and dry valleys in St. Paul.
North of the metropolitan area is the Anoka Sandplain, a flat area of sandy outwash from the last ice age. Along the eastern edge of the region are the Dalles of the St. Croix River, a deep gorge cut by runoff from Glacial Lake Duluth into ancient bedrock.Interstate Park here contains the southernmost surface exposure of the Precambrian lava flows of the Midcontinent Rift, providing a glimpse of Minnesota's volcanic past. | <urn:uuid:c6491c5a-34af-43f3-b8f1-5099af48a063> | CC-MAIN-2019-47 | http://www.popflock.com/learn?s=Geology_of_Minnesota | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665575.34/warc/CC-MAIN-20191112151954-20191112175954-00300.warc.gz | en | 0.948687 | 4,451 | 4.0625 | 4 |
Yesterday's text examined the Internet Protocol (IP) in considerable detail. As you might remember, the Internet Protocol handles the lower-layer functionality. Today I look at the transport layer, where the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) come into play.
TCP is one of the most widely used transport layer protocols, expanding from its original implementation on the ARPANET to connecting commercial sites all over the world. On Day 1, "Open Systems, Standards, and Protocols," you looked at the OSI seven-layer model, which bears a striking resemblance to TCP/IP's layered model, so it is not surprising that many of the features of the OSI transport layer were based on TCP.
In theory, a transport layer protocol could be a very simple software routine, but TCP cannot be called simple. Why use a transport layer that is as complex as TCP? The most important reason is IP's unreliability. As you saw yesterday, IP does not guarantee delivery of a datagram; it is a connectionless system with no reliability. IP simply handles the routing of datagrams, and if problems occur, IP discards the packet without a second thought (generating an ICMP error message back to the sender in the process). The task of ascertaining the status of the datagrams sent over a network, and of resending information if parts have been discarded, falls to TCP, which can be thought of as riding shotgun over IP.
Most users think of TCP and IP as a tightly knit pair, but TCP can be used with network-layer protocols other than IP; nothing in its design ties it to IP specifically. Upper-layer protocols such as the File Transfer Protocol (FTP) and the Simple Mail Transfer Protocol (SMTP) are built directly on the services TCP provides and never deal with IP themselves.
The Transmission Control Protocol provides a considerable number of services to the IP layer and the upper layers. Most importantly, it provides a connection-oriented protocol to the upper layers that enables an application to be sure that a datagram sent out over the network was received in its entirety. In this role, TCP acts as a message-validation protocol providing reliable communications. If a datagram is corrupted or lost, TCP usually handles the retransmission, rather than the applications in the higher layers.
TCP is not a piece of software. It is a communications protocol. When you install a TCP stack on your machine, you are installing the TCP layer, and usually a lot more software to provide the rest of the TCP/IP services. TCP is used as a catch-all phrase for TCP/IP in many cases.
TCP manages the flow of datagrams from the higher layers down to the IP layer, as well as incoming datagrams from the IP layer up to the higher-level protocols. TCP has to ensure that priorities and security are properly respected. TCP must be capable of handling the termination of an application above it that was expecting incoming datagrams, as well as failures in the lower layers. TCP also must maintain a state table of all data streams in and out of the TCP layer. The isolation of all these services in a separate layer enables applications to be designed without regard to flow control or message reliability. Without the TCP layer, each application would have to implement these services itself, which is a waste of resources.
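To make the idea of a state table more concrete, the following sketch (in Python) shows one way such bookkeeping could be organized. It is purely illustrative: the field names, states, and functions are invented for this example and do not come from any real TCP implementation.

    # Each active data stream is tracked by the pair of endpoints that define it.
    active_connections = {}   # keyed by (local_ip, local_port, remote_ip, remote_port)

    def open_connection(local_ip, local_port, remote_ip, remote_port):
        key = (local_ip, local_port, remote_ip, remote_port)
        active_connections[key] = {
            "state": "ESTABLISHED",   # could also be LISTEN, SYN_SENT, CLOSED, ...
            "send_next": 0,           # sequence number of the next byte to send
            "recv_next": 0,           # next byte expected from the other machine
            "unacked": [],            # segments sent but not yet acknowledged
        }

    def close_connection(local_ip, local_port, remote_ip, remote_port):
        # When the application above terminates, its entry is removed so that
        # no further datagrams are delivered to it.
        active_connections.pop((local_ip, local_port, remote_ip, remote_port), None)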
TCP resides in the transport layer, positioned above IP but below the upper layers and their applications, as shown in Figure 4.1. TCP resides only on devices that actually process datagrams, ensuring that the datagram has gone from the source to the target machine. It does not reside on a device that simply routes datagrams, so there is usually no TCP layer in a gateway. This makes sense, because on a gateway the datagram has no need to go higher in the layered model than the IP layer.
Figure 4.1. TCP provides end-to-end communications.
Because TCP is a connection-oriented protocol responsible for ensuring the transfer of a datagram from the source to destination machine (end-to-end communications), TCP must receive communications messages from the destination machine to acknowledge receipt of the datagram. The term virtual circuit is usually used to refer to the communications between the two end machines, most of which are simple acknowledgment messages (either confirmation of receipt or a failure code) and datagram sequence numbers.
To illustrate the role of TCP, it is instructive to follow a sample message between two machines. The processes are simplified at this stage, to be expanded on later today. The message originates from an application in an upper layer and is passed to TCP from the next higher layer in the architecture through some protocol (often referred to as an upper-layer protocol, or ULP, to indicate that it resides above TCP). The message is passed as a stream, a sequence of individual characters sent asynchronously. This is in contrast to most protocols, which use fixed blocks of data. This can pose some conversion problems with applications that handle only formally constructed blocks of data or insist on fixed-size messages.
TCP receives the stream of bytes and assembles them into TCP segments, or packets. In the process of assembling the segment, header information is attached at the front of the data. Each segment has a checksum calculated and embedded within the header, as well as a sequence number if there is more than one segment in the entire message. The length of the segment is usually determined by TCP or by a system value set by the system administrator. (The length of TCP segments has nothing to do with the IP datagram length, although there is sometimes a relationship between the two.)
If two-way communications are required (such as with Telnet or FTP), a connection (virtual circuit) between the sending and receiving machines is established prior to passing the segment to IP for routing. This process starts with the sending TCP software issuing a request for a TCP connection with the receiving machine. In the message is a unique number (called a socket number) that identifies the sending machine's connection. The receiving TCP software assigns its own unique socket number and sends it back to the original machine. The two unique numbers then define the connection between the two machines until the virtual circuit is terminated. (I look at sockets in a little more detail in a moment.)
After the virtual circuit is established, TCP sends the segment to the IP software, which then issues the message over the network as a datagram. IP can perform any of the changes to the segment that you saw in yesterday's material, such as fragmenting it and reassembling it at the destination machine. These steps are completely transparent to the TCP layers, however. After winding its way over the network, the receiving machine's IP passes the received segment up to the recipient machine's TCP layer, where it is processed and passed up to the applications above it using an upper-layer protocol.
If the message was more than one TCP segment long (not IP datagrams), the receiving TCP software reassembles the message using the sequence numbers contained in each segment's header. If a segment is missing or corrupt (which can be determined from the checksum), TCP returns a message with the faulty sequence number in the body. The originating TCP software can then resend the bad segment.
If only one segment is used for the entire message, after comparing the segment's checksum with a newly calculated value, the receiving TCP software can generate either a positive acknowledgment (ACK) or a request to resend the segment and route the request back to the sending layer.
The receiving machine's TCP implementation can perform a simple flow control to prevent buffer overload. It does this by sending a buffer size called a window value to the sending machine, following which the sender can send only enough bytes to fill the window. After that, the sender must wait for another window value to be received. This provides a handshaking protocol between the two machines, although it slows down the transmission time and slightly increases network traffic.
The use of a sliding window is more efficient than a single block send and acknowledgment scheme because of delays waiting for the acknowledgment. By implementing a sliding window, several blocks can be sent at once. A properly configured sliding window protocol provides a much higher throughput.
As with most connection-based protocols, timers are an important aspect of TCP. The use of a timer ensures that an undue wait is not involved while waiting for an ACK or an error message. If the timers expire, an incomplete transmission is assumed. Usually an expiring timer before the sending of an acknowledgment message causes a retransmission of the datagram from the originating machine.
Timers can cause some problems with TCP. The specifications for TCP provide for the acknowledgment of only the highest datagram number that has been received without error, but this cannot properly handle fragmentary reception. If a message is composed of several datagrams that arrive out of order, the specification states that TCP cannot acknowledge the reception of the message until all the datagrams have been received. So even if all but one datagram in the middle of the sequence have been successfully received, a timer might expire and cause all the datagrams to be resent. With large messages, this can cause an increase in network traffic.
If the receiving TCP software receives duplicate datagrams (as can occur with a retransmission after a timeout or due to a duplicate transmission from IP), the receiving version of TCP discards any duplicate datagrams, without bothering with an error message. After all, the sending system cares only that the message was received, not how many copies were received.
TCP does not have a negative acknowledgment (NAK) function; it relies on a timer to indicate lack of acknowledgment. If the timer has expired after sending the datagram without receiving an acknowledgment of receipt, the datagram is assumed to have been lost and is retransmitted. The sending TCP software keeps copies of all unacknowledged datagrams in a buffer until they have been properly acknowledged. When this happens, the retransmission timer is stopped, and the datagram is removed from the buffer.
TCP supports a push function from the upper-layer protocols. A push is used when an application wants to send data immediately and confirm that a message passed to TCP has been successfully transmitted. To do this, a push flag is set in the ULP connection, instructing TCP to forward any buffered information from the application to the destination as soon as possible (as opposed to holding it in the buffer until it is ready to transmit it).
All upper-layer applications that use TCP (or UDP) have a port number that identifies the application. In theory, port numbers can be assigned on individual machines, or however the administrator desires, but some conventions have been adopted to enable better communications between TCP implementations. This enables the port number to identify the type of service that one TCP system is requesting from another. Port numbers can be changed, although this can cause difficulties. Most systems maintain a file of port numbers and their corresponding service.
Typically, port numbers above 255 are reserved for private use of the local machine, but numbers below 255 are used for frequently used processes. A list of frequently used port numbers is published by the Internet Assigned Numbers Authority and is available through an RFC or from many sites that offer Internet summary files for downloading. The commonly used port numbers on this list are shown in Table 4.1. The numbers 0 and 255 are reserved.
Table 4.1. Commonly used port numbers.

Port Number    Service
1              TCP Port Service Multiplexer
5              Remote Job Entry
17             Quotation of the Day
20             File Transfer Protocol (Data)
21             File Transfer Protocol (Control)
25             Simple Mail Transfer Protocol
27             NSW User System Front End
33             Display Support Protocol
35             Private Print Servers
39             Resource Location Protocol
42             Host Name Server
49             Login Host Protocol
53             Domain Name Server
67             Bootstrap Protocol Server
68             Bootstrap Protocol Client
69             Trivial File Transfer Protocol
101            NIC Host Name Server
105            CSNET Mailbox Name Server
109            Post Office Protocol v2
110            Post Office Protocol v3
111            Sun RPC Portmap
137            NETBIOS Name Service
138            NETBIOS Datagram Service
139            NETBIOS Session Service
179            Border Gateway Protocol
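Most systems also expose this table programmatically, not just in a file such as /etc/services. As a quick illustration (a sketch in Python, assuming the standard library's socket module and a typical services database; the exact results depend on what the local system has registered), the well-known numbers above can be looked up by name or by number:

    import socket

    # Look up a port number by service name and protocol...
    print(socket.getservbyname("smtp", "tcp"))    # 25
    print(socket.getservbyname("domain", "udp"))  # 53

    # ...or go the other way, from a number back to the service name.
    print(socket.getservbyport(21, "tcp"))        # 'ftp'
    print(socket.getservbyport(179, "tcp"))       # 'bgp'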
Each communication circuit into and out of the TCP layer is uniquely identified by a combination of two numbers, which together are called a socket. The socket is composed of the IP address of the machine and the port number used by the TCP software. Both the sending and receiving machines have sockets. Because the IP address is unique across the internetwork, and the port numbers are unique to the individual machine, the socket numbers are also unique across the entire internetwork. This enables a process to talk to another process across the network, based entirely on the socket number.
TCP uses the connection (not the protocol port) as a fundamental element. A completed connection has two end points. This enables a protocol port to be used for several connections at the same time (multiplexing).
The last section examined the process of establishing a message. During the process, the sending TCP requests a connection with the receiving TCP, using the unique socket numbers. This process is shown in Figure 4.2. If the sending TCP wants to establish a Telnet session from its port number 350, the socket number would be composed of the source machine's IP address and the port number (350), and the message would have a destination port number of 23 (Telnet's port number). The receiving TCP has a source port of 23 (Telnet) and a destination port of 350 (the sending machine's port).
Figure 4.2. Setting up a virtual circuit with socket numbers.
The sending and receiving machines maintain a port table, which lists all active port numbers. The two machines involved have reversed entries for each session between the two. This is called binding and is shown in Figure 4.3. The source and destination numbers are simply reversed for each connection in the port table. Of course, the IP addresses, and hence the socket numbers, are different.
If the sending machine is requesting more than one connection, the source port numbers are different, even though the destination port numbers might be the same. For example, if the sending machine were trying to establish three Telnet sessions simultaneously, the source machine port numbers might be 350, 351, and 352, and the destination port numbers would all be 23.
It is possible for more than one machine to share the same destination socket, a process called multiplexing. In Figure 4.4, three machines are establishing Telnet sessions with a destination. They all use destination port 23, which is port multiplexing. Because the datagrams emerging from the port have the full socket information (with unique IP addresses), there is no confusion as to which machine a datagram is destined for.
When multiple sockets are established, it is conceivable that more than one machine might send a connection request with the same source and destination ports. However, the IP addresses for the two machines are different, so the sockets are still uniquely identified despite identical source and destination port numbers.
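The full socket pair of a live connection can be inspected directly. The short Python sketch below opens a connection to itself on the loopback interface (port 0 simply asks the operating system to pick a free port; these are illustrative choices, not values from this text) and prints both ends. The same IP address appears on both sides, but the port numbers differ, so the two sockets remain unique:

    import socket

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))      # port 0 lets the OS choose any free port
    srv.listen(1)

    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(srv.getsockname())
    conn, _ = srv.accept()

    print("client end:", cli.getsockname(), "->", cli.getpeername())
    print("server end:", conn.getsockname(), "->", conn.getpeername())

    cli.close(); conn.close(); srv.close()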
TCP must communicate with applications in the upper layer and a network system in the layer below. Several messages are defined for the upper-layer protocol to TCP communications, but there is no defined method for TCP to talk to lower layers (usually, but not necessarily, IP). TCP expects the layer beneath it to define the communication method. It is usually assumed that TCP and the layer beneath it communicate asynchronously.
The TCP to upper-layer protocol (ULP) communication method is well-defined, consisting of a set of service request and response primitives. The primitives involved in communications between the ULP and TCP are shown in Table 4.2.

Table 4.2. TCP service primitives.

Service request primitives (from the ULP to TCP):

Abort: local connection name
Active Open: local port, remote socket; optional: ULP timeout, timeout action, precedence, security, options
Active Open with Data: source port, destination socket, data, data length, push flag, urgent flag; optional: ULP timeout, timeout action, precedence, security
Allocate: local connection name, data length
Close: local connection name
Fully Specified Passive Open: local port, destination socket; optional: ULP timeout, timeout action, precedence, security, options
Receive: local connection name, buffer address, byte count, push flag, urgent flag
Send: local connection name, buffer address, data length, push flag, urgent flag; optional: ULP timeout, timeout action
Status: local connection name
Unspecified Passive Open: local port; optional: ULP timeout, timeout action, precedence, security, options

Service response primitives (from TCP to the ULP):

Closing: local connection name
Deliver: local connection name, buffer address, data length, urgent flag
Error: local connection name, error description
Open Failure: local connection name
Open ID: local connection name, remote socket, destination address
Open Success: local connection name
Status Response: local connection name, source port, source address, remote socket, connection state, receive window, send window, amount waiting ACK, amount waiting receipt, urgent mode, precedence, security, timeout, timeout action
Terminate: local connection name, description
TCP enables two methods to establish a connection: active and passive. An active connection establishment happens when TCP issues a request for the connection, based on an instruction from an upper-level protocol that provides the socket number. A passive approach takes place when the upper-level protocol instructs TCP to wait for the arrival of connection requests from a remote system (usually from an active open instruction). When TCP receives the request, it assigns a port number. This enables a connection to proceed rapidly, without waiting for the active process.
There are two passive open primitives. A specified passive open creates a connection when the precedence level and security level are acceptable. An unspecified passive open opens the port to any request. The latter is used by servers that are waiting for clients of an unknown type to connect to them.
TCP has strict rules about the use of passive and active connection processes. Usually a passive open is performed on one machine, while an active open is performed on the other, with specific information about the socket number, precedence (priority), and security levels.
Although most TCP connections are established by an active request to a passive port, it is possible to open a connection without a passive port waiting. In this case, the TCP that sends a request for a connection includes both the local socket number and the remote socket number. If the receiving TCP is configured to enable the request (based on the precedence and security settings, as well as application-based criteria), the connection can be opened. This process is looked at again in the section titled "TCP and Connections."
TCP uses several timers to ensure that excessive delays are not encountered during communications. Several of these timers are elegant, handling problems that are not immediately obvious at first analysis. The timers used by TCP are examined in the following sections, which reveal their roles in ensuring that data is properly sent from one connection to another.
The retransmission timer manages retransmission timeouts (RTOs), which occur when a preset interval between the sending of a datagram and the returning acknowledgment is exceeded. The value of the timeout tends to vary, depending on the network type, to compensate for speed differences. If the timer expires, the datagram is retransmitted with an adjusted RTO, which is usually increased exponentially to a maximum preset limit. If the maximum limit is exceeded, connection failure is assumed, and error messages are passed back to the upper-layer application.
Values for the timeout are determined by measuring the average time that data takes to be transmitted to another machine and the acknowledgment received back, which is called the round-trip time, or RTT. From experiments, these RTTs are averaged by a formula that develops an expected value, called the smoothed round-trip time, or SRTT. This value is then increased to account for unforeseen delays.
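The smoothing can be written in a few lines. The sketch below follows the classic RFC 793 formulation; the ALPHA, BETA and bound values, as well as the RTT samples, are typical illustrative choices rather than values taken from this text:

    # SRTT = ALPHA * SRTT + (1 - ALPHA) * RTT;  RTO = min(UBOUND, max(LBOUND, BETA * SRTT))
    ALPHA, BETA = 0.875, 2.0        # smoothing gain and delay variance factor (assumed values)
    LBOUND, UBOUND = 1.0, 60.0      # lower and upper bounds on the timeout, in seconds (assumed)

    def update_rto(srtt, rtt_sample):
        srtt = ALPHA * srtt + (1.0 - ALPHA) * rtt_sample
        rto = min(UBOUND, max(LBOUND, BETA * srtt))
        return srtt, rto

    srtt = 0.5
    for rtt in (0.42, 0.61, 1.30, 0.55):        # hypothetical round-trip samples in seconds
        srtt, rto = update_rto(srtt, rtt)
        print("RTT %.2fs  SRTT %.3fs  RTO %.2fs" % (rtt, srtt, rto))

Notice how a single slow sample raises the retransmission timeout only gradually, which is exactly the behavior the smoothing is meant to provide.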
After a TCP connection is closed, it is possible for datagrams that are still making their way through the network to attempt to access the closed port. The quiet timer is intended to prevent the just-closed port from reopening again quickly and receiving these last datagrams.
The quiet timer is usually set to twice the maximum segment lifetime (the same value as the Time to Live field in an IP header), ensuring that all segments still heading for the port have been discarded. Typically, this can result in a port being unavailable for up to 30 seconds, prompting error messages when other applications attempt to access the port during this interval.
The persistence timer handles a fairly rare occurrence. It is conceivable that a receive window might have a value of 0, causing the sending machine to pause transmission. The message to restart sending might be lost, causing an infinite delay. The persistence timer waits a preset time and then sends a one-byte segment at predetermined intervals to check whether the receiving machine is still backlogged.
The receiving machine resends the zero window-size message after receiving one of these status segments, if it is still backlogged. If the window is open, a message giving the new value is returned, and communications are resumed.
Both the keep-alive timer and the idle timer were added to the TCP specifications after their original definition. The keep-alive timer sends an empty packet at regular intervals to ensure that the connection to the other machine is still active. If no response has been received after sending the message by the time the idle timer has expired, the connection is assumed to be broken.
The keep-alive timer value is usually set by an application, with values ranging from 5 to 45 seconds. The idle timer is usually set to 360 seconds.
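On most systems an application opts in to keep-alive probing through a socket option. A minimal Python sketch follows; the numeric values are illustrative only, and the three TCP_KEEP* tuning constants are platform-specific (hence the guard), so treat this as an assumption-laden example rather than portable code:

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)        # enable keep-alive probes

    if hasattr(socket, "TCP_KEEPIDLE"):                            # Linux-specific tuning knobs
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 45)  # idle seconds before the first probe
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 5)  # seconds between probes
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 4)    # failed probes before giving up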
TCP uses adaptive timer algorithms to accommodate delays. The timers adjust themselves to the delays experienced over a connection, altering the timer values to reflect inherent problems.
TCP has to keep track of a lot of information about each connection. It does this through a Transmission Control Block (TCB), which contains information about the local and remote socket numbers, the send and receive buffers, security and priority values, and the current segment in the queue. The TCB also manages send and receive sequence numbers.
The TCB uses several variables to keep track of the send and receive status and to control the flow of information. These variables are shown in Table 4.3.
Table 4.3. TCB send and receive variables.

Send variables:

SND.UNA: Oldest sequence number sent but not yet acknowledged
SND.NXT: Sequence number of the next data to be sent
SND.WND: Size of the send window
SND.UP: Sequence number of the last urgent data sent
SND.WL1: Sequence number used for the last window update
SND.WL2: Acknowledgment number used for the last window update
SND.PUSH: Sequence number of the last pushed data
ISS: Initial send sequence number

Receive variables:

RCV.NXT: Sequence number of the next data expected to be received
RCV.WND: Number of octets that can be received (the receive window)
RCV.UP: Sequence number of the last urgent data received
IRS: Initial receive sequence number
Using these variables, TCP controls the flow of information between two sockets. A sample connection session helps illustrate the use of the variables. It begins with Machine A wanting to send five blocks of data to Machine B. If the window limit is seven blocks, a maximum of seven blocks can be sent without acknowledgment. The SND.UNA variable on Machine A indicates how many blocks have been sent but are unacknowledged (5), and the SND.NXT variable has the value of the next block in the sequence (6). The value of the SND.WND variable is 2 (seven blocks possible, minus five sent), so only two more blocks could be sent without overloading the window. Machine B returns a message with the number of blocks received, and the window limit is adjusted accordingly.
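The bookkeeping in this example can be mimicked in a few lines. The sketch below uses the simplified block counting of the example above (real TCP tracks byte sequence numbers rather than block counts, so this is only a toy model):

    WINDOW_LIMIT = 7        # blocks that may be outstanding, as in the example

    snd_una = 0             # blocks sent but not yet acknowledged
    snd_nxt = 1             # number of the next block to send

    def snd_wnd():
        return WINDOW_LIMIT - snd_una       # how many more blocks may be sent right now

    for _ in range(5):                      # Machine A sends five blocks
        assert snd_wnd() > 0
        snd_una += 1
        snd_nxt += 1

    print(snd_una, snd_nxt, snd_wnd())      # 5 6 2, matching the values in the text

    snd_una -= 3                            # Machine B acknowledges three blocks
    print(snd_wnd())                        # the window opens up to 5 again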
The passage of messages back and forth can become quite complex as the sending machine forwards blocks unacknowledged up to the window limit, waiting for acknowledgment of earlier blocks that have been removed from the incoming queue, and then sending more blocks to fill the window again. The tracking of the blocks becomes a matter of bookkeeping, but with large window limits and traffic across internetworks that sometimes cause blocks to go astray, the process is, in many ways, remarkable.
As mentioned earlier, TCP must communicate with IP in the layer below (using an IP-defined method) and applications in the upper layer (using the TCP-ULP primitives). TCP also must communicate with other TCP implementations across networks. To do this, it uses Protocol Data Units (PDUs), which are called segments in TCP parlance.
The layout of the TCP PDU (commonly called the header) is shown in Figure 4.5.
The different fields are as follows:

Source port: The port number of the sending application.
Destination port: The port number of the receiving application.
Sequence number: The position of the segment's data within the sender's byte stream.
Acknowledgment number: The sequence number of the next byte the sender of the acknowledgment expects to receive.
Data offset: The length of the header in 32-bit words, marking where the data begins.
Reserved: Held for future use and set to zero.
Flags: The URG, ACK, PSH, RST, SYN, and FIN bits that control how the segment is interpreted.
Window: The number of bytes the receiver is currently willing to accept, used for flow control.
Checksum: A checksum calculated over the header, the data, and a pseudoheader (described below).
Urgent pointer: Used with the URG flag to indicate where urgent data ends.
Options: Variable-length options, such as the maximum segment size.
Padding: Zero bits added to bring the header to a 32-bit boundary.
Following the PDU or header is the data. The Options field has one useful function: to specify the maximum buffer size a receiving TCP implementation can accommodate. Because TCP uses variable-length data areas, it is possible for a sending machine to create a segment that is longer than the receiving software can handle.
The Checksum field calculates the checksum based on the entire segment size, including a 96-bit pseudoheader that is prefixed to the TCP header during the calculation. The pseudoheader contains the source address, destination address, protocol identifier, and segment length. These are the parameters that are passed to IP when a send instruction is passed, and also the ones read by IP when delivery is attempted.
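The calculation itself is the standard ones'-complement Internet checksum. The Python sketch below illustrates it; the addresses and the all-zero segment are dummy values, and in a real segment the checksum field must be zeroed before the computation is performed:

    import struct

    def internet_checksum(data):
        # Sum the data as 16-bit words, folding any carry back into the low 16 bits.
        if len(data) % 2:
            data += b"\x00"
        total = 0
        for (word,) in struct.iter_unpack("!H", data):
            total += word
            total = (total & 0xFFFF) + (total >> 16)
        return (~total) & 0xFFFF

    def tcp_checksum(src_ip, dst_ip, segment):
        # 96-bit pseudoheader: source address, destination address, a zero byte,
        # the protocol identifier (6 for TCP), and the TCP segment length.
        pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 6, len(segment))
        return internet_checksum(pseudo + segment)

    print(hex(tcp_checksum(bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]), b"\x00" * 20)))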
TCP has many rules imposed on how it communicates. These rules and the processes that TCP follows to establish a connection, transfer data, and terminate a connection are usually presented in state diagrams. (Because TCP is a state-driven protocol, its actions depend on the state of a flag or similar construct.) Avoiding overly complex state diagrams is difficult, so flow diagrams can be used as a useful method for understanding TCP.
A connection can be established between two machines only if a connection between the two sockets does not exist, both machines agree to the connection, and both machines have adequate TCP resources to service the connection. If any of these conditions are not met, the connection cannot be made. The acceptance of connections can be triggered by an application or a system administration routine.
When a connection is established, it is given certain properties that are valid until the connection is closed. Typically, these are a precedence value and a security value. These settings are agreed upon by the two applications when the connection is in the process of being established.
In most cases, a connection is expected by two applications, so they issue either active or passive open requests. Figure 4.6 shows a flow diagram for a TCP open. The process begins with Machine A's TCP receiving a request for a connection from its ULP, to which it sends an active open primitive to Machine B. (Refer back to Table 4.2 for the TCP primitives.) The segment that is constructed has the SYN flag set on (set to 1) and has a sequence number assigned. The diagram shows this with the notation "SYN SEQ 50," indicating that the SYN flag is on and the sequence number (Initial Send Sequence number or ISS) is 50. (Any number could have been chosen.)
The application on Machine B has issued a passive open instruction to its TCP. When the SYN SEQ 50 segment is received, Machine B's TCP sends an acknowledgment back to Machine A with the sequence number of 51. Machine B also sets an ISS number of its own. The diagram shows this message as "ACK 51; SYN 200," indicating that the message is an acknowledgment with sequence number 51, it has the SYN flag set, and it has an ISS of 200.
Upon receipt, Machine A sends back its own acknowledgment message with the sequence number set to 201. This is "ACK 201" in the diagram. Then, having opened and acknowledged the connection, Machine A and Machine B both send connection open messages through the ULP to the requesting applications.
It is not necessary for the remote machine to have a passive open instruction, as mentioned earlier. In this case, the sending machine provides both the sending and receiving socket numbers, as well as precedence, security, and timeout values. It is common for two applications to request an active open at the same time. This is resolved quite easily, although it does involve a little more network traffic.
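The exchange of Figure 4.6 can be written out directly. The sketch below simply reproduces the sequence and acknowledgment numbers used in the example; 50 and 200 are the arbitrary initial sequence numbers chosen above:

    iss_a, iss_b = 50, 200                                      # initial send sequence numbers

    syn     = {"SYN": 1, "SEQ": iss_a}                          # Machine A to Machine B
    syn_ack = {"SYN": 1, "SEQ": iss_b, "ACK": syn["SEQ"] + 1}   # Machine B to Machine A
    ack     = {"ACK": syn_ack["SEQ"] + 1}                       # Machine A to Machine B

    print(syn)        # {'SYN': 1, 'SEQ': 50}
    print(syn_ack)    # {'SYN': 1, 'SEQ': 200, 'ACK': 51}
    print(ack)        # {'ACK': 201}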
Transferring information is straightforward, as shown in Figure 4.7. For each block of data received by Machine A's TCP from the ULP, TCP encapsulates it and sends it to Machine B with an increasing sequence number. After Machine B receives the message, it acknowledges it with a segment acknowledgment that increments the next sequence number (and hence indicates that it has received everything up to that sequence number). Figure 4.7 shows the transfer of two segments of information, one each way.
The TCP data transport service actually embodies six subservices:

Full duplex: Both ends of a connection can transmit at any time, even simultaneously.
Timeliness: Timers ensure that data is transmitted within a reasonable amount of time.
Ordered: Data is delivered to the receiving application in the order in which it was sent.
Labeled: Each connection carries an agreed-upon precedence and security value.
Controlled flow: Buffers and window limits regulate the rate at which data is sent.
Error correction: Checksums ensure that data is free of errors, within the limits of the checksum algorithm.
To close a connection, one of the TCPs receives a close primitive from the ULP and issues a message with the FIN flag set on. This is shown in Figure 4.8. In the figure, Machine A's TCP sends the request to close the connection to Machine B with the next sequence number. Machine B then sends back an acknowledgment of the request and its next sequence number. Following this, Machine B sends the close message through its ULP to the application and waits for the application to acknowledge the closure. This step is not strictly necessary; TCP can close the connection without the application's approval, but a well-behaved system would inform the application of the change in state.
After receiving approval to close the connection from the application (or after the request has timed out), Machine B's TCP sends a segment back to Machine A with the FIN flag set. Finally, Machine A acknowledges the closure, and the connection is terminated.
An abrupt termination of a connection can occur when one side shuts down the socket. This can be done without any notice to the other machine and without regard to any information in transit between the two. Aside from sudden shutdowns caused by malfunctions or power outages, abrupt termination can be initiated by a user, an application, or a system monitoring routine that judges the connection worthy of termination. The other end of the connection might not realize that an abrupt termination has occurred until it attempts to send a message and the timer expires.
To keep track of all the connections, TCP uses a connection table. Each existing connection has an entry in the table that shows information about the end-to-end connection. The layout of the TCP connection table is shown in Figure 4.9.
The meaning of each column is as follows:

State: The current state of the connection (such as listening, established, or closing).
Local address: The IP address of the local end of the connection.
Local port: The port number of the local end of the connection.
Remote address: The IP address of the remote end of the connection.
Remote port: The port number of the remote end of the connection.
TCP is a connection-based protocol. There are times when a connectionless protocol is required, so UDP is used. UDP is used with both the Trivial File Transfer Protocol (TFTP) and the Remote Procedure Call (RPC). Connectionless communications don't provide reliability, meaning there is no indication to the sending device that a message has been received correctly. Connectionless protocols also do not offer error-recovery capabilities, which must be either ignored or provided in the higher or lower layers. UDP is much simpler than TCP. It interfaces with IP (or other protocols) without the bother of flow control or error-recovery mechanisms, acting simply as a sender and receiver of datagrams.
UDP is connectionless; TCP is based on connections.
The UDP message header is much simpler than TCP's. It is shown in Figure 4.10. Padding can be added to the datagram to ensure that the message is a multiple of 16 bits.
The fields are as follows:

Source port: The port number of the sending process (optional; set to 0 if unused).
Destination port: The port number of the receiving process.
Length: The length of the datagram, including the header and the data.
Checksum: An optional checksum calculated over a pseudoheader, the UDP header, and the data.
The UDP checksum field is optional, but if it isn't used, no checksum is applied to the data segment because IP's checksum applies only to the IP header. If the checksum is not used, the field should be set to 0.
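A minimal UDP exchange shows the connectionless style: nothing is negotiated before the first datagram, and the kernel fills in the Length and (optional) Checksum fields. The Python sketch below runs on the loopback interface; the addresses and message contents are illustrative choices only:

    import socket

    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 0))                  # let the OS pick a free port

    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(b"hello", receiver.getsockname())  # fire and forget; no connection, no ACK

    data, addr = receiver.recvfrom(1024)
    print(data, "from", addr)

    sender.close(); receiver.close()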
Today, I looked at TCP in reasonable detail. Combined with the information in the last three days, you now have the theory and background necessary to better understand TCP/IP utilities, such as Telnet and FTP, as well as other protocols that use or closely resemble TCP/IP, such as SMTP and TFTP.
Define multiplexing and how it would be used to combine three source machines to one destination machine. Relate to port numbers.
Multiplexing was explained in some detail on Day 1. It refers to combining several connections into one. Three machines could each establish source ports to one machine using only one receiving port. The port numbers for the sending machines would all be different, but all three would use the same destination port number. This was shown in Figure 4.4.
What one word best describes the difference between TCP and UDP?
Connections. TCP is connection-based, whereas UDP is connectionless.
What are port numbers and sockets?
A port number is used to identify the type of service provided. A socket is the address of the port on which a connection is established. There is no inherent physical relationship between the two, although many machines assign certain sockets for particular services (port numbers).
Describe the timers used with TCP.
The retransmission timer is used to control the resending of a datagram. The quiet timer is used to delay the reassignment of a port. The persistence timer is used to test a receive window. Keep-alive timers send empty data to keep a connection alive. The idle timer is the amount of time to wait for a disconnection to be terminated after no datagrams are received.
What are the six data transport subservices offered by TCP?

Full duplex operation, timeliness, ordered delivery, labeled connections (precedence and security), controlled flow, and error correction.
The Workshop provides quiz questions to help you solidify your understanding of the material covered. Some Workshop sections of this guide also contain exercises to provide you with experience in using what you have learned. Try to understand the quiz and exercise answers before continuing on to the next chapter. Answers are provided in Appendix F, "Answers to Quizzes."
The HPC Challenge benchmark consists at this time of 7
benchmarks: HPL, STREAM, RandomAccess, PTRANS, FFTE, DGEMM and b_eff
Latency/Bandwidth. HPL is the Linpack TPP benchmark. The test stresses the floating point performance of a system. STREAM is a benchmark that measures sustainable memory bandwidth (in GB/s), and RandomAccess measures the rate of random updates of memory. PTRANS measures the rate of transfer for large arrays of data from the multiprocessor's memory. Latency/Bandwidth
measures (as the name suggests) latency and bandwidth of
communication patterns of increasing complexity between
as many nodes as is time-wise feasible.
Latency/Bandwidth measures latency (time required to send an 8-byte message
from one node to another) and bandwidth (message size divided by the time it
takes to transmit a 2,000,000 byte message) of network communication using
basic MPI routines. The measurement is done during non-simultaneous
(ping-pong benchmark) and simultaneous communication (random and natural ring
pattern), and it therefore covers two extreme levels of contention (no
contention, and the contention caused by each process communicating with a
randomly chosen neighbor in parallel) that might occur in a real application.
Download the benchmark software, link in MPI and the BLAS, adjust the input file, and run the MPI program on your parallel system. See the README.txt file in the
benchmark distribution archive for the details.
A “base run” is defined as compiling and running the supplied program along with a version of MPI and the BLAS. No changes to the source code are allowed for the base run. For the base run, the MPI and the BLAS must be the ones in common use on the system. A user may adjust the input to the benchmark to accommodate their system.
Tflop/s is a rate of execution - trillion (ten to the
12th power) of floating point operations per second.
Whenever this term is used it will refer to 64-bit floating
point operations and the operations will be either addition
or multiplication (a “fused” multiply/add is counted as two
floating point operations). GB/s stands for Giga (ten to the 9th power) bytes per
second and is a unit of bandwidth - a rate of transfer of
data between the processor and memory and also over the
network. Two types of measurements may be reported for
network bandwidth: per CPU and accumulated (for all nodes).
Gup/s is short for Giga updates per second. An update
is the basic operation performed by RandomAccess benchmark:
read an integer from memory, change it in the processor, and
write it back to memory. The location of read and write is
the same but is selected at random each time. Therefore,
there is no relation between Gup/s and GB/s because the
latter implicitly refers to contiguous transfers. Such
transfers may benefit from prefetching, while Gup/s transfers do not.
The term usec is a common abbreviation of micro (ten
to the -6th power) seconds and is used to measure latency
The theoretical peak is based not on an actual performance from a benchmark run, but on a paper computation to determine the theoretical peak rate of execution of floating point operations for the machine. This is the number manufacturers often cite; it represents an upper bound on performance. That is, the manufacturer guarantees that programs will not exceed this rate-sort of a "speed of light" for a given computer. The theoretical peak performance is determined by counting the number of floating-point additions and multiplications (in full precision) that can be completed during a period of time, usually the cycle time of the machine. For example, an Intel Itanium 2 at 1.5 GHz can complete 4 floating point operations per cycle or a theoretical peak performance of 6 GFlop/s.
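The arithmetic is simple enough to check with a few lines of Python; the Itanium 2 numbers are the ones quoted above, and the second machine is a purely hypothetical example:

    flops_per_cycle = 4           # floating-point operations completed per clock cycle
    clock_ghz = 1.5               # clock frequency in GHz
    cores = 1

    print(flops_per_cycle * clock_ghz * cores, "GFlop/s")   # 6.0 GFlop/s, as in the text

    # A hypothetical 64-core system with 8 flops/cycle at 2.0 GHz:
    print(8 * 2.0 * 64 / 1000.0, "TFlop/s")                 # 1.024 TFlop/s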
Why are my performance results below the theoretical peak?
The performance of a computer is a complicated issue, a function of many interrelated quantities. These quantities include the application, the algorithm, the size of the problem, the high-level language, the implementation, the human level of effort used to optimize the program, the compiler's ability to optimize, the age of the compiler, the operating system, the architecture of the computer, and the hardware characteristics. The results presented for this benchmark suite should not be extolled as measures of total system performance (unless enough analysis has been performed to indicate a reliable correlation of the benchmarks to the workload of interest) but, rather, as reference points for further evaluations.
Why are the performance results for my computer different than some other machine with the same characteristics?
There are many reasons why your results may vary from results recorded for other machines. Issues such as load on the system, accuracy of the clock, compiler options, version of the compiler, size of cache, bandwidth from memory, amount of memory, etc., can affect the performance even when the processors are the same.
There are quite a few reasons. First off, these options are useful to determine what matters and what does not on your system. Second, HPL is often used in the context of early evaluation of new systems. In such a case, everything is usually not quite working right, and it is convenient to be able to vary these parameters without recompiling. Finally, every system has its own peculiarities and one is likely to be willing to empirically determine the best set of parameters. In any case, one can always follow the advice provided in the HPL tuning section.
For HPL input, what problem size (matrix dimension N) should I use?
In order to find out the best performance of your system, the largest problem size fitting in memory is what you should aim for. The amount of memory used by HPL is essentially the size of the coefficient matrix. So for example, if you have 4 nodes with 256 Mb of memory on each, this corresponds to 1 Gb total, i.e., 125 M double precision (8 bytes) elements. The square root of that number is 11585. One definitely needs to leave some memory for the OS as well as for other things, so a problem size of 10000 is likely to fit. As a rule of thumb, 80 % of the total amount of memory is a good guess. If the problem size you pick is too large, swapping will occur, and the performance will drop. If multiple processes are spawn on each node (say you have 2 processors per node), what counts is the available amount of memory to each process.
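The rule of thumb above translates directly into a short calculation; the memory figures are the ones from the example (4 nodes with 256 MB each):

    import math

    total_bytes = 4 * 256 * 1024 ** 2        # 4 nodes x 256 MB
    usable = 0.80 * total_bytes              # leave about 20% for the OS and other processes
    n = int(math.sqrt(usable / 8))           # 8 bytes per double-precision matrix element
    print(n)                                 # roughly 10000, matching the suggestion above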
How does the benchmark store data if multiple values of N are chosen?
The benchmark code processes each value of N in turn. For each value of N, memory is allocated, matrix data is generated, the linear system is solved and timed, the solution is verified, and the memory is deallocated.
HPL uses the block size NB for the data distribution as well as for the computational granularity. From a data distribution point of view, the smaller the NB, the better the load balance. You definitely want to stay away from very large values of NB. From a computation point of view, too small a value of NB may limit the computational performance by a large factor because almost no data reuse will occur in the highest level of the memory hierarchy. The number of messages will also increase. Efficient matrix-multiply routines are often internally blocked. Small multiples of this blocking factor are likely to be good block sizes for HPL. The bottom line is that "good" block sizes are almost always in the [32 .. 256] interval. The best values depend on the computation / communication performance ratio of your system. To a much less extent, the problem size matters as well. Say for example, you empirically found that 44 was a good block size with respect to performance. 88 or 132 are likely to give slightly better results for large problem sizes because of a slightly higher flop rate.
For HPL what process grid ratio P x Q should I use?
This depends on the physical interconnection network you have. Assuming a mesh or a switch HPL "likes" a 1:k ratio with k in [1..3]. In other words, P and Q should be approximately equal, with Q slightly larger than P. Examples: 2 x 2, 2 x 4, 2 x 5, 3 x 4, 4 x 4, 4 x 6, 5 x 6, 4 x 8 ... If you are running on a simple Ethernet network, there is only one wire through which all the messages are exchanged. On such a network, the performance and scalability of HPL is strongly limited and very flat process grids are likely to be the best choices: 1 x 4, 1 x 8, 2 x 4 ...
HPL has been designed to perform well for large problem sizes on hundreds of nodes and more. The software works on one node and for large problem sizes, one can usually achieve pretty good performance on a single processor as well. For small problem sizes however, the overhead due to message-passing, local indexing and so on can be significant.
Certainly. There is always room for performance improvements (unless you've reached the theoretical peak of you machine). Specific knowledge about a particular system is always a source of performance gains. Even from a generic point of view, better algorithms or more efficient formulation of the classic ones are potential winners.
You need to modify the input data file HPL.dat. This file should reside in the same directory as the executable hpl/bin/&lt;arch&gt;/xhpl. An example HPL.dat file is provided by default but is not optimal for any practical system. This file contains information about the problem sizes, machine configuration, and algorithm features to be used by the executable.
The ping pong benchmark is executed on two processes.
From the client process a message (ping) is sent to the server process
and then bounced back to the client (pong). MPI standard blocking send
and receive is used.
The ping-pong patterns are done in a loop.
To achieve the communication time of one message,
the total communication time is measured on the client process
and divided by twice the loop length.
Additional startup latencies are masked out by starting the measurement
after one non-measured ping-pong.
The benchmark in hpcc uses 8 byte messages and loop length = 8 for
benchmarking the communication latency.
The benchmark is repeated 5 times and the shortest latency is reported.
To measure the communication bandwidth, 2,000,000 byte messages with
loop length 1 are repeated twice.
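The hpcc code itself implements this in C with blocking MPI send and receive calls; the sketch below only illustrates the measurement idea in Python with mpi4py (the pickle-based send/recv used here adds overhead, so it is not a faithful benchmark), and all constants follow the description above:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    LOOP, TAG = 8, 0
    msg = bytes(8)                       # an 8-byte message, as in the latency test

    if rank == 0:                        # client: send the ping, wait for the pong
        comm.send(msg, dest=1, tag=TAG)  # one non-measured warm-up ping-pong
        comm.recv(source=1, tag=TAG)
        t0 = MPI.Wtime()
        for _ in range(LOOP):
            comm.send(msg, dest=1, tag=TAG)
            comm.recv(source=1, tag=TAG)
        latency = (MPI.Wtime() - t0) / (2 * LOOP)
        print("one-way latency: %.2f usec" % (latency * 1e6))
    elif rank == 1:                      # server: bounce every message straight back
        for _ in range(LOOP + 1):
            comm.send(comm.recv(source=0, tag=TAG), dest=0, tag=TAG)

Run the sketch on two processes, for example with mpiexec -n 2 python pingpong.py.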
How is ping pong measured on more than 2 processors?
The ping-pong benchmark reports the maximum latency and minimum bandwidth
for a number of non-simultaneous ping-pong tests.
The ping-pongs are performed between as many as possible
(there is an upper bound on the time it takes to complete this test)
distinct pairs of processors.
Which parallel communication pattern is used in the random and natural ring benchmark?
For measuring latency and bandwidth of parallel communication,
all processes are arranged in a ring topology
and each process sends and receives a message from its left and its right
neighbor in parallel.
Two types of rings are reported:
a naturally ordered ring (i.e., ordered by the process ranks in MPI_COMM_WORLD),
and the geometric mean of ten different randomly chosen process orderings in the ring.
The communication is implemented
(a) with MPI standard non-blocking receive and send, and
(b) with two calls to MPI_Sendrecv for both directions in the ring.
Always the fastest of both measurements are used.
For benchmarking latency and bandwidth, 8 byte and 2,000,000 byte long
messages are used.
With this type of parallel communication, the bandwidth per process is defined
as the total amount of message data divided by the number of processes
and by the maximum time needed across all processes.
This part of the benchmark is based on patterns studied in the effective bandwidth benchmark (b_eff).
How does the parallel ring bandwidth relate to vendors' values?
Vendors are often reporting a duplex network bandwidth
(per CPU or accumulated)
by counting each message twice, i.e., as both incoming and outgoing data. They
are therefore reporting parallel bandwidth values that are twice the values
reported by this ring benchmark (in this hpcc benchmark, each transferred
message is counted only once).
How do I change data size (matrix and vector dimensions) for the tests?
Only HPL and PTRANS matrix sizes can be changed directly in the hpccinf.txt or hpccmemf.txt input files. The remaining tests use the size of the largest HPL matrix to adjust the size of their input data. For example, in a sequential run, if the size of the HPL matrix is 1 GiB then each of the three vectors used by STREAM Triad will be 0.333 GiB, PTRANS matrix will be 0.5 GiB, the FFT vector size will be 125 MiB, and each of the three matrix sizes in DGEMM will be 333 MiB.
To summarize what the above papers say: the dimensions Px and Py of the virtual process grid for PTRANS have to have small GCD (Greatest Common Divisor) and small LCM (Least Common Multiple) to achive good performance. The number of steps to do the transpose is LCM(Px,Py)/GCD(Px,Py). And the number of communicating pairs is GCD(Px,Py).
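The two quantities are easy to compute for candidate grids. A small Python sketch follows; the example grids are arbitrary choices, not recommendations from the papers:

    import math

    def ptrans_grid_stats(px, py):
        g = math.gcd(px, py)
        lcm = px * py // g
        return lcm // g, g            # (transpose steps, communicating pairs)

    for px, py in ((4, 4), (4, 6), (4, 8), (3, 8)):
        steps, pairs = ptrans_grid_stats(px, py)
        print("%dx%d grid: %d steps, %d pairs" % (px, py, steps, pairs))

A 4 x 4 grid needs only one step with four pairs communicating at once, while a 3 x 8 grid needs 24 steps with a single pair at a time, which is why grids with a large GCD and a small LCM perform better.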
HPCC has not been designed for running invidual tests. Quite the opposite. It's a harness that ties multiple tests together. Having said that, it is possible to comment out calls to individual tests in src/hpcc.c
Is there an easier way to specify input configuration?
Yes, there is. HPCC code looks for file "hpccmemf.txt". It is very minimalistic and allows for a quick specification of the input parameters. It takes only a single line that specifies the amount of memory for the run. The amount of memory can be specified per thread, per MPI process, or as the total memory for the entire machine. For example, if HPCC should use 1048576 bytes (1 MiB) per thread, the "hpccmemf.txt" should contain the line "Thread=1". If 2 MiB should be allocated per MPI process, the single line in the file should read: "Process=2". And finally, if the total memory used should be 3 MiB, then the single line should be: "Total=3".
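Following that convention (the value is an amount of memory in MiB), a complete hpccmemf.txt that gives each MPI process 512 MiB would contain just the single line below; 512 is only an illustrative figure:

    Process=512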
How do I set problem size for HPL, FFT, PTRANS, RandomAccess, or DGEMM?
HPCC does not allow changing the problem size for any test other than HPL and PTRANS. The input data sizes are determined based on what was chosen for HPL. This is motivated by the fact that the input size has the opposite effect on HPL compared with the other tests: the larger the size the better the resulting performance. PTRANS input size can be changed but only the large enough matrices are taken into consideration for the final reporting.
See the Overview section, which contains the rules. In particular, no code changes are allowed but use of general purpose libraries is allowed through compiler directives and options as well as linker flags.
Can I use my own libraries that implement portions of HPCC?
Yes, provided you do not violate the rules described above.
For base runs, the code cannot be changed, but you can
use optimized libraries to speed up some sections of the
code. These libraries have to be generally available
on your system for others to use.
For optimized runs, you can replace some portions of
the benchmark with your proprietary code.
First name of the person that submits the results. The name is kept private, it is only stored internally. It is only used in correspondence with the submitter (if at all) and we also use it for the award announcements.
Last name of the person that submits the results. The name is kept private, it is only stored internally. It is only used in correspondence with the submitter (if at all) and we also use it for the award announcements.
Email address is used for the submission confirmation. It is not possible to submit an entry without a valid e-mail address. This email is not used for any other purposes and is not listed in the publicly accessible documents.
Why do I need to submit my institution or affiliation?
For each result submission, we collect and report either institution that owns the machine or with which the machine is affiliated with. This helps in determining the ownership of the machine and may serve as an outreach opportunity. Example entry: University of Tennessee.
The "Theoretical peak" field should contain the computational rate (even if only theoretical) of all the processors/cores used by the benchmark expressed in Tflop/s (trillion floating-point operations per second, or 10^12 flop/s). Typically, it is a product of the number of floating-point operations per cycle, the clock frequency, and the number of processors/cores. For multi-core chips, a common practice is to refer to a single core as a processor, so the theoretical peak should be multiplied by the number of cores. The table below gives the number of floating-point operations per cycle for common processors/cores. Please keep in mind that recent processors from AMD and Intel utilize frequency scaling. The nominal frequency from the specification might be lower than the maximum frequency that the processor might be able to use under some circumstances. For the theoretical peak, the maximum frequency should be used.
What is the relation of this benchmark to the Linpack benchmark?
The Linpack benchmark called the Highly Parallel Computing Benchmark can be found in Table 3 of the Linpack Benchmark Report (PDF). This benchmark attempts to measure the best performance of a machine in solving a system of equations. The problem size and software can be chosen to produce the best performance. HPL is the benchmark used for the TOP500 report.
What does the HPC Challenge Benchmark have to do with the Top500?
The HPC Challenge Benchmark is an attempt to broaden the scope of benchmarking high performance systems. The Top500 uses one metric, the Linpack Benchmark (HPL), to rank the 500 fastest computer systems in use today. The HPC Challenge does not produce a ranking for systems, but provides a set of metrics for evaluations and comparisons.
The Top500 lists the 500 fastest computer systems being used today. In 1993 the collection was started and has been updated every 6 months since then. The report lists the sites that have the 500 most powerful computer systems installed. The best Linpack benchmark performance achieved is used as a performance measure in ranking the computers. The TOP500 list has been updated twice a year since June 1993.
To be listed on the Top500 list you have to run the software that can be found at http://www.netlib.org/benchmark/hpl/ and the performance of the benchmark run must be within the range of the 500 fastest computers for that period of time.
What is the relation of this benchmark to the STREAM benchmark?
The version of STREAM in HPCC has been modified so that the
source and destination arrays are allocated from the heap with
a dynamic size instead of having static storage with a
constant size. This allows the size of the arrays to be scaled
appropriately according to the memory size (derived from the
HPL parameters). From the compiler stand-point, it removes
information about pointer aliasing, alignment, and data size
- all of which might be crucial for efficient code generation.
An optimized run of HPCC is expected to deal with these issues.
What is the relation of this benchmark to the Effective Communication Bandwidth (b_eff) benchmark?
In this benchmark, latency and bandwidth are measured
mainly with three communication patterns (ping-pong, random
ring, natural ring) and two message sizes (8 byte for
latency and 2,000,000 bytes for bandwidth measurements)
and these different results are reported independently.
The buffer memory is always reused in a loop of measurements.
The goal of b_eff is to compute an average bandwidth value that represents several ring patterns (sequentially and
randomly ordered) and 21 different message sizes. Memory
reuse is prohibited.
Monday, December 17, 2018
The Risks of Spinal Deformity Surgery
In order to minimize the risks of spinal deformity surgery for scoliosis, kyphosis, spondylolisthesis, spondylolysis, etc….. Obtaining a thorough medical history is necessary, along with a physical examination, blood work and imaging studies. Shared decision-making is important during discussion about surgery, along with the alternatives to surgery, the benefits of surgery, potential complications and risks. Below is the first part of the routine preoperative discussions on spinal deformity surgery in children and adolescents.
Overall there are two layers of risks in spinal deformity surgery: 1) Those common to any surgical procedure, and 2) those unique to spinal deformity surgery. The below information are those risks common to all surgical procedures which require general anesthesia.
1. Risks common to any surgical procedure: general anesthesia and the need to incise the skin for the surgery
a. General Anesthesia = being asleep for the entire procedure
i. Statistically there is a greater risk of a fatal car crash over a year (1 in 4,000 to 8,000) than there is of a catastrophic event due to general anesthesia in a healthy adolescent or child (1 in 100,000 surgeries). The riskiest part of the day may be the drive to the hospital, so drive safely and buckle up.
ii. Nausea and vomiting are common after general anesthesia, in up to 30% of surgical case. The Anesthesiology team give medications during surgery to minimize nausea and vomiting after surgery
iii. Being under general anesthesia is like taking a nap, only medication-produced. You won’t know how much time you have been asleep until you wake up and see a clock or someone tells you….it is just like taking a nap.
b. Surgery requires creating an incision on the skin, which means the area of surgery can develop a bacterial infection.
i. In medical terms this is call a SSI or Surgical Site Infection
ii. There are many things the surgical team will do to minimize the chance of an infection.
1. The use of antibiotics is very important. They are given before incision is made, during the surgical procedure and after surgery to minimize the risk of a SSI. It is also important the correct antibiotic is give, at the correct dose and at the optimal time.
2. Before surgery, usually at the preoperative visit, a nasal swab will be performed. This is to try to identify people with MRSA on their bodies, specifically their noses. MRSA = Methicillin-Resistant Staphylococcus Aureus, which is a very bad infection to get due to its being resistant to most antibiotics which can be given to prevent infection. It is important to know who carries MRSA before surgery to make sure the correct antibiotic is given before an incision is being made.
3. And there are many things you will not see the surgical team doing, such as the sterile skin preparation, surgical draping, sterile surgical technique, etc…. which also help minimize surgical infections.
iii. Due to the large amount of metal used in spine deformity surgery a deep infection of SSI (an infection which is on the spinal metal) is a major problem. It can be difficult to get rid of a deep infection around the metal since many bacteria strongly adhere to the metal. In addition bacteria can put up a protective wall around itself which prevents antibiotics from reaching the bacteria and killing it. If a SSI occurs it usually requires several surgical procedures (called irrigation and débridement) to wash out the wound in order to remove the bacterial load. The use of antibiotics is essential, with the initial treatments given intravenous (through tubing) and then switched over to oral.
iv. The surgical site infection rate for patients with idiopathic scoliosis (which means no known cause) is at Washington University is much lower than the average for pediatric hospitals in the U.S. However we will not be satisfied until ALL infections are prevented.
The next post will be on risks unique to spinal deformity surgery…..such as neurologic deficit, failure of fusion, implant breakage or dislodgement, need for repeat surgery, etc…..
Wednesday, August 29, 2018
Shilla Growth Guidance Procedure
1. What is the Shilla procedure? The Shilla technique is one which passively guides spine growth, rather than actively distracting across like a growing rod system. I refer to it as a "track and trolley" system.
2. Who is a candidate for the Shilla technique? Many patients who are candidates for traditional growing rods (GR) are also candidates for the Shilla technique. The decision between GR and Shilla technique will be made between the surgeon and parents/caregivers focusing on what is best for the child.
3. How is it different that traditional growing rods or MAGEC? The Shilla fixes the worst part of the deformity, that part of the spine which is growing more sideways than vertically. By straightening out the severely curved part and then fusing the apex of the deformity the apex will be permanently improved. A traditional growing rod or MAGEC system fixates above and below the worst part of the spine deformity and creates small fusions in the part of the spine which is growing more normally. The growing rods are then forcefully distracted to put the spine under tension. See reference #1 below for more details.
4. Is Shilla better than traditional growing rods or MAGEC? To this surgeon, if a Shilla can be used over a traditional growing rod or MAGEC then it is my first choice. The question is why? The initial surgery is similar between all three surgeries in terms of recovery. The benefit of the Shilla is there is no need for repetitive surgeries, unlike traditional growing rods, or frequent clinic visits, as is necessary for MAGEC. Overall, when Shilla is compared to growing rods the overall outcome on x-rays is nearly identical. Another benefit is Shilla patients undergo 1/3 the number of surgeries/anesthesia when compared to traditional growing rods. See reference #2 below.
5. Is the spine fused in a Shilla procedure? A spine fusion is performed typically over 2-4 vertebra where the scoliosis is at its worst. These vertebra were not growing normally anyways. Even if the patient had received a traditional growing rod or MAGEC that very curved part of the spine would not grow normally.
6. How long will the Shilla procedure last? In the 2nd paper listed below it was demonstrated that they average patient underwent 3 surgeries: initial, one revision and then one final surgery. The average patient underwent a revision surgery at 3-3.5 years after the first surgery and then final surgery at 6-7 years after the first surgery. Every patient is different.
7. When growth is completed what happens with the Shilla construct? At or near the end of spine growth a decision is made to either convert the Shilla to a definitive spine fusion or removal all the implants with the idea to restore spine motion and not need a fusion surgery.
8. Whose decision is it to remove the implants or convert to a spine fusion? It is a decision made by the patient, family and surgeon. Most patients are undergoing definitive fusion surgery thus far because they wanted to improve their body position permanently.
9. How many patients have had their implants removed? Thus far between our center and Little Rock there are less than 10 patients.
10. If the implants are removed can a fusion surgery be done in the future? Yes.
1. Luhmann SJ, McCarthy RE. A Comparison of SHILLA™ GROWTH GUIDANCE SYSTEM and Growing Rods in the Treatment of Spinal Deformity in Children Less than 10 Years of Age. J Pediatr Orthop 37(8):e567-e574, 2017.
2. Luhmann SJ, Smith JC, McClung A, et al. Radiographic outcomes of Shilla Growth Guidance System and traditional growing rods through definitive treatment. Spine Deformity, 5:277-282, 2017.
3. Luhmann SJ, McAughey EM, Ackerman SJ, Bumpass DB, McCarthy R: Cost analysis of a growth guidance system compared with traditional and magnetically controlled growing rods for early-onset scoliosis: a US-based integrated health care delivery system perspective. ClinicoEconomics & Outcomes Research 1:179-187, 2018.
Wednesday, June 6, 2018
Vertebral Body Stapling (VBS) Part 2
1. What is the success rate for VBS? Two centers have reported their outcomes of VBS in patients (Washington University in St. Louis, and the Philadelphia Shriner’s). Overall success rates for lumbar and thoracic curves are around 70%, which means the curve improved, did not change or changed less than 6 degrees.
2. What is the success rate of bracing? The highest level of evidence on bracing for idiopathic scoliosis is from the BRIAST study, which was a prospective study of scoliosis patients who wore a scoliosis brace and those who did not. 48% of non-braced patients did not have progression of their scoliosis and 72% of braced patients did not progress.
3. So to conclude: the success rates for VBS and bracing appear to be equivalent.
4. Why would a VBS be offered or performed if the results are similar to bracing? Bracing is not an easy treatment for anyone. VBS provides an option for those patients who cannot or will not wear a brace to control their scoliosis. It is important to note VBS is not better than bracing, it is just a different treatment option.
5. Do the staples have to be removed in the future? The answer is no. The staples do not have to be removed. The body will cover over the staple with scar tissue and the staple will gradually loosen in the vertebral body. Due to the tines of the staple curving in the staples will not back out.
6. What happens if the VBS does not control the scoliosis and a posterior spinal fusion is needed? Do the staples have to be removed at that time? Again, no. Since the staples are placed in the front of the spine they will not interfere with the instrumentation placed in the posterior (back) part of the spine. Also the staples do not interfere with the ability to correct the scoliosis at the time of fusion.
7. Who is a candidate for Vertebral Body Stapling?
a. Skeletally immature: since scoliosis mainly progresses due to growth, the use of VBS only is indicated during the growth. There is no benefit of VBS in skeletally mature individuals
b. Scoliosis who Cobb measure is:
i. </= 35 degrees in the thoracic spine
ii. </= 40 degrees in the lumbar spine
c. Patients who cannot or will not wear a brace to halt the progression of scoliosis.
d. Diagnosis of idiopathic scoliosis or patients who have idiopathic-like scoliosis
Who is not a candidate for VBS?
a. Skeletally mature patients: Risser >/=3
b. Diagnoses with poor bone quality, increased muscular tone, neurogenic scoliosis (patients with Chiari or syrinx), etc…
c. Cobb measures >35 in the thoracic spine and >40 in the lumbar spine.
d. Increased kyphosis of the thoracic spine >40 degrees (since the staples induce kyphosis)
e. Those patients whose spine is excessively malrotated due to the scoliosis. VBS will not significantly change this for the better.
Wednesday, March 21, 2018
Vertebral Body Stapling (VBS) Part 1
- What is Vertebral Body Stapling? How is it different from Vertebral Body Tethers? In a previous blog the surgical technique of Vertebral Body Tethering was presented. This technique places a compressive force over the convex side of the spine (slowing down growth), to permit the concave side of the spine to relatively grow more and create a straighter spine. Prior to the introduction of the Vertebral Body Tether, which uses screws placed into the vertebral body, modulating growth of the concave and convex side of the spine was accomplished with staples. These staples were also placed anteriorly, but instead of being placed in the middle of the vertebral body they were placed across the disc spaces between each vertebral body.
- Is VBS a new procedure? This surgical technique was first reported in the 1950s but, due to the lack of an adequate implant, the technique did not work as designed. It wasn’t until the 2000s that an appropriate implant was identified, and this technique began to show promise. The staples used at that time, and currently, are made of Nitinol which is a memory-shape alloy. When the staples are placed in an ice bath, the tines of the staples can be straightened. After placement across the disc space the staple warms up to body temperature and the tines curve back inward.
- What is the purpose of VBS? To halt or improve scoliosis in the skeletally immature patient.
- What research has been done on VBS? There have been animal studies and clinical studies over the last 15 years.
- Are there any potential complications of VBS? As with any surgical procedure there can be complications related to the surgical procedure or the patient’s underlying medical condition. The potential complications includes, but is not limited to:
- Anesthetic (anaphylaxis, airway, etc…)
- Excessive bleeding
- For thoracic stapling: Injury to the lung, heart, great vessels, thoracic duct, etc…
- For lumbar stapling: injury to the great vessels, ureter, psoas dysfunction, etc…
- Painful postoperative surgical scar
- Staple dislodgement
- Staple breakage
- Failure to control the scoliosis
- Need for definitive spinal fusion
Sunday, February 11, 2018
Vertebral Body Tethering (Part 5 and the last one on this topic)
In earlier posts VBT has been extensively detailed. One question that commonly is asked during discussion of VBT with patients and caregivers is: “What are the long-term issues with VBT?”
The simplistic answer is: “We don’t know”.
One layer to this question is what happens to the actual tether?
- If we look at other implant systems used in the spine and other bones of the body over the last 50+ years we can roughly sketch out some possible scenarios for the system currently used for VBT. The fixation in the vertebra are screws which, as a group, have a long history of safety and efficacy. However the screw used in VBT are designed for use in the posterior spine, and for VBT they are placed anterior through a minimally-invasive or thoracoscopic approach. The question is will they function with the same efficacy and safety profile. Based on the collective experience it appears the screws have good purchase and few issues with prominence, migration or pullout.
- The other aspect of VBT is the tether which is made of braided polypropylene. This is the workhorse of the system, which compresses across the convex discs and growth plates to modulate spine growth. Since there is no fusion across the vertebral bodies there will be constant motion on the tether. Like any non-regenerating material which is constantly moving, the tether is subject to fatigue, which can lead to failure or breakage of the tether. It makes sense that the tether will eventually break, considering it is implanted in adolescents and will be stressed for over 60+ years (or more!). Over the last year there have been reports of segmental failure of the tether (between two screws), so it is reasonable to assume that in the long-term the tether will likely break in multiple locations. For the sake of the aim of VBT to modulate growth in the immature spine, we only need it to last until the completion of spinal growth. What is not desired is for the tether to break prior to this time and permit the spine deformity to get worse.
A second layer is what the tether does to the vertebral bodies, and more importantly, to the disc between the vertebral bodies. The implications of long-term compression of the instrumented disc and the presence of anterior instrumentation in a non-fusion technique is unknown. Changes to the intervertebral discs may occur and, if this happens, may cause axial thoracic back pain or possible disc herniations in the future. Also, it is unknown if increased motion, such as after the tether breaks, through a previously VBT-compressed motion segment is significant. Will this cause back pain? At the present time we just don’t know.
More research is necessary on VBT safety, timing of VBT placement, VBT tensioning, intervertebral disc health, and long-term patient reported and radiographic outcomes of VBT.
Newton PO, Fricka KB, Lee SS, et al. Asymmetrical flexible tethering of spine growth in an immature bovine model. Spine 2002;27(7):689-93.
Braun JT, Ogilvie JW, Akyuz E, et al. Fusionless scoliosis correction using a shape memory alloy staple in the anterior thoracic spine of the immature goat. Spine 2004;29(18):1980-9.
Newton PO, Farnsworth CL, Faro FD, et al. Spinal growth modulation with an anterolateral flexible tether in an immature bovine model: disc health and motion preservation. Spine 2008;33(7):724-33.
Chay E, Patel A, Ungar B, et al. Impact of unilateral corrective tethering on the histology of the growth plate in an established porcine model for thoracic scoliosis. Spine 2012;37(15):E883-9.
Crawford CH 3rd, Lenke LG. Growth modulation by means of anterior tethering resulting in progressive correction of juvenile idiopathic scoliosis: a case report. J Bone Joint Surg [Am] 2010;92(1):202-9.
Samdani AF, Ames RJ, Kimball JS, et al. Anterior vertebral body tethering for immature adolescent idiopathic scoliosis: one-year results on the first 32 patients. Eur Spin J 2015;24:1533-9.
Friday, January 26, 2018
The World Pediatric Projects winter fundraiser called "Treasures in Paradise" happened last week, Friday January 26th,
The keynote speaker was Erickson Hernandez, a wonderful young man who Drs. Manke and Goldfarb, and myself treated at our Shriner's Hospital.
Check out the web address (copy and paste in your browser) below for a video of Erickson's speech.
Thursday, January 25, 2018
Vertebral Body Tethering (Part 4)
Primum non nocere or “do no harm” is a basic tenet of medicine. This is why for surgical procedures, such as Vertebral Body Tethering or VBT, safety is the pre-eminent concern, even more so than its efficacy or how well it works. If a surgical procedure is safe (infrequent, minor complications, with no significant long-term problems) but only demonstrates mild to moderate efficacy then it may be viewed as a reasonable treatment. However if the procedure cannot be demonstrated to have reasonable safety it is unlikely any level of efficacy will be able to make this a reasonable treatment. This is especially the case for diseases which are not life-threatening, such as scoliosis.
As patients and caregiver potentially contemplate if VBT as a possible treatment (as detailed in an earlier post) it is important that the potential complications or adverse outcomes are detailed and well-understood as to their likelihood, severity and long-term implications. The list of complications which may occur with VBT are:
Anesthetic problem (such as allergic reaction or airway problem)
Injury to the great vessels, heart, lungs
Surgical site infection
Screw pullout or symptomatic migration
Failure of VBT modulate growth
Over-correction of spinal deformity
Pleural scarring secondary to surgical approach and presence of screw heads/tether in chest
Irritation of the diaphragm or psoas due to screws
Back or chest pain
In the next blog post the long-term issues of VBT will be presented.
Tuesday, January 16, 2018
Vertebral Body Tethering for Scoliosis (Part 3)
In Medicine, and in particular the area of spine deformity, the development of new treatments and technologies which can demonstrate improved outcomes, lower frequencies of complications, and/or faster recovery can create a “buzz” and enthusiasm depending on its potential of improvement. Physicians typically see these innovations earlier than the general public at medical meetings and read about them in peer-reviewed medical journals. Slightly later the medical media, followed by mainstream media, begin to report on the new medical technologies, especially if these treatments have developed some traction amongst physicians. One such technology is Vertebral Body Tethering (VBT) for scoliosis in the growing spine.
As detailed in previous postings on this blog there is significant potential for this technology, but little scientific evidence of its efficacy in humans. At present there are no approved implant systems in the U.S. which are FDA-approved for scoliosis. Spine implants used for VBT are being used in an off-label or unlabeled manner in the U.S. It is important to understand that innovations, especially in area of surgical spine deformity treatment, advances typically occur faster than does FDA approval. So innovations without FDA approval does not categorically mean they are unsafe or do not work, rather there is an absence of sufficient high-level of medical evidence to prove these devices are safe and efficacious to the FDA, who demands very high level of scientific proof. Prior to FDA approval implant systems, such as VBT, exist in a “grey” area. This can be frustrating to patients and caregivers who are anxious for advances in medicine, yet there is scant medical literature to help them navigate treatment options.
In the next blog post the complications of VBT will be presented. | <urn:uuid:3cc8397f-7165-4b9f-a6fe-e0a2f4527f25> | CC-MAIN-2019-47 | http://growingspineblog.wustl.edu/2018/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671106.83/warc/CC-MAIN-20191122014756-20191122042756-00377.warc.gz | en | 0.931826 | 4,799 | 2.671875 | 3 |
Source: Editor's Introduction to The Revolutionary Writings of John Adams, Selected and with a Foreword by C. Bradley Thompson (Indianapolis: Liberty Fund, 2000).
Fiat Justitia ruat Coelum
[Let justice be done though the heavens should fall]
John Adams to Elbridge Gerry
December 6, 1777
To Henry, Samuel, and Islay
Modern scholars of the American Revolution have published countless books on Thomas Jefferson, James Madison, Alexander Hamilton, Benjamin Franklin, and George Washington. Surprisingly, John Adams has not fared so well. On the whole, historians have neglected Adams’s Revolutionary thought, and a one-volume collection of his political writings has not been available for several decades. This anomaly in the scholarly literature is curious because Adams is often regarded as the most learned and penetrating thinker of the founding generation, and his central role in the American Revolution is universally recognized. Benjamin Rush thought there was a consensus among the generation of 1776 that Adams possessed “more learning probably, both ancient and modern, than any man who subscribed the Declaration of Independence.” Another contemporary is reported to have said that “The man to whom the country is most indebted for the great measure of independence is Mr. John Adams. … I call him the Atlas of American independence.”1
John Adams witnessed the American Revolution from beginning to end: he assisted James Otis in the Writs of Assistance case in 1761, and he participated in negotiating the peace treaty with Britain in 1783. As a Revolutionary statesman, he will always be remembered as an important leader of the radical political movement in Boston and as one of the earliest and most principled voices for independence in the Continental Congress. Likewise, as a public intellectual, Adams wrote some of the most important and influential essays, constitutions, and treatises of the Revolutionary period. If Samuel Adams and Patrick Henry represent the spirit of the independence movement, John Adams exemplifies the mind of the American Revolution.
Despite his extraordinary achievements, Adams has always posed a genuine problem for historians. From the moment he entered public life, he always seemed to travel the road not taken. Americans have rarely seen a political leader of such fierce independence and unyielding integrity. In debate he was intrepid to the verge of temerity, and his political writings reveal an utter contempt for the art of dissimulation. Unable to meet falsehoods halfway and unwilling to stop short of the truth, Adams was in constant battle with the accepted, the conventional, the fashionable, and the popular. He would compromise neither with Governor Thomas Hutchinson nor with the Boston mob. From his defense of English soldiers at the Boston Massacre trial to his treaty with the French in 1800, he had a way of shocking both his most ardent supporters and his most partisan opponents. To some, however, the complexity of the man and his thought are the very reasons why he is worth studying.
John Adams was born on October 19, 1735, in Braintree, Massachusetts. His father, Deacon John Adams, was a fifth-generation Massachusetts farmer, and his mother, the former Susanna Boylston, descended from another old New England family. The young man’s sense of life and moral virtues were shaped early by the manners and mores of a Puritan culture that honored sobriety, industry, thrift, simplicity, and diligence.
After graduating from Harvard College, Adams taught school for three years and began reading for a career in the law. To that end, he adopted a strict daily regimen of hard work and Spartan-like austerity. In his diary, he implored himself to “Let no trifling Diversion or amuzement or Company decoy you from your Books, i.e., let no Girl, no Gun, no cards, no flutes, no Violins, no Dress, no Tobacco, no Laziness, decoy from your Books.” He was always demanding of himself that he return to his study to tackle the great treatises and casebooks of the law.
Labour to get Ideas of Law, Right, Wrong, Justice, Equity. Search for them in your own mind, in Roman, grecian, french, English Treatises of natural, civil, common, Statute Law. Aim at an exact Knowledge of the Nature, End, and Means of Government. Compare the different forms of it with each other and each of them with their Effects on Public and private Happiness. Study Seneca, Cicero, and all other good moral Writers. Study Montesque, Bolingbroke [Vinnius?], &c. and all other good, civil Writers, &c.2
Adams was admitted to the Boston bar in 1758 and soon settled into a successful career in the law. In 1764 he married Abigail Smith to whom he was devoted for fifty-four years. Despite many years of separation because of his duties to the American cause at home and abroad, theirs was a love story of almost fictional quality. Together they had five children.
The passage of the Stamp Act in 1765 thrust Adams into the public affairs of colony and empire. In that year, he published his first major political essay, A Dissertation on the Canon and Feudal Law, and he also composed the influential “Braintree Instructions.” Both pieces attacked the Stamp Act for depriving the American colonists of two basic rights guaranteed to all Englishmen by Magna Carta: the rights to be taxed only by consent and to be tried only by a jury of one’s peers.
Adams’s understanding of the Patriot cause is revealed in two decisions that he made during the early years of the imperial crisis. In 1768 he refused a request from Governor Bernard to accept the post of advocate general of the court of admiralty. Despite the lucrative salary and “Royal Favour and promotion” associated with the position, he declined to accept on the grounds that he could not lay himself “under any restraints, or Obligations of Gratitude to the Government for any of their favours.” Nor would he sanction a government that persisted “in a System, wholly inconsistent with all my Ideas of Right, Justice and Policy.” Two years later, Adams risked falling out of favor with the Patriot movement by accepting the legal defense of Captain Preston in the Boston Massacre trial. He took the case in order to defend the rule of law and because “Council ought to be the very last thing that an accused Person should want in a free Country.” Every lawyer, he wrote, must be “responsible not only to his Country, but to the highest and most infallible of all Trybunals.”3 In word and deed, Adams always chose to act in ways that he thought right and just, regardless of reward or punishment.
Between 1765 and 1776, Adams’s involvement in radical politics ran apace with the escalation of events. In 1770, he was elected to the Massachusetts House of Representatives, and he later served as chief legal counsel to the Patriot faction and wrote several important resolutions for the lower house in its running battle with Governor Thomas Hutchinson. He also wrote a penetrating essay on the need for an independent judiciary, and his Novanglus letters are generally regarded as the best expression of the American case against parliamentary sovereignty. By the mid-1770s, Adams had distinguished himself as one of America’s foremost constitutional scholars.
The year 1774 was critical in British-American relations, and it proved to be a momentous year for John Adams. With Parliament’s passage of the Coercive Acts, Adams realized that the time had come for the Americans to invoke what he called “revolution-principles.”4 Later that year he was elected to the first Continental Congress. Over the course of the next two years no man worked as hard or played as important a role in the movement for independence. His first great contribution to the American cause was to draft, in October 1774, the principal clause of the Declaration of Rights and Grievances. Adams also chaired the committee that drafted the Declaration of Independence, he drafted America’s first Model Treaty, and, working eighteen-hour days, he served as a one-man department of war and ordnance. In the end, he worked tirelessly on some thirty committees. “Every member of Congress,” Benjamin Rush would later write, “acknowledged him to be the first man in the House.”5
Shortly after the battles at Lexington and Concord, Adams began to argue that the time had come for the colonies to declare independence and to constitutionalize the powers, rights, and responsibilities of self-government. In May 1776, in large measure due to Adams’s labors, Congress passed a resolution recommending that the various colonial assemblies draft constitutions and construct new governments. At the request of several colleagues, Adams wrote his own constitutional blueprint. Published as Thoughts on Government, the pamphlet circulated widely and constitution makers in at least four states used its design as a working model.
Adams’s greatest moment in Congress came in the summer of 1776. On June 10, Congress appointed a committee to prepare a declaration that would implement the following resolution: “That these United Colonies are, and of right ought to be free and independent states; that they are absolved, from all Allegiance to the British Crown; and that all political connection between them and the State of Great Britain, is and ought to be totally dissolved.” On July 1, Congress considered final arguments on the question of independence. John Dickinson argued forcefully against independence. When no one responded to Dickinson, Adams rose and delivered a passionate but reasoned speech that moved the assembly to vote in favor of independence. Years later, Thomas Jefferson recalled that so powerful in “thought & expression” was Adams’s speech, that it “moved us from our seats.” Adams was, Jefferson said, “our Colossus on the floor.”6
In the fall of 1779, Adams was asked to draft a constitution for Massachusetts. Subsequently adopted by the people of the Bay State, the Massachusetts Constitution of 1780 was the most systematic and detailed constitution produced during the Revolutionary era. It was copied by other states in later years, and it was an influential model for the framers of the Federal Constitution of 1787.
Adams spent much of the 1780s in Europe as a diplomat and propagandist for the American Revolution. He succeeded in convincing the Dutch Republic to recognize American independence and he negotiated four critical loans with Amsterdam bankers. In 1783, he joined Benjamin Franklin and John Jay in Paris and played an important role in negotiating the Treaty of Peace with England. Adams completed his European tour of duty as America’s first minister to Great Britain.
It was during his time in London that Adams wrote his great treatise in political philosophy, the three-volume A Defence of the Constitutions of Government of the United States of America (1787–88). Written as a guidebook for American and European constitution makers, the Defence is a sprawling historical survey and analysis of republican government and its philosophic foundations. The Defence represents a unique attempt in the history of political philosophy to synthesize the classical notion of mixed government with the modern teaching of separation of powers. We know that the book was influential at the Constitutional Convention in 1787 and that it was used by French constitution makers in 1789 and again in 1795.
After his return to America in 1788, Adams was twice elected vice president of the United States. The eight years that he served as Washington’s second in command were the most frustrating of his life. He played virtually no role in the decision-making processes of the administration and he was forced daily to quietly preside over the Senate. In fact, Adams’s most notable accomplishment during this period was the publication in 1790–91 of his philosophical Discourses on Davila. His purpose in these essays was to lampoon the initial phase of the French Revolution and the influence that its principles were then having in America.
Adams’s elevation to the presidency in 1796 was the culmination of a long public career dedicated to the American cause. Unfortunately, the new president inherited two intractable problems from George Washington: an intense ideological party conflict between Federalists and Republicans, and hostile relations with an increasingly belligerent French Republic. This last, known as the Quasi-War, became the central focus of his administration. Consistent with his views on American foreign policy dating back to 1776, Adams’s guiding principle was “that we should make no treaties of alliance with any European power; that we should consent to none but treaties of commerce; that we should separate ourselves as far as possible and as long as possible from all European politics and war.” However, in order to protect American rights, Adams was forced to walk a hostile gauntlet between pro-French Republicans and pro-English Federalists.
Adams angered Republicans first by proposing a series of recommendations for strengthening the American navy and for a provisional army in response to France’s insulting treatment of American diplomats and its depredations of her commerce. He then delivered a stinging rebuke to the high Federalists of his own party by announcing the appointment of an American commissioner to negotiate a new peace treaty with France. The crowning achievement of his presidency was the ensuing peace convention of 1800 that reestablished American neutrality and commercial freedom. When Adams left office and returned to Quincy in 1801, he could proudly say that America was stronger and freer than the day he took office.
The bitterness of his electoral loss to Thomas Jefferson in 1800 soon faded as Adams spent the next twenty-five years enjoying the scenes of domestic bliss and a newfound philosophic solitude. During his last quarter century he read widely in philosophy, history, and theology, and in 1812 he reconciled with Jefferson and resumed with his friend at Monticello a correspondence that is unquestionably the most impressive in the history of American letters. In his final decade Adams experienced both tragedy and triumph. On October 28, 1818, his beloved Abigail died, a loss from which he would never quite recover. His only consolation during his last years— indeed, it was a moment of great pride—was the election in 1824 of his son, John Quincy, to the highest office in the land.
As the fiftieth anniversary of the Declaration of Independence approached, the ninety-one-year-old Adams was asked to provide a toast for the upcoming celebration in Quincy. He offered as his final public utterance this solemn toast: “Independence Forever.” These last words stand as a signature for his life and principles. John Adams died on July 4, 1826, fifty years to the day after the signing of the Declaration of Independence.
A great many books have been published in this century on the causes of the American Revolution. The important question that most attempt to address is why the colonists acted as they did. What drove this remarkably free and prosperous people to react so passionately and violently to the seemingly benign if not well-intended actions of English imperial officials? One obvious place to look for answers to these questions is in the major speeches and pamphlets of the Revolutionary era. But abstruse arguments derived from natural and constitutional law are no longer thought to have determined the outcome of the Revolution one way or the other. John Adams thought otherwise. During his retirement years, he was fond of saying that the War for Independence was a consequence of the American Revolution. The real revolution, he declared, had taken place in the minds and hearts of the colonists in the fifteen years prior to 1776. According to Adams, the American Revolution was first and foremost an intellectual revolution.
To assist us in recovering this forgotten world of John Adams, we might begin by considering several questions: Why did Adams think there was a conspiracy by British officials to enslave America? What evidence did he produce to demonstrate a British design against American liberties? Was Adams an irrational revolutionary ideologue, or did his political thought represent a reasoned response to a real threat? How did he understand the constitutional relationship between colonies and Parliament? Was Adams a conservative defender of traditional colonial liberties or was he a revolutionary republican advancing Enlightenment theories of natural law? What principles of liberty and equality, justice and virtue, did he think worth defending?
Central to Adams’s political philosophy is the distinction that he drew between “principles of liberty” and “principles of political architecture.” The first relates to questions of political right and the second to constitutional design. The chronology of Adams’s writings during the Revolutionary period mirrors this distinction. In the years before 1776, he debated with American Loyalists and English imperial officials over the principles of justice and the nature of rights. In the years after Independence, he turned to the task of designing and constructing constitutions. Because he wrote so much over the course of sixty years and because it is important that his writings be read unabridged, the selections in this volume have been limited to those essays and reports written during the imperial crisis and the war for independence.
John Adams had an enormous influence on the outcome of the American Revolution. He dedicated his life, his property, and his sacred honor to the cause of liberty and to the construction of republican government in America. The force of his reasoning, the depth of his political vision, and the integrity of his moral character are undeniable. From the beginning of his public career until the very end he always acted on principle and from a profound love of country. In his later years, though, Adams lamented that “Mausoleums, statues, monuments will never be erected to me. … Panegyrical romances will never be written, nor flattering orations spoken, to transmit me to posterity in brilliant colors.”7 The present volume erects no statues to Adams nor does it portray his life in brilliant colors. Readers must judge for themselves whether he is deserving of such accolades. We can say with confidence, however, that no study of the American Revolution would be complete without confronting the political ideas of John Adams. He was, after all, the “Atlas of American independence.”
C. Bradley Thompson
Department of History and Political Science
[1. ]Benjamin Rush quoted in Joseph J. Ellis, The Passionate Sage: The Character and Legacy of John Adams (New York: W. W. Norton, 1994), 29; Richard Stockton quoted in The Works of John Adams, Second President of the United States, ed. Charles Francis Adams, 10 vols. (Boston: Little, Brown and Co., 1850–56), 3:56.
[2. ]The Diary and Autobiography of John Adams, ed. L. H. Butterfield et al. 4 vols. (Cambridge, Mass.: The Belknap Press of Harvard University Press, 1961), 1:72–73.
[3. ]Ibid., 3:287, 292.
[4. ]Adams, “Letters of Novanglus,” in Papers of John Adams, ed. Robert Taylor et al. (Cambridge, Mass.: The Belknap Press of Harvard University Press, 1977–), 2:230.
[5. ]The Autobiography of Benjamin Rush: His “Travels Through Life” Together with His Commonplace Book for 1789–1813, ed. George W. Corner (Princeton: Princeton University Press, 1948), 140.
[6. ]6. “Notes on a Conversation with Thomas Jefferson,” in The Papers of Daniel Webster: Correspondence, ed. Charles M. Wiltse (Hanover, N.H.: The University Press of New England, 1974), 1:375.
[7. ]John Adams to Benjamin Rush, March 23, 1809, in The Spur of Fame: Dialogues of John Adams and Benjamin Rush, 1805–1813, ed. John Schutz and Douglass Adair (San Marino, Calif.: The Huntington Library, 1966), 139.
Last modified April 13, 2016 | <urn:uuid:e8ccef11-a0bf-4a50-bed8-fa435c5160b7> | CC-MAIN-2019-47 | https://oll.libertyfund.org/pages/adams-on-the-american-revolution | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668594.81/warc/CC-MAIN-20191115065903-20191115093903-00300.warc.gz | en | 0.968508 | 4,160 | 3.5 | 4 |
Being able to maintain appropriate nutrition in ill patients is a fundamental part of caring for both surgical and medical patients. The literature supports that malnutrition is associated with adverse clinical outcomes. Therefore, every effort should be made to support nutritional status when a patient is acutely or chronically ill. A study by Naber et al. found that the severity of malnutrition in patients can predict the occurrence of complications during their hospital admission (1). Tappenden et al. found evidence to suggest that early nutritional support can reduce complication rates, length of hospital stay, readmissions, mortality, and costs of care (2).
In patients who have an accessible and functional gastrointestinal system, enteral nutrition is the preferred route. For patients who require permanent or long-term (greater than four weeks) access to nutrition, enteral nutrition through an enterostomy tube is a good option that can be achieved endoscopically, radiologically, or surgically (3). Enteral nutrition reduces morbidity and cost when compared to parenteral nutrition (4-6). Compared to parenteral nutrition, enteral nutrition via a feeding tube has been associated with a decreased risk of infection (6). Also, in a systematic review, the authors found evidence to suggest that there is an economic advantage of enteral nutrition over parenteral nutrition (4). In patients who require enteral nutrition for less than four weeks, nasoenteric tubes should be considered. However, a study comparing percutaneous endoscopic gastrostomy tubes (PEG) versus nasogastric tubes (NGT) found that PEG was associated with a lower risk of intervention failure and was more effective and safe when compared to NGT (7).
Endoscopic techniques have been associated with faster recovery and lower costs, and can be less invasive when compared to open surgical procedures (4). Grant et al. identified that the overall complication rate after PEG was lower than that in patients undergoing open gastrostomies (8). Furthermore, PEG was also associated with shorter operative time, no need for general anesthesia, lower cost, a lower incidence of complications, and less recovery time when compared to open surgery (8,9). Al-Abboodi et al. found that there was no difference in bleeding, surgical site infection, or mortality in patients with cirrhosis undergoing PEG placement (10). Endoscopic enteral access is a safe and practical procedure for patients who require nutritional support. This article reviews the indications, contraindications, pre- and post-procedure care, and procedure techniques for endoscopic enteral access.
Before enteral access is placed, patients should be unable to maintain adequate nutrition via oral feeds alone. Typically, nutritional support should be considered if the recommended daily dietary requirements will not be met for more than seven days in an adult. However, early nutritional intervention may be indicated if the patient is malnourished. Additionally, patients must have a functional gastrointestinal system and be able to tolerate intraluminal feeding.
When starting enteral feeds, one should consider the best route of providing nutritional support. Most patients can tolerate intragastric feeding, although there are times when other routes should be entertained. When considering intragastric feeding, the lower esophageal sphincter should function properly to avoid gastric reflux into the esophagus. Furthermore, the stomach should be able to work as a reservoir and propulsive organ. Jejunal feeding should be considered if the patient has recurrent aspiration of gastric content, delayed gastric emptying, or esophageal dysmotility with regurgitation (3,4). The disadvantage of jejunal tubes, particularly ones that are placed via a gastric extension, is the technical difficulty of placing and maintaining the tube in a post-pyloric position. There may also be an element of feeding intolerance when feeds are delivered directly into the jejunum. Gastric feeding has the advantage of being more convenient (can be given as a bolus), straightforward (does not require a pump), and physiologic. Intragastric feeds can buffer gastric acid and help regulate gastric emptying by humoral and neural pathways better than jejunal feeds (11,12).
Common indications for enteral tube access for gastric or jejunal feeding include physiologic anorexia, neuromuscular swallowing disorders, gastrointestinal (GI) malabsorption, decreased consciousness, and upper GI tract obstruction. Other indications include injuries that increase the catabolic state of a patient, such as severe burns or illness, malignant or benign tumors, cystic fibrosis, mental health issues, and intraabdominal fistulas (Table 1) (4).
In patients who have a past medical history of delayed gastric emptying, severe reflux or esophagitis, or pulmonary aspiration, jejunal enteral feeds should be considered (3,4). The advantages of jejunal enteral feeding include minimizing the risk of aspiration and providing nutrition for patients with gastric outlet obstruction, pancreatitis, infiltrative gastric cancer, variations of gastric anatomy (gastric bypass or post gastrectomy), gastroparesis, severe gastroesophageal reflux disease (GERD), or intraabdominal fistulas (4).
Oropharyngeal or esophageal obstruction and multiple facial injuries are relative contraindications to oral and nasoenteric feeding; therefore, percutaneous enterostomy should be considered in these patients (3,4). Other relative contraindications include abdominal wall hernias, extreme obesity, previous upper GI surgery, ascites, peritoneal dialysis (PD), carcinomatosis peritonei, gastric ulcers, ventriculoperitoneal (VP) shunts, pregnancy, portal hypertension with gastric varices, the presence of a stoma, and surgical scars that interfere with enteric tube placement. Absolute contraindications for enteral access include lack of informed consent, hemodynamic instability, and uncorrectable coagulopathy (Table 2) (3,4).
Informed consent should be obtained from the patient or surrogate prior to the procedure. Patients should be advised not to eat solid food or drink liquids for six and three hours, respectively, before their procedure. The risks and benefits of stopping anticoagulant or antiplatelet medications should be weighed against the risk of bleeding before the procedure. The use of these medications might increase the risk of bleeding during and after the endoscopic procedure. The patient's unique characteristics and preexisting comorbidities should be considered before discontinuing antithrombotic medications for a short period, taking into consideration the patient's risk of having a thromboembolic event (deep venous thromboembolism, pulmonary embolism, or cerebral vascular accident) (13).
Antibiotic prophylaxis should be given 30 minutes before percutaneous endoscopic enterostomy (PEE) to reduce the risk of peristomal infection (14). Antiseptic skin agents should also be used to reduce the risk of infection. A systematic review found that patients who received prophylactic antibiotics before PEG tube placement had decreased odds of developing an infection compared to those who did not receive antibiotics (15).
Endoscopic enteral tube access techniques
Endoscopy guided nasoenteric tube placement
The technical success rate of endoscopy-guided nasoenteric tube (ENET) placement is higher than 90% (3). The ENET can be placed at the bedside with or without sedation. There are several placement methods which can be used. The most common is the pull and drag method, which has been used the longest. First, a suture is placed at the distal end of a feeding tube, which is inserted through the nose down into the stomach. The suture is then grasped using either a forceps or a hemostatic clip and dragged from the stomach into the jejunum (3,4). To help avoid migration of the enteral tube when the endoscope is removed, a hemostatic clip can be used to secure the suture to the mucosa of the jejunum (4). ENETs can also be placed using an over-the-wire technique. First, a wire is placed through the biopsy channel of the endoscope into the stomach or jejunum. Next, the scope is removed while maintaining the wire in place. Finally, the enteral tube is passed over the wire and directed into the stomach or jejunum, and the back end is then repositioned from the mouth to the nose (nasal transfer) (3,4). Fluoroscopy is often used to aid and confirm the placement of the tube. Another approach is to use an ultrathin endoscope that can be passed directly through the nose into the stomach. A guide wire can then be advanced into the jejunum through the endoscope. The endoscope is then removed while the guide wire is "exchanged" for the scope or left in place. The tube can then be passed over the wire into the jejunum (3,4). In a through-the-scope technique, a therapeutic endoscope is guided into the small bowel, and a feeding tube (8- or 10-F) is advanced through the therapeutic channel (3.7-mm). The endoscope is removed while maintaining the end of the tube in position. The last steps are to perform a nasal transfer and attach the feeding adapter to the end of the enteral tube (3,4).
Percutaneous endoscopic gastrostomy
The technical success rate of PEG tube placement ranges between 76% and 100% (16). The benefit of using the endoscopic approach (over a surgical or radiologic approach) is the ability to do it at the bedside. Common causes for unsuccessful placement of a PEG tube are inadequate transillumination, complete obstruction of the oropharynx or esophagus, and previous gastric surgery (16). Studies have found that PEGs can be safely passed through oropharyngeal or esophageal obstructions using an ultrathin endoscope (17-19). PEG tubes can be placed either trans-orally or trans-abdominally. Studies which have compared techniques for placing PEG tubes have found comparable success rates and procedure times (4,20,21).
The first technique we will be discussing is the trans-oral route. First, the stomach is insufflated with air or carbon dioxide using the endoscope. The gastric and abdominal wall is then indented with a finger while visualizing the indentation endoscopically (Figure 1). The abdomen is prepared and draped in a sterile fashion, and local anesthesia is injected into the abdominal wall and peritoneum. Next, a needle on a syringe filled with saline is inserted into the abdominal cavity while simultaneously aspirating using the "safe tract" method (4). Afterward, a scalpel is used to make an incision at the previously designated site, followed by placement of an introducer needle under direct endoscopic visualization. A guide wire is then passed through the introducer needle into the gastric cavity, and a snare or forceps passed through the endoscope's working channel is used to grasp the guide wire. The wire is then removed along with the endoscope. The feeding tube is attached to the external end of the wire exiting the mouth, and the two are pulled back together from the mouth into the stomach and through the abdominal wall. The final step is to place an external bumper approximately one centimeter above the abdominal skin to allow movement of the tube into the stoma and help avoid pressure necrosis. A repeat endoscopy may be completed to check for hemostasis and determine the final position of the tube (3,4).
In the transabdominal approach, the initial steps are similar to the previous technique. A needle on a syringe is inserted through the abdominal wall into the gastric cavity under direct endoscopic visualization, and a guide wire is introduced through the needle. However, in this technique, the tract is dilated over the wire to allow for direct placement of the tube under endoscopic visualization. Next, using a peel-away sheath, a balloon-tip gastrostomy catheter is positioned into the stomach (4). Gastric insufflation is usually lost during the introduction phase. Therefore, gastropexy with T-fasteners or a gastropexy device can be helpful (3,4). The primary benefit of this technique is the avoidance of pushing or pulling the gastrostomy tube through the oral cavity. It also avoids tumor seeding of head and neck cancers at the stoma site, and there may be an infectious benefit.
Percutaneous endoscopic gastrostomy with jejunal extension
The success rate of percutaneous endoscopic gastrostomy with jejunal extension (PEGJ) is around 90–100% (3). An endoscope is passed trans-orally into the stomach, and a guide wire is inserted into a previously placed gastrostomy tube. The gastrostomy tube is usually of greater diameter (26 French) to allow the jejunal tube to fit through it. The wire is grasped with forceps and directed into the proximal small bowel. A jejunal tube is then advanced over the guide wire to the desired position, followed by removal of the wire and forceps. Next, the jejunal extension is fitted into the gastrostomy tube. A similar technique is to pass a guide wire through a mature gastrostomy tract into the proximal small bowel using the method previously described. This allows the use of one device, instead of two, that incorporates both the gastrostomy and jejunal components into a single tube. Also, a 5–6 mm diameter ultrathin endoscope can be introduced through the mature gastrostomy tract or gastrostomy tube and advanced into the jejunum. The final step involves removing the endoscope and passing the jejunal tube over the guide wire (3,4).
Direct percutaneous endoscopic jejunostomy
The technical success rate of direct percutaneous endoscopic jejunostomy (DPEJ) has been reported between 68% and 100% (4). Poor transillumination is the most common cause of failure when placing a DPEJ (3). Poor transillumination can be caused by increased thickness of the abdominal wall or omentum; therefore, the procedure has higher success rates in thin patients (22). Also, in native anatomy, getting beyond the ligament of Treitz can be challenging using a standard gastroscope. Some authors have suggested the use of a balloon-assisted overtube, fluoroscopy, and leaving the overtube in place during the entire procedure, with a reported success rate of 96% (23). Other techniques and maneuvers include using a stiff scope, such as a pediatric colonoscope, and using a stiff guidewire (24). DPEJ can be more technically demanding than PEGJ; however, it is more durable and can decrease the need for re-intervention (3). It is imperative to have an accurate understanding of the patient's anatomy when undertaking this procedure, as patients who require DPEJ often have altered anatomy (i.e., bypass or esophagojejunostomy). The stomach is insufflated with air. Next, the endoscope is passed into the jejunum and transillumination is performed. Under direct endoscopic visualization of the jejunum, the abdominal wall is indented with a finger while visualizing the indentation. The abdomen is prepared and draped in a sterile fashion. Local anesthesia is injected into the abdominal wall and peritoneum. Next, a needle on a syringe filled with saline is inserted into the abdominal cavity while simultaneously aspirating using the "safe tract" method. An incision is made with a scalpel at the previously determined site. Next, under direct endoscopic visualization, an introducer needle is placed into the jejunum. In patients who have had a prior Billroth II, the efferent limb should be identified using fluoroscopy or by endoscopically identifying the Ampulla of Vater in the afferent limb before tube placement (22). A guide wire is then placed through the introducer needle and grasped using a snare or forceps passed through the working channel of the endoscope. The wire and endoscope are then removed together, and the remaining guide wire is attached to the outer end of the feeding tube. Next, the tube and guide wire are pulled back together from the mouth into the jejunum and through the abdominal wall. A bumper is then placed externally about one centimeter above the skin of the abdomen to allow for movement of the tube into the stoma, which can help circumvent pressure necrosis (Figure 2). A repeat endoscopy may be done to determine the final position of the DPEJ tube and to evaluate for hemostasis (3,4). This technique often requires two skilled endoscopists, fluoroscopy, and a thorough understanding of anatomy. The chosen location for direct jejunal tube placement is often close to the ligament of Treitz to avoid jejunal volvulus around the feeding tube. This type of enteral tube is often easier to place in patients with known adhesions, as the small intestine has less mobility, allowing for safer placement.
Difficult enteral access
Alimentary tract cancer
Patients with oropharyngeal or esophageal malignancy may develop malnutrition due to obstruction. Endoscopically guided tube placement can be used when blind placement of a nasogastric tube fails. Endoscopically guided nasoenteric tube placement can be considered for patients who will only require short-term feeding. A 5–6 mm diameter endoscope is passed through the nose and across the tumor into the stomach, with subsequent tube placement over a guide wire (3,4). Dilation of the esophageal obstruction is required at times to pass the endoscope through the obstruction; however, this carries an increased risk of perforating the esophagus (3,4). A feeding tube nasal transfer may be needed if using the oral route. Patients who will require long-term nutritional support, or in whom an ENET cannot be placed, may benefit from a PEG (3,4).
Pregnancy
PEG tube placement is a safe procedure without significant complications that can be performed during pregnancy between 8 and 29 weeks of gestation (25). Although PEG tube placement is rare during pregnancy, it may be required when patients have severe hyperemesis gravidarum leading to inadequate oral intake and nutritional deficiencies, which may lead to fetal morbidity or mortality (26). It is recommended to define the dome of the uterus using ultrasonography (25). To separate the PEG site from the rib cage and uterus, ultrasound, indentation, and transillumination may be used. Patients should undergo fetal monitoring throughout the procedure. As the uterus enlarges, attention should be paid to the external bumper to ensure there is no pressure, which may lead to pressure necrosis from tension between the internal and external bumpers (25). Often, loosening of the PEG is required as the fetus enlarges.
Ascites
Patients with ascites can pose a challenge when enteral access is required; ascites was historically one of the few relative contraindications to PEG tube placement, but more recent literature has demonstrated it to be a safe procedure (4). Al-Abboodi et al. examined patients with liver cirrhosis and ascites who required PEG tube placement. The authors found that patients with ascites had no difference in bleeding, surgical site infections, urinary tract infections, or mortality when compared to patients without ascites (10). Ultrasound-guided paracentesis and gastropexy can be used to decrease peri-catheter leakage and dislodgement (27). Often, aspiration of ascites is performed before the procedure to decrease the risk of bacterial contamination of the fluid.
Peritoneal dialysis and ventriculoperitoneal patients
In patients who have VP shunts, PEG can also be placed safely; however, there is a risk of infection (28). PEGs should be placed as far away as possible from the VP shunt (29). Studies have found that patients who have PEGs placed after PD placement have a high rate of developing peritonitis (30,31). It is recommended that patients receive antibiotic and antifungal prophylaxis in addition to withholding peritoneal dialysis for two to three days or longer around the time of enteral tube placement (30,31).
The care of enteral tubes is similar to that of a nasogastric tube. Enteral feeds can be started right after ENET placement if there were no complications during the procedure. Typically, in patients who had a PEE placed, tube feeds are delayed 12–24 hours due to concern for bleeding or intraabdominal leakage; however, studies have found that early feeding is safe and well tolerated (3,4). Every attempt should be made to irrigate enteral tubes with water before and after each use to prevent clogging. If a tube becomes clogged, one should consider flushing with water, pancreatic enzymes, or a bicarbonate solution. Other unclogging maneuvers include using a Fogarty balloon, biopsy brush, or commercial tube de-clogger (4). Tube replacement should be considered as the last resort.
Complications after enteral tube placement include pain at the site of tube insertion, pressure ulcer, esophageal perforation, reflux esophagitis, epistaxis, tube malposition, tube occlusion, tube dislodgement, leaking, bleeding, pneumoperitoneum, and diarrhea. Bleeding can occur in up to 1% of cases and is caused by injury to surrounding vessels and coagulopathy. Preventative measures can be taken, such as identifying abdominal wall vessels using transillumination. Bleeding can be managed by temporarily tightening the external bumper, using endoscopy to identify bleeding vessels, or correcting any underlying coagulopathy (4). Bleeding can also be managed endoscopically at the time of enteral tube placement using standard endoscopic methods such as hemostatic clips, energy devices, and injection agents. Pneumoperitoneum can occur in up to 56% of cases. The management will depend on the patient's symptoms. In an asymptomatic patient, close observation is warranted. If the pneumoperitoneum persists over 72 hours or the patient develops worrisome symptoms, then a CT scan with water-soluble oral and enteral tube contrast can be used to evaluate for any contrast extravasation. If the patient develops peritonitis, then surgery would be indicated (4). The use of CO2 during enteral access (instead of room air) allows for faster absorption of the gas and may decrease the overall complications of pneumoperitoneum. In cases where the jejunal tube migrates back into the duodenum, tube redirection is recommended. Lastly, peristomal granulomas can occur in up to 27% of cases; they can be prevented by proper wound care and managed by applying topical antimicrobials, low-dose steroids, or silver nitrate (4). Many of these complications are relatively uncommon if the appropriate technique is maintained during placement and good tube and skin care are followed post placement (3,4) (Table 3).
Endoscopic enteral access is a safe and practical procedure for patients who require nutritional support. There are various techniques with their relative safety profile and success rates as described. The technical approach should be individualized to each patient, taking into consideration patient anatomy, disease status, anatomic variances, and the practitioner’s skill level.
Conflicts of Interest: The authors have no conflicts of interest to declare.
- Naber TH, Schermer T, de Bree A, et al. Prevalence of malnutrition in nonsurgical hospitalized patients and its association with disease complications. Am J Clin Nutr 1997;66:1232-9. [Crossref] [PubMed]
- Tappenden KA, Quatrara B, Parkhurst ML, et al. Critical role of nutrition in improving quality of care: an interdisciplinary call to action to address adult hospital malnutrition. JPEN J Parenter Enteral Nutr 2013;37:482-97. [Crossref] [PubMed]
- Itkin M, DeLegge MH, Fang JC, et al. Multidisciplinary practical guidelines for gastrointestinal access for enteral nutrition and decompression from the Society of Interventional Radiology and American Gastroenterological Association (AGA) Institute, with endorsement by Canadian Interventional Radiological Association (CIRA) and Cardiovascular and Interventional Radiological Society of Europe (CIRSE). Gastroenterology 2011;141:742-65. [Crossref] [PubMed]
- Yolsuriyanwong K, Chand B. Update on endoscopic enteral access. Tech Gastrointest Endosc 2018;20:172-81. [Crossref]
- Pritchard C, Duffy S, Edington J, et al. Enteral nutrition and oral nutrition supplements: a review of the economics literature. JPEN J Parenter Enteral Nutr 2006;30:52-9. [Crossref] [PubMed]
- Braunschweig CL, Levy P, Sheean PM, et al. Enteral compared with parenteral nutrition: a meta-analysis. Am J Clin Nutr 2001;74:534-42. [Crossref] [PubMed]
- Gomes CA Jr, Andriolo RB, Bennett C, et al. Percutaneous endoscopic gastrostomy versus nasogastric tube feeding for adults with swallowing disturbances. Cochrane Database Syst Rev 2015.CD008096. [PubMed]
- Grant JP. Comparison of percutaneous endoscopic gastrostomy with Stamm gastrostomy. Ann Surg 1988;207:598-603. [Crossref] [PubMed]
- Ho C-S, Yee AC, McPherson R. Complications of surgical and percutaneous nonendoscopic gastrostomy: review of 233 patients. Gastroenterology 1988;95:1206-10. [Crossref] [PubMed]
- Al-Abboodi Y, Ridha A, Fasullo M, et al. Risks of PEG tube placement in patients with cirrhosis-associated ascites. Clin Exp Gastroenterol 2017;10:211-4. [Crossref] [PubMed]
- Valentine RJ, Turner JW, Borman KR, et al. Does nasoenteral feeding afford adequate gastroduodenal stress prophylaxis? Crit Care Med 1986;14:599-601. [Crossref] [PubMed]
- Gauderer MW, Ponsky JL, Izant RJ. Gastrostomy without laparotomy: a percutaneous endoscopic technique. J Pediatr Surg 1980;15:872-5. [Crossref] [PubMed]
- Acosta RD, Abraham NS, Chandrasekhara V, et al. The management of antithrombotic agents for patients undergoing GI endoscopy. Gastrointest Endosc 2016;83:3-16. [Crossref] [PubMed]
- Jain NK, Larson DE, Schroeder KW, et al. Antibiotic prophylaxis for percutaneous endoscopic gastrostomy: a prospective, randomized, double-blind clinical trial. Ann Intern Med 1987;107:824-8. [Crossref] [PubMed]
- Lipp A, Lusardi G. Systemic antimicrobial prophylaxis for percutaneous endoscopic gastrostomy. Cochrane Database Syst Rev 2013.CD005571. [PubMed]
- Kwon RS, Banerjee S, Desilets D, et al. Enteral nutrition access devices. Gastrointest Endosc 2010;72:236-48. [Crossref] [PubMed]
- Takeshita N, Uesato M, Shuto K, et al. A 3-step gradual dilation method: a new safe technique of percutaneous endoscopic gastrostomy for obstructive esophageal cancer. Surg Laparosc Endosc Percutan Tech 2014;24:e140-2. [Crossref] [PubMed]
- Chadha KS, Thatikonda C, Schiff M, et al. Outcomes of percutaneous endoscopic gastrostomy tube placement using a T-fastener gastropexy device in head and neck and esophageal cancer patients. Nutr Clin Pract 2010;25:658-62. [Crossref] [PubMed]
- Yagishita A, Kakushima N, Tanaka M, et al. Percutaneous endoscopic gastrostomy using the direct method for aerodigestive cancer patients. Eur J Gastroenterol Hepatol 2012;24:77-81. [Crossref] [PubMed]
- Horiuchi A, Nakayama Y, Tanaka N, et al. Prospective randomized trial comparing the direct method using a 24 Fr bumper-button-type device with the pull method for percutaneous endoscopic gastrostomy. Endoscopy 2008;40:722-6. [Crossref] [PubMed]
- Maetani I, Tada T, Ukita T, et al. PEG with introducer or pull method: a prospective randomized comparison. Gastrointest Endosc 2003;57:837-41. [Crossref] [PubMed]
- Ginsberg GG. Direct percutaneous endoscopic jejunostomy. Tech Gastrointest Endosc 2001;3:42-9. [Crossref]
- Velázquez-Aviña J, Beyer R, Díaz-Tobar CP, et al. New method of direct percutaneous endoscopic jejunostomy tube placement using balloon-assisted enteroscopy with fluoroscopy. Dig Endosc 2015;27:317-22. [Crossref] [PubMed]
- Palmer LB, McClave SA, Bechtold ML, et al. Tips and tricks for deep jejunal enteral access: modifying techniques to maximize success. Curr Gastroenterol Rep 2014;16:409. [Crossref] [PubMed]
- Senadhi V, Chaudhary J, Dutta S. Percutaneous endoscopic gastrostomy placement during pregnancy in the critical care setting. Endoscopy 2010;42:E358-9. [Crossref] [PubMed]
- Savas N. Gastrointestinal endoscopy in pregnancy. World J Gastroenterol 2014;20:15241-52. [Crossref] [PubMed]
- Lee MJ, Saini S, Brink J, et al. Malignant small bowel obstruction and ascites: not a contraindication to percutaneous gastrostomy. Clin Radiol 1991;44:332-4. [Crossref] [PubMed]
- Oterdoom LH, Oterdoom DM, Ket JC, et al. Systematic review of ventricular peritoneal shunt and percutaneous endoscopic gastrostomy: a safe combination. J Neurosurg 2017;127:899-904. [Crossref] [PubMed]
- Vui HC, Lim WC, Law HL, et al. Percutaneous endoscopic gastrostomy in patients with ventriculoperitoneal shunt. Med J Malaysia 2013;68:389-92. [PubMed]
- Fein PA, Madane SJ, Jorden A, et al. Outcome of percutaneous endoscopic gastrostomy feeding in patients on peritoneal dialysis. Adv Perit Dial 2001;17:148-52. [PubMed]
- von Schnakenburg C, Feneberg R, Plank C, et al. Percutaneous endoscopic gastrostomy in children on peritoneal dialysis. Perit Dial Int 2006;26:69-77. [PubMed]
Cite this article as: Eguia E, Chand B. Endoscopic enteral access. Ann Laparosc Endosc Surg 2019;4:50. | <urn:uuid:7933e14f-2408-43cd-85a0-91e4fd1c142a> | CC-MAIN-2019-47 | http://ales.amegroups.com/article/view/5203/html | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668594.81/warc/CC-MAIN-20191115065903-20191115093903-00300.warc.gz | en | 0.856071 | 6,775 | 2.515625 | 3 |
County Wexford is an eastern county in Ireland, bordered by the Irish Sea. It is part of the South-East Region, it is named after the town of Wexford and was based on the historic Gaelic territory of Hy Kinsella, whose capital was Ferns. Wexford County Council is the local authority for the county; the population of the county was 149,722 at the 2016 census. The county is rich in evidence of early human habitation. Portal tombs exist at Newbawn -- and date from the Neolithic period or earlier. Remains from the Bronze Age period are far more widespread. Early Irish tribes formed the Kingdom of Uí Cheinnsealaig, an area, larger than the current County Wexford. County Wexford was one of the earliest areas of Ireland to be Christianised, in the early 5th century. From 819 onwards, the Vikings invaded and plundered many Christian sites in the county. Vikings settled at Wexford town near the end of the 9th century. In 1169, Wexford was the site of the invasion of Ireland by Normans at the behest of Diarmuid Mac Murrough, King of Uí Cheinnsealaig and king of Leinster.
This was followed by the subsequent colonisation of the country by the Anglo-Normans. The native Irish began to regain some of their former territories in the 14th century in the north of the county, principally under Art MacMurrough Kavanagh. Under Henry VIII, the great religious houses were dissolved, 1536–41. On 23 October 1641, a major rebellion broke out in Ireland, County Wexford produced strong support for Confederate Ireland. Oliver Cromwell and his English Parliamentarian Army captured it; the lands of the Irish and Anglo-Normans were confiscated and given to Cromwell's soldiers as payment for their service in the Parliamentarian Army. At Duncannon, in the south-west of the county, James II, after his defeat at the Battle of the Boyne, embarked for Kinsale and to exile in France. County Wexford was the most important area in which the Irish Rebellion of 1798 was fought, during which significant battles occurred at The Battle of Oulart Hill during the 1798 rebellion. Vinegar Hill and New Ross.
The famous ballad "Boolavogue" was written in remembrance of the Wexford Rising. At Easter 1916, a small rebellion occurred on cue with that in Dublin. During World War II, German planes bombed Campile. In 1963 John F. Kennedy President of the United States, visited the county and his ancestral home at Dunganstown, near New Ross. Wexford is the 13th largest of Ireland's thirty-two counties in area, 14th largest in terms of population, it is the largest of Leinster's 12 counties in size, fourth largest in terms of population. The county is located in the south-east corner of the island of Ireland, it is bounded by the sea on two sides—on the south by the Atlantic Ocean and on the east by St. George's Channel and the Irish Sea; the River Barrow forms its western boundary. The Blackstairs Mountains form part of the boundary to the north, as do the southern edges of the Wicklow Mountains; the adjoining counties are Waterford, Kilkenny and Wicklow. County Town: Wexford Market Town: Gorey County Wexford is known as Ireland's "sunny southeast" because, in general, the number of hours of sunshine received daily is higher than in the rest of the country.
This has resulted in Wexford becoming one of the most popular places in Ireland in. The county has a changeable, oceanic climate with few extremes; the North Atlantic Drift, a continuation of the Gulf Stream, moderates winter temperatures. There is a meteorological station located at Rosslare Harbour. January and February are the coldest months, with temperatures ranging from 4–8 °C on average. July and August are the warmest months, with temperatures ranging from 12–18 °C on average; the prevailing winds are from the south-west. Precipitation falls throughout the year. Mean annual rainfall is 800–1,200 millimetres; the county receives less snow than more northerly parts of Ireland. Heavy snowfalls are rare, but can occur; the one exception is Mount Leinster, visible from a large portion of the county, covered with snow during the winter months. Frost is frequent in coastal areas. Low-lying fertile land is the characteristic landscape of the county; the highest point in the county is Mount Leinster at 795 metres, in the Blackstairs Mountains in the north-west on the boundary with County Carlow.
Other high points: Black Rock Mountain, 599 m. It is located within County Wexford. Croghan Mountain on the Wexford-Wicklow border - 606 m Annagh Hill, 454 m, near the Wicklow border Slieveboy, 420 m Notable hills include: Carrigbyrne Hill; the major rivers are the Barrow. At 192 km in length, the river Barrow is the second-longest river on the island of Ireland. Smaller rivers of note are the Owenduff, Corrock, Boro, Owenavorragh and Bann rivers. There are no significant fresh-water lakes in the county. Small seaside lakes or lagoons exist at two locations – one is called Lady's Island Lake and the other Tacumshin Lake; the Wexford Cot is a flat bottomed boat used for fishing on the tidal mudflats in Wexford a canoe shaped
An Garda Síochána, more referred to as the Gardaí or "the Guards", is the police service of the Republic of Ireland. The service is headed by the Garda Commissioner, appointed by the Irish Government, its headquarters are in Dublin's Phoenix Park. Since the formation of the Garda Síochána in 1923, it has been a predominantly unarmed force, more than three-quarters of the force do not carry firearms; as of 31 July 2018, the police service had 2,310 civilian staff. Operationally, the Garda Síochána is organised into six geographical regions: the Eastern, Southern, South-Eastern and Dublin Metropolitan Regions. In addition to its crime detection and prevention roles, road safety enforcement duties, community policing remit, the police service has some diplomatic and witness protection responsibilities and border control functions; the service was named the Civic Guard in English, but in 1923 it became An Garda Síochána in both English and Irish. This is translated as "the Guardian of the Peace". Garda Síochána na hÉireann appears on its logo but is used elsewhere.
The full official title of the police service is used in speech. How it is referred to depends on the register being used, it is variously known as An Garda Síochána. Although Garda is singular, in these terms it is used like police. An individual officer is called a garda, or, informally, a "guard". A police station is called a Garda station. Garda is the name of the lowest rank within the force. "Guard" is the most common form of address used by members of the public speaking to a garda on duty. A female officer was once referred to as a bangharda; this term was abolished in 1990, but is still used colloquially in place of the now gender-neutral garda. The service is headed by the Garda Commissioner, whose immediate subordinates are two Deputy Commissioners – in charge of "Policing and Security" and "Governance and Strategy" – and a Chief Administrative Officer with responsibility for resource management. There is an Assistant Commissioner for each of the six geographical Regions, along with a number dealing with other national support functions.
The six geographical Garda Regions, each overseen by an Assistant Commissioner, are: Dublin Metropolitan Region Eastern Northern Southern South-Eastern WesternAt an equivalent or near-equivalent level to the Assistant Commissioners are the positions of Chief Medical Officer, Executive Director of Information and Communications Technology, Executive Director of Finance. Directly subordinate to the Assistant Commissioners are 40 Chief Superintendents, about half of whom supervise what are called Divisions; each Division contains a number of Districts, each commanded by a Superintendent assisted by a team of Inspectors. Each District contains a number of Subdistricts, which are commanded by Sergeants; each Subdistrict contains only one Garda station. A different number of Gardaí are based at each station depending on its importance. Most of these stations employ the basic rank of Garda, referred to as the rank of Guard until 1972; the most junior members of the service are students, whose duties can vary depending on their training progress.
They are assigned clerical duties as part of their extracurricular studies. The Garda organisation has 2,000 non-officer support staff encompassing a range of areas such as human resources, occupational health services and procurement, internal audit, IT and telecommunications and fleet management, scenes-of-crime support and analysis, training and general administration; the figure includes industrial staff such as traffic wardens and cleaners. It is ongoing government policy to bring the level of non-officer support in the organisation up to international standards, allowing more officers to undertake core operational duties; the Garda Síochána Act 2005 provided for the establishment of a Garda Reserve to assist the force in performing its functions, supplement the work of members of the Garda Síochána. The intent of the Garda Reserve is "to be a source of local strength and knowledge". Reserve members are to carry out duties defined by the Garda Commissioner and sanctioned by the Minister for Justice and Equality.
With reduced training of 128 hours, these duties and powers must be executed under the supervision of regular members of the Service. The first batch of 36 Reserve Gardaí graduated on 15 December 2006 at the Garda College, in Templemore; as of October 2016, there were 789 Garda Reserve members with further training scheduled for 2017. Special Crime Operations consists of: Garda National Bureau of Criminal Investigation Criminal Assets Bureau Garda National Drugs and Organised Crime Bureau Garda National Economic Crime Bureau Garda National Cyber Crime Bureau Garda National Immigration Bureau Garda National Protective Services Bureau Technical Bureau Special Tactics & Operations Command: Emergency Response Unit Armed Support Units Operational Support Services that consists of: Air Support Unit Water Unit Dog Uni
County Monaghan is a county in Ireland. It is in the province of Ulster, it is named after the town of Monaghan. Monaghan County Council is the local authority for the county; the population of the county is 60,483 according to the 2011 census. The county has existed since 1585, when the Mac Mathghamhna rulers of Airgíalla agreed to join the Kingdom of Ireland. Following the 20th-century Irish War of Independence and the signing of the Anglo-Irish Treaty, Monaghan was one of three Ulster counties to join the Irish Free State rather than Northern Ireland. Monaghan is the fifth smallest of the Republic's 26 counties in area and fourth smallest by population, it is the smallest of Ulster's nine counties in size and the smallest in terms of population. Cremorne Dartree Farney Monaghan Trough 1. Monaghan = 7,452 2. Carrickmacross = 4,925 3. Castleblayney = 3,634 4. Clones = 1,761 5. Ballybay = 1,461 Notable mountains include Mullyash Mountain and Coolberrin Hill. Lakes include Lough Avaghon, Dromore Lough, Drumlona Lough, Lough Egish, Emy Lough, Lough Fea, Inner Lough, Muckno Lough and White Lough.
Notable rivers include the River Glyde, the Ulster Blackwater and the Dromore River. Monaghan has a number including Rossmore Forest and Dartrey Forest. Managed by Coillte since 1988, the majority of trees are conifers. Due to a long history of intensive farming and recent intensive forestry practices, only small pockets of native woodland remain; the Finn Bridge is a border crossing point over the River Finn to County Fermanagh. It is close to Scotshouse. Lead was mined in County Monaghan. Mines included Lisdrumgormley Lead Mines. In 1585, the English lord deputy of Ireland, Sir John Perrot, visited the area and met the Irish chieftains, they requested that Ulster be divided into counties and land in the kingdom of Airgíalla be apportioned to each of the McMahon chiefs. A commission was established to accomplish this and County Monaghan came into being; the county was subdivided into five baronies: Farney, Dartrey and Truagh, left under the control of the McKenna chieftains. After the defeat of the rebellion of Hugh O'Neill, The O'Neill and the Ulster chieftains in 1603, the county was not planted like the other counties of Ulster.
The lands were instead left in the hands of the native chieftains. In the Irish Rebellion of 1641 the McMahons and their allies joined the general rebellion of Irish Catholics. Following their defeat, some colonisation of the county took place with Scottish and English families. County Monaghan is traversed by the derelict Ulster Canal, however Waterways Ireland are embarking on a scheme to reopen the canal from Lough Erne into Clones; the Ulster Railway linked Monaghan with Armagh and Belfast in 1858 and with the Dundalk and Enniskillen Railway at Clones in 1863. It became part of the Great Northern Railway in 1876; the partition of Ireland in 1922 turned the boundary with County Armagh into an international frontier, after which trains were delayed by customs inspections. In 1957 the Government of Northern Ireland made the GNR Board close the line between Portadown and the border, giving the GNRB no option but to withdraw passenger services between the border and Clones as well. CIÉ took over the remaining section of line between Clones and Glaslough in 1958 but withdrew goods services between Monaghan and Glaslough in 1959 and between Clones and Monaghan in 1960, leaving Monaghan with no railway service.
Monaghan is divided into four local electoral areas: Carrickmacross, Castleblayney and Monaghan. The towns of Ballybay, Castleblayney and Monaghan are represented by nine-member town councils which deal with local matters such as the provision of utilities and housing. For the purposes of elections to Dáil Éireann, the county is part of the Cavan–Monaghan Constituency which elects five T. D.s. In the 2011 general election, there was a voter turnout of 72.7%. For elections to the European Parliament, the county is part of the Midlands–North-West constituency. Politically, the county is considered a stronghold for Sinn Féin, the largest party in the county, followed by Fine Gael. County Monaghan is the birthplace of the poet and writer Patrick Kavanagh, who based much of his work in the county. Kavanagh is one of the most significant figures in 20th-century Irish poetry; the poems "Stony Grey Soil" and "Shancoduff" refer to the county. Monaghan has produced several successful artists. Chief among these is George Collie, born in Carrickmacross and trained at the Dublin Metropolitan School of Art.
He was a prolific exhibitor at the Royal Hibernian Academy throughout his lifetime and is represented by works in the collection of the National Gallery of Ireland and the Ulster Museum. Monaghan was the home county of the Irish writer Sir Shane Leslie, 3rd Baronet of Glaslough, who lived at Castle Leslie in the north-east corner of the county. A Catholic convert, Irish nationalist and first cousin of Winston Churchill, Prime Minister of the United Kingdom, Leslie became an important literary figure in the early 1900s, he was a close friend of many politicians and writers of the day including the American novelist F. Scott Fitzgerald, who dedicated his second novel, The Beautiful and Damned, to Leslie. Monaghan County Museum is recognised as one of the l
Confession, in many religions, is the acknowledgment of one's sins or wrongs. Buddhism has been from its inception a tradition of renunciation and monasticism. Within the monastic framework of the sangha regular confession of wrongdoing to other monks is mandatory. In the suttas of the Pali Canon Bhikkhus sometimes confessed their wrongdoing to the Buddha himself; that part of the Pali Canon called the Vinaya requires that monks confess their individual sins before the bi-weekly convening for the recitation of the Patimokkha. In Catholic teaching, the Sacrament of Penance is the method of the Church by which individual men and women confess sins committed after baptism and have them absolved by God through the administration of a Priest; the Catholic rite, obligatory at least once a year for serious sin, is conducted within a confessional box, booth or reconciliation room. This sacrament is known by many names, including penance and confession. While official Church publications refer to the sacrament as "Penance", "Reconciliation" or "Penance and Reconciliation", many laypeople continue to use the term "Confession" in reference to the Sacrament.
For the Catholic Church, the intent of this sacrament is to provide healing for the soul as well as to regain the grace of God, lost by sin. A perfect act of contrition, wherein the penitent expresses sorrow for having offended God and not out of fear of eternal punishment outside of confession removes the eternal punishment associated with mortal sin but a Catholic is obliged to confess his or her mortal sins at the earliest opportunity. In theological terms, the priest acts in persona Christi and receives from the Church the power of jurisdiction over the penitent; the Council of Trent quoted John 20:22-23 as the primary Scriptural proof for the doctrine concerning this sacrament, but Catholics consider Matthew 9:2-8, 1 Corinthians 11:27, Matthew 16:17-20 to be among the Scriptural bases for the sacrament. The Catholic Church teaches that sacramental confession requires three "acts" on the part of the penitent: contrition, disclosure of the sins, satisfaction; the basic form of confession has not changed for centuries, although at one time confessions were made publicly.
The penitent begins sacramental confession by saying, "Bless me Father, for I have sinned. It has been since my last confession." The penitent must confess what he/she believes to be grave and mortal sins, in both kind and number, in order to be reconciled with God and the Church. The sinner may confess venial sins. According to the Catechism, "without being necessary, confession of everyday faults is strongly recommended by the Church. Indeed the regular confession of our venial sins helps us form our conscience, fight against evil tendencies, let ourselves be healed by Christ and progress in the life of the Spirit. By receiving more through this sacrament the gift of the Father's Mercy, we are spurred to be merciful as He is merciful". "When Christ's faithful strive to confess all the sins that they can remember, they undoubtedly place all of them before the divine mercy for pardon." As a result, if the confession was good, "the sacrament was valid" the penitent inadvertently forgot some mortal sins, which are forgiven as well.
As a safeguard not to become something like "subconsciously inadvertent" to avoid saying some sins, these must be confessed in the next confession. It is allowed, however allowed, except for certain devotional purposes sensible to concentrate in one's examination of conscience on the time since the last Confession. In general, Eastern Catholic and Orthodox Christians choose an individual to trust as his or her spiritual guide. In most cases this may be a starets; this person is referred to as one's "spiritual father". Once chosen, the individual turns to their spiritual guide for advice on their spiritual development, confessing sins, asking advice. Orthodox Christians tend to confess only to this individual and the closeness created by this bond makes the spiritual guide the most qualified in dealing with the person, so much so that no one can override what a spiritual guide tells his charges. What is confessed to one's spiritual guide is protected by the same seal as would be any priest hearing a confession.
Only an ordained priest may pronounce the absolution. Confession does not take place in a confessional, but in the main part of the church itself before an analogion set up near the iconostasion. On the analogion is placed a Gospel Book and a blessing cross; the confession takes place before an icon of Jesus Christ. Orthodox understand that the confession is not made to the priest, but to Christ, the priest stands only as witness and guide. Before confessing, the penitent venerates the Gospel Book and cross, places the thumb and first two fingers of his right hand on the feet of Christ as he is depicted on the cross; the confessor will read an admonition warning the penitent to make a full confession, holding nothing back. As with administration of other sacraments, in cases of emergency confession may be heard anywhere. For this reason in the Russian Orthodox Churc
Dundalk is the county town of County Louth, Ireland. It is on the Castletown River, which flows into Dundalk Bay, is near the border with Northern Ireland, halfway between Dublin and Belfast, it has associations with the mythical warrior hero Cú Chulainn. The Dundalk area has been inhabited since at least 3500 BC during the Neolithic period. A tangible reminder of this early presence can still be seen in the form of the Proleek Dolmen, the eroded remains of a megalithic tomb located in the Ballymascanlon area to the north of Dundalk. Celtic culture arrived in Ireland around 500 BC. According to the legendary historical accounts, the group settled in North Louth were known as the Conaille Muirtheimne and took their name from Conaill Carnagh, legendary chief of the Red Branch Knights of Ulster, their land now forms lower Dundalk. Dundalk had been developed as an unwalled Sráid Bhaile; the streets passed along a gravel ridge which runs from the present day Bridge Street in the North, through Church Street to Clanbrassil Street to Earl Street, to Dublin Street.
In 1169 the Normans set about conquering large areas. By 1185 a Norman nobleman named Bertram de Verdun erected a manor house at Castletown Mount and subsequently obtained the town's charter in 1189. Another Norman family, the De Courcys, led by John de Courcy, settled in the Seatown area of Dundalk, the "Nova Villa de Dundalke". Both families assisted in the fortification of the town, building walls and other fortification in the style of a Norman fortress; the town of Dundalk was developed as it lay close to an easy bridging point over the Castletown River and as a frontier town, the northern limit of The Pale. In 1236 Bertram's granddaughter, Rohesia commissioned Castle Roche to fortify the region, to offer protection from the Irish territory of Ulster; the town was sacked during the Bruce campaign. After taking possession of the town Edward Bruce proclaimed himself King of Ireland and remained here for nearly a whole year before his army was defeated and himself slain after being attacked by John de Birmingham.
Dundalk had been under Royalist control for centuries, until 1647 when it became occupied by The Northern Parliamentary Army of Colonel George Monck. The modern town of Dundalk owes its form to Lord Limerick in the 17th century, he commissioned the construction of streets leading to the town centre. In addition to the demolition of the old walls and castles, he had new roads laid out eastwards of the principal streets; the most important of these new roads connected a newly laid down Market Square, which still survives, with a linen and cambric factory at its eastern end, adjacent to what was once an army cavalry and artillery barracks. In the 19th century, the town grew in importance and many industries were set up in the local area, including a large distillery; this development was helped by the opening of railways, the expansion of the docks area or'Quay' and the setting up of a board of commissioners to run the town. The partition of Ireland in May 1921 turned Dundalk into a border town and the Dublin–Belfast main line into an international railway.
The Irish Free State opened customs and immigration facilities at Dundalk to check goods and passengers crossing the border by train. The Irish Civil War of 1922–23 saw a number of confrontations in Dundalk; the local Fourth Northern Division of the Irish Republican Army under Frank Aiken, who took over Dundalk barracks after the British left, tried to stay neutral but 300 of them were detained by the National Army in August 1922. However, a raid on Dundalk Gaol freed over 100 other anti-treaty prisoners. Aiken did not try to hold the town and before withdrawing he called for a truce in a meeting in the centre of Dundalk; the 49 Infantry Battalion and 58 Infantry Battalion of the National Army were based in Dundalk along with No.8 armoured locomotive and two armoured cars of their Railway Protection Corps. For several decades after the end of the Civil War, Dundalk continued to function as a market town, a regional centre, a centre of administration and manufacturing, its position close to the border gave it considerable significance during the "Troubles" of Northern Ireland.
Many people were sympathetic to the cause of the Provisional Irish Republican Sinn Féin. It was in this period that Dundalk earned the nickname'El Paso', after the Texan border town of the same name on the border with Mexico. In December 2000, Taoiseach Brian Cowen welcomed US president Bill Clinton to Dundalk to mark the conclusion of the Troubles and the success of the Northern Ireland peace process. Cowen said: Dundalk is a meeting point between Dublin and Belfast, has played a central role in the origin and evolution of the peace process. More than most towns in our country, Dundalk, as a border town, has appreciated the need for a lasting and just peace. On 1 September 1973, the 27 Infantry Battalion of the Irish Army was established with its Headquarters in Dundalk barracks, renamed Aiken Barracks in 1986 in honour of Frank Aiken. Dundalk suffered economically when Irish membership of the European Economic Community in the 1970s exposed local manufacturers to foreign competition that they were ill-equipped to cope with.
The result was the closure of many local factories, resulting in the highest unemployment rate in Leinster, Ireland's richest province. High unemployment produced serious s
Colm O'Gorman is the Executive Director of Amnesty International Ireland. He is founder and former director of One in Four, he is a survivor of clerical sexual abuse, first came to public attention by speaking out against the perpetrators. O'Gorman subsequently founded One in Four, an Irish charity which supports men and women who have been sexually abused and/or suffered sexual violence, he was a Senator in 2007, representing the Progressive Democrats. Colm O'Gorman was born in County Wexford, his father was Seán O'Gorman, of Adamstown, County Wexford – a farmer and local Fianna Fáil politician. Seán O'Gorman was a member of Wexford County Council, moved with his family to live in Wexford town, he twice stood unsuccessfully as a Fianna Fáil candidate in general elections: in 1969 and 1973. In 2002, Colm O'Gorman settled near County Wexford, he is raising two children with his husband Paul. When this was revealed it generated debate on fosterships in the Irish media; as an adolescent in County Wexford – between the age of 15 and 18 – O'Gorman was sexually abused by Fr Seán Fortune.
The abuse occurred between 1981 and 1983. He became the first of Fortune's many victims to come forward and report the assaults to the Irish police. In 1998, he sued the Bishop of the Roman Catholic Diocese of Ferns and the Dublin Papal Nuncio, inter alia the Pope, John Paul II, who claimed diplomatic immunity, his case against the Catholic Diocese of Ferns was settled in 2003 with an admission of negligence and the payment of damages – in April 2003, O'Gorman was awarded €300,000 damages. O'Gorman documented his lawsuit in the BBC documentary Suing the Pope, he campaigned to set up the Ferns Inquiry, the first Irish state inquiry into clerical sexual abuse. He founded the charity One in Four in London in 1999 and established its sister organisation in Ireland in 2002, he is a well-known figure in Irish media as an advocate of child sexual abuse victims and a commentator and campaigner on sexual violence. He was named one of the ESB/Rehab People of the Year and received a TV3/Daily Star "Best of Irish" award in 2002, one of the Sunday Independent/Irish Nationwide People of the Year in 2003 and in the same year he was awarded the James Larkin Justice Award by the Labour Party for his contribution to social justice in Ireland.
In 2006 O'Gorman filmed Sex Crimes and the Vatican for the BBC Panorama documentary series, which claimed that the Vatican has used Crimen sollicitationis secret document to silence allegations of sexual abuse by priests and claimed Crimen sollicitationis was enforced for 20 years by Cardinal Joseph Ratzinger before he became Pope Benedict XVI. In April 2006, he announced that he would stand for the Progressive Democrats, a pro-free market liberal political party, in the 2007 general election in the Wexford constituency. On 3 May 2007, he was appointed to the Senate by the Taoiseach to fill the vacancy caused by the death of Senator Kate Walsh, he was not elected in the 2007 general election in Wexford polling 3% of the vote. He was not re-appointed to the 23rd Seanad in July 2007, he is the Executive Director of Amnesty International Ireland, appears to talk about human rights in Ireland and around the world. Official Colm O'Gorman site One In Four official site One In Four UK Apology and settlement from the Church Telegraph interview with O'Gorman
The Eucharist is a Christian rite, considered a sacrament in most churches, as an ordinance in others. According to the New Testament, the rite was instituted by Jesus Christ during the Last Supper. Through the Eucharistic celebration Christians remember both Christ's sacrifice of himself on the cross and his commission of the apostles at the Last Supper; the elements of the Eucharist, sacramental bread and sacramental wine, are consecrated on an altar and consumed thereafter. Communicants, those who consume the elements, may speak of "receiving the Eucharist", as well as "celebrating the Eucharist". Christians recognize a special presence of Christ in this rite, though they differ about how and when Christ is present. While all agree that there is no perceptible change in the elements, Roman Catholics believe that their substances become the body and blood of Christ. Lutherans believe the true body and blood of Christ are present "in, under" the forms of the bread and wine. Reformed Christians believe in a real spiritual presence of Christ in the Eucharist.
Others, such as the Plymouth Brethren and the Christadelphians, take the act to be only a symbolic reenactment of the Last Supper and a memorial. In spite of differences among Christians about various aspects of the Eucharist, there is, according to the Encyclopædia Britannica, "more of a consensus among Christians about the meaning of the Eucharist than would appear from the confessional debates over the sacramental presence, the effects of the Eucharist, the proper auspices under which it may be celebrated"; the Greek noun εὐχαριστία, meaning "thanksgiving", appears fifteen times in the New Testament but is not used as an official name for the rite. Do this in remembrance of me"; the term "Eucharist" is that by which the rite is referred to by the Didache, Ignatius of Antioch and Justin Martyr. Today, "the Eucharist" is the name still used by Eastern Orthodox, Oriental Orthodox, Roman Catholics, Anglicans and Lutherans. Other Protestant or Evangelical denominations use this term, preferring either "Communion", "the Lord's Supper", "Memorial", "Remembrance", or "the Breaking of Bread".
Latter-day Saints call it "Sacrament". The Lord's Supper, in Greek Κυριακὸν δεῖπνον, was in use in the early 50s of the 1st century, as witnessed by the First Epistle to the Corinthians: When you come together, it is not the Lord's Supper you eat, for as you eat, each of you goes ahead without waiting for anybody else. One remains hungry, another gets drunk; those who use the term "Eucharist" use the expression "the Lord's Supper", but it is the predominant term among Evangelical and Pentecostal churches, who avoid using the term "Communion". They refer to the observance as an "ordinance"; those Protestant churches avoid the term "sacrament".'Holy Communion' are used by some groups originating in the Protestant Reformation to mean the entire Eucharistic rite. Others, such as the Catholic Church, do not use this term for the rite, but instead mean by it the act of partaking of the consecrated elements; the term "Communion" is derived from Latin communio, which translates Greek κοινωνία in 1 Corinthians 10:16: The cup of blessing which we bless, is it not the communion of the blood of Christ?
The bread which we break, is it not the communion of the body of Christ? The phrase appears in various related forms five times in the New Testament in contexts which, according to some, may refer to the celebration of the Eucharist, in either closer or symbolically more distant reference to the Last Supper, it is the term used by the Plymouth Brethren. The "Blessed Sacrament" and the "Blessed Sacrament of the Altar" are common terms used by Catholics and some Anglicans for the consecrated elements when reserved in a tabernacle. "Sacrament of the Altar" is in common use among Lutherans. In The Church of Jesus Christ of Latter-day Saints the term "The Sacrament" is used of the rite. Mass is used in the Latin Rite of the Catholic Church, the Lutheran Churches, by many Anglicans, in some other forms of Western Christianity. At least in the Catholic Church, the Mass is a longer rite which always consists of two main parts: the Liturgy of the Word and the Liturgy of the Eucharist, in that order; the Liturgy of the Word consists of readings from scripture (the | <urn:uuid:c3d918b7-b454-4bf8-949b-69e30e99cbf9> | CC-MAIN-2019-47 | https://wikivisually.com/wiki/Ferns_Report | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670135.29/warc/CC-MAIN-20191119093744-20191119121744-00379.warc.gz | en | 0.966622 | 7,386 | 3.25 | 3 |
There seem to be many misconceptions about feelings. Learning about feelings isn’t something we’re often taught in school, even though they play a significant role in our lives. We generally don't talk much in casual conversation about how we experience our feelings or the beliefs we have surrounding feelings. However, we receive plenty of messages about feelings in our culture, some of which aren't accurate or helpful. Ergo, here's a list of eight truths about feelings.
1. A well-educated professional in the field of psychology recently posted on LinkedIn that you can only have one emotion at a time, and many others agreed. Yet I’m sure we can all remember times when we felt both sad and angry simultaneously. Years ago, I went to Europe by myself to meet up with a friend from college, and I was both exuberant and anxious at the same time. We can also hold conflicting feelings simultaneously: often, after the death of a very ill loved one, a caregiver will feel both relief and sadness. We can feel two, three, or more feelings at once. Sometimes this can be overwhelming or confusing. It’s also normal.
2. Today I saw a sign that read, “How do you feel today?” Really, it’s much more accurate to ask, “How do you feel right now?” Our feelings often change many times throughout the day. Our feelings are not static; they are constantly in flux. Learning to be aware of your feelings and how they change can help you be more in tune with the reality of your life. Psychologist Rick Hanson has said "The brain is like Velcro for negative experiences and like Teflon for positive ones." Similarly, negative emotions can stand out more to us and have a greater impact than positive emotions. When we are aware of our feelings and how they change, we realize that "negative" feelings are often less permanent and more short-lived, even though we may attend to them more and "hang onto them" more. By being more aware of your feelings throughout the course of a day, you will have a more accurate appraisal of your experience. Instead of having a "bad day," you are more likely to recognize positive emotions and experiences as well as the negative emotions and experiences.
3. Years ago, I went to a seminar presented by a well-respected psychologist. Somewhat jokingly, he stated that there’s one emotion a therapist won’t say he or she feels: anger. It reminded me of another class I attended on anger: after an incident, the presenter seemed to be feeling angry, and when a participant shared with the presenter and the class that the presenter seemed angry, the presenter vehemently objected. Anger is a normal and healthy emotion, yet it seems feeling angry can be taboo, particularly for women. It might be more accurate to say our society tends to be more accepting of men feeling angry than of men feeling hurt or sad. Any feeling is “okay” to have; there are no “wrong” feelings. A behavior or action taken as a result of a feeling might not be healthy or constructive, but the feeling itself is okay. Everyone is entitled to their feelings.
4. You are responsible for how you deal with your feelings—acting out on a feeling can have significant consequences. Discomfort, pain, and stress are part of life and we all need to "deal with it" in some way. Learning to cope in ways that are healthy and constructive is necessary for better quality of life and wellbeing.
5. Thoughts aren’t any “better” than feelings or vice versa; both thoughts and feelings are important. Marsha Linehan often discusses the wise mind, where the reasonable or logical mind and emotional mind overlap; this is the integration of thoughts and feelings.
6. Feelings have value and utility. When we have strong feelings, they are telling us something: what we enjoy, what we want, what we don’t want, what we need, what we think is “wrong,” what we think is “right,” etc. Feelings might not always be an accurate reflection of the reality of a situation, but they are important indicators of what is going on for us internally, our personal codes of conduct or ideals, and what we perceive is happening in our world. Feelings can have great value in helping us see where we are and where we want to be (or don't want to be).
7. Sometimes we understand why we feel a certain way; other times, our feelings might not make sense to us at all. This is normal! Often, with some time, thought, and talking with others, we are able to develop insight into why we feel how we feel. Generally, the stronger the feeling and the more it impacts us, the more important it is to understand what the feeling is telling us. Once we understand our emotional reaction, we can better evaluate how to cope with or respond to the triggering situation.
8. We are not always aware of the feelings we have. Often, people have great difficulty identifying feelings in the moment. There is actually a word for having a limited emotional vocabulary: alexithymia. Since there are so many feelings, simply remembering the basics (mad, sad, glad, or anxious) can make it easier to identify what you feel. A study by Dr. Michelle Craske at UCLA looked at people who were phobic of spiders and evaluated different coping methods as participants were exposed to a spider. The group that labeled their emotions during the exposure showed less physiological reactivity, meaning they had a less severe stress response.
Being aware of your feelings, managing them, and being responsive to them are skills, and skills can be learned. Try not to judge your feelings or yourself for the feelings you have; practice self-compassion, particularly being kind toward yourself rather than critical. Be mindful of how you feel, observing it in a non-judgmental way without over-identifying with the feeling or letting it “consume” you. This also allows space to choose how you respond to the feeling(s) with intention. Practicing mindfulness is a powerful way to build this awareness and to respond to your feelings more intentionally.
For many people, Valentine’s Day is a reminder of how single we are and what we don’t have. Starting in mid-January, walking by the holiday section in the grocery store can trigger those “Ugh, this time of year” thoughts. Then come the announcements of couples-only V-day events, the ads for flowers, and eventually the 14th itself. This isn’t eloquent, but: it can sorta suck.
We hear it all the time: eat well, get enough rest, exercise.
How often do we prioritize these self-care habits and give them the attention they are due? How often do the environments in our lives—and people—respect the boundaries we must assert to regularly eat healthy, sleep well, and exercise?
Eating healthy, sleeping well, and exercising often come at the expense of other things in our lives: working an extra hour, a social activity, a house that is constantly tidy, spending time with your significant other, time on social media, etc. Self-care will come last, or happen only when it is convenient, unless you choose to put it first and fit other demands and responsibilities around it.
At what point did it become okay to not feed yourself because of working? Go sleep deprived because of x, y, or z? Neglect your body and mental health because of a hectic schedule? There will always be life demands to interfere with taking care of yourself—if you let them!
Ironically, by setting limits, learning how to say “No,” and making time for self-care, we show up in a more productive and constructive way for the demands in our life—and we’re able to enjoy them more. Quality of life starts with the basics.
As a life coach, I’m all about the positive aspects of life: enjoying it, being productive, feeling grateful, etc. At the same time, there is a place for feeling sad, angry, frustrated, etc. I’ve made it my mission to help people get the most out of their life, overcome the negatives, increase positives, meet their goals, and make the changes they need. At the same time, people need to know that it is okay to feel down sometimes. It is normal to feel angry sometimes. It is okay to have negative feelings and be in a “lull” at points. You do not need to feel happy all of the time. Part of being human is experiencing both happiness and sadness.
If it is normal to feel sad, angry, and so on, and if those feelings are “okay,” then how do you know when they are part of the typical negative feelings that accompany life versus a sign that something needs more attention? When do negative feelings cross over from being a healthy part of the human condition to no longer serving you, such that an adjustment would help? How do you know your feelings aren’t a flag for getting extra help or learning new skills?
You’re allowed to have a few crappy weeks and be in a bad mood. It’s part of life. Of course, there are ways to make those weeks feel better, but do not pressure yourself by believing that you have to be in the best mood 90% of the time. Life is full of ups and downs. It’s okay to feel sad or angry. Embrace it. Cope with it. Learn from what those feelings are communicating. Pick yourself up by your bootstraps. Don’t pressure yourself by thinking you *need* to feel happy. Of course being happy feels better, but negative feelings can have as much value in our lives as positive feelings. Learn how to accept, express, cope with, and work with your feelings, positive and negative, so they benefit you.
When we start something—a relationship, a job, a project, anything—we typically do so with purpose and intention. For myself, I recently started Paragon Life Coaching LLC with the purpose of doing something I love and giving back. Since I’m doing life coaching as my career, it is also important that I am able to make a sustainable income.
After we start anything, we are in it. We experience it almost every day (or at least frequently), and we have a bidirectional relationship with it. A “bidirectional relationship” means that we do whatever we do with our something-we-started, and then there is a consequence, so we respond. We do whatever we do with our something, and we learn new info as a result, so our thoughts or conceptualization related to our something change. We influence the something we are involved with, and that something influences how we continue to interact with it. I’m characterizing these aforementioned kinds of interactions as a bidirectional relationship, since influences go both ways. When we are “in” something, experiencing it regularly and interacting with it, inundated by it, it is easy to lose that perspective we had at the beginning. It is very easy to lose objectivity. We are pushed and pulled by the experience. This is not good or bad, it just is. Without truly engaging with what we are involved in, we wouldn’t learn, we wouldn’t improve, we wouldn’t make progress, we wouldn’t revise, and we wouldn’t have the emotional rewards.
When we are “in” something, especially if we experience intense emotions or are unfamiliar with aspects of whatever we are “in,” we are vulnerable to losing sight of the big picture—why we started and what we want from it. It’s easy to get lost when we are finding our way in an unfamiliar maze or lose perspective when we are affected by deep feelings. For instance, with starting a life coaching business, I know psychology extremely well and am a skilled helper, so the work of helping people to create meaningful change in their lives is smooth sailing. It’s what I love. The business aspect of starting a business is foreign territory. I never even had a business 100 course.
I am learning as I go, using the resources I find, experimenting, consulting with others who have business experience, watching webinars, etc.
I have absorbed info during the past few months (from many sources: individuals I know, experts in business, articles in Forbes, Inc., and Entrepreneur, successful life coaches, tech gurus, etc.) about scalability (growing your business, reaching thousands and thousands of people, and so on). Every message about business pretty much boils down to scalability. Apparently, it’s the Holy Grail in business. I’ve been racking my brain the past couple of weeks about how I’m going to reach the people I’m going to help, how I can make enough money to sustain myself so I can do what I want to do (keep doing life coaching), how I’m going to reach the masses, how I can “scale” my business, how I can create successful webinars and pull in thousands of dollars... HOLD UP!!! Three days ago, I noticed my thoughts: “how I’m going to reach the masses,” “how I can make thousands from webinars.” I was trying to problem solve when, in the midst of it, these thoughts sort of smacked me across the face (thankfully!). THOSE ARE NOT THE REASONS I STARTED! I had been a little brainwashed. The mega superstar life coach who wants to reach 1 million women, the ridiculously successful tech person who wants to reach 100,000 people with incredible webinars, the dozens of authors in authoritative business publications who preach scalability, the several “how to make your first webinar and bring in 6 figures” webinars: they had gotten to me. I had gotten lost in foreign territory.
First, I was surprised my attention was so easily swayed from my intention to the dogma of the business community. It happened over the course of a few months, so it was slow. Slow things creep; they’re more difficult to see happening. I consider myself to be a strong, independent thinker, and to generally have clarity. I don’t just buy any message I’m sold. But, it happened! I was sent a similar message from several different sources of authority, this occurred over 2-3 months, and it was regarding something about which I was ignorant. However, being surprised was not my strongest feeling. I felt empowered: I was reconnected with my intention.
My goal is not to make 6 figures (it would be nice, but it’s not what I want). My goal is not to reach 100,000 people. My goal is not to have a slew of webinars to choose from so I can be on the beach, hit a button, and remotely bring in thousands as I enjoy the surf. Knowing what you don’t want can be as important as knowing what you do want. It is way too easy, once you are “in” something, to be swayed by what you are “in.” What do I want? I want to help individuals make meaningful changes in their lives; this is my passion, so this is my goal. And I want to be able to support myself financially so I can make a career of doing that. Scalability is not my business; making gobs of money is not my intention. Personal interactions with people and deep change are my work. Can I add on some webinars, group coaching, and publications to better educate people? Of course! But it is not my main squeeze. So, when you step back, what do you want? What did you want when you started? How has that changed, if at all?
I do not want my message to be misunderstood: it is okay to change what we want in the middle of being “in” something. Flexibility and responsiveness to what we glean from the interactions in a bidirectional relationship are important. However, we must change our path mindfully, with awareness and intention.
We all have our strengths and areas for improvement. We all have our needs (and wants). Any of these change, from minute to minute to time periods during our lifespan.
Go after what you want. I fervently believe you should.
At the same time, you will only best serve yourself when what you want complements what you need. If you’re pursuing what you want when it interferes with what you need, you will do yourself harm, and the benefit of what you want will be lost, likely replaced by pain, disappointment, wasted energy, etc. (This is not a known fact; this is a hypothesis of mine. Challenge it, try it on with different scenarios, and see if it fits. It has passed my tests so far. Let me know what you think.)
If what you want is at odds with your current needs or where you are at this point, do not give up on your want! DO NOT! Is there another way to satisfy your want? Can you “backburner” your want until your want and needs are aligned? So much of life is timing. Very little in life requires the immediacy we often assign it.
Know your needs: know what you need at this time in your life. Know what you want. Be aware enough to know the difference, smart enough to know which needs and wants align, and disciplined enough to choose to pursue those wants which complement your needs now.
Even John Rampton, contributor to Forbes and Entrepreneur, recognizes in his recent article* that “discipline…is the most challenging part” of budgeting. All of the financial knowledge and expertise in the world will not help you manage your money if your thoughts and behavior get in the way. As a life coach, my expertise is clearly not with stocks, finance management, etc. My expertise gives you the tools and skills to foster discipline, excel with decision making, manipulate your environment so it is conducive to engaging in behaviors that move you closer to your goal, and develop thoughts in-the-moment that align with your goal—which any financial growth hinges upon.
Aside from teaching strategies and supporting skill development, a life coach collaborates with the client to develop an individualized behavior plan based on a specific budgeting or financial goal (not just "I'm going to save $x a day and not spend money on y"); a person's success is highly contingent on a quality behavior plan. Clearly, if a person is having challenges with discipline, then adhering to a behavior plan will likely be challenging, too. As a specialist in behavior, emotions, and cognition, I don't just create any behavior plan; I consider habits, motivation, challenges, lifestyle, strengths, and personality. Any behavior plan I develop is constructed collaboratively, *realistic,* and designed specifically for the individual, which is how it can be effective despite challenges.
If financial management and adhering to a budget are goals that you've had but consistently aren't meeting, it might not mean that your goal is unrealistic; it may indicate that you need something more to reach it. That "something more" is a skill set and approach that a competent life coach can provide.
*"For a Debt-Free New Year Set a Budget and Stick to It" on Entrepreneur.com posted December 22, 2015
This is the third and last installment of a series of posts to better introduce myself, beyond my role as life coach. As I explained previously, this is a post with an interview-style structure.
What are your greatest strengths?
I’m very compassionate and am skilled at understanding other people and perspective-taking. I am also good with evaluating situations, decision making, and problem solving. My knowledge-base in psychology and ability to apply what I know in novel situations for a certain outcome (two very different things) is also a strength. I’m resourceful and creative. I also have a keen sense of style/design. I’d like to think my sense of humor is a strength too. I've taken the VIA Survey of Character Strengths, and according to that assessment, these are my top 5 strengths: 1.) creativity, ingenuity, and originality 2.) judgment, critical thinking, and open-mindedness 3.) honesty, authenticity, and genuineness 4.) fairness, equity, and justice 5.) leadership.
What are your greatest weaknesses?
My messiness has been a weakness, but I've improved with that! Although I can be extremely self-disciplined, my motivation is something that I have to work with at times to make sure I do certain things. Although these are areas I personally work on, I think they benefit me professionally insofar as they give me an edge in helping clients who have the same areas for improvement. So much of life hinges on motivation!
Name 5 traits that you think describe yourself.
I pretty much did this above. I think my friends would describe me as intelligent, straightforward, funny, outgoing, and caring. Now I need to ask them!
If you weren’t a life coach or therapist, what would you be doing?
I would probably either be working in the fashion industry or rehabilitating wild animals in Africa, India, or Asia.
"It is very common for people to be aware of what they should do or need to do, but struggle with doing it."
Ultimately, life coaching is about learning and developing. Whether you are developing yourself and your identity, learning new behaviors, developing new perspectives, or developing behaviors to reach a goal, engaging in growth requires you are aware and take the necessary steps to move forward.
It is very common for people to be aware of what they should do or need to do, but struggle with doing it. In this case, there is often a lack of knowledge about what specifically to do every day or in the moment to make a change in the given direction, a lack of skills necessary to make those specific changes, or a lack of motivation. A life coach can help with these hurdles. Even though a life coach can give a client tools and help with skill development and increasing motivation, growth relies on the client making the choices to take the steps to move forward. No one will be successful in taking those steps without recognizing and accepting the challenges in the way and using strategies to overcome them. Learning requires much.
Even though I am a life coach, I am obviously a person. I am highly driven by my perceptions and feelings (feelings are really the motivators of behavior; logic functions as a motivator only as much as the thoughts or outcomes are associated with value. Feelings are what give value). I am fortunate to have many “tools in my toolbox” to help me with life. However, if I refuse to see my own patterns, know my own traps, or ignore the flashing lights in front of me that should be signaling me to do something different, none of my “action” tools are going to help me. In other words, denying reality is not going to help me—or you—to move forward. Denying reality will leave you running in circles, thinking you might be making progress. Denying reality can feel really good in the short-term, but you’ll stay stuck where you are. Chances are, if you are trying to move forward, there’s a reason you don’t want to be where you are.
Sometimes, the most difficult aspect of learning is accepting the hard truths, the ones that are painful, disappointing, or uncomfortable. For example, if you stay at the job where you are now (quite comfortably), there is no room to move up in the company and you will never be doing something you love (both of which you value). Recognizing this and accepting it might bring a host of implications and uncomfortable feelings: uncertainty about what you would do if you stopped working in your current position; anxiety or apprehension about your employability, losing benefits, or a potential decrease in salary; disappointment because you really enjoy your current colleagues; and a sense of being overwhelmed because it might mean relocating, which you really don't want to do. In this scenario, in the short term, it can be easier to ignore the fact that you don't enjoy what you're doing and there's no potential for upward movement. In the long term, denying those aspects is going to keep you stuck in a position that is ultimately not satisfying until something else happens to motivate you to change. Accepting reality and taking action, which can mean confronting uncomfortable feelings and disappointment, is the only thing that is going to move you closer to where you want to be. Depending on the situation, this can take much emotional strength and determination.
Taking the plunge into aspects of reality that are loaded with uncomfortable feelings is anything but desirable. Not to mention, once you take that plunge, you need to be able to adjust your perspective and problem solve so you have hope and a plan to change those aspects you don’t want to move forward with you. Much of this entire process requires emotional intelligence, coping, mindfulness, decision making, perspective taking, problem solving, and self-discipline. You need to balance being cognizant of the past (likely why you do not want to continue as you have been, which is motivating), living in the present (being conscious of and intentional with choices, coping with feelings, focusing on the positives, using your tools), and occasionally reminding yourself how you want your future to be (which should be motivating). Clearly, learning is not easy. But, it can be very worth it.
This is the continuation of a post from a couple of weeks ago in response to a request to know more of who I am, outside of a life coach. I decided to structure this task in an interview format, addressing what I would want to know about someone I'm just meeting.
What don’t you enjoy?
Eww. I don’t like cleaning up and putting things away. This is something I’ve made a conscious and concerted effort to improve within the past year, and I have been successful--but it is a work in progress. I’ve discovered and created techniques and tricks that facilitate being neater, and they really help! I dislike cardio workouts. I am not athletic, and tend to be sedentary. If something requires a lot of physical exertion with subpar payoff, I’m not a fan. I have a limited diet, which I really dislike with my love of food; there are a lot of things I can’t eat that I wish I could. Although emails are a necessity and can be convenient, I don’t enjoy tending to them.
What are three important lessons you’ve learned?
This is a biggie! 1.) If you commit, persevere, try, and put in the time, you can overcome weaknesses and things that are a problem and actually turn them into a skill and a strength. I've experienced this a couple of times: I was diagnosed with a non-verbal learning disorder in high school (terribly late, because I was able to compensate for it until that point). I worked terribly hard at my writing, and was fortunate to have skilled mentors who helped me overcome those challenges. When I took my GREs (the test you take after college to get into grad school), I scored a 4.5 on the writing section, and 4.5-5 was the highest rating you could get.
2.) Self-love is so important. At some point, people will likely reject you, exclude you, hurt you, judge you, make fun of you, etc. Ultimately, you are the one person you have, from the beginning of your life to the end of your life. What other people say and do can and will affect you, but at the end of the day, other people don't matter because you are the executive of your life and you are the person experiencing your life. If you don't love yourself and only look outward for acceptance and love, you are giving who-knows-who a lot of power over your life experience. Know who you are, know who you want to be, and love yourself, flaws and all. The best life is going to be the one that you choose based on your values, not one that you create based on being pushed, pulled, and controlled by others. If you don't love yourself, you can't love someone else; you will be too consumed looking for and needing that person's love to be able to give freely and accept the love that they have for you. You have to know you're worthy of being loved! 3.) You gotta dig in and face it. Avoiding whatever it is will not make it go away. Distracting yourself is not going to make it go away. It will only fester, maybe get worse, or have additional negative consequences. There is a time and place for avoidance and distraction, but it is not a long-term solution. Being able to tolerate and "work with" uncomfortable feelings at times is part of a healthy life. AND I'm adding a fourth: 4.) It is okay to say "No" and set limits!
What is your philosophy of life?
I’ve spent much time since I was very young thinking about the meaning of life. Although I still don’t know, I’ve logically concluded for now that a good life is one that you enjoy and one where you give back. That logical reasoning also vibes with me at an emotional level, and it’s what guides much of what I do.
There are still a few questions as part of my interview, which I will save for a later post. If there are any questions you'd like to add to my interview, please let me know, and I will include them! | <urn:uuid:2e8e3c09-dd6f-406c-9a08-48f3f647b05b> | CC-MAIN-2019-47 | https://www.paragonlifecoaching.com/blog | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670255.18/warc/CC-MAIN-20191119195450-20191119223450-00139.warc.gz | en | 0.967746 | 6,348 | 2.890625 | 3 |
When the Second Continental Congress met in June 1775, its delegates were not prepared for what they found. Several months earlier, on April 19, the war of words with Great Britain had become a shooting war. The individual colonies found themselves at war with one of the greatest military powers of the age. It would fall on the delegates of the Continental Congress to lead them as best they could with a strong, united voice that would see them through the crisis. Or maybe not. Congress was not really prepared to become a governmental body. These men, who were sent to discuss issues and send petitions, suddenly found themselves in the position of having to create a united front from thirteen separate entities. They would be tasked with coming up with a military response, building an army, and finding some way to pay for all of it. They were, to say the least, not always up to that task. While many of the men who served in Congress had experience running businesses or even colonial governments, the task set ahead of them was larger than anything they had done before. In many of the tasks set before it, Congress either failed or came close to failing, nearly causing the stillbirth of the great republic.
Nowhere did Congress fail as abysmally as it did in trying to create some way to generate money that would support the war. There were several sources they would look to in an effort to pay the bills. Getting support from the states and foreign powers was one path they took. Steps were even taken to try and build a real economy that would see them through the war and perhaps thereafter. Each came with its own set of difficulties.
From the States
Not even a week after starting his term in Congress, James Madison wrote a letter to his friend, mentor and governor of Virginia, Thomas Jefferson, where he summed up what he found.
From a lack of adequate Statesmen, [Congress is] more likely to fall into wrong measures and of less weight to enforce right ones, recommending plans to several states for execution and the states separately rejudging the expediency of such plans, whereby the same distrust of concurrent exertions that has dampened the ardor of patriotic individuals must produce the same effect among the States themselves.
Congress was in a unique position when it came to the states. While it was acting on behalf of the thirteen former colonies, it had no power to compel them into action. As such, no matter how much Congress needed money or supplies, it could not simply take them from the states, and it had no power to create any sort of national tax to raise revenue. It was this inability to raise revenue that caused Congress to settle on a paper-based currency for its use.
While the states themselves did have the power to impose taxes, many chose not to. It was difficult to fight a war that started because of taxes and then turn around and impose new ones on the people. Congress would have to rely on most of the support for the army being provided by the states themselves.
What had started as a Congress that spoke with one voice for all the colonies had been defeated by its thirteen partners, each of whom now stood as if on their own. In effect, after a brief period of actually having power, Congress was rendered subservient to thirteen masters. This lack of authority placed the nation and the war on seriously shaky ground.
Part of the problem came from the fact that as the war went on, the reputation of Congress was so threadbare that many of the states preferred to put their resources into maintaining their militias rather than supporting the “national” army. This desire to ensure their own safety over that of the United States was very hard to overcome. Militias had the pick of the food and supplies from each state and often the Continental Army would find itself competing with the militias for recruits because the states could pay more. More importantly they could actually pay. Congress had a very difficult time actually paying their soldiers, something that would have horrible consequences as the war progressed. To add insult to injury, when the Continental Army did find itself in a position where militia units could be useful, they were forced to provide for their supplies out of their dwindling stocks. Many, many times General Washington was forced to send sorely needed men in militia units home because they could not be fed from Congress’s meager supplies.
As stated, state militias often could and would pay recruits more than Congress was able to. This not only left the Continental Army constantly searching for men, unable to fill its ranks, but it also led to another sort of drain. As men and officers fulfilled their enlistments in the Continental Army, they would go home, rest and recuperate, and instead of re-enlisting would join their state militias because the pay was better and actually existed. This dichotomy had a strange, two-sided effect. The bad was that the Continental Army not only could not fulfill its quota for new troops, it could not keep the ones it had. The good was that as the war progressed, the militias were able to take advantage of this influx of talent and become a much more effective fighting force. The victories and pseudo-victories in the south later in the war, such as Cowpens and Guilford Courthouse, saw the benefits of this talent drain into the militia.
If anything could be said to have come from this conflict with the states it is that the idea of a confederation was shown to be a shallow husk. Washington himself, in the years after the war, would be one to take up the cause of a strong central government. Without the experience that he had in the war, dealing with what amounted to fourteen separate governing bodies, he may have been one of the many that saw a strong central government as anathema to what the revolution had been fought for. After all, the colonists had rebelled from just such a structure and if left to their own devices would not have willingly moved back towards it.
Numerous times, Congress attempted to raise money from the states by issuing resolutions for the raising of men, money and supplies. As late as 1781, Congress was still passing these resolutions:
Resolved, That it be recommended to the several states to lay taxes for raising their quotas of money for the United States, separate from those laid for their own particular use, and to pass acts directing the collectors to pay the same to the commissioner of the loan office, or such other person as shall be appointed by the superintendent of finance, to receive the same Within the State, and to authorize such receiver to recover the moneys of the collectors, for the use of the United States, in the same manner, and under the same penalties, as state taxes are recovered by the treasurers of the respective states, to be subject only to the orders of Congress, or the superintendent of finance.
Notice the wording of the resolution. Congress could only recommend that the states carry out the actions in the resolution. They could choose not to, and most did. At the time the resolution was passed, the southern states were occupied, the middle states (New York specifically) played host to a large British army embedded in New York City, and even though the French were becoming more involved, the states were more concerned with providing for themselves.
As Congress struggled in getting cooperation from the states, the value of continental currency collapsed. The only way out would have been for the states to cooperate. In a debate on the floor of Congress, this was discussed and laid out by one delegate.
To raise the value of our paper money and to redeem it, will not, we are persuaded, be difficult, nor to check and defeat the pernicious currency of counterfeits impracticable; both require a far less share of public virtue and public vigilance than have distinguished this arduous conflict. Without public inconvenience or private distress, the whole of the debt incurred in paper emissions to this day may be cancelled by taxes; it may be cancelled in a period so limited as must leave the possessor of the bills satisfied with his security.
For want of cooperation from the States, the national economy very nearly became stillborn.
From Foreign Powers

It did not take Congress very long to realize that they would need the help of foreign allies if they were going to defeat the British. The colonies themselves did not have the resources to create the war machine necessary. Immediately, Congress dispatched agents to Europe to start rounding up as much support as possible. The early search for support was difficult, as most European powers were not prepared to go to war with the English over what seemed like an internal struggle. This was one of the reasons that the push for independence was so important. Once Congress passed the independence resolution and declared themselves free of Britain, the task became much easier. In October 1776, Congress sent the following instructions to the agents in France.
You shall endeavor, when you find occasion fit and convenient, to obtain from them a recognition of our independency and sovereignty, and to conclude treaties of peace, amity and Commerce between their princes or states and us, provided, that the same be not inconsistent with the treaty you shall make with his most Christian majesty, that they do not oblige us to become a party in any war which may happen in consequence thereof, and that the immunities, exemptions, privileges, protection, defense and advantages, or the contrary, thereby stipulated, be equal and reciprocal. If that cannot be effected, you shall, to the utmost of your power, prevent their taking part with Great Britain in the war which his Britannic majesty prosecutes against us, or entering into offensive alliances with that king, and protest and present remonstrances against the same, desiring the interposition, mediation and good offices on our behalf of his most Christian majesty, the king of France, and of any other princes or states whose dispositions are not hostile towards us. In case overtures be made to you by the ministers or agents of any European princes or states for commercial treaties between them and us, you may conclude such treaties accordingly.
From the early days of the conflict, Congress put itself into the position of dealing with potential foreign allies in order to gain the supplies and funds that it needed to carry out the war.
In 1777, John Adams took the time to lay out for his wife the possibilities of getting more loans from Europe to help fund the war:
As to America, in the present state of affairs, it is not probable that a loan is practicable, but should it appear evident that we are likely to support our independency, or should either France or Spain acknowledge it, in either of these cases, we might have money, and when it shall be seen that we are punctual in our first payments of the interest, we shall have as much as we please.
In the same letter he breaks down all the possible sources of potential revenue that the congressional agents in Europe were hoping for. While the issuance of loans brought only a trickle into the treasury, many foreign powers expressed interest in potential commercial treaties. None, however, would move before a major power officially recognized the United States.
While the idea of asking for help from France, the natural enemy of all Englishmen, stuck in the throats of many of the delegates, it was necessary. Congress appointed Silas Deane of Connecticut to that task and he was very successful in that endeavor. At first his mission was a secret one, as the goal of the revolution was still up in the air. He could not be seen requesting aid from the French and for their part the French government could not be seen supplying the rebels for fear of ending up in a war with England.
A system of covert aid was set up with the help of the French playwright Beaumarchais, who was acting with wink-and-nod support from the French court. Beaumarchais used much of his own money to set up a fake merchant company that would provide arms, powder and other military supplies to the Americans. This material would be "purchased" from the armories of the French Army under the table. For their part, Beaumarchais and Deane arranged a fair trade: military stores for tobacco and other goods.
Such an agreement was an incredible boon to the Congress, which was always struggling to come up with the hard currency that deals like this would normally require. Beaumarchais used his profits to purchase more stores in return for more crops, and this cycle helped supply the Americans and ostensibly keep the French king's hands clean. Unfortunately Congress had a knack for making such easy things incredibly difficult.
The difficulty arose from false information provided by Arthur Lee, Congress's appointed representative to Prussia. Lee spent time in London and Paris feuding with not only Silas Deane but also Benjamin Franklin. In a series of reports to Congress, Lee was able to convince the delegates that the military stores Beaumarchais was providing were a gift to the new country and urged them not to pay for the "gifts." Lee was lying and did much to undermine his "rivals" and to make himself look good to the Congress. Being more of a political animal than a diplomat, Lee had many supporters in Congress who fell for what he said. Eventually his duplicity was uncovered, but the damage was done.
Over the course of the agreement and prior to the official alliance, Beaumarchais had arranged for almost $46 million in goods, partially financed from his own pocket and partially from the French court, to be sent to the United States to support the war. After many years of fretting and delay and promise of some sort of payment, Congress never sent one shipment of tobacco or anything else to Beaumarchais as payment and eventually the entire scheme came crumbling down. For his pains, Beaumarchais gained nothing and lost everything.
As for the French, the amount of support they provided to the Americans was one of the worst-kept secrets on the continent. When word got around of the American victory at Saratoga, which led to the surrender of an entire British army, France was ready to come out of the darkness and support the Americans with troops, ships, supplies and, most importantly, money. Millions and millions in hard currency were loaned to the United States during the war. Eventually Spain and the Netherlands would also provide loans to the new nation in an effort, more than anything, to cause injury to Great Britain. How did Congress repay these loans? It did not. In fact, France spent so much money on the Continental Army that the resulting economic problems helped push the French into a revolution of their own only a few years after the war.
A question that was hotly debated in Congress was the amount of trade that should be conducted with other countries during the war. The more ships they sent out for trade, the more revenue they could generate to pay for the war, but there was a catch: fighting against one of the largest navies in the world meant losing many ships, many goods and much revenue. As Roger Sherman of Connecticut said in the course of a debate on the subject on February 16, 1776, "I fear we shall maintain the armies of our enemies at our own expense with provisions. We can't carry on a beneficial trade, as our enemies will take our ships. A treaty with a foreign power is necessary, before we open our trade, to protect it."
The support that was gained from the foreign powers, especially France, was indispensable. It allowed the army to fight and survive and kept the nation going. While Congress and its committees did much to secure what support they could, it was eventually the spirit and victories of the army that created an atmosphere that allowed other nations to want to invest in the cause. Washington's victories at Trenton and Princeton, along with those of Gates and Arnold at Saratoga, did more to help the cause with foreign powers than Congress ever did. As far as diplomacy goes, because of the distance involved the agents in Paris were often left to their own devices and as such were able to arrange for an incredible amount of support, much of which was done without any input from Congress. In fact it could be said that thanks to men like Lee, Congress really had very little control over the support that was gained from across the sea.
Build an Economy
One of the best things that Congress could have done to provide the men and material necessary to carry on the war effort would have been to create and maintain an economic system that instilled confidence in the people of the country. This confidence was lacking and only became worse as the war raged on. In December 1778 General Washington observed to a member of Congress that action had to be taken to strengthen the economy:
That party disputes & personal quarrels are the great business of the day whilst the momentous concerns of an empire— a great & accumulated debt—ruined finances— depreciated money— & want of credit (which in their consequence is the want of everything) are but secondary considerations & postponed from day to day—from week to week as if our affairs wore the most promising aspect
This lack of confidence found several outlets. The decision was made early on in the conflict that Congress would carry out all of its financial business using paper scrip that Congress itself produced. Paper currency is only as valuable as what stands behind it. In the best-case scenario, the paper is backed by hard specie, such as gold. As long as there is gold available to back up the currency, the currency has value. The worst-case scenario is when there is nothing backing up the value of the currency except the confidence of the people that it can be exchanged. Congress based its currency on the promise that once the war was over, it would be able to generate enough revenue to pay off the scrip. In effect, it attempted to fund the war with promissory notes, the value of which was almost totally dependent on how the fortunes of war were going for the American side. As the Continental Army faced defeat after defeat and the war dragged on, the value of congressional scrip dropped and dropped. This created a vicious circle: the army could not be supplied or fielded in numbers large enough to win battles, and without victories there was not enough confidence to support the currency, so the currency lost value, making it even more difficult to field and fund the army.
In December 1776, George Washington began planning his foray against the Hessians stationed in Trenton. He knew that to do it he would need men. The problem was that at the end of the month the majority of his army had expiring enlistments. Washington asked for volunteers to extend their enlistments for several weeks until replacements could be found. None came forward. Finally he offered ten dollars each if they would stay on six more weeks. This was enough for many, but they would not accept the Continental currency for their bounty; instead Washington promised to pay them out of his own fortune. Such was the confidence in congressional scrip even that early in the war.
As the value dropped, Congress had no choice but to print more, driving the value down even further. By 1781 the exchange rate was $225 to $1 ($225 in paper money for $1 of hard specie). This was at a time when the average Continental Army private made $5 a month in Continental scrip, when they were paid at all.
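To make the scale of that depreciation concrete, here is a minimal arithmetic sketch in Python, purely illustrative, using only the figures quoted above (the 225-to-1 exchange rate and the $5 monthly wage are the article's numbers, not additional data).

```python
# Illustrative arithmetic only, based on the figures quoted above.
paper_per_specie = 225        # 1781 exchange rate: $225 Continental paper per $1 of hard specie
monthly_pay_paper = 5         # a private's nominal monthly pay in Continental scrip

monthly_pay_specie = monthly_pay_paper / paper_per_specie
print(f"A month's pay was worth about ${monthly_pay_specie:.3f} in specie")
# -> roughly two cents of hard money for a month of soldiering, when paid at all
```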
Joseph Plumb Martin, a soldier from Connecticut, relayed in his memoir a story where, to earn a little extra (having not been paid in many months), he assisted in a roundup of runaway slaves who had fled to British service after the siege at Yorktown in 1781:
the fortune I acquired was small, only one dollar; I received what was then called its equivalent, in paper money, if money it might be called, it amounted to twelve hundred (nominal) dollars, all of which I afterwards paid for one single quart of rum; to such a miserable state had all paper stuff, called-money- depreciated.
This devaluation meant that many people who had goods that Congress and the army needed would not take the paper money. Instead they were more than happy to sell their goods to the British army, which paid in specie. While it would be easy to hold these people in contempt for not supporting the Congress or at least the Continental Army, it should be remembered that they had families to support regardless of who won the conflict. Faced with the choice of taking scrip or selling to the enemy, they really had no choice. As the war progressed the Continental Army had to adapt to these issues. At first it was the desire of Washington that any supplies taken from the people be paid for with specie. As the supply of specie dried up, Continental scrip became the preferred method of payment. As the value of the scrip bottomed out, the army was forced to simply take what was needed, pillaging their own people. Of course receipts were given, and they were as worthless as the scrip.
In August 1777, John Adams addressed the specter of scrip-driven inflation in a letter to his wife: "We are contriving every way we can to redress the evils we feel and fear from too great a quantity of paper. Taxation as deep as possible is the only radical cure. I hope you will pay every tax that is brought you, if you sell my books, or clothes, or oxen, or your cows to pay it." Congress had no answer to the problem it had created, and if the men of Congress were starting to panic, the people on the streets were feeling it even worse.
By 1781, people had had enough, and in the May 12 edition of Rivington's New York Gazette the following story took over the front page.
The Congress is finally bankrupt! Last Saturday a large body of the inhabitants with paper dollars in their hats by way of cockades, paraded the streets of Philadelphia, carrying colors flying, with a dog tarred, and instead of the usual appendage and ornament of feathers, his back was covered with the Congress' paper dollars. This example of disaffection, immediately under the eyes of the rulers of the revolted provinces, in solemn session at the State House assembled, was directly followed by the jailer, who refused accepting the bills in purchase of a glass of rum, and afterwards by the traders of the city, who shut up their shops, declining to sell any more goods but for gold or silver. It was declared also by the popular voice that if the opposition to Great Britain was not in future carried on by solid money instead of paper bills, all further resistance to the mother country were vain, and must be given up.
This was a telling account of the effect of the poor value of the scrip and the effect that it had on the people. Though this newspaper was published with a loyalist leaning, there is much to be seen from the perspective of one’s enemy. This sentiment was felt all across the fledgling nation. Had Congress been able to stop the rampant inflation and build confidence in its own brand, the economy and the war effort would have been stronger.
One attempt that Congress made to keep the economy "under control" was price fixing. It was hoped that by controlling prices for certain items, everything from foodstuffs to clothing, Congress would be able to stop prices from rising in response to increased demand and low supply. Having been part of a colonial system hurt the country right out of the gate. Before the war, raw materials were sent on to England and finished goods sold back in return. As such, the industrial capacity of the colonies was not strong, leading to shortages of finished goods.
This led to resolutions from Congress, such as one from October 1775 recommending that, since the manufacture of cloth could not keep up with demand, people should supplement their wardrobes with leather, starting, to be fair, with members of Congress themselves. The resolution made a strong recommendation that "dealers" in the skins continue to sell at the usual rate to avoid inflation.
On May 30, 1776, they added salt to the list of items that would be held to a regulated price:
Whereas it hath been represented to Congress, that avaricious, ill designing men, have taken advantage of the resolve of Congress, passed the 30th of April, for withdrawing, from the committees of inspection, the power of regulating the price of goods, to extort from the people a most exorbitant price for salt: Resolved, That it be recommended to the committees of observation and inspection in the United Colonies, so to regulate the price of salt, as to prevent unreasonable exactions on the part of the seller, having due regard to the difficulty and risque of importation; subject, however, to such regulations as have been, or shall hereafter be made, by the legislatures of the respective colonies.
During one particular debate over where to purchase supplies for the army, it was pointed out that merchants in Philadelphia had raised their prices fifty percent above the “market rate.” A proposal was made to purchase said products in New York to punish Philadelphia for breaking the association. The debate ended with a resolution to try to convince Philadelphia to lower its price, or the items would be purchased elsewhere. It was through methods like this that Congress tried to keep the prices steady.
By 1780, Congress was left trying to limit the prices not only of goods inside the country, but of goods being imported into the country. The price of labor also came under the price-fixing scheme.
Resolved, therefore, that it be recommended to the Legislatures of the respective States to pass laws for limiting the prices of labour and all commodity foreign and domestic (salt and military stores excepted) so as not to exceed twenty prices of what the same article sold for in the year allowing such additional price for imported articles, for insurance, freight and other charges as the nature of the trade of each State shall justify and allowing also as an encouragement for country manufactures the same price as a foreign commodity of like quality shall sell for, deducting the price of insurance.
Resolved that for the more certain limitation of the price of foreign articles and for the encouragement of importations it may be expedient for the several States to open offices for insuring the importations of their respective inhabitants.
The issue with price fixing is that it is normally done with an eye toward limiting shortages of goods, and in some cases labor. What usually ends up happening is that if the price of a good is not allowed to increase, the shortage of that item does not go away; in fact it tends to get worse. If the cost of an item is allowed to go up, demand will decrease and the price will eventually fall on its own. Trying to short-circuit that process leads to two possible outcomes. First, the shortage gets worse because demand only increases. Second, a black market forms around those who are willing to pay more to get around the shortage. Congress, by turning to price fixing, only made the economy worse, but it was something it could not control that would bring the economy to its knees.
Counterfeiting was a major issue for Congress and helped fuel the major inflation that threatened to ruin the economy. Not only criminals, but the British themselves took to creating fake Continental scrip that flooded the market. John Adams addressed this in a letter home in 1777: "Their principal dependence is not upon their arms, I believe, so much as upon the failure of our revenue. To think they have taken such measures, by circulating counterfeit bills, to depreciate the currency, that it cannot hold its credit longer than this campaign. But they are mistaken."
The only good thing about the counterfeit money was that it was usually easy to spot, largely because it was of higher quality than what Congress produced! The paper was usually better and the engraving far finer. The $30 bill produced by congressional engravers even misspelled Philadelphia as "Philidelpkia," while the counterfeit version had it spelled correctly.
Even members of Congress personally had to deal with the specter of counterfeit scrip. John Adams admonished his wife: “How could it happen that you should have counterfeit New Hampshire money? Can’t you recollect who you had it of? Let me entreat you not to take a shilling of any but continental money or Massachusetts, and be very careful of that. There is a counterfeit continental bill abroad sent out of New York, but it will deceive none but fools, for it is copper plate, easily detected”
In 1780 Congress passed a resolution attempting to strike at the heart of the counterfeiters by offering a bounty on them, “two thousand dollars in the present Continental currency to any person or persons who take and prosecute to conviction.” It was worth about $10 specie at the time. Even with counterfeiting being a capital offense, by the end of the war almost half of all Continental scrip in circulation was fake.
In the end the war was won, so it would be easy to say that Congress was successful in finding ways to pay for it. The problem with saying that is that while it may seem true from the outside, on the inside it just was not. The new nation started off in deep debt with a shaky economy that would not recover for years. Foreign debts were piled high, and relief was eventually found only by not repaying the French what they had loaned. The Continental soldiers were not paid, or were paid only a fraction of what they were owed. Many held out for the promise of what Congress owed them, only to fall victim to speculators and soaring prices. Some were even forced into outright rebellion when they could no longer afford the very land they had fought for. Eventually it would take a new government, a new version of Congress, to set the country on the right path economically.
Editor’s Note: This is the third in a series of articles by Mr. Hatfield on the Continental Congress vs. Continental Army. See also, Continental Congress vs. Continental Army: The Officer Corps and Continental Congress vs. Continental Army: Strategy and Personnel Decisions. | <urn:uuid:3d1befb5-9bd3-4071-b38b-b0f398d32b13> | CC-MAIN-2019-47 | https://allthingsliberty.com/2019/01/continental-congress-vs-continental-army-paying-for-it-all/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670006.89/warc/CC-MAIN-20191119042928-20191119070928-00498.warc.gz | en | 0.984191 | 6,244 | 3.65625 | 4 |
There are five existing species of rhinoceros: two that live in Africa (white rhino and black rhino) and three that live in Asia (Indian rhino, Javan rhino, Sumatran rhino). These species are further divided into four subspecies of black rhino, two of white rhino, one surviving subspecies of Javan, and three for the rare Sumatran rhino.
The two African species, as well as the Asian Sumatran species, each have two horns. The front horn is the largest and most prominent above the nose. The Indian and Javan species have a single horn, located above their nose. Male and female rhino of the same species have the same number of horns. These horns are primarily made of keratin, a fibrous protein and structural material also found in human skin, fingernails, bird beaks, porcupine quills, and the scales of the endangered pangolin.
Africa’s white rhino species is the largest of any living rhinoceros, weighing up to 3,600 kilograms (7,920 pounds), and is the continent’s third-largest species after the African bush elephant and African forest elephant. The black rhino, which is not actually black, can weigh up to 1,400 kg (3,100 pounds). Horns of the African rhino weigh on average 1.5-3.0 kilograms (3.3-6.6 pounds), with white rhino having the heaviest front horn which on average weighs 4.0 kilograms (8.8 pounds). Horns of the Asian rhino species are substantially smaller, averaging 0.27 kilograms (0.59 pounds) to 0.72 kilograms (1.58 pounds) (page 3).
Rhino horn has been highly prized by several cultures for over a thousand years, and trade records suggest that the intercontinental trade in African rhino horn to the Far East has existed for centuries (pages 7-8). While rhinos have been killed in the past for their meat or solely for their skin (Run Rhino Run, page 79), from which shields were made, their horn is the most valuable product obtained. Other parts of the body, including dried blood, penis, and toenails, have seen limited use as remedies in traditional Chinese medicine (page 14).
As a result of both legal and illegal hunting, both rhinoceros species in Africa have faced extinction before. With little or no hunting regulation in southern African nations during the 1800s, the exceedingly common southern white rhinoceros subspecies was hunted nearly to extinction by European settlers, sport hunters, and opportunists cashing in on the rhino horn trade. The northern white rhinoceros subspecies, which once lived in Central Africa, had its numbers cut down dramatically by the early 1900s. By 1960 there were more than 2,000 individuals; however, this population declined again during the following decades due to poaching. Today there are only three left. Black rhino populations also suffered dramatic losses in countries like Kenya, which saw its population drop from an estimated 20,000 in 1960 to 330-530 (pages 5, 6).
Recent census estimates suggest that there are roughly 20,700 southern white rhino and 4,885 black rhino in Africa; however, this may not factor in the more than 2,200 rhino lost in South Africa during 2013 and 2014, nor population growth since the census. It is thought that black rhino populations are trending upward, even while select populations continue to diminish. Of the Asian species, an estimated 3,333 greater one-horned rhino (Indian rhino) and only 58-61 Javan rhino are left in the wild, with official estimates of the Sumatran rhino at fewer than 100, but possibly as low as just 30 in the wild. The lifespan of a wild rhinoceros is unknown, but is expected to be 35-50 years for any African or Asian species. There is no known upper limit to a rhinoceros' breeding age; however, the mating "prime" range is thought to be 17-30 years old.
Asia has been considered the leading consumer of rhino horn and other rhino parts for decades. However, in the past 200 years many countries around the world have acted as major consumers of raw horn as well as carvings. With little or no hunting regulation in colonial African nations during the 1800s, European settlers, sport hunters, and opportunists cashing in on the rhino horn trade steadily wiped out the exceedingly common southern white rhinoceros, and by the end of the 1800s the subspecies had been completely eradicated throughout its home range with the exception of a small, remote population in South Africa.
Among the most intense periods of rhino exploitation was 1849-1895, when an estimated 11,000 kilograms of rhino horn were exported from East Africa each year (page 6). This represents between 100,000 and 170,000 rhinoceros killed over the 47-year period (page 6). As the black rhino was the most common rhino species in that region (page 2), it is likely that its populations bore the brunt of the killing. To supply the world's demand, East Africa had become the world's dominant source and re-exporter of rhino horn.
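As a rough check on that estimate, the arithmetic can be sketched as follows. The 3-5 kilogram combined horn yield per animal is an assumption made here purely for illustration (loosely consistent with the average horn weights cited earlier); it is not a figure taken from the cited sources.

```python
# Back-of-the-envelope check of the 100,000-170,000 figure quoted above.
annual_exports_kg = 11_000            # estimated annual East African exports, 1849-1895
years = 1895 - 1849 + 1               # the 47-year period
total_kg = annual_exports_kg * years  # ~517,000 kg of horn

# Assumed combined horn yield per rhino (illustrative only, not from the sources).
for yield_per_rhino_kg in (5.0, 3.0):
    rhinos = total_kg / yield_per_rhino_kg
    print(f"{yield_per_rhino_kg} kg per rhino -> ~{rhinos:,.0f} rhinos killed")
# -> roughly 103,000 to 172,000 animals, matching the range in the text
```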
Even Japan, whose imports during the early-to-mid 1880s came largely from Asian rhino species in Thailand and Indonesia, turned to buying African rhino horn by 1888 (page 11). India became a prominent supplier to Japan, choosing to sell the less expensive rhino horn from Africa and keep its Asian rhino horn for local Asian markets. From 1887 through 1903 Japan officially imported more than 1,000 kilograms (2,200 pounds) of rhino horn each year, with more than 2,200 kilograms (4,840 pounds) declared in 1897 and again in 1898 (page 11). This trend of buying African horn would continue for Japan, even as its suppliers changed, throughout the 1900s. By 1951 the country was the largest known consumer of rhino horn in Asia, averaging imports of 488 kilograms (1,073 pounds) each year through 1980 (page 11).
Little is known about historical black rhino populations, but in 1960 there were an estimated 100,000 black rhino in sub-Saharan Africa. Ten years later their population had decreased to an estimated 65,000 (page 4). From 1959 to 1974 the United States and the United Kingdom imported at least 2,642 kg and 1,686 kg, respectively (page 9). This was only a fraction of East Africa's rhino horn exports during that period (page 8). A document recounting official Kenyan exports states that 28,570 kilograms (62,986 pounds) of rhino horn were legally exported from the country in the eight years from 1969 through 1976. Averaging 3.32 kg of horns per rhino, this accounted for 8,585 northern white rhino and black rhino. About 8.5% of those exports occurred in 1976 alone.
Records of an 18 June 1976 sale of 668 rhino horns weighing 985.9 kilograms show an average price of 605.50 Kenyan Shillings ($87-105) per kilogram and $85,773-103,519 for all the pieces in the 9 lots. This sale, approved by the department Minister, significantly undercut both the retail and wholesale prices being asked for rhino horn abroad. The same document cites wholesale rhino horn prices in India as $375 per kilogram and a retail price of $875. Other discrepancies also came to light, including a very limited number of buyers being invited to make bids and the winning bidder buying all lots in spite of not having the highest bid. Other lots were also sold at a fraction of wholesale prices and had their quality downgraded to reduce the reported worth of the rhino horn. Suspicious sales, open corruption at national and international levels, and virtually unchecked poaching had ushered in a "catastrophic" (page 27) period of poaching that was dubbed the "white gold rush."
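The scale of the undercutting can be seen by working through the sale figures above. This is a simple illustrative calculation using only the numbers quoted in this paragraph; the dollar range is the article's own conversion of the 605.50 Kenyan Shilling price, not an independent exchange-rate figure.

```python
# Arithmetic behind the June 1976 Nairobi sale figures quoted above.
total_kg = 985.9                      # combined weight of the 668 horns sold
price_usd_per_kg = (87, 105)          # the article's dollar equivalent of 605.50 KSh/kg

low = total_kg * price_usd_per_kg[0]
high = total_kg * price_usd_per_kg[1]
print(f"Sale proceeds: ${low:,.0f} to ${high:,.0f}")   # -> about $85,773 to $103,520

wholesale_india_usd_per_kg = 375      # contemporaneous wholesale price in India cited above
print(f"At Indian wholesale prices: ${total_kg * wholesale_india_usd_per_kg:,.0f}")
# -> roughly $369,700, several times what the Kenyan sale actually realized
```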
Almost every year from 1964 through 1971, with a surge in 1976, Aden and South Yemen were the largest importers of raw rhino horn exported from East Africa (page 8). While the legality of all of the horn is unknown, the rhino horn was either documented and officially exported from East African countries, officially imported by the Middle Eastern states, or both. This trade supplied Yemeni men with janbiya, traditional daggers that utilized cheap or exotic materials for their handles. From 1969 through 1977 an average of 3,235 kilograms of raw rhino horn was imported into North Yemen (page 9). In the years 1972 through 1978 North Yemen imported approximately 40% of the African horn on the world’s market (page 28). Rhino horn imports were banned by North Yemen in 1982, but small-scale trade continued until the 1990 unification (pages 9, 11) and even after Yemen’s Grand Mufti announced that it was unethical to kill rhinoceros under Islamic law (page 28). After unification Yemen’s imports of rhino horn may have increased over prior years (page 9), but it’s uncertain whether the country remained a significant importer as a result of economic impacts due to the Gulf War (1990-1991).
It’s estimated that the world market for rhino horn traded roughly 8,000 kilograms each year from 1970 through 1979 (page 9). The largest Asian importers during this period were Hong Kong (1,225 kg), China (1,025 kg), Japan (800 kg), Taiwan (580 kg), and South Korea (200 kg), accounting for an estimated 3,830 kilograms, or roughly 47% of the total world trade (page 9). Since customs records are incomplete (page 9), and the rhino horn trade was banned internationally during this period, it is likely that rhino horn was illegally imported to these and neighboring nations in addition to what is officially recorded.
The international commercial trade in African rhino parts was banned in 1977 when the two species were added to Appendix I of CITES (page 1). The Asian species had been listed on Appendix I in July of 1975 (page 1). The goal was to negatively impact demand for rhino horn and in turn allow rhino populations to recover, but this agreement did not affect domestic hunting or trade within nations, and many rhino-consuming countries did not take action to regulate trade. It wasn't until 1993 that China officially banned rhino horn sales within its own borders. The sale of rhino horn wasn't prohibited by law in Vietnam until 2006. In many western nations laws are very specific in prohibiting imports of rhino horn unrelated to sport trophies, but less specific about the sale of rhino horn, and antiques containing rhino horn, within the country's borders. In the United States, enforcement of new laws has caught Chinese nationals buying antique rhino horn and smuggling it into the Chinese market.
Whole rhinoceros horn is a material that has been used for special, traditional carved items in the Middle East and Asia. Records suggest that rhino parts were transported through the Middle East to China as far back as 2,000 years ago (page 7) and that these parts were used throughout the history of Imperial China. However, it's unknown whether rhino horn was specifically sought after during this period. Bowls and cups made of Asian rhino horn are known to have been in use as early as China's Tang dynasty (618-907) (page 7). China's demand for rhino horn and carving of decorative items continued for the next thousand years (page 7), through the Qing dynasty (1644-1912), and such items became a statement of wealth for those who could afford to give so luxurious a gift to the Chinese emperors on their birthday (Run, Rhino Run, page 53).
Due to availability, and possibly lower cost, researchers believe that items made of rhino horn during the Ming dynasty (1368-1644) were primarily made from African horn, brought to the East via long-established trade networks between East Africa and Asia (page 4). During the middle of the Ming dynasty the Pen-tsao Kang-mu (Compendium of Materia Medica) was written by Li Shizhen. This work became the most comprehensive text on flora, fauna, and minerals believed to have medicinal properties, and it includes information on the perceived properties of rhino horn. However, it does not list rhino horn as a treatment for cancer.
It is unknown when other parts of the world began using goblets and cups crafted from rhino horn or when the legend emerged that poison would be neutralized when poured into such a container. Although certain alkaloid poisons are thought to react with the keratin and other proteins in rhino horn, there is not enough evidence of this actually being used in the past to support the legend (pages 4, 5). But myths involving supernatural properties imbuing the horn of the rhinoceros would eventually penetrate a variety of cultures throughout the Middle East, Asia, and parts of Europe and North Africa (Run Rhino Run, page 54). Since then the supposed poison-detecting properties of rhino horn have been mythologized, and traditions among the wealthy have developed as a result. In the 1800s it was fashionable for wealthy North Africans to own one or more rhino horn cups (Run, Rhino Run, page 54). This may have contributed to the ornate, decorative cup carving industries in Ethiopia and Sudan that developed in the 1800s and persisted well into the 1900s (page 54). It was during this period that products partially utilizing rhino horn also became desirable decorative items in Europe (page 54).
Historically rhino horn carving has also existed in parts of India, Laos, Cambodia, China, and Japan. In China many small items, particularly clothing accessories, were carved from whole rhino horn (Run, Rhino Run, page 54). In Japan decorative netsuke, a type of fastener worn with traditional garments, were crafted from rhino horn as well as ivory, hardwoods, and other novel materials (page 54). Ultimately rhino horn carving industries are not thought to have significantly contributed to a tradition or created demand in these cultures. Rhino horn's popularity in Asia grew largely due to its perceived medicinal benefits (page 4).
The most expensive janbiya handles use high-quality polished rhino horn for the handle (page 7). These daggers have traditionally been used by Yemeni men as a status symbol since at least the 8th century (page 4) and janbiya made with a rhino horn hilt are notable for acquiring a sayfani (patina) with age, making it a prized family heirloom, and have value as a symbol of manhood. Less wealthy men must settle for a janbiya with a handle made of water buffalo horn, wood, or even plastic (page 96). One merchant who had a self-proclaimed monopoly on rhino horn imports alleged that he had imported 36,700 kilograms of rhino horn from 1970 through 1986 (page 28). The regional market may have peaked in the 1980s, when North Yemen was estimated to be importing roughly 40% of all rhino horn on the world market for use in janbiya (page 28) or to re-export to Chinese markets. During 1982 to 1986 North Yemen is believed to be one of the primary sources for more than 10,000 kilograms (22,000 pounds) of rhino horn imported by China (page 27). This horn would have been in the form of shavings and chippings leftover from carving the janbiya handle (page 27).
In 1979 in Taiwan and Hong Kong rhino horn from the Indian rhinoceros retailed for $18,000 per kilogram (page 4). In 1980 antique rhino horns carved in China were retailing in the United Kingdom for $900-5,000 ($2,628-14,600 in 2015 dollars). The reason for the disparity in prices is likely a result of the belief that rhino horn from Asia produced a much more potent effect, while the African rhino horn was weaker (page 26). This has historically contributed to greater demand for Asian rhino horn, but rhino horn from Africa has been in greater supply (page 25) for at least the last century.
Whole rhino horn prices vary from shop to shop and from city to city. In 2014 the retail price of a whole rhino horn is commonly pegged at around $60,000 per kilogram, with some reports of as much as $100,000 per kilogram being charged. It’s unclear whether these higher prices are for carved or ornate rhino horn, finely polished and with gold accents or jewels. But independent investigations have turned up other retail prices straight from Asia. In Southeast Asia’s notorious wildlife trafficking city of Mong La, Burma a shop owner quoted $45,000 for an entire rhino horn which was likely from an Asian rhino due to its small size. Chinese frequently visit Mong La, a short distance from the country’s border, and are said to be frequently the buyers of expensive and illegal items including tiger skins.
In Europe and the United States antique rhino horn walking canes from pre-Great Depression era England are readily available from prestigious auction houses. Antique cups are also highly valued and are still legally sold in some parts of Europe and North America, however some illegal traffickers have used this as a means of procuring rhino horn for the Chinese market. Similar concerns were expressed in September of 2010 by the United Kingdom’s Animal Health after the agency reported a rise in sales of antique rhino horn products. Some customs agencies are now racially profiling for rhino horn traffickers.
The rhino horn trade from Africa to the Middle East, and to Asia, has existed for centuries (pages 7, 8). But the myth that rhino horn can cure cancer is new. While the earliest recorded usage of rhino horn as a medicine dates back more than 2,100 years to the writing of the Shénnóng Běncǎo Jīng (The Classic of Herbal Medicine) there is no documented evidence that rhino horn can treat cancer in western medicine nor is there mention of it in Traditional Chinese medicine or Southeast Asian folk medicine. But myths persist about the way rhino horn has been used in specific cultures.
A common myth known by westerners is that powered rhino horn has been used as an aphrodisiac in Southeast Asia, but mostly this is re-reported by news media without sources. Undercover investigation reported in Run, Rhino, Run (Martin and Martin, 1985) showed that there is also no evidence of rhino horn being prescribed as an aphrodisiac in Chinese traditional medicine books (page 71), nor any pharmacists selling rhino horn for that purpose. However there is historical evidence of powdered rhino horn being used as an aphrodisiac by some people in India. The Gujarati people may have believed that powdered rhino horn could work as an aphrodisiac (page 4), but its use likely ended (page 25) in the late 1980s or early 1990s when supply of rhino horn decreased and the price skyrocketed, causing social customs to change (page 27).
In traditional Chinese medicine powdered horn is typically prescribed for the reduction of high fevers (page 8). This alleged remedy likely motivated and shaped rhino horn usage in surrounding countries, where rhino horn is similarly prescribed. Shavings and powdered rhino horn, as well as rhino blood, urine, and skin, have been used as traditional medicines by Asian cultures including the Burmese, Chinese, Nepalese, South Koreans, and Thai (page 4). South Korea and Thailand were the largest importers (page 14) of rhino horn during the 1970s and into the 1980s, largely for its alleged medicinal purposes (page 29). In post-war Japan pharmacists had prescribed medicines with rhino horn as a purported ingredient, but by the 1980s after joining CITES pharmacists were suggesting horn from other animals as an alternative, like the now critically endangered saiga antelope. In the latter half of the 1900s China has been a major manufacturer (page 10) of medicines (page 13) allegedly using rhino horn as an ingredient (page 10) despite national and international bans. China has exported pills, tonics, and other forms of these medicines to Hong Kong, Japan, Macao, Philippines, and South Korea (page 10).
In late 2011 prices for powdered rhino horn varied between $33 and $133 per gram in Vietnam. But a booming economy and one of the lowest unemployment rates in the world has led to a growing middle class and increased consumer spending. Renewed interest in traditional folk medicine and Traditional Chinese medicines are gaining popularity in some areas and may be contributing to demand for real rhino horn as well as related products. Today Vietnam is thought to be the largest consumer of rhino horn, but independent investigations suggest that rhino horn is becoming harder to find in the shops and market stalls in Hanoi, Vietnam due to strict penalties put in place in 2006 and decreasing consumer demand for folk medicines. In May of 2015 the Director of the Vietnam Management Authority for CITES, Do Quang Tung, claimed that Vietnamese demand for rhino horn had dropped 77% from the previous year.
Rhino skins have been found in markets in Hong Kong and Brunei alongside other rhino parts with purported medicinal value. In 1982 dried rhino hide retailed for around $370 per kilogram and for $635 per kilogram in Singapore (page 11). In the late 1980s and early 1990s Bangkok, Thailand had among the largest quantity of rhino parts for sale in traditional medicine shops, from both African and Asian sources (page 14). These parts included dried blood, horn, penis, sections of skin, and toenails (page 14).
According to the book Run, Rhino, Run the skins of rhinoceros have been used for many purposes among some African and Indian cultures. Ornate and ornamental shields made with rhino hide and gilded in gold were created in India during the early 1700s (pages 76, 77, 81). Unornamented shields were kept by Ethiopian aristocrats of the 1800s and also used by mercenary warriors fighting for Sayyid Majid bin Said Al-Busaid, the first Sultan of Zanzibar (page 76). During the same period, and through the early 1900s, some whips made for use on livestock and humans were created from rhino skin (page 76). Ornate and decorative shields made with rhino hide and gilded in gold were also created in India during the early 1700s (pages 76, 77, 81). | <urn:uuid:0d43b390-027e-4a9e-828a-bcf1d9960aea> | CC-MAIN-2019-47 | http://www.poachingfacts.com/faces-of-the-poachers/buyers-of-rhino-horn/?shared=email&msg=fail | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668525.62/warc/CC-MAIN-20191114131434-20191114155434-00218.warc.gz | en | 0.960759 | 4,755 | 3.59375 | 4 |
William Walker Atkinson was an attorney, publisher and author who was one of the pioneers of the New Thought movement in America in the early 1900s. He was the editor of New Thought magazine (1901-1905) and the Journal of Advanced Thought (1916- 1919).
The Art of Logical Thinking - William Walker Atkinson
Amazon Synopsis: REASONING
"Reasoning" is defined as: "The act, process or art of exercising the faculty of reason; the act or faculty of employing reason in argument; argumentation, ratiocination; reasoning power; disputation, discussion, argumentation." Stewart says: "The word reason itself is far from being precise in its meaning. In common and popular discourse it denotes that power by which we distinguish truth from falsehood, and right from wrong, and by which we are enabled to combine means for the attainment of particular ends."
By the employment of the reasoning faculties of the mind we compare objects presented to the mind as precepts or concepts, taking up the "raw materials" of thought and weaving them into more complex and elaborate mental fabrics which we call abstract and general ideas of truth. Brooks says: "It is the thinking power of the mind; the faculty which gives us what has been called thought-knowledge, in distinction from sense-knowledge.
It may be regarded as the mental architect among the faculties; it transforms the material furnished by the senses ... into new products, and thus builds up the temples of science and philosophy." The last-mentioned authority adds: "Its products are twofold, ideas and thoughts. An idea is a mental product which when expressed in words does not give a proposition; a thought is a mental product which embraces the relation of two or more ideas.
The ideas of the understanding are of two general classes; abstract ideas and general ideas. The thoughts are also of two general classes; those pertaining to contingent truth and those pertaining to necessary truth. In contingent truth, we have facts, or immediate judgments, and general truths including laws and causes, derived from particular facts; in necessary truth we have axioms, or self-evident truths, and the truths derived from them by reasoning, called theorems." Read online or download
A Series of Lessons in Gnani Yoga - William Walker Atkinson
From Chapter One: The Yogi Philosophy may be divided into several great branches, or fields. What is known as "Hatha Yoga" deals with the physical body and its control; its welfare; its health; its preservation; its laws, etc. What is known as "Raja Yoga" deals with the Mind; its control; its development; its unfoldment, etc. What is known as "Bhakti Yoga" deals with the Love of the Absolute—God. What is known as "Gnani Yoga" deals with the scientific and intellectual knowing of the great questions regarding Life and what lies back of Life—the Riddle of the Universe.
Each branch of Yoga is but a path leading toward the one end—unfoldment, development, and growth. He who wishes first to develop, control and strengthen his physical body so as to render it a fit instrument of the Higher Self, follows the path of "Hatha Yoga." He who would develop his will-power and mental faculties, unfolding the inner senses, and latent powers, follows the path of "Raja Yoga." He who wishes to develop by "knowing"—by studying the fundamental principles, and the wonderful truths underlying Life, follows the path of "Gnani Yoga." And he who wishes to grow into a union with the One Life by the influence of Love, he follows the path of "Bhakti Yoga." Read online or download
A Series of Lessons in Raja Yoga - William Walker Atkinson
From Chapter One: THE "I."
In India, the Candidates for Initiation into the science of "Raja Yoga," when they apply to the Yogi Masters for instruction, are given a series of lessons designed to enlighten them regarding the nature of the Real Self, and to instruct them in the secret knowledge whereby they may develop the consciousness and realization of the real "I" within them. They are shown how they may cast aside the erroneous or imperfect knowledge regarding their real identity.
Until the Candidate masters this instruction, or at least until the truth becomes fixed in his consciousness, further instruction is denied him, for it is held that until he has awakened to a conscious realization of his Actual Identity, he is not able to understand the source of his power, and, moreover, is not able to feel within him the power of the Will, which power underlies the entire teachings of "Raja Yoga." Read online or download
Clairvoyance and Occult Powers - William Walker Atkinson 1916
Lesson 1: The skeptical person who "believes only the evidence of his senses." The man who has much to say about "horse sense." "Common Sense" versus Uncommon Senses. The ordinary five senses are not the only senses. The ordinary senses are not as infallible as many think them. Illusions of the five physical senses. What is back of the organs of physical sense. All senses an evolution of the sense of feeling. How the mind receives the report of the senses. The Real Knower behind the senses. What the unfolding of new senses means to man. The super-physical senses. The Astral Senses. Man has seven physical senses, instead of merely five. Each physical sense has its astral sense counterpart. What the astral senses are. Sensing on the astral plane. How the mind functions on the astral plane, by means of the astral senses. The unfolding of the Astral Senses opens up a new world of experience to man. Read online or download
Dynamic Thought: Or the Law of Vibrant Energy- William Walker Atkinson 1906
From Foreword: This is a queer book. It is a marriage of the Ancient Occult Teachings to the latest and most advanced conceptions of Modern Science—an odd union, for the parties thereto are of entirely different temperaments. The marriage might be expected to result disastrously, were it not for the fact that a connecting link has been found that gives them a bond of common interest. No two people may truly love each other, unless they also love something in common—the more they love in common, the greater will be their love for each other. And, let us trust that this will prove true in this marriage of Occultism and Science, celebrated in this book. Read online or download
Genuine Mediumship or The Invisible Powers - William Walker Atkinson 1919
From Chapter One: It should be clearly understood by all students of occultism or psychic phenomena that man's knowledge and experience, normal or supernormal, is confined to the realm of Nature. There is a "ring pass-not" around the boundaries of the Kingdom of Nature which mortals cannot pass, no matter how high may be their degree of development and advancement. Even those great mystics whose writings are filled with the startling revelations of "union with the Divine," and of "At-one-ment with Deity," are under no illusion concerning this fact they know full well that only in so far as Deity involves itself in Nature—wraps itself up in the garments of Nature—can it be directly experienced by man, and thus actually known by him. Read online or download
The Hindu-Yogi Science Of Breath - William Walker Atkinson
From Chapter One: The Hindu Yogis have always paid great attention to the Science of Breath, for reasons which will be apparent to the student who reads this book. Many Western writers have touched upon this phase of the Yogi teachings, but we believe that it has been reserved for the writer of this work to give to the Western student, in concise form and simple language, the underlying principles of the Yogi Science of Breath, together with many of the favorite Yogi breathing exercises and methods. We have given the Western idea as well as the Oriental, showing how one dovetails into the other. We have used the ordinary English terms, almost entirely, avoiding the Sanscrit terms, so confusing to the average Western reader.
The first part of the book is devoted to the physical phase of the Science of Breath; then the psychic and mental sides are considered, and finally the spiritual side is touched upon. Read online or download
How to Read Human Nature - William Walker Atkinson 1916
From Chapter One: While the general subject of psychology includes
the consideration of the inner workings of the mind, the processes of thought,
the nature of feeling, and the operation of the will, the special subject of
Human Nature is concerned only with the question of character, disposition,
temperament, personal attributes, etc., of the individuals making up the race
of man. Psychology is general—Human Nature is particular. Psychology is more or
less abstract—Human Nature is concrete.
Psychology deals with laws, causes and principles—Human Nature deals with effects, manifestations, and expressions.
Human Nature expresses itself in two general phases, i.e., (1) the phase of Inner States; and (2) the phase of Outer Forms. These two phases, however, are not separate or opposed to each other, but are complementary aspects of the same thing. There is always an action and reaction between the Inner State and the Outer Form—between the Inner Feeling and the Outer Expression. If we know the particular Inner State we may infer the appropriate Outer Form; and if we know the Outer Form we may infer the Inner State. Read online or download
The Human Aura: Astral Colours and Thought Forms - William Walker Atkinson 1940
From Chapter One: Briefly, then, the human aura may be described as a fine, ethereal radiation or emanation surrounding each and every living human being. It extends from two to three feet, in all directions, from the body. It assumes an oval shape—a great egg-shaped nebula surrounding the body on all sides for a distance of two or three feet. This aura is sometimes referred to, in ordinary terms, as the "psychic atmosphere" of a person, or as his "magnetic atmosphere."
This atmosphere or aura is apparent to a large percentage of persons in the sense of the psychic awareness generally called "feeling," though the term is not a clear one. The majority of persons are more or less aware of that subtle something about the personality of others, which can be sensed or felt in a clear though unusual way when the other persons are near by, even though they may be out of the range of the vision. Being outside of the ordinary range of the five senses, we are apt to feel that there is something queer or uncanny about these feelings of projected personality. But every person, deep in his heart, knows them to be realities and admits their effect upon his impressions regarding the persons from whom they emanate. Even small children, infants even, perceive this influence, and respond to it in the matter of likes and dislikes. Read online or download
Memory: How to Develop, Train and Use It - William Walker Atkinson 1919
From Chapter One: Memory is more than "a good memory"—it is the means whereby we perform the largest share of our mental work. As Bacon has said: "All knowledge is but remembrance." And Emerson: "Memory is a primary and fundamental faculty, without which none other can work: the cement, the bitumen, the matrix in which the other faculties are embedded. Without it all life and thought were an unrelated succession." And Burke: "There is no faculty of the mind which can bring its energy into effect unless the memory be stored with ideas for it to look upon." And Basile: "Memory is the cabinet of imagination, the treasury of reason, the registry of conscience, and the council chamber of thought." Kant pronounced memory to be "the most wonderful of the faculties."
Kay, one of the best authorities on the subject has said, regarding it: "Unless the mind possessed the power of treasuring up and recalling its past experiences, no knowledge of any kind could be acquired. If every sensation, thought, or emotion passed entirely from the mind the moment it ceased to be present, then it would be as if it had not been; and it could not be recognized or named should it happen to return. Such an one would not only be without knowledge,—without experience gathered from the past,—but without purpose, aim, or plan regarding the future, for these imply knowledge and require memory. Even voluntary motion, or motion for a purpose, could have no existence without memory, for memory is involved in every purpose. Not only the learning of the scholar, but the inspiration of the poet, the genius of the painter, the heroism of the warrior, all depend upon memory." Read online or download
Mystic Christianity: Or the Inner Teachings of the Master - William Walker Atkinson 1908
From Chapter One: THE FORERUNNER.
Strange rumors reached the ears of the people of Jerusalem and the surrounding country. It was reported that a new prophet had appeared in the valley of the lower Jordan, and in the wilderness of Northern Judea, preaching startling doctrines. His teachings resembled those of the prophets of old, and his cry of "Repent! Repent ye! for the Kingdom of Heaven is at hand," awakened strange memories of the ancient teachers of the race, and caused the common people to gaze wonderingly at each other, and the ruling classes to frown and look serious, when the name of the new prophet was mentioned.
The man whom the common people called a prophet, and whom the exalted ones styled an impostor, was known as John the Baptist, and dwelt in the wilderness away from the accustomed haunts of men. He was clad in the rude garments of the roaming ascetics, his rough robe of camel's skin being held around his form by a coarse girdle of leather. His diet was frugal and elemental, consisting of the edible locust of the region, together with the wild honey stored by the bees of the wilderness. Read online or download
Practical Mind-Reading - William Walker Atkinson 1907
From Lesson Two: Nearly everyone has had evidences of Mind Reading or Thought Transference in his or her own life. Nearly every one has had experiences of being in a person's company when one of the two would make a remark and the other, somewhat startled, would exclaim, "Why, that's just what I was going to say," or words to that effect. Nearly every one has had experiences of knowing what a second person was going to say before the person spoke. And, likewise common is the experience of thinking of a person a few moments before the person came into sight. Many of us have suddenly found ourselves thinking of a person who had been out of our minds for months, or years, when all of a sudden the per[Pg 14]son himself would appear. Read online or download
The Power of Concentration - Theron Q. Dumont
From the Introduction: We all know that in order to accomplish a certain thing we must concentrate. It is of the utmost value to learn how to concentrate. To make a success of anything you must be able to concentrate your entire thought upon the idea you are working out.
Do not become discouraged, if you are unable to hold your thought on the subject very long at first. There are very few that can. It seems a peculiar fact that it is easier to concentrate on something that is not good for us, than on something that is beneficial. This tendency is overcome when we learn to concentrate consciously.
If you will just practice a few concentration exercises each day you will find you will soon develop this wonderful power.
Success is assured when you are able to concentrate for you are then able to utilize for your good all constructive thoughts and shut out all the destructive ones. It is of the greatest value to be able to think only that which will be beneficial. Read online or download
Reincarnation and the Law of Karma - William Walker Atkinson 1908
From Chapter One: There are many forms of belief—many degrees of doctrine—regarding Reincarnation, as we shall see as we proceed, but there is a fundamental and basic principle underlying all of the various shades of opinion, and divisions of the schools. This fundamental belief may be expressed as the doctrine that there is in man an immaterial Something (called the soul, spirit, inner self, or many other names) which does not perish at the death or disintegration of the body, but which persists as an entity, and after a shorter or longer interval of rest reincarnates, or is re-born, into a new body—that of an unborn infant—from whence it proceeds to live a new life in the body, more or less unconscious of its past existences, but containing within itself the "essence" or results of its past lives, which experiences go to make up its new "character," or "personality."
It is usually held that the rebirth is governed by the law of attraction, under one name or another, and which law operates in accordance with strict justice, in the direction of attracting the reincarnating soul to a body, and conditions, in accordance with the tendencies of the past life, the parents also attracting to them a soul bound to them by some ties in the past, the law being universal, uniform, and equitable to all concerned in the matter. This is a general statement of the doctrine as it is generally held by the most intelligent of its adherents. Read online or download
Subconscious and the Superconscious Planes of Mind - William Walker Atkinson 2010
Amazon Synopsis: Subconscious and the Superconscious Planes of Mind, written by W.W. Atkinson in 1909, is a somewhat supernatural text on the different levels at which the mind works and functions. There are the sub-conscious (below normal), conscious (normal), and super-conscious (above normal) levels, which Atkinson describes in detail. He also covers the elements of each level-for example, in the subconscious our memory works and resides.
While based in hard facts, Atkinson uses the mind theories to justify instances such as telepathy and mind reading, in which he strongly believed. American writer WILLIAM WALKER ATKINSON (1862-1932) was editor of the popular magazine New Thought from 1901 to 1905, and editor of the journal Advanced Thought from 1916 to 1919. He authored dozens of New Thought books under numerous pseudonyms, including "Yogi," some of which are likely still unknown today.
Thought-Culture - William Walker Atkinson 1909
From Chapter One: Man owes his present place on earth to his Thought-Culture. And, it certainly behooves us to closely consider and study the methods and processes whereby each and every man may cultivate and develop the wondrous faculties of the mind which are employed in the processes of Thought. The faculties of the Mind, like the muscles of the body, may be developed, trained and cultivated. The process of such mental development is called "Thought-Culture," and forms the subject of this book. Read online or download
Thought Vibration - William W. Atkinson 2012
Amazon Synopsis: THE Universe is governed by Law - one great Law. Its manifestations are multiform, but viewed from the Ultimate there is but one Law. We are familiar with some of its manifestations, but are almost totally ignorant of certain others. Still we are learning a little more every day - the veil is being gradually lifted. We speak learnedly of the Law of Gravitation, but ignore that equally wonderful manifestation, THE LAW OF ATTRACTION IN THE THOUGHT WORLD. We are familiar with that wonderful manifestation of Law which draws and holds together the atoms of which matter is composed - we recognize the power of the law that attracts bodies to the earth, that holds the circling worlds in their places, but we close our eyes to the mighty law that draws to us the things we desire or fear, that makes or mars our lives. PDF
Your Mind and How to Use It: A Manual of Practical Psychology - William Walker Atkinson 1911
From Chapter One: Perhaps the simplest method of conveying the idea of the existence and nature of the mind is that attributed to a celebrated German teacher of psychology who was wont to begin his course by bidding his students think of something, his desk, for example. Then he would say, "Now think of that which thinks about the desk." Then, after a pause, he would add, "This thing which thinks about the desk, and about which you are now thinking, is the subject matter of our study of psychology." The professor could not have said more had he lectured for a month. Read online or download
Do you like our website?
Please tell your friends about us.
The Number One way to prevent Truth decay is to use mental floss daily.
Peace I leave with you, my peace I give unto you: Not as the world giveth, give I unto you. Let not your heart be troubled, neither let it be afraid.
- John 14: 27, King James Bible
Some images on this site are Free Images from Dreamstime.com
Inch by inch, row by row, gonna make my garden grow. All it takes is a rake and a hoe, and a piece of fertile ground…
- John Denver, singer, songwriter
Well, the world’s not run by mothers. You know if it was, we’d all be taken care of.
- Faye Sanderson, my-spiritual-place.com
Donate to the Site and receive a free Angel Card Reading | <urn:uuid:a11645b5-0e5d-4853-b847-2361d37241a0> | CC-MAIN-2019-47 | https://www.my-spiritual-place.com/william-walker-atkinson.html | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670821.55/warc/CC-MAIN-20191121125509-20191121153509-00180.warc.gz | en | 0.958878 | 4,451 | 3.421875 | 3 |
E-Government and Corruption: Examining the applicability of using e-government to reduce corruption in India

Ashwini, May 24 (Spring 2017)

E-Government is the use of information and communication technologies, commonly referred to as ICTs, within the public sector in order to improve the operations and delivery of the sector's services.
Using these ICTs in government agencies as well as within the institutions of education and research are intended to result in more efficient processes as well as a quicker and more honest dissemination of information to the public.
Seeing as technology has had the effect in other mediums, such as the news media or communication strategies, to increase transparency, it would only be logical to apply the same technology into keeping the government honest.
With the continuous advancement of technology in the modern era, not only is there more technology to choose from when it comes to implementing e-government, but not using that technology to increase efficiency and transparency within the public sector is a waste.
In both developing and already developed countries, governments are seeking to leverage the new and expanding technology of today in order to ensure more accountability within the government and other public sector organizations as well as in improving quality and speed of services towards citizens.
In developing countries in particular, however, the adoption of e-government has a reportedly high failure rate, which is unsurprising as ensuring robust performance by large-scale information systems is difficult even for countries with more advanced technological skills.
Nevertheless, e-government has been seen in recent years as the solution to many of the inherent problems governments have in serving their constituents, and even more so in developing countries where typically the population of the countries restricts public agencies in improving their operations due to resource constraints.
When it comes to saving costs, improving quality, speeding up response times, and increasing access to services, e-government has the potential to improve the efficiency and effectiveness of administrations while at the same time increasing transparency and reducing corruption.
That last potential feature of e-government, reducing corruption, is key.
In developing countries, like India, corruption runs rampant, adversely affecting the economy and the overall efficiency of the government.
In a study conducted in 2009 by Transparency International, it was found that about 40 percent of Indians had dealt with first hand experience of either paying bribes or having to use “connections” to get elementary jobs done in the public sector.
Currently, the largest contributors to this corruption are entitlement programs and social spending schemes enacted by the government, where much of the problem comes from the lack of knowledge about what the schemes actually entail as well as the additional fees typically asked for during the process of going through these schemes.
In particular, the manifestation of corruption in India takes the form of excessive regulations, complicated tax and licensing systems, bureaucracy, and the lack of transparency, all of which might be reduced through e-government.
This is the relationship that this paper seeks to explore — the relationship between administrative corruption and e-government in India.
In this case we looked at administrative corruption focusing on the firm-level which covered what a firm would need to do in order to attain basic services such as electricity and water.
I also looked at e-government as a statewide measure of what was offered to citizens in order to examine what the probability of firm-level government corruption was given citizen-level e-government offerings.
Literature Review

While there are many studies that examine e-government and corruption as individual components of a country, very few examine their impacts on each other, especially in a developing country like India.
Implementing e-government should provide a country and its citizens with benefits such as increased efficiency in various governmental processes, transparency, and anticorruption, which will in turn lead to more citizen engagement and participation in the community as most prior studies on the reduction of corruption indicate.
Kumar and Best’s study looks at the impact and sustainability of e-government services in developing countries because typically e-government projects fail due to design-actually or design-reality gaps.
They longitudinally examine one case where e-government was extremely successful in the short term in a village in Tamil Nadu, India, but failed over the long term.
Focusing on how the presence of village Internet facilities offering government services affects the rate at which these services can be obtained, they use both quantitative and qualitative methods.
The study runs an OLS regression model for examining associations between availability of internet-based e-government services and the applications received for various services.
This resulted in a positive impact on government service provision by e-government, manifested through savings in time, cost, and effort required to obtain services.
It also resulted in reduced corruption for the first year after which the e-government services failed due to the lack of adequately trained personnel, public leadership, and unfortunate power dynamics (Best and Kumar 2006).
This starting point examining the context of e-government and corruption through the changing lens of technological development was conducted in the early 2000s, and since then the availability and access to technology has changed dramatically, which would hopefully impact the success of e-government ventures in developing countries.
Jindal and Sehrawat conducted an exploratory study on the condition of ICT infrastructure and its accessibility for using ICT based services in developing countries.
Similar to Kumar and Best’s study, this study found that the reasons most e-governance initiatives fail are due to the availability of infrastructure and the failure in bridging design-reality gaps.
Jindal and Sehrawat, however, believe this can be solved using ICT in e-governance.
This study was conducted in April 2016 which is recent enough that the technological advancement or the spread of technological advancement should be relatively similar to what it is now.
This means that the availability of technology which lends itself to the availability of the potential for e-government should be markedly similar (Jindal and Sehrawat 2016).
According to the study, however, a significant number of people surveyed did not have Internet facilities that would allow them to access ICT based services.
This indicated that the initial hurdle to jump over is the lack of technology infrastructure particularly in more rural settings.
In contrast, a majority of the population, while using TVs to gather political information, also uses cell phones, the latest phenomenon in communication.
This indicates that progress in terms of technology that will be conducive to e-government is possible and with the more advanced technology, e-government can become more user-friendly encouraging more people to use e-government facilities instead of physically visiting government offices.
Ionescu also looks at the impact that e-government can have on reducing corruption and enhancing transparency through an analysis of the impact of ICT governed programs and initiatives to curb corruption.
Her results converged with prior research on the potential of ICT for enhancing social capital and anti corruption as well as the use of information technology to control corruption (Ionescu 2013).
Sharma and Mitra focus their study more on the impact of corruption, specifically in the way firms in India are impacted by corruption and whether e-governance could be of any use there.
In their study, they build on the existing literature about the factors involved in corruption to determine the impact of bribe payment on firm performance as well as to test the determinants of bribe payment, essentially who bribes and why.
Noting that illegal transactions can take place because of a denial of certain rights such as when smaller firms pay bribes to secure what they should already be due under normal circumstances, this study examines the relationship between growth and corruption.
There were two questions asked in the study; the first led to five hypotheses on the impact of corruption on firm productivity, and the second led to three hypotheses on what determines corruption in firms.
They looked at firm performance in particular, utilizing the World Bank data from a survey conducted on Indian manufacturing firms in 2005.
The first question was estimated using alternative frameworks and led to the conclusion that bribe payments work as a tax on profitability of firms and provide incentives for inefficiency.
The second question used Probit regressions and led to the conclusion that both tax compliance and the size of the firm have positive roles in bribe payment.
Sharma and Mitra also found that policy obstacles and bureaucratic complexity increase the probability of a bribe payment and that as a policy implication, these constraints need to be removed.
In their study on the effects of corruption on the manufacturing sector, Kato and Sato looked at the state level in India, using conviction rates of corruption-related cases to measure the extent of corruption and examined the impact of corruption on overall productivity.
They found that corruption reduces productivity, particularly in respect to smaller firms (Kato and Sato 2014).
However, the data sources they used in their study came with some shortcomings such as the fact that the Central Statistical Office which holds these records has an editing policy which impacts data consistency.
In their 2014 study, Chandiramani and Khemani examine the role that e-government has played in India in the past along with its shortcomings in order to identify its failures and work towards a more functioning e-government policy.
In India, the aim as of 2014 is to make e-governance mandatory in all government departments which would in turn reduce personal interaction of the public with government officials and thus help reduce corruption.
The current initiative in place at the national level is the Information Technology Act of 2000, which, in addition to regulating the elusive cyberspace, also introduced charters under which government departments have to make clear their goals, standards, and venues for grievances.
In addition the Ministry of Technology also plays a role nationally to facilitate e-governance.
On a state level, many states including Tamil Nadu have made progress in their quests to attain e-government.
Tamil Nadu has computerized major departments with the objective of restoring public confidence and creating an effective relationship between the citizens and the government.
In the past, e-government has often failed due to the lack of insight among policy makers, lack of transparency in governmental dealing, and a lack of mechanisms ensuring accountability among government officers (Chandiramani and Khemani 2014).
This article provides background knowledge on the existence of e-government in India as well as the extent and limitations of it currently.
The idea of reducing corruption through it is what I am trying to examine along with the impact of technology on that reduction.
The literature indicates that while numerous studies have been done on e-government and corruption separately, none have taken them all into account in a quantitative manner, particularly not recently with the advancement and availability of technology today, which is the central focus of my research.
The question in this paper is whether the probability of firm-level administrative corruption of various types increases or decreases given the e-government opportunities offered in the state in which the firm is based.
The additional questions of whether the availability of technology impacts corruption and whether the perception of corruption reflects the presence of e-government were also asked.
Given the benefits of technology in increasing transparency as well as the fact that with e-government, there is less of a likelihood of person to person interaction which is where bribery comes into play usually, I hypothesized that the presence of e-government decreases the likelihood of corruption.
I also hypothesized that the availability of technology would decrease the probability of corruption and that the higher the perception of corruption, the less e-government would be present.
Research Methods

The data used in this research originated in two different places.
One is The World Bank Enterprise Survey’s Manufacturing Module of India dataset from 2014 which essentially has information on 1487 firms.
According to the methodology of the survey, the sampling provides firms from small, medium, and large enterprises and the questionnaire itself is conducted privately by contractors to the top managers or owners of each firm.
This dataset is quite expansive in asking questions about every area of the manufacturing business to firms in twenty-two states from all regions of the country of India.
From this mass of data, I narrowed down the variables we would be looking at in this particular study to the ones most relevant to corruption and technology, which I will discuss in the following paragraphs.
The other source of data was from the individual state websites of each of these twenty-two states.
These websites had information on what types of e-government services are offered in each of these states, which I appended to the original World Bank dataset based on the state information provided.
Unfortunately, given that India is a developing country along with the fact that there is no centralized source of information at the moment, the data collected about individual states' e-government services is subject to some question (for instance, whether or not all services were included, whether or not the website had been updated, whether or not what one state considered to be a reportable service was considered the same by another state, etc.).
That is something that was understood going into the study and should be considered in evaluating the results.
From the World Bank firm data, there were three sets of key variables used: one set of corruption measures, one set of perception measures, and one set of technology measures.
The corruption measures are based on the answers to questions on the questionnaire given to firms about whether or not they had to provide service providers with a "gift" or a bribe in order to get basic services.
The tricky part for all of the corruption measures was that in order to answer the question asking whether or not during their application for a certain service an informal gift of payment was expected or requested, the respondents had to first answer whether or not they had applied for that particular service in the last two years.
Given that short time frame, the dataset was narrowed considerably; however, there were still enough data points for each of these measures to ascertain an answer.
The first six measures listed in the table are whether an informal gift was requested or expected for 1) applying for electrical connection 2) applying for water connection 3) receiving a construction related permit 4) tax official inspections 5) applying for an import license 6) applying for an operating license.
The last measure (corr_index) is a sum of the first six indicating a total level of this administrative corruption within the firms: this measure has the value of 1, 2, 3, or 4 (none have five or six).
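As a rough illustration of how this index can be assembled from the survey responses, the sketch below uses hypothetical column and file names for the six bribery dummies; the actual World Bank Enterprise Survey variable codes differ.

```python
import pandas as pd

# Hypothetical names for the six 0/1 bribery indicators; the real
# Enterprise Survey variable codes are different.
bribe_cols = [
    "bribe_electricity", "bribe_water", "bribe_construction",
    "bribe_tax_inspection", "bribe_import_license", "bribe_operating_license",
]

# Assumed file name for the 2014 India manufacturing module.
firms = pd.read_stata("india_2014_manufacturing.dta")

# Firms that did not apply for a given service in the last two years are
# missing on that item, so sum only over the items they actually answered.
firms["corr_index"] = firms[bribe_cols].sum(axis=1, min_count=1)
```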
The technology measures assess how much general technology is used by these firms on a day-to-day basis.
These are dummy variables that indicate whether or not an establishment uses email to communicate with clients or suppliers, whether the establishment has its own website, and whether or not the establishment currently uses cell phones for their operations.
The perception measures assess how the managers and owners of these firms (the ones taking this survey) feel about the level of corruption as well as the obstacles they face.
There are seven measures I used for this.
The first is the perception of to what degree these people felt that the court system is fair, impartial, and uncorrupted.
The second through sixth look at to what degree tax administration, the court system, business licensing and permits, political instability, and general corruption are obstacles to the current operations of the firms.
The last measure takes into account what the biggest obstacle is.
In the case of the last measure, the variable is a dummy used to indicate whether corruption (which includes licensing struggles, instability, court corruption, and tax corruption) is perceived to be the biggest obstacle or not.
The e-government measures that were used in this research once again came from the websites of individual states included in the World Bank dataset.
These are e-government services that are offered for the citizens of the state, which we are using here under the assumption that the firm-level corruption is indicative of base-level administrative corruption and therefore the e-government services at the base-level offer insight.
The full list of these measures follows.
These were all dummy variables that indicated the presence or absence of these services.
In this case they indicated whether or not a state offered each of the following online: government-issued certificates, including those for birth, death, and caste declaration; a mechanism for registering and resolving complaints; grain prices and other market information; tax filing; bill payment; voter registration; information about government bills and laws; education options such as registering for state-wide tests or applying to public universities; newsletters about the latest services; trade license applications; land records; transportation bill payment and license applications; applications for government employment and unemployment services; e-procurement; a mobile platform; an official directory of government officials; blood donor status; services for those below the poverty line; information about court cases; cooperative audit information; pension services; hospital safety records for mothers and newborn infants; services intended to help those in rural areas; a resident data hub; geographic information systems to map the land; water tracking; immigration status and registration; and passport applications.
In addition to these e-government service measures, I also generated a new variable to sum up the other measures.
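A companion sketch, under the same naming assumptions, for appending the state-level service dummies collected from the state websites and collapsing them into the summed e-government index:

```python
import pandas as pd

# One row per state; 0/1 columns such as issuecert, complaint, marketinfo,
# efiling, billpay, electoral_roll, ... (assumed names and file).
state_egov = pd.read_csv("state_egov_services.csv")

service_cols = [c for c in state_egov.columns if c != "state"]
state_egov["egov_sum"] = state_egov[service_cols].sum(axis=1)

# 'firms' as loaded in the earlier sketch: attach the state-level
# measures to every firm surveyed in that state.
firms = firms.merge(state_egov, on="state", how="left")
```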
E-Government and Corruption

In order to measure e-government and corruption, I ran multiple probit models to measure the probability of corruption occurring given the predictors of e-government.
There were six initial probit models run for each of the six corruption measures looking at the probability of corruption in electricity, water, construction, tax officials, import licensing, and operating licensing given the sum of e-government measures (so examining whether the number of e-government services provided by a state impacted the presence of corruption in each of these six administrative fields).
Pr(corruption) = Φ(β0 + β1·egov), with "corruption" taking on each of the six measures

After these models were run and their marginal effects were computed, I ran probit models measuring again the probability of corruption in electricity, water, construction, tax officials, import licensing, and operating licensing given each of the individual e-government measures as predictors.
This was run to figure out which of the e-government measures best predicted the probability of corruption.
Pr(corruption) = Φ(β0 + β1·issuecert + β2·complaint + β3·… )

I also ran an ordered probit model on the probability of getting a 1, 2, 3, or 4 (there were no instances of 5 or 6) in the corruption index given the total e-government index.
Even though the numbers themselves are close to each other, there is a large difference between experiencing corruption (the requesting of bribes) in just one area of running a firm and experiencing that corruption in four separate areas.
That’s why the ordered probit model is used to measure the individual probability of receiving a 1, 2, 3, or 4 in the corruption index given the number of e-government services offered.
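The paper does not say which software was used for estimation; one way to reproduce this setup is with Python's statsmodels, shown below with the assumed variable names from the earlier sketches.

```python
import statsmodels.formula.api as smf

# 'firms' and 'service_cols' as constructed in the earlier sketches.

# First specification: Pr(corruption) = Φ(β0 + β1 · egov_sum),
# estimated separately for each of the six bribery dummies.
probit_tax = smf.probit("bribe_tax_inspection ~ egov_sum", data=firms).fit()
print(probit_tax.summary())

# Average marginal effect of one additional e-government service.
print(probit_tax.get_margeff(at="overall", method="dydx").summary())

# Second specification: all individual service dummies entered together.
rhs = " + ".join(service_cols)
probit_full = smf.probit(f"bribe_tax_inspection ~ {rhs}", data=firms).fit()
print(probit_full.get_margeff().summary())
```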
Technology and Corruption

For the second part of my research analysis I looked at the probability of corruption overall given the use of technology by the firm in question.
The purpose of this portion of my research is to measure whether the use of email, cell phones, or websites by a company increases or decreases the probability that the same firm would experience corruption.
An ordered probit model was used to measure the probability of scoring a 1, 2, 3, or 4 on the corruption index given the presence of email usage, cell phone usage, and website usage.
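A sketch of the corresponding ordered probit using statsmodels' OrderedModel, assuming the same hypothetical dummy names for email, website, and cell phone use; the model estimates threshold cut-points in place of an intercept, so no constant term is added.

```python
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Keep only firms with a defined corruption index and technology answers.
cols = ["corr_index", "uses_email", "uses_website", "uses_cellphone"]
data = firms[cols].dropna()

# distr='probit' gives the ordered probit; cut-points replace the constant.
oprobit = OrderedModel(
    data["corr_index"],
    data[["uses_email", "uses_website", "uses_cellphone"]],
    distr="probit",
).fit(method="bfgs", disp=False)
print(oprobit.summary())
```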
Perceptions and E-Government

In this area of research, the focus was on how people living in these states (in this case working in or running these firms) perceived the level of corruption given the number of e-government services present in their state.
A probit model was run to predict the dummy perception measure (whether corruption was perceived as the biggest obstacle faced by the firm) given the e-government index (how many e-government services are present).
Five ordered probit (oprobit) models were run.
Each of the other five perception measures includes different levels to which people perceived tax officials, courts, licensing, unstable politics, and corruption to be problems.
So these models predicted the probability of each of those levels given the e-government index for the state.
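Written in the same notation as the earlier specifications, each of these ordered probit models estimates cut-points τ1 < τ2 < … together with the slope, so that the probability of observing a given perception level k is

Pr(perception = k) = Φ(τk − β1·egov_sum) − Φ(τk−1 − β1·egov_sum)

which is the standard ordered probit formulation, applied here to the five graded obstacle measures.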
Results

In my analysis I found that most of the corruption found in this dataset of Indian firms during the year involved bribery on the part of tax officials; bribery with regard to obtaining an operating license was a close second.
I also found that Haryana, followed by Assam and Maharashtra, was the state that provided its citizens with the most e-government services, whereas Jharkhand and Karnataka were the states with the heaviest bribe-related corruption.
E-Government and Corruption

After examining, through a probit model, whether the number of e-government services provided by a state impacted the presence of corruption in electricity, water, construction, tax officials, import licensing, and operating licensing, I found no significant correlation between the number of e-government services and the presence of corruption in any of the administrative fields other than that of tax officials.
To reiterate, this variable indicates whether or not a “gift” or bribe was asked for when tax officials came to visit the firm.
An increase in e-government leads to a predicted decrease in the probability of a gift or bribe being requested by tax officials at a significance level of p<0.
Of course this being a probit model the coefficient listed in the table above is not easily interpreted at face value.
After computing the marginal effects for this model, it was found that a one unit change in the egov_sum decreases the probability of tax official corruption by .
This means that increasing the number of services provided via e-government may lead to a decreased probability of a tax official asking for a bribe.
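For reference, this marginal effect is the probit coefficient rescaled by the standard normal density, ∂Pr(corruption)/∂egov_sum = φ(β0 + β1·egov_sum)·β1, evaluated at representative values of egov_sum, which is why it is considerably smaller than the raw coefficient listed in the table above.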
This result aligned with my hypothesis particularly when we take into account that the corruption of tax officials from a firm perspective has the potential to be very similar to the corruption of tax officials from a citizen perspective.
An official who requires firms pay a gift during inspections probably asks the same of citizens.
This is particularly interesting to analyze when we remember that one of the e-government services offered by certain states is the ability to file taxes entirely online.
The second set of probit models that were run were measuring the probability of corruption in the same fields as before; however, this time with the predictors being each of the individual e-government measures.
The above tables indicate the results from the water model (the probability of corruption in attaining water) and the construction model (the probability of corruption in licensing construction).
The electricity model was excluded from this study because it yielded no significant results.
Marginal effects were computed for all of these and considered in the results.
Construction Model

With the water model, issuing certificates, market information, bill payment, education information, and the availability of land records are the significant predictors.
Only education information availability, however, is consistent with my hypothesis that e-government decreases the probability of corruption.
This could be due to the exceedingly small number of observations for this measure.
The computed marginal effects indicate that the presence of education information online leads to a decrease in the probability of corruption by .
As can be seen in the above models, with construction corruption, issuing certificates, providing market information, e-filing taxes, bill payment, enrolling in the electoral roll, providing education information, publishing e-gazettes, issuing trade licenses, e-procurement, and providing an official directory of government officers all yielded significant results (providing employment information did not).
For all of the services mentioned above, excluding the publishing of the e-gazette, the official directory, and e-filing of taxes, the results indicate that the presence of these e-government services decreases the probability of a bribery request in construction licensing.
Especially with bill payment, this would seem to make sense given that if construction bills could be paid online, there would be less interaction with those licensing construction in the first place, where a potential bribe could happen.
Operating License Model

The above two models look at bribery when tax officers come to visit and bribery in obtaining operating licenses.
With the tax model, significant results were yielded for the presence of a complaint redressal system, the availability of market information, the ability to pay bills online, the availability of education services, the availability of land records, transportation services, the presence of an official directory, e-information on judicial services, and cooperative audits.
For all of these services except the cooperative audit and the presence of an official directory, the probability of having corruption with tax officials decreases with the presence of these e-government services.
The official directory again shows a significant positive relationship between its presence and corruption, which is a rather interesting result to consider given that one would think that having a record of officials would lead to less corruption.
On the other hand, having such a record could also lend authority to those officials asking for bribes and gifts.
Interestingly enough there was not significant evidence to support that e-filing of taxes would reduce corruption of tax officials, which is something that I would want to examine further.
With obtaining operating licenses, the significant explanatory variables seem to be the availability of market information and the ability to enroll in the electoral roll.
Only the availability of market information online has a negative coefficient, indicating that the presence of market information decreases the probability of corruption in receiving an operating license.
Importing License Model
As for the importing license model, the significant explanatory variables are again the availability of market information, the ability to enroll in an electoral roll, and the availability of education information.
With the import license model, however, none of these statistics are negative indicating that the availability of these e-government services is actually associated with an increase in the probability of corruption in receiving an importing license.
This result seems to be fairly different from the others, which could actually be a result of the relatively small number of observations for this particular measure of corruption.
Since the question asked to firms is whether or not they had applied for an importing license in the past two years, it can be assumed that many of these firms do not deal with imports or have been in business long enough that they have had that license for more than two years.
Either way, due to the small number of observations, this result should be taken with a grain of salt.
The results from the ordered probit model used to measure the individual probability of receiving a 1, 2, 3, or 4 on the corruption index given the number of e-government services offered are shown below along with the computed marginal effects.
An increase in the sum of e-government measures offered by one unit leads to a decrease in the probability of the corruption index being 1, 2, 3, or 4 by .
This also supports my hypothesis and indicates that the probability of corruption decreases with the addition of e-government services.
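A comparable sketch for the ordered probit on the corruption index is shown below. The file and variable names (corruption_index, taking values 1 to 4, and egov_sum) are again hypothetical, and the per-category effect of one additional e-government service is approximated by differencing predicted category probabilities at the sample mean rather than by a built-in marginal-effects routine.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("enterprise_survey.csv")  # hypothetical input file

# Ordered outcome: corruption index (1 = fewest forms of corruption reported, 4 = most).
ordered_res = OrderedModel(
    df["corruption_index"], df[["egov_sum"]], distr="probit"
).fit(method="bfgs", disp=False)
print(ordered_res.summary())

# Approximate per-category marginal effects of one extra e-government service
# by differencing predicted probabilities at the sample mean of egov_sum.
x_bar = df["egov_sum"].mean()
p_base = ordered_res.predict(exog=np.array([[x_bar]]))
p_plus = ordered_res.predict(exog=np.array([[x_bar + 1.0]]))
print(p_plus - p_base)  # change in P(index = 1), ..., P(index = 4)
```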
Technology and Corruption
Below are the results from the ordered probit model examining the probability of different levels within the corruption index given the usage of email, cell phones, and websites within the firm.
The results for the usage of email were not significant, whereas those for websites and cellphones were (albeit at a lower significance level).
Below are the computed marginal effects of that ordered probit model.
The marginal effects show that if a firm has its own website, the probability of it experiencing corruption decreases by 0.0209 (going from 0 to 1), by 0.0162 (going from 1 to 2), and by 0.00386 (going from 2 to 3).
With cell phone usage the probability of corruption actually ends up increasing.
Something to consider with cell phone usage, however, is whether or not the cell phones are smartphones, which would indicate some level of Internet activity (like a website indicates).
E-government and Perceptions
Unfortunately, after running the tests on perceptions and e-government, no statistically significant results were obtained, so these tests are not included in this study.
That part of my research was inconclusive with the data that I had.
Conclusion
Corruption in India is an age-old problem and one that greatly hinders India’s economic ability and decreases the interest and trust that constituents have in their government.
This paper supports the conclusion that e-government, the use of ICTs in the public sector, has the capability to reduce this corruption particularly at a base administrative level.
The results in this paper were varied, but in general supported the conclusion that the existence and proliferation of e-government, particularly with the availability of market information and the ability to file taxes and pay bills electronically, lead to a reduced probability of corruption at the firm level, specifically with regards to paying bribes or “gifts” in order to get basic services done.
As for the connection between the usage of technology and corruption, the data findings in this paper are minimal given that the only predictors being tested were the use of email, websites, and cell phones.
My hypothesis was only partially correct as the company having its own website did decrease its probability of experiencing corruption (and its probability of experiencing multiple forms of corruption), but the company using cell phones actually increased the probability of experiencing corruption.
These results are inconclusive about the effect of technology on corruption; however, examining whether or not there is a difference in the results between simple cell phones and smartphones would be interesting as the presence of Internet might be the dividing factor between the website predictor and that of cell phones.
In terms of the connection between perceptions of corruption and e-government, my results were inconclusive given that none of the resulting statistics were statistically significant.
Given a wider data source, this will be interesting to look at in the future.
In order for these results to be tested further, a new survey of the governments of various Indian states must be done to ascertain exactly what e-government services they provide; at the moment this paper assumes that those listed on their websites are accurate, when in reality they may be outdated.
Surveying the individuals who use these e-government services or the constituency in general about whether they on an individual level have been asked for bribes or gifts for various services would also be a logical next step as currently we are relying on firm data and operating under the assumption that that is a strong indicator of citizen data.
Going forward, examining the spread of technology in these regions would also be an important factor to consider.
E-government is rapidly changing as the technology providing it changes, so it is valuable to note whether the people of India have access to all the changing technology or whether the effects of e-government are more concentrated on areas where this technology is most accessible.
In the rapidly changing digital world, e-government has the potential to help increase transparency and reduce corruption within the governments of developing countries like India, and more research should be done into the relationship between the two particularly with context given to the advancement of new technology.
This 30-item NCLEX practice exam covers Neurological Disorders, with a focus on degenerative diseases. The activity below will help nurses and future nurses understand more about disorders involving the neurological system and the appropriate nursing management. Accomplish this quiz and soar high on your NCLEX!
EXAM TIP: If you see an option you have never heard of, do not choose it. It’s like a signal from your brain that that is not the correct answer.
Live as if you were to die tomorrow. Learn as if you were to live forever.
~ Mahatma Gandhi
Included topics in this practice quiz are:
- Amyotrophic Lateral Sclerosis
- Autonomic Dysreflexia
- Basilar Skull Fracture
- Bell’s Palsy
- Cerebrovascular Accident
- Computed Tomography (CT) Scan
- Cranial Nerve
- Degenerative Diseases
- Guillain-Barré syndrome
- Lumbar Puncture
- Multiple Sclerosis
- Myasthenia Gravis
- Residual Dysphagia
- Right-Sided Hemiparesis
- Tonic-Clonic Seizure
Follow the guidelines below to make the most out of this exam:
- Read each question carefully and choose the best answer.
- You are given one minute per question. Spend your time wisely!
- Answers and rationales are given below. Be sure to read them.
- If you need more clarifications, please direct them to the comments section.
In Exam Mode: All questions are shown and the results, answers, and rationales (if any) will only be given after you’ve finished the quiz. You are given 1 minute per question, a total of 30 minutes for this exam.
Practice Mode: This is an interactive version of the Text Mode. All questions are given in a single page and correct answers, rationales or explanations (if any) are immediately shown after you have selected an answer.
In Text Mode: All questions and answers are given for reading and answering at your own pace. You can also copy this exam and make a printout.
1. A white female client is admitted to an acute care facility with a diagnosis of cerebrovascular accident (CVA). Her history reveals bronchial asthma, exogenous obesity, and iron deficiency anemia. Which history finding is a risk factor for CVA?
A. Caucasian race
B. Female sex
C. Obesity
D. Bronchial asthma
2. The nurse is teaching a female client with multiple sclerosis. When teaching the client how to reduce fatigue, the nurse should tell the client to:
A. Take a hot bath.
B. Rest in an air-conditioned room.
C. Increase the dose of muscle relaxants.
D. Avoid naps during the day.
3. A male client is having tonic-clonic seizures. What should the nurse do first?
A. Elevate the head of the bed.
B. Restrain the client’s arms and legs.
C. Place a tongue blade in the client’s mouth.
D. Take measures to prevent injury.
4. A female client with Guillain-Barré syndrome has paralysis affecting the respiratory muscles and requires mechanical ventilation. When the client asks the nurse about the paralysis, how should the nurse respond?
A. “You may have difficulty believing this, but the paralysis caused by this disease is temporary.”
B. “You’ll have to accept the fact that you’re permanently paralyzed. However, you won’t have any sensory loss.”
C. “It must be hard to accept the permanency of your paralysis.”
D. “You’ll first regain use of your legs and then your arms.”
5. The nurse is working on a surgical floor. The nurse must log roll a male client following a:
A. Laminectomy.
B. Thoracotomy.
C. Hemorrhoidectomy.
D. Cystectomy.
6. A female client with a suspected brain tumor is scheduled for computed tomography (CT). What should the nurse do when preparing the client for this test?
A. Immobilize the neck before the client is moved onto a stretcher.
B. Determine whether the client is allergic to iodine, contrast dyes, or shellfish.
C. Place a cap on the client’s head.
D. Administer a sedative as ordered.
7. During a routine physical examination to assess a male client’s deep tendon reflexes, the nurse should make sure to:
A. Use the pointed end of the reflex hammer when striking the Achilles’ tendon.
B. Support the joint where the tendon is being tested.
C. Tap the tendon slowly and softly
D. Hold the reflex hammer tightly.
8. A female client is admitted in a disoriented and restless state after sustaining a concussion during a car accident. Which nursing diagnosis takes highest priority for this client’s plan of care?
9. A female client with amyotrophic lateral sclerosis (ALS) tells the nurse, “Sometimes I feel so frustrated. I can’t do anything without help!” This comment best supports which nursing diagnosis?
10. For a male client with suspected increased intracranial pressure (ICP), a most appropriate respiratory goal is to:
A. Prevent respiratory alkalosis.
B. Lower arterial pH.
C. Promote carbon dioxide elimination.
D. Maintain partial pressure of arterial oxygen (PaO2) above 80 mm Hg
11. Nurse Mary witnesses a neighbor’s husband sustain a fall from the roof of his house. The nurse rushes to the victim and determines the need to open the airway in this victim by using which method?
A. Flexed position
B. Head tilt-chin lift
C. Jaw-thrust maneuver
D. Modified head tilt-chin lift
12. The nurse is assessing the motor function of an unconscious male client. The nurse would plan to use which of the following to test the client’s peripheral response to pain?
A. Sternal rub
B. Nail bed pressure
C. Pressure on the orbital rim
D. Squeezing of the sternocleidomastoid muscle
13. A female client admitted to the hospital with a neurological problem asks the nurse whether magnetic resonance imaging may be done. The nurse interprets that the client may be ineligible for this diagnostic procedure based on the client’s history of:
14. A male client is having a lumbar puncture performed. The nurse would plan to place the client in which position?
A. Side-lying, with a pillow under the hip
B. Prone, with a pillow under the abdomen
C. Prone, in slight-Trendelenburg’s position
D. Side-lying, with the legs pulled up and head bent down onto the chest.
15. The nurse is positioning the female client with increased intracranial pressure. Which of the following positions would the nurse avoid?
A. Head midline
B. Head turned to the side
C. Neck in neutral position
D. Head of bed elevated 30 to 45 degrees
16. A male client with a basilar skull fracture has drainage from the nose. The nurse determines that the drainage is cerebrospinal fluid (CSF) if the fluid:
A. Is clear and tests negative for glucose
B. Is grossly bloody in appearance and has a pH of 6
C. Clumps together on the dressing and has a pH of 7
D. Separates into concentric rings and tests positive for glucose
17. A male client with a spinal cord injury is prone to experiencing autonomic dysreflexia. The nurse would avoid which of the following measures to minimize the risk of recurrence?
A. Strict adherence to a bowel retraining program
B. Keeping the linen wrinkle-free under the client
C. Preventing unnecessary pressure on the lower limbs
D. Limiting bladder catheterization to once every 12 hours
18. The nurse is caring for the male client who begins to experience seizure activity while in bed. Which of the following actions by the nurse would be contraindicated?
A. Loosening restrictive clothing
B. Restraining the client’s limbs
C. Removing the pillow and raising padded side rails
D. Positioning the client to side, if possible, with the head flexed forward
19. The nurse is assigned to care for a female client with complete right-sided hemiparesis. The nurse plans care knowing that this condition:
A. The client has complete bilateral paralysis of the arms and legs.
B. The client has weakness on the right side of the body, including the face and tongue.
C. The client has lost the ability to move the right arm but can walk independently.
D. The client has lost the ability to move the right arm but can walk independently.
20. The client with a brain attack (stroke) has residual dysphagia. When a diet order is initiated, the nurse avoids doing which of the following?
A. Giving the client thin liquids
B. Thickening liquids to the consistency of oatmeal
C. Placing food on the unaffected side of the mouth
D. Allowing plenty of time for chewing and swallowing
21. The nurse is assessing the adaptation of the female client to changes in functional status after a brain attack (stroke). The nurse assesses that the client is adapting most successfully if the client:
A. Gets angry with family if they interrupt a task
B. Experiences bouts of depression and irritability
C. Has difficulty with using modified feeding utensils
D. Consistently uses adaptive equipment in dressing self
22. Nurse Kristine is trying to communicate with a client with brain attack (stroke) and aphasia. Which of the following actions by the nurse would be least helpful to the client?
A. Speaking to the client at a slower rate
B. Allowing plenty of time for the client to respond
C. Completing the sentences that the client cannot finish
D. Looking directly at the client during attempts at speech
23. A female client has experienced an episode of myasthenic crisis. The nurse would assess whether the client has precipitating factors such as:
A. Getting too little exercise
B. Taking excess medication
C. Omitting doses of medication
D. Increasing intake of fatty foods
24. The nurse is teaching the female client with myasthenia gravis about the prevention of myasthenic and cholinergic crises. The nurse tells the client that this is most effectively done by:
A. Eating large, well-balanced meals
B. Doing muscle-strengthening exercises
C. Doing all chores early in the day while less fatigued
D. Taking medications on time to maintain therapeutic blood levels
25. A male client with Bell’s Palsy asks the nurse what has caused this problem. The nurse’s response is based on an understanding that the cause is:
A. Unknown, but possibly includes ischemia, viral infection, or an autoimmune problem
B. Unknown, but possibly includes long-term tissue malnutrition and cellular hypoxia
C. Primarily genetic in origin, triggered by exposure to meningitis
D. Primarily genetic in origin, triggered by exposure to neurotoxins
26. The nurse has given the male client with Bell’s palsy instructions on preserving muscle tone in the face and preventing denervation. The nurse determines that the client needs additional information if the client states that he or she will:
A. Expose the face to cold and drafts
B. Massage the face with a gentle upward motion
C. Perform facial exercises
D. Wrinkle the forehead, blow out the cheeks, and whistle
27. A female client is admitted to the hospital with a diagnosis of Guillain-Barre syndrome. The nurse inquires during the nursing admission interview if the client has a history of:
A. Seizures or trauma to the brain
B. Meningitis during the last five (5) years
C. Back injury or trauma to the spinal cord
D. Respiratory or gastrointestinal infection during the previous month.
28. A female client with Guillain-Barre syndrome has ascending paralysis and is intubated and receiving mechanical ventilation. Which of the following strategies would the nurse incorporate in the plan of care to help the client cope with this illness?
A. Giving client full control over care decisions and restricting visitors
B. Providing positive feedback and encouraging active range of motion
C. Providing information, giving positive feedback and encouraging relaxation
D. Providing intravenously administered sedatives, reducing distractions and limiting visitors
29. A male client has an impairment of cranial nerve II. Specific to this impairment, the nurse would plan to do which of the following to ensure client safety?
A. Speak loudly to the client
B. Test the temperature of the shower water
C. Check the temperature of the food on the delivery tray.
D. Provide a clear path for ambulation without obstacles
30. A female client has a neurological deficit involving the limbic system. Specific to this type of deficit, the nurse would document which of the following information related to the client’s behavior.
A. Is disoriented to person, place, and time
B. Affect is flat, with periods of emotional lability
C. Cannot recall what was eaten for breakfast today
D. Demonstrates inability to add and subtract; does not know who the president is
Answers and Rationale
1. Answer: C. Obesity
Obesity is a risk factor for CVA. Other risk factors include a history of ischemic episodes, cardiovascular disease, diabetes mellitus, atherosclerosis of the cranial vessels, hypertension, polycythemia, smoking, hypercholesterolemia, oral contraceptive use, emotional stress, family history of CVA, and advancing age.
- Options A, B, and D: The client’s race, sex, and bronchial asthma aren’t risk factors for CVA.
2. Answer: B. Rest in an air-conditioned room.
Fatigue is a common symptom in clients with multiple sclerosis. Lowering the body temperature by resting in an air-conditioned room may relieve fatigue; however, extreme cold should be avoided. Other measures to reduce fatigue in the client with multiple sclerosis include treating depression, using occupational therapy to learn energy conservation techniques, and reducing spasticity.
- Option A: A hot bath or shower can increase body temperature, producing fatigue.
- Option C: Muscle relaxants, prescribed to reduce spasticity, can cause drowsiness and fatigue.
- Option D: Planning for frequent rest periods and naps can relieve fatigue.
3. Answer: D. Take measures to prevent injury.
Protecting the client from injury is the immediate priority during a seizure.
- Option A: Elevating the head of the bed would have no effect on the client’s condition or safety.
- Option B: Restraining the client’s arms and legs could cause injury.
- Option C: Placing a tongue blade or other object in the client’s mouth could damage the teeth.
4. Answer: A. “You may have difficulty believing this, but the paralysis caused by this disease is temporary.”
The nurse should inform the client that the paralysis that accompanies Guillain-Barré syndrome is only temporary. Return of motor function begins proximally and extends distally in the legs.
5. Answer: A. Laminectomy.
The client who has had spinal surgery, such as laminectomy, must be logrolled to keep the spinal column straight when turning.
- Options B and D: The client who has had a thoracotomy or cystectomy may turn himself or may be assisted into a comfortable position.
- Option C: Under normal circumstances, hemorrhoidectomy is an outpatient procedure, and the client may resume normal activities immediately after surgery.
6. Answer: B. Determine whether the client is allergic to iodine, contrast dyes, or shellfish.
Because CT commonly involves the use of a contrast agent, the nurse should determine whether the client is allergic to iodine, contrast dyes, or shellfish.
- Option A: Neck immobilization is necessary only if the client has a suspected spinal cord injury.
- Option C: Placing a cap over the client’s head may lead to misinterpretation of test results; instead, the hair should be combed smoothly.
- Option D: The physician orders a sedative only if the client can’t be expected to remain still during the CT scan.
7. Answer: B. Support the joint where the tendon is being tested.
To prevent the attached muscle from contracting, the nurse should support the joint where the tendon is being tested.
- Option A: The nurse should use the flat, not pointed, end of the reflex hammer when striking the Achilles’ tendon. (The pointed end is used to strike over small areas, such as the thumb placed over the biceps tendon.)
- Option C: Tapping the tendon slowly and softly wouldn’t provoke a deep tendon reflex response.
- Option D: The nurse should hold the reflex hammer loosely, not tightly, between the thumb and fingers so it can swing in an arc.
8. Answer: D. Risk for injury
Because the client is disoriented and restless, the most important nursing diagnosis is risk for injury.
- Options A, B, and C: Although the other options may be appropriate, they’re secondary because they don’t immediately affect the client’s health or safety.
9. Answer: B. Powerlessness
This comment best supports a nursing diagnosis of Powerlessness because ALS may lead to locked-in syndrome, characterized by an active and functioning mind locked in a body that can’t perform even simple daily tasks.
- Options A and D: Although Anxiety and Risk for disuse syndrome may be the nursing diagnosis associated with ALS, the client’s comment specifically refers to an inability to act autonomously.
- Option C: A diagnosis of Ineffective denial would be indicated if the client didn’t seem to perceive the personal relevance of symptoms or danger.
10. Answer: C. Promote carbon dioxide elimination.
The goal of treatment is to prevent acidemia by eliminating carbon dioxide. That is because an acid environment in the brain causes cerebral vessels to dilate and therefore increases ICP.
- Options A and B: Preventing respiratory alkalosis and lowering arterial pH may bring about acidosis, an undesirable condition in this case.
- Option D: It isn’t necessary to maintain a PaO2 as high as 80 mm Hg; 60 mm Hg will adequately oxygenate most clients.
11. Answer: C. Jaw-thrust maneuver
If a neck injury is suspected, the jaw thrust maneuver is used to open the airway.
- Option A: A flexed position is an inappropriate position for opening the airway.
- Option B: The head tilt–chin lift maneuver produces hyperextension of the neck and could cause complications if a neck injury is present.
12. Answer: B. Nail bed pressure
Motor testing in the unconscious client can be done only by testing response to painful stimuli. Nail bed pressure tests a basic peripheral response.
- Options A, C, and D: Cerebral responses to pain are tested using the sternal rub, placing upward pressure on the orbital rim, or squeezing the clavicle or sternocleidomastoid muscle.
13. Answer: C. Prosthetic valve replacement
The client having a magnetic resonance imaging scan has all metallic objects removed because of the magnetic field generated by the device. A careful history is obtained to determine whether any metal objects are inside the client, such as orthopedic hardware, pacemakers, artificial heart valves, aneurysm clips, or intrauterine devices. These may heat up, become dislodged, or malfunction during this procedure. The client may be ineligible if a significant risk exists.
14. Answer: D. Side-lying, with the legs, pulled up and head bent down onto the chest.
The client undergoing lumbar puncture is positioned lying on the side, with the legs pulled up to the abdomen and the head bent down onto the chest. This position helps open the spaces between the vertebrae.
15. Answer: B. Head turned to the side
The head of the client with increased intracranial pressure should be positioned so the head is in a neutral midline position. The nurse should avoid flexing or extending the client’s neck or turning the head side to side. The head of the bed should be raised to 30 to 45 degrees. Use of proper positions promotes venous drainage from the cranium to keep intracranial pressure down.
16. Answer: D. Separates into concentric rings and tests positive for glucose
Leakage of cerebrospinal fluid (CSF) from the ears or nose may accompany basilar skull fracture. CSF can be distinguished from other body fluids because the drainage will separate into bloody and yellow concentric rings on dressing material, called a halo sign. The fluid also tests positive for glucose.
17. Answer: D. Limiting bladder catheterization to once every 12 hours
The most frequent cause of autonomic dysreflexia is a distended bladder. Straight catheterization should be done every four (4) to six (6) hours, and foley catheters should be checked frequently to prevent kinks in the tubing. Other causes include stimulation of the skin from tactile, thermal, or painful stimuli. The nurse administers care to minimize risk in these areas.
- Option A: Constipation and fecal impaction are other causes, so maintaining bowel regularity is important.
18. Answer: B. Restraining the client’s limbs
The limbs are never restrained because the strong muscle contractions could cause the client harm. If the client is not in bed when seizure activity begins, the nurse lowers the client to the floor, if possible, protects the head from injury, and moves furniture that may injure the client. Other aspects of care are as described for the client who is in bed.
- Options A, C, and D: Nursing actions during a seizure include providing for privacy, loosening restrictive clothing, removing the pillow and raising side rails in the bed, and placing the client on one side with the head flexed forward, if possible, to allow the tongue to fall forward and facilitate drainage.
19. Answer: B. The client has weakness on the right side of the body, including the face and tongue.
Hemiparesis is a weakness of one side of the body that may occur after a stroke. Complete hemiparesis is a weakness of the face and tongue, arm, and leg on one side. Complete bilateral paralysis does not occur in this condition.
- Options C and D: The client with right-sided hemiparesis has weakness of the right arm and leg and needs assistance with feeding, bathing, and ambulating.
20. Answer: A. Giving the client thin liquids
Before the client with dysphagia is started on a diet, the gag and swallow reflexes must have returned.
- Option B: Liquids are thickened to avoid aspiration.
- Option C: Food is placed on the unaffected side of the mouth.
- Option D: The client is assisted with meals as needed and is given ample time to chew and swallow.
21. Answer: D. Consistently uses adaptive equipment in dressing self
Clients are evaluated as coping successfully with lifestyle changes after a brain attack (stroke) if they make appropriate lifestyle alterations, use the assistance of others, and have appropriate social interactions.
- Options A, B, and C are not adaptive behaviors.
22. Answer: C. Completing the sentences that the client cannot finish
Clients with aphasia after brain attack (stroke) often fatigue easily and have a short attention span. The nurse would avoid shouting (because the client is not deaf), appearing rushed for a response, and letting family members provide all the responses for the client.
- Options A, B, and D: General guidelines when trying to communicate with the aphasic client include speaking more slowly and allowing adequate response time, listening to and watching attempts to communicate, and trying to put the client at ease with a caring and understanding manner.
23. Answer: C. Omitting doses of medication
Myasthenic crisis often is caused by undermedication and responds to the administration of cholinergic medications, such as neostigmine (Prostigmin) and pyridostigmine (Mestinon).
- Options A and D: Too little exercise and fatty food intake are incorrect. Overexertion and overeating possibly could trigger a myasthenic crisis.
- Option B: Cholinergic crisis (the opposite problem) is caused by excess medication and responds to withholding of medications.
24. Answer: D. Taking medications on time to maintain therapeutic blood levels
Taking medications correctly to maintain blood levels that are not too low or too high is important.
- Option A: Overeating is a cause of exacerbation of symptoms, as is exposure to heat, crowds, erratic sleep habits, and emotional stress.
- Option B: Muscle-strengthening exercises are not helpful and can fatigue the client.
- Option C: Clients with myasthenia gravis are taught to space out activities over the day to conserve energy and restore muscle strength.
25. Answer: A. Unknown, but possibly includes ischemia, viral infection, or an autoimmune problem
Bell’s palsy is a one-sided facial paralysis from compression of the facial nerve. The exact cause is unknown but may include vascular ischemia, infection, exposure to viruses such as herpes zoster or herpes simplex, autoimmune disease, or a combination of these factors.
26. Answer: A. Expose the face to cold and drafts
Exposure to cold or drafts is avoided. Local application of heat to the face may improve blood flow and provide comfort.
- Options B and C: Prevention of muscle atrophy with Bell’s palsy is accomplished with facial massage, facial exercises, and electrical stimulation of the nerves.
27. Answer: D. Respiratory or gastrointestinal infection during the previous month.
Guillain-Barré syndrome is a clinical syndrome of unknown origin that involves cranial and peripheral nerves. Many clients report a history of respiratory or gastrointestinal infection in the 1 to 4 weeks before the onset of neurological deficits. Occasionally, the syndrome can be triggered by vaccination or surgery.
28. Answer: C. Providing information, giving positive feedback, and encouraging relaxation
The client with Guillain-Barré syndrome experiences fear and anxiety from the ascending paralysis and sudden onset of the disorder. The nurse can alleviate these fears by providing accurate information about the client’s condition, giving expert care and positive feedback to the client, and encouraging relaxation and distraction. The family can become involved with selected care activities and provide diversion for the client as well.
29. Answer: D. Provide a clear path for ambulation without obstacles
Cranial nerve II is the optic nerve, which governs vision. The nurse can provide safety for the visually impaired client by clearing the path of obstacles when ambulating.
- Option A: Speaking loudly may help overcome a deficit of cranial nerve VIII (vestibulocochlear). Cranial nerve VII (facial) and IX (glossopharyngeal) control taste from the anterior two-thirds and posterior third of the tongue, respectively.
- Option B: Testing the shower water temperature would be useful if there were an impairment of peripheral nerves.
30. Answer: B. Affect is flat, with periods of emotional lability
The limbic system is responsible for feelings (affect) and emotions.
- Option A: The cerebral hemispheres, with specific regional functions, control orientation.
- Option C: Recall of recent events is controlled by the hippocampus.
- Option D: Calculation ability and knowledge of current events relates to the function of the frontal lobe.
Urban Transformation and Narrative Form
Once the cultural and political beacon of the Arab world, Cairo is now close to becoming the region’s social sump. The population of this megalopolis has swollen to an estimated 17 million, more than half of whom live in the sprawling self-built neighbourhoods and shantytowns that ring the ancient heart of the city and its colonial-era quarters. Since the late 1970s, the regime’s liberalization policy—infitah, or ‘open door’—combined with the collapse of the developmentalist model, a deepening agrarian crisis and accelerated rural–urban migration, have produced vast new zones of what the French call ‘mushroom city’. The Arabic term for them al-madun al-‘ashwa’iyyah might be rendered ‘haphazard city’; the root means ‘chance’. These zones developed after the state had abandoned its role as provider of affordable social housing, leaving the field to the private sector, which concentrated on building middle and upper-middle-class accommodation, yielding higher returns. The poor took the matter into their own hands and, as the saying goes, they did it poorly.
Sixty per cent of Egypt’s urban expansion over the last thirty years has consisted of ‘haphazard dwellings’. These districts can lack the most basic services, including running water and sewage. Their streets are not wide enough for ambulances or fire engines to enter; in places they are even narrower than the alleyways of the ancient medina. The random juxtaposition of buildings has produced a proliferation of cul-de-sacs, while the lack of planning and shortage of land have ensured a complete absence of green spaces or squares. The population density in these areas is extreme, even by slum standards. The over-crowding—seven people per room in some neighbourhoods—has resulted in the collapse of normal social boundaries. With whole families sharing a single room, incest has become widespread. Previously eradicated diseases such as tuberculosis and smallpox are now epidemic.footnote1
The generation that has come of age since 1990 has faced a triple crisis: socio-economic, cultural and political. Egypt’s population has nearly doubled since 1980, reaching 81 million in 2008, yet there has been no commensurate increase in social spending. Illiteracy rates have risen, with schools starved of funds. In the overcrowded universities, underpaid teaching staff augment their income by extorting funds from students for better marks. Other public services—health, social security, infrastructure and transportation—have fared no better. The plundering of the public sector by the kleptocratic political establishment and its cronies has produced a distorted, dinosaur-shaped social structure: a tiny head—the super-rich—presiding over an ever-growing body of poverty and discontent. At the same time, youth unemployment has been running at over 75 per cent.
The cultural realm, meanwhile, has become an arena for bigoted grandstanding, prey to both official censors—the long-serving Minister of Culture, Farouk Husni, showing the way—and self-appointed ones, in parliament and the broadsheet press.footnote2 In the political sphere, the Emergency Law, in place since 1981, has been punctiliously renewed by an almost comically corrupt National Assembly. The notorious Egyptian prison system has been made available to US, British and other European nationals subject to ‘extraordinary rendition’. Since Sadat’s unilateral agreement with Israel in 1979, a widening gulf has grown between popular sentiment and the collusion of the political establishment with the worst US–Israeli atrocities in the region, and its de facto support for their successive wars: invasions of Lebanon, Desert Storm, occupations of Afghanistan and Iraq. Egypt’s marginalization as a regional power has only increased the younger generation’s sense of despondency and humiliation.
It is within this unpropitious context that a striking new wave of young Egyptian writers has appeared. Their work constitutes a radical departure from established norms and offers a series of sharp insights into Arab culture and society. Formally, the texts are marked by an intense self-questioning, and by a narrative and linguistic fragmentation that serves to reflect an irrational, duplicitous reality, in which everything has been debased. The works are short, rarely more than 150 pages, and tend to focus on isolated individuals, in place of the generation-spanning sagas that characterized the realist Egyptian novel. Their narratives are imbued with a sense of crisis, though the world they depict is often treated with derision. The protagonists are trapped in the present, powerless to effect any change. Principal exponents of the new wave would include Samir Gharib ‘Ali, Mahmud Hamid, Wa’il Rajab, Ahmad Gharib, Muntasir al-Qaffash, Atif Sulayman, May al-Tilmisani, Yasser Shaaban, Mustafa Zikri and Nura Amin; but well over a hundred novels of this type have been published to date. From their first appearance around 1995, these writers have been dubbed ‘the 1990s generation’.
The Egyptian literary establishment has been virtually unanimous in condemning these works. Led by the influential Cairo newspaper Al-Akhbar and its weekly book supplement Akhbar al-Adab, its leading lights conducted a sustained campaign against the new writers for a number of years. Ibda’, the major literary monthly, initially refused to publish their work. The young writers were accused of poor education, nihilism, loss of direction, lack of interest in public issues and obsessive concentration on the body; of stylistic poverty, weak grammar, inadequate narrative skills and sheer incomprehensibility. Yet there has been little detailed critical scrutiny of this body of work and the new directions it suggests for contemporary Arabic literature; nor sustained attempts to relate it to the broader social and political context from which it has emerged.footnote3
In what follows I will attempt to illustrate both the range and the commonalities of the new Egyptian novel. It has been argued that this work should be understood outside the limits of genre classification, in terms of a free-floating trans-generic textual space.footnote4 Instead, I will suggest that these new novels do indeed share a set of distinct narrative characteristics; these involve both a rupture with earlier realist and modernist forms, and a transformation of the rules of reference by which the text relates to the extrinsic world. I will suggest that, whatever their actual settings, these works share demonstrable formal homologies with the sprawling slums of Cairo itself.footnote5
A new genre?
The arrival of this new wave in Egyptian fiction was signalled in 1995 by the publication of a seminal collection of short stories, Khutut ‘al Dawa’ir [Lines on Circles], from the small independent publishing house, Dar Sharqiyyat, under the direction of Husni Sulayman.footnote6 The stories, by Wa’il Rajab, Ahmad Faruq, Haytham al-Wirdani and others, shared an affectless style, much closer to spoken than to written Arabic; and a turn away from ‘great issues’ to focus on the everyday, the inconsequential. This was followed in 1996 by Samir Gharib ‘Ali’s first novel Al-Saqqar [The Hawker].footnote7 Its young anti-hero is Yahya, about to be made redundant as the state-owned factory where he is desultorily employed comes under the hammer of privatization. The backdrop is Egypt’s participation in the first Gulf War, and themes of prostitution recur throughout. Yahya’s main preoccupation is his insatiable sexual appetite, though his exploits are tinged with desperation. The short work is densely peopled with uprooted characters—Sudanese and Somalis, political refugees, displaced migrant and white-collar workers—of the same age group. Yahya’s friends are all jobless, with no direction or role in life. The dream of getting a position in an oil-rich state—‘the country that’s named after a family’, as Adam, a young Somali, puts it—has evaporated with the Gulf War and the end of the oil-boom. The sharp, cynical outlook of the male characters is belied by their impotence to bring about any change in their situation.
Three powerful female characters suffer asymmetric fates. Mastura, a village girl, has escaped to Cairo after being marked for an honour crime. Yvonne, an educated middle-class Copt, is waiting, haplessly, for news from the us, where her former lover is trying to get a green card. Melinda is a French researcher, investigating the condition of her sex in Egypt. Setting out to ‘avenge’ the situation of the Arab woman in general, she flaunts her emancipation ‘as if Napoleon’s fleet is at the gates of Alexandria’, according to Yahya, who regales her with scandalous stories about his forbears during their love-making. Yahya moves between Melinda’s luxurious apartment in Zamalek, an affluent quarter of Cairo’s colonial-era ‘second city’, and his marginalized friends in the ‘third city’, al-madun al-‘ashwa’iyyah, offering a sharp contrast between the adjacent worlds. Yet he is also lost and manipulated in the French girl’s spacious flat, while skilfully manoeuvering his way through the maze of poorer streets, where he always knows when to find his friends at home. Mastura shares a room with an old woman, Mama Zizi, on the ground floor of a ‘tall thin house in a narrow alley, with balconies extended like dogs’ tongues, dripping tar’. Mama Zizi does not speak or move, but ‘when I have sex with Mastura, she turns her face to the wall’; outside the broken door, the ground floor neighbours fight and insult each other. The style has a harsh matter-of-factness, endowing—for example—an account of a gang rape at the local police station with the banality of an everyday event: ‘Mastura came back at midday, depressed and exhausted. I tried to make her tell me what had happened, but she said nothing. I left her to sleep, and when she woke up I took her in my arms. She started to cry, and told me . . .’ There is no attempt to dramatize oppressive relations, as if some other outcome could be possible; simply a recording—‘depressed and exhausted’—of their intolerable existence. The use of multiple narrators, repetition—key sentences recurring—and a circular structure establishes a narrative hall of mirrors. The same action will be perceived in three different timeframes: anticipated by a narrator, imagined by another character, or as related at the moment of its occurrence. The effect is to create a powerful sense of inescapability.
Predictably enough, The Hawker came under attack from one of Al-Ahram’s leading columnists, Fahmi Huwaydi, who denounced it as ‘satanic, nihilist writing which ruins everything that is religious—be it Islamic or Christian—and all moral values’, and called for the novel to be banned. The publisher, the General Egyptian Book Organization, withdrew the book, and Ali fled to France.footnote8 His second novel Fir‘awn [The Pharaoh], published in 2000, shifted the setting from the city to the country: rural Minufiyya, the most densely populated province of Lower Egypt.footnote9 A petty thief, the lame ‘Isam, tells the story of his friend and fellow convict Sayyid, whose nickname is the Pharaoh. A village schoolteacher driven from his post by the Mabahith, Egypt’s political police, the Pharaoh leads the life of a fugitive, without having committed any crime; he is constantly on the run, trying to make ends meet and to feed his hungry dependents. The two men ride on the rooftops of trains, the normal means of transport in Egypt for the very poor, until Sayyid dies in his early thirties, falling onto the track. The narrative is fragmented, self-reflexive; the sense of the marginalized’s vulnerability to events is re-enacted in the telling of the tale. Again, the work is crowded with characters, as is Minufiyya itself; rich in what might be called sub-plots, though ironically so, in the absence of a main plotline. It is the picture of a decaying society, reflected in the mirror of its recent past.
A notable characteristic of these works is their concentration on the tangible minutiae of everyday life, on stopped moments of time. Wa’il Rajab’s novel in five chapters, Dakhil Nuqtah Hawa’iyyah [In an Air Bubble], published in 1996, relates the story of three generations by concentrating on ‘the visible part of the iceberg’, without resort to the syllogisms of the family saga.footnote10 Like The Pharaoh, it is largely set in rural Egypt. A few events, undramatic and non-sentimental, are subjected to detailed investigation; from these fragments, the reader reconstructs the family’s trajectory and the fates of its protagonists. The first chapter is entitled ‘The Click of the Camera’, suggesting a technique of neutral representation; yet this is laced with implicit sarcasm, and undercut by terse, economical prose. As in Haykal’s Zaynab, Egypt’s first great novel, published in 1912, it opens with the hero, Muhammad Yusuf, waking at dawn. But we soon realize that his world is the opposite of Zaynab’s, with its open horizons. Muhammad has witnessed the last phase of his father’s dreams of progress, but lives the bitter disappointment of Egypt’s aspirations; his own son predeceases him. The fragmented narrative becomes a reflection of the family’s disintegration, the frustration of its ambitions and of Egyptian hopes for a better future.
Constructed, again, through the complex intersection of different points in time, with multiple narrators, Mahmud Hamid’s Ahlam Muharrama [Forbidden Dreams], published in 2000, contains a striking examination of the return of the traditional strong man, the futuwwa, within the lawless setting of Cairo’s ‘third city’.footnote11 But whereas the old futuwwa was bound by a code of gallantry and magnanimity, the new one is merely a thug, motivated by greed, aggression or religious bigotry. In a chapter entitled ‘Sunday, 17 September 1978’—the day of Egypt’s signing of the Camp David Accords—Farhah is raped by ‘Uways, the futuwwa of the Kafr al-Tamma‘in quarter; as so often in the Arabic novel, the woman stands in for the country. Farhah’s family do not dare to retaliate against the strongman, but instead cleanse their honour by killing the girl herself. The novel’s final chapter is set, once again, against the backdrop of the Gulf War. Faris, a young journalist, spends his final evening before returning to his job in the Gulf with some friends in a Cairo bar. As they say goodbye, aspects of his life flood through his mind and he—‘you’: the narrating ‘I’ here casts the protagonist in the second person—finds himself throwing up below one of the stone lions on Kasr al-Nil Bridge, a relic of British imperialism. A riot-police van pulls up beside them: a congregation of four young men constitutes an illegal gathering under Egypt’s permanent Emergency Law. Faris becomes defiant: ‘you gasp for air and spit and say: let him do whatever he can!’ Another riot-police van arrives, disgorging armed security forces:
You try to insult back those who insult you, but your voice does not come out. The soldiers encircle you all, pointing their guns at your backs, and a high-ranking officer comes out of the car, to be formally greeted by the officer who was beating you. He explains the situation. The high-ranking officer motions to the soldiers. They all start beating you with the butt of their guns. You scream, you all scream and no one . . .
The novel ends. By contrast, the schizoid narrator-protagonist of Ahmad al-‘Ayidi’s An Takun ‘Abbas al-‘Abd [To Be Abbas al-‘Abd] finds himself confronting attempts by other characters to escape from the confines of the narrative altogether.footnote12 Pursuing a blind date—in fact, two: everything in this world is doubled—set up by his friend or alter ego, Abbas al-‘Abd, the narrator finds the instruction ‘call me’ and al-‘Ayidi’s actual cellphone number, 010 64 090 30, scribbled on the Cairo shopping-mall walls. Written in a hybridized street Arabic from which official Egypt has all but disappeared, this mordant work opens with the statement, ‘This is not a novel’. Ultimately, its serial duplicities are grounded in those of the national situation. ‘Don’t believe what you say to the others!’, the narrator-protagonist warns his other self. ‘Egypt had its Generation of the 1967 Defeat. We’re the generation after that—the generation of I’ve-got-nothing-to-lose.’
Fractured, reflexive narratives predominate in the work of the women writers of this new wave. Somaya Ramadan’s remarkable Awraq al-Narjis [Leaves of Narcissus] is one of the very few novels of this cohort to treat the once-classic theme of interaction with the West, a central concern in modern Arabic literature since the 19th century.footnote13 It is also unusual in dealing with the world of the elite, here perceived through a mosaic of splintered identities, when the vast majority of these works deal with the marginalized middle or lower classes. Equally striking, Nura Amin’s first novel, Qamis Wardi Farigh [An Empty Rose Coloured Dress], published in 1997, offered a vivid portrayal of alienated and fragmented selves. Her second, Al-Nass [The Text], written in 1998, which I read in manuscript, was too daring and experimental to find a publisher. Her third, Al-Wafat al-Thaniya Li-Rajul al-Sa’at [The Second Death of the Watch Collector], which appeared in 2001, is one of the most significant novels of the new generation.footnote14 It deals with the disintegration of the Egyptian middle class, in a period when a tiny fraction of it was integrated into the new business elite while the majority was left stranded. The central character is ‘Abd al-Mut’al Amin, the actual name of the author’s father; Nura Amin herself appears as both narrator and daughter, left with only her father’s sad collection of wrist watches after his death. The work is composed of four sections: ‘Hours’ selects five hours from ‘Abd al-Mut’al Amin’s life, the first from 1970, the last from the late 1990s; ‘Minutes’ records the moment of his death and the funeral rites. ‘Seconds’, the longest and most moving section, is Nura’s attempt to reconstruct the trajectory of her father’s career as a building contractor through the details of his daily life: his cars, from the little Egyptian-made Ramses of the 1970s, to the Fiat of the 80s and the Mercedes of the 90s, which breaks down in the desert; Amin spends the cold winter night stranded by the roadside, while the other cars whizz by. The final section, ‘Outside Time’, restores the memory of a patriotic family man, dazzled by the get-rich-quick promises of the infitah era, but destroyed by its corruption. In one scene, Amin drags his daughter up onto the scaffolding of his biggest construction site, a government office block, where they remain trapped: Nura is afraid to move lest she fall, and resentful of her father for bringing her up there as if she were a boy; Amin is angrily arguing with the workers, who are demanding their wages. His refusal to give kickbacks to those in command means that he never gets paid and, like many projects of the infitah period, the building never gets finished.
What common strategies do these varied works deploy? Most obviously, all reject the linear narrative of the realist novel. In its place they offer a juxtaposition of narrative fragments, which co-exist without any controlling hierarchy or unifying plot: Yahya’s sexual conquests in The Hawker; the meaningless assault of the riot police in Forbidden Dreams.footnote15 Secondly, the private is no longer in dialectical tension with the public, mediated by the interior lives of the characters, as in the realist or modernist novel. The two are now in direct antagonism, with the fictional space of interior life correspondingly reduced. At the same time, characters typically experience themselves as isolated within social environments. ‘We are a generation of loners, who live under the same roof as strangers who have similar names to ourselves. This is my father, this is my mother, and those are certainly my brothers and sisters. But I move between them as a foreigner meets other lodgers in the same hotel,’ says the narrator of To Be Abbas al-‘Abd. Thirdly, these narratives do not pose epistemological questions—how to comprehend the world; how to determine one’s stance within it—nor posit any a priori points for departure, as the realist novel did. Instead, the new Arabic novel asks ontological questions: what is this narrative world? What are the modes of existence that the text creates?
Like the modernist novel, this new genre is preoccupied with its own textual deconstruction, seeking to lay bare the internal dynamics of its own artistic process; narrators are fallible, multiple, polyphonic. But unlike most modernist works, these are intransitive narratives, concerned with existence, rather than the effects of deeds. Finally, the erasure of previous novelistic conventions in these texts is not driven by the anticipation of any alternative ordering of reality, but by the desire to strip existing realities of any legitimacy. Stylistically, they re-examine the vocabulary of daily existence in order to demonstrate its emptiness. This has been misinterpreted as a failure to master the intricate rhythms of classical Arabic, with its rhetorical tropes of cohesion and stasis; it should rather be read as an attempt to offer an aesthetics of fragmentation, based on the ruins of what was. These narrative worlds are formed from the wreckage of official literary discourse. Both the narrators and the protagonists of this new genre find themselves in a state of disorientation, trapped within a duplicitous, illogical order. The words at their disposal are no more coherent than their worlds.
Cities and scribes
Intermediations between literary forms and social realities are necessarily subtle, indirect and complex. To suggest a series of homologies between the narrative strategies of the Egyptian novel and Cairo’s changing urban fabric is not, of course, to posit any one-to-one correspondence between them. Yet it is possible to trace an evolving relationship between the modernizing project inaugurated from the 1870s by the Khedive Isma’il, who had been a student in Paris during Haussmann’s re-engineering of the French capital, and the development of the modern Egyptian novel. Isma’il planned and built a new city of wide boulevards and great open thoroughfares, the Opera House and Azbakiyya Park, situated to the north and west of the ancient medina or oriental city, which for thirteen centuries had developed by the slow process of in-building; the old Khalij al-Misry Street forming the boundary between the two. The two cities represented distinct world views and modes of operation, the densely populated medina retaining its traditional order and conservative-religious outlook, while the ‘second city’ proclaimed its rulers’ faith in ideas of progress, modernity and reason. This was also the vision of Rifa‘ah al-Tahtawi and his students, who were deeply influenced by the French Enlightenment. It continued to inform the work of the great modern Egyptian writers, from Muhammad al-Muwailihi, Muhammad Husain Haykal, Taha Husain and Tawfiq al-Hakim to Yahya Haqqi and Naguib Mahfouz. The notion that man can make himself—that ‘we are what we choose to be’, in Mirandola’s words—both individually and collectively, lay at the heart of the modern Arabic novel from the start. The central theme of Haykal’s Zaynab is the tragedy of Zaynab’s failure to be ‘what she wanted to be’. This is also the dilemma of Hamidah in Mahfouz’s Middaq Alley, published in 1947, who also fails to be the modern woman that ‘she willed’.
Many of the pioneers of realist narrative fiction in Egypt—Muwailihi, Hakim, Haqqi, Mahfouz—were born and brought up in the old city, but developed their literary talent in the context of the second; they are a product of the passage between the two worlds, with their contrasting rhythms and visions. The move from medina to the new city was not without its price: in Middaq Alley, Hamidah becomes a prostitute for the British soldiers when she leaves the protective haven of the alley. Despite their striving, most of the protagonists of Mahfouz’s great works of the 1940s and 50s end up where they started, or even worse off than before; the frustration of their hopes forms the substance of the novels. Yet they struggle, nevertheless, to transform their destinies.footnote16 The power of Mahfouz’s Cairo Trilogy [1956–57] lies in part in the battle by its hero Kamal to be what he wants to be, rather than what his father wants for him. The assumptions of modernity inherent in this work enable us to interpret its themes of declining patriarchal authority, the rebellion of the son and the free choices of the grandsons, with their individual yet opposing ideologies. These assumptions would underlie the trajectory of the Arabic novel after Mahfouz, from the work of Yusuf Idris, ‘Abd al-Rahman al-Sharqawi, Fathi Ghanim and Latifa al-Zayyat, through to the ‘1960s generation’.footnote17 I would suggest that there is a homology between the urban structure of the second city, with its wide thoroughfares—in contrast to the narrow alleyways of the medina—and the linear structure of the realist novel, the unfolding of its plot conditional upon wider social relations.
In the early decades of the 20th-century Cairo’s rulers created a further planned zone, Garden City, to the west of Qasr al-Dubarah. It was designed to act as a buffer between the British colonialists and the angry middle classes of the ‘second city’, constantly agitating for their departure. The pattern here was not linear but circular and labyrinthine, although clearly based on modern urbanist principles. Interestingly, this deliberately alienating space did not find its literary expression until the post-independence period, when the contradictions of the national-developmentalist project began to demand more complex and metaphorical forms. This in turn led to the emergence of modernist narrative, with its reflexivity, circular structure and problematization of the narrator’s status. Yet the Egyptian novel of the 1960s, though highly critical of the social reality from which it emerged, was still essentially a narrative of rational enlightenment, in which the idea of progress retained its meaning.footnote18 Again, there was a parallel with urban forms: by the mid-60s, modern Cairo had far outstripped the old medina, the relative decline of the traditional city also corresponding to the weakening influence of its norms under the prevailing secularist outlook of Nasser’s Egypt.
Turn from the modern
This situation had undergone a radical transformation well before the advent of the 1990s generation. Everything in their experience ran counter to notions of the ‘rule of reason’ or the epistemological centrality of man. For the marginalized youth of the 1990s and early 2000s, everyday life has become a process of humiliation and symbolic violence. Prolonged unemployment has created a sense that they are unwanted, that their youth is going to waste; this in turn induces an absurd kind of guilt. Naive faith in a better future is not an option; their starting point is cynicism and frustration. The hopes that previous generations had invested in a collective solution are belied by the corruption that has permeated the very marrow of the national culture. In Egypt, the general conditions of post-modernity—the shift from the verbal to the visual, the predominance of commercialized mass media—have been compounded by state censorship, on the one hand, and a glossy, well-funded Wahhabism, on the other. Sadat paid lip-service to freedom of expression while orchestrating what is known in Arabic literature as manakh tarid, an atmosphere unpropitious to independent cultural praxis which succeeded in pushing many dissenting intellectuals out of the country; subsequent governments have maintained the same traditions. This coincided with the rise of the oil states, Saudi Arabia in particular, to fill the cultural vacuum caused by the ostracization of Egypt after Camp David and the destruction of Beirut by the Lebanese civil war.
The same dynamics have underlain the rise of the al-madinah al-‘ashwa’iyyah. Cairo’s ‘third city’ developed randomly, without any overall plan, as a short-sighted reaction to the housing crisis. It reflected a terminal loss of faith in the state’s ability to fulfil its citizens’ basic needs. It was born out of a situation in which the immediate supersedes—indeed, negates—the strategic and long-term. Hence the irrationality of the ‘third city’, full of impasses and dead-ends. The two vast belts of semi-rural dwellings represent a regression from the urban planning of the ‘second city’, though without any of the bucolic beauty of the rural scene; an aimless return to pre-modern forms in housing, as well as in socio-political relations. The chaotic development of the ‘third city’ went hand in hand with the recoil from modernity and the return to traditional, even fundamentalist, stances, as underpinnings for the national ideology; with the deterioration of Egypt’s broader political culture and the emergence of an inverted scale of social and national values.
It is now possible to trace a series of homologies between the formal characteristics of the new Egyptian novel and the haphazard nature of the ‘third city’, as well as the broader impasse this represents. The first homology lies in the paradoxicality of these texts, which ask the reader to treat them as novels while at the same time confounding the aesthetic and generic expectations to which the form gives rise. These works demonstrate their awareness of the deep structures of Arab humiliation and the troubled social context within which they are produced; they assert the importance of their autonomy within it, yet they refuse to waste any energy in resolving its contradictions at the symbolic level. Paradoxicality is not posited as a topic for treatment, but rather as the ontological condition of the text itself. Hence the homology with the power structures of the Arab world, where state authority appears not as a real force, with a free will and independent project, capable of challenging the ‘other’ according to a national logic, but as a travesty of power, a scarecrow. Aware of its lack of legitimacy, it constantly attempts to gloss this over by exaggerating its authority, oscillating between an illusion of power and a sense of inferiority. Over the last three decades, new levels of domestic coercion and repression have been matched by the unprecedented subservience of the Egyptian political establishment to Washington’s diktats, without even enjoying its foreign master’s respect.
The writing ‘I’ of the new novel is acutely aware of its own helplessness, of being trapped in the present with all horizons closed. Its only escape strategy is to establish a narrative world that is ontologically similar to the actually existing world, but which permits a dialogical interaction with it and so constitutes a rupture—a rip in the closed horizon. The new text does not pose an alternative logic to that of existing reality, but attempts to interrupt its cohesion, creating gaps and discontinuities for the reader to fill. One corollary of this is the use of narrative fragments and juxtapositions, which refuse any all-embracing totality. Another is the treatment of plot: eliminating the middle, the central concern of the conventional novel, the new narrative consists of beginnings and ends, undermining any syllogistic progression. This creates a further disturbance within the narrative world, disorienting the reader’s guiding compass and intensifying the ontological dilemma.
Such a strategy also signals, of course, recognition of the narrator’s diminished authority; the appeal to the reader—‘call me’—knows it is unlikely to receive an answer. If the writing ‘I’ is no longer able to secure its position as the controlling consciousness of the text, and the author no longer has confidence in his or her narrator, it is because both have become variations of the subaltern self, inhabiting a subaltern country that has lost its independence, its dignity and its regional role. This creates a crisis in which the ‘I’ is unable to identify with itself, let alone with an ‘other’ or a cause. Yet it also offers a narrative capable of relating external reality ‘from the inside’, as if an integral part of it, while at the same time seeing it from the outside, the viewpoint of the marginalized, appropriate to its own insignificance. The new Arabic novel is immersed in the most minute details of its surrounding social reality, yet it is unable to accept it. The ‘novel of the closed horizon’ is the genre of an intolerable condition. | <urn:uuid:3b02e323-3b40-484e-8409-30c4010783d8> | CC-MAIN-2019-47 | https://newleftreview.org/issues/II64/articles/sabry-hafez-the-new-egyptian-novel | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668594.81/warc/CC-MAIN-20191115065903-20191115093903-00300.warc.gz | en | 0.955924 | 7,421 | 2.640625 | 3 |
Oilfield Geomechanics has a broad range of definitions, and depending on who you ask you may get a different answer. To this author, in its simplest form, it encompasses the study of how stresses and strains within the earth affect what we drill into and explore for. The magnitude and direction of stresses and how they affect the rock properties in a region, a field, and a wellbore has a massive impact and control on what we do in unconventional resource exploration and exploitation. Unconventional in this case refers to tight sands and shales containing oil or gas that require stimulation to produce at economic rates. This paper will describe how geomechanics influences wellbore stability, reservoir properties, and hydraulic stimulations. Through this description of geomechanics I hope to convince geophysicists that there is not so large a gap between the engineers we deal with and the seismic data we look at every day.
Applied geomechanics deals with the measurement and estimation of stresses within the earth, and how those stresses apply to oilfield operations. Throughout this paper we will be discussing stresses within the earth, and for convenience we will use the principal stress notation where the overburden or vertical stress is denoted σv, the maximum horizontal stress as σH, and the minimum horizontal stress as σh. Stress is a force per unit area, and if we visualize a point within the earth as a cube it can be visualized as in Figure 1. This consists of three normal stresses and six shear stresses. A simple rotation can be applied to this tensor which results in the shear stresses going to zero leaving only the principal stresses shown in Figure 2. This assumes that the overburden is vertical and horizontal stresses are normal to the vertical stress (Anderson, 1951). This assumption holds true in most areas, except near large geologic structures such as faults, salt domes, and igneous intrusions where more complicated stress models are needed to describe the stresses within the earth.
When we look at the simplified result of this diagram (rotated so no shear stresses exist), we see that we are left with the weight of the overlying rock (the overburden) and two horizontal stresses as shown in Figure 2. Now that we have defined stresses, we can get into the explanation of effective stresses. Within the earth, a formation’s strength and the fluids it contains dictates how stresses act and distribute within that formation. As a result, the pore pressure and rock properties of each formation need to be calculated or estimated to gain the full understanding of how stress acts within the earth. The pore pressure within a formation can help support the load that it maintains, and this needs to be taken into account when we estimate stresses. Terzaghi first described this relationship in 1943 with Equation 1 below:
Equation 1: Terzhagis equation where σ’ = Effective stress, σ = total stress, and Pp = pore pressure.
Within the oil and gas industry, rock properties are usually described in terms of Poisson’s ratio, Young’s modulus, bulk modulus, and shear modulus. These moduli are calculated from the P and S wave logs in wellbores and interestingly the dynamic moduli can also be estimated from seismic data (more on that link later). In addition to this, for wellbore stability purposes, we try to measure from core or estimate using empirical relations the Unconfined Compressive Strength (UCS). Full explanations of all of these moduli aren’t necessary in this paper, but those interested can find complete details in Mavko et al; 1999 and Jaeger and Cook, 2007.
It is not my intent to show how we can estimate all of these properties with well log and seismic data, but rather the impact that the results of this modeling, combined with or derived from seismic data, can have in the unconventional. Anyone interested in the construction of a geomechanical model will find these references of interest: Zoback et. al., 1985; Moos and Zoback, 1990; Sayers, 2010; Jaeger and Cook, 2007; Barton et al; 2009. It will be shown in the next few sections that the estimation of the direction and magnitude of these stresses along with the associated geologic rock properties provided by seismic can be extremely useful.
Stress directions and magnitudes; impact on the wellbore and completions:
Before we put a hole in the earth, it is in a state of stress equilibrium. The borehole that we are drilling disrupts that equilibrium and causes stress to redistribute around the borehole. We have mud weight to balance this dis-equilibrium, but commonly this is not enough to stop breakout or wellbore instability completely. By looking at the damage we cause in the borehole while drilling (via drilling, tripping in or out, surging and swabbing, etc.) we can estimate the stress directions in the formations we drill through and start to constrain the magnitudes of stresses with other drilling and completion data from the area (Figure 3). If we look at an image log as shown in Figure 4, we can see that breakout has a distinct appearance (caliper logs can be used for this purpose as well). Breakouts occur in the direction of minimum horizontal stress, as the maximum compression (where breakout occurs) in the wellbore happens 90 degrees from the maximum horizontal stress (in most cases). Because of this relationship we can estimate the σH direction.
Once the directions of stress are known, it is possible to make estimates of stress magnitudes. The overburden is usually quite easy to estimate, as we almost always have density logs in the area. Simply integrating the density of the overlying rock (and water if we are in an offshore setting) and multiplying by the acceleration due to gravity will give the overburden stress. The minimum horizontal stress can be estimated using leak off tests, offset completion data, or mini fracture tests within the wellbore. The maximum horizontal stress is always one of the largest unknowns in the world of geomechanics as there is no direct way to measure it. This can be constrained either by using advanced sonic measurements (Sayers, 2010) or by using the severity of wellbore breakouts (Moos and Zoback, 1990; Barton et al; 2009). The magnitudes of the horizontal stresses are of the utmost importance as the magnitudes with depth define the type of faulting regime that the formation of interest lies in. Figure 5 shows the Anderson fault classification based on relative magnitudes of principal stresses, while Figure 6 shows data from the publically available World Stress Map (Heidbach et al; 2008).
All of the information contained in this map is extremely useful for unconventional oil and gas exploration. Almost all horizontal wells completed in unconventional resource development are drilled in the direction of minimum horizontal stress. This is done to contact and prop open the largest amount of reservoir by making fractures perpendicular to the horizontal well as shown in the left of Figure 7. This also highlights how the maximum horizontal stress direction controls the direction of stimulation propagation in high horizontal stress ratio environments (strikeslip or thrust fault regime). In the left of Figure 7, which is in western Canada, the regional maximum horizontal stress direction is 45° east of north. This is also the direction in which the stimulation has grown according to the microseismic. The magnitudes of these stresses and their local variations are often overlooked, but are just as important. Take, for example, the Fort Worth Basin; which is the birthplace of North American shale gas. This basin has been used as an analogue for almost every new shale play in the past 10 years due to the large amount of publicly available data. However, the stress state in the Fort Worth Basin is one that is, for the most part, in a normal faulting regime (there are some exceptions to this); and the magnitudes of the horizontal stresses are nearly equal. Where the horizontal stress ratio is high, as in the Montney, induced fractures grow in a very linear fashion from the perforations out into the formation (left side of Figure 7 and Figure 8). In contrast, where the horizontal stress ratio is low, as in the Barnett, induced fractures are able to grow in a much more complex pattern using more of the pre-existing natural fracture network (right side of Figure 7 and Figure 9).
Looking at Figure 6 we can see that most other areas of emerging and abundant shale gas in North America are NOT in a normal faulting regime. Indeed, the basins that contain the Marcellus, Horn River, Bakken, Cardium, Monterey, and the Montney are almost all strike-slip or reverse, commonly with high horizontal stress ratios. Figures 7-9 all point to the fact that stress direction and magnitude matter in a way not fully appreciated by many. High horizontal stress anisotropy does not allow the growth of induced complex fracture networks that permit maximum reservoir contact. The growth of hydraulic stimulations is affected by many things outside of the completion design; pre-existing planes of weakness (fractures and especially faults), rock fabric and type, rock properties, etc. It is also important to note that while complex fracture network growth can be seen in highly compressive environments, it is usually the exception, not the rule. I will leave this topic with a quote from a paper written by King et al; 2008:
“Development of both primary and secondary fractures is possible when the maximum and minimum stresses are relatively similar. When tectonic stresses are highly dissimilar, switching fracture directions will be difficult and complex fracture development improbable.”
On top of the control on hydraulic stimulations, ratios of stresses within the earth control important details of how the wellbore breaks out in both the vertical and horizontal well section. This phenomenon is especially important, keeping in mind that these wells are nothing without a completion, and the cement job can greatly affect a stimulations effectiveness. In most cases people visualize breakout (or compressional failure) occurring (Figures 10 and 11) in a horizontal well on the sides of the well due to the weight of the overlying rock. This ovalization of the wellbore is problematic but easier to clean than the alternative. In highly compressive environments like strike-slip or thrust fault regimes, this breakout occurs not at the sides but at the top and bottom of the wellbore. This occurs wherever at least one of the horizontal stresses is greater than the vertical stress, i.e. in NW Alberta, NE British Columbia, and Appalachia (see Figure 6). This creates many more operational problems from stuck pipe, hole cleaning, well logging, and cement jobs.
When drilling these unconventional wells, the norm is to drill with as low a mud weight as possible to increase the rate of penetration and therefore speed up drilling and reduce the amount of rig time paid. This approach is usually not conducive to reducing breakout, because once it occurs; it tends to be self sustaining (see Figure 3). As the drill pipe is pulled out and put into the hole the pressure changes, this is known as surge and swab. When this occurs in a highly compressive environment, the potential for stuck pipe increases dramatically as rock falling from above has a much greater chance of causing tight spots and stuck pipe. If we knew that this compressive environment existed pre-drill, it would allow us to come in at a more appropriate mud weight, and therefore reduce or avoid altogether the severe breakouts that lead to serious operational setbacks.
In addition to regional stress directions and magnitudes, the properties of the formation itself are of great importance. This importance is not limited to building a complete geomechanical model; but also to assess our ability to break the rock with a hydraulic fracture, the formation’s strength when we drill through it, and how these rock properties relate to and control seismic wave propagation. I will concentrate primarily on how properties that we can derive from seismic using currently available techniques can be used for stimulation modeling and proppant selection. To move forward, a proper definition of the terms we are using is needed. This is followed by an introduction to how the rock properties as defined are used in drilling and completions engineering.
If we apply an increasing vertical load to a core plug, and leave the sides unconfined, the load will deform until it fails at the uniaxial or ‘unconfined’ compressive strength. This parameter is known as the UCS and is sometimes denoted as the rock’s “strength” by drilling and bit companies. This failure cannot be recovered (we broke the rock) and is therefore inelastic. The remainder of the terms that we are dealing with will be in the realm of the elastic, i.e. the loads that we apply are theoretically recoverable and do not go into the realm of plastic (or unrecoverable) deformation. More background on this can be found in Jaeger and Cook, 2007. The definitions below, summarized in Figure 12, are from Batzle et al; 2006.
For an isotropic and homogenous medium, we apply a vertical deformation (ΔL) associated with the vertical stress. Normalizing this deformation by the original length of the sample, L, gives the vertical strain εzz. By definition, Young’s modulus, E, is the ratio of applied stress (σzz) to this strain. Young’s modulus is therefore in units of stress (MPa, psi, etc.). This same stress will generally result in a lateral or horizontal deformation, ΔW. The lateral strain is then defined like the vertical strain and is denoted εyy. The relationship of these strains (vertical to horizontal) is known as Poisson’s ratio. The negative sign is attached because the signs of the deformations are opposite (vertical negative, horizontal positive).
In the oilfield, these rock properties can be derived from the P and S waves of modern sonic tools. Properties derived from logs (and seismic) are known as dynamic moduli, meaning that they are measured with sonic waves and need to be calibrated to laboratory measurements (static). There is much debate about the validity and the problems of up-scaling when moving from core scale, to logs, and then to seismic. This is because we tend to sample cores that are competent and un-fractured; and also because of the dispersion that occurs due to the different lengths of measurements used to measure these dissimilar scales. We know that in almost all cases fractures play a part at some scale and that dispersion due to measurement length always affects our accuracy. These issues aside, in the oilfield Young’s modulus (E) and Poisson’s ratio (ν) have become ubiquitous in both geomechanics and in engineering. Because of this they should also become common place in the geoscience world.
In hydraulic stimulation modeling, E is used as a proxy for how wide a crack can be opened in a formation, and is therefore used to pick perforation locations in both vertical wells, and when available, horizontal wells. A lower E means that a wider crack can be opened and therefore flow increased during the stimulation and more or sometimes larger proppant used. Since we make almost all of the permeability we will ever have in tight sand or shale oil and gas, this is an important parameter. At depths where the overburden is not the least stress, the minimum horizontal stress can be calculated using the uniaxial strain equation (Equation 2). This equation assumes that the reservoir is linear, homogenous, and that there is no tectonic strain caused by the tectonic component of stress (Hubbert and Willis, 1956 and Teufel, 1996). In most situations, tectonic stress and the resulting strains will be appreciable. Most stimulation simulators use modifications of this equation to account for tectonic effects, but these are in most cases poorly constrained and often just calibrations to existing stress data (minifracture tests, leak off tests, etc.)(Blanton and Olsen, 1999).
Equation 2: Uniaxial elastic strain model where σh = minimum horizontal stress, σv = overburden, ν = Poisson’s ratio, α = Biots constant, and Pp = pore pressure.
In addition to the importance of E and ν for stimulation modeling, petrophysicists have recently begun using these two well log derived rock properties as a proxy for rock ‘brittleness’ or ‘ductility’ (Rickman et al; 2008). A rock’s brittleness or ductility is influenced by many parameters outside of E and ν: the grain size and distribution within the rock, mineralogical content, and especially the fracturing of the formation all influence how easily a rock will break during stimulation. It is no secret though that our ability and time to measure these things are always limited, especially in the unconventional where turnaround from logging to completion varies from days to weeks. This usually doesn’t allow for in-depth lab testing to be performed. We are left then using E and ν for local and regional estimations of a formation’s brittleness, hopefully calibrated in some way to regional core measurements. This allows us to have an idea of the formation’s mechanical properties prior to perforating and stimulating. This concept, from Rickman et al; 2008, is shown in Figure 14.
Seismic rock properties and stress estimation:
The use of rock property estimation coupled with an estimate of the natural fracture density, either from AVO methods or seismic attributes, has become the method of choice for geophysicists searching for ‘sweet spots’ in shale basins (Goodway et al; 2006 and Changan et al; 2009). It has only been recognized recently, however, that we can use conventional P-wave (making certain assumptions) and multi-component seismic (Cary et al; 2010) to gain estimates of the horizontal stress directions or their ratios in the subsurface. In essence, if we have dynamic estimates of the E, ν, horizontal stress ratios, and accurate formation velocities, we have an initial estimate of the geomechanical model before the first horizontal wells are drilled.
Gray et al; 2010 outlined just how this can be done using conventional P-wave seismic and AVO lamda/mu/rho (LMR) analysis coupled with assumptions to ascertain horizontal stress ratios. The method outlined by Gray allows for the estimation of differential horizontal stress ratios (Figure 15) and dynamic rock properties from 3D P-wave seismic within one 3D seismic volume. Given what we know about unconventional geomechanics discussed above, this information away from the wellbore allows for much more advanced analysis pre-drill.
In addition to this Cary et. al. have recently shown that the difference in converted wave fast and slow velocities in the near surface can be indicative of differences in horizontal stresses as they deviate from the regional stress (Figure 16). Shear wave splitting is usually attributed to vertical cracks or fractures in the subsurface at depth. However, this splitting is observed in compliant rocks in the near surface where fracturing is known to be extremely minimal from regional core observations. Most of the fast shear (S1) direction is in the regional direction of maximum horizontal stress as derived from the World Stress Map (Heidbach et al; 2008).
We now have the ability to ascertain fracturing or stress state in a reservoir pre-drill, this is in addition to our ability to derive rock properties from conventional AVO or AVAz. Given what has been outlined in this paper, it is evident that P-wave and multi-component seismic can provide insights into geomechanics and engineering problems that are abundant in unconventional resource exploration. A 3D seismic survey (either P-wave or multi-component) can give initial estimates of fracturing or stress state, this can be further constrained when combined with local well data if it is available. In addition to this, estimates of the dynamic Young’s modulus, Poisson’s ratio, and density are possible if the data has adequate offset and azimuth coverage. When we combine this with regional well data of drilling events, logs, and completions we are well on our way to a geomechanical model before the first horizontal wells are drilled.
Geomechanics is not just an important parameter in analyzing an unconventional reservoir; it is perhaps the most important control on how our tight/shale reservoirs are developed. It dictates how our wellbores breakout and fail, the direction and areal extent of our hydraulic stimulations, the size and strength of proppant, how much that proppant could embed over time with pore pressure depletion, and a reservoirs mechanical characterization. Engineers use these parameters in their calculations and modeling, but ultimately the quantification of regional stresses and rock properties comes from geoscience data. Unconventional resource plays demand integration across teams and geomechanics bridges the gap from geology and geophysics to engineering in a way that is only now becoming more widely appreciated. Seismic surveys contain a large amount of data that can be utilized for geomechanical and engineering purposes before the first pads are ever drilled. All we have to do is use them.
This work would not have been possible without the help of Tom Bratton, Tom Davis, and Shannon Higgins who initially started me down the path of seismic and geomechanics. In addition, John Logel, Eric Andersen, Geoff Rait, Dave Gray, Jared Atkinson, and Rob Kendall have all provided help and insight into these issues over the past 2 years.
About the Author(s)
Kurt Wikel graduated with a B. Sc in Geology from the University of Montana. He received his M.Sc. from the Colorado School of Mines in Geophysics with a minor in Petroleum Engineering in 2008. A graduate of the Reservoir Characterization Project, he worked with Schlumberger DCS Denver on geomechanics applied to time lapse seismic data. He worked for Talisman Energy for 2.5 years in exploration geophysics and was the International Exploration Geomechanics Specialist until July 2010. Kurt is currently working on Subsurface Geophysics and Geomechanics for Petrobank Energy and Resources in Calgary, AB Canada.
Anderson, E.M. (1951) The Dynamics of Faulting and Dyke Formation with Applications to Britain. 2nd ed., Oliver & Boyd, Edinburgh.
Atkinson, J. (2010) “Multi-component time-lapse monitoring of two hydraulic fracture stimulations in an unconventional reservoir, Pouce Coupe field, Canada”. Masters Thesis. Colorado School of Mines, Department of Geophysics; Reservoir Characterization Project.
Batzle,M., Han, D-H, Hofmann, R. “Chapter 13: Rock Properties”. The Petroleum Engineering Handbook, Volume 1: General Engineering. Lake, L.W. Editor. SPE, 2006.
Barton, C., Moos, D., Tezuka, K. (2009) “Geomechanical wellbore imaging: Implications for reservoir fracture permeability”. AAPG Bulletin, v.93, no.11, November 1999. P 1551-1569.
Blanton, T.L. and Olsen, J.E., (1999) “Stress Magnitudes from Logs: Effects of Tectonic Strains and Temperature”. SPE 54653.
Cary, P., Li, X., Popov, G., and Zhang, C. (2010) “Shear-wave splitting in compliant rocks”. SEG The Leading Edge. October 2010. P 1278-1285.
Changan, D. et. al. (2009) “A workflow for integrated Barnett shale gas reservoir modeling and simulation”. SPE 122934.
Goodway, W., Varsek, J., and Abaco, C. (2006) “Practical applications of P-wave AVO for unconventional gas resource plays-1 and 2”. CSEG RECORDER. 2006 Special Edition. P 90-95.
Gray, D., Anderson, P., Logel, J., Delbecq, F., and Schmidt, D. (2010) “Estimating insitu, anisotropic, principal stresses from 3D seismic”. 72nd Mtg.: Eur. Assn. Geosci. Eng. Extended Abstracts.
Heidbach, O., Tingay, M., Barth, A., Reinecker, J., Kurfeß, D., and Müller, B. (2008): The 2008 release of the World Stress Map (available online at www.world-stressmap.org).
Hubbert, M.K., and Willis, D.G., (1956) “Mechanics of Hydraulic Fracturing”. Petroleum Branch Fall Meeting, Los Angeles, CA. October 14-17, 1956.
Jaeger,J . C., and N. G. W. Cook, “Fundamentals of Rock Mechanics”, 4th ed., 475 pp, Blackwell, Oxford, 2007.
King, G.E., Haile, L., Shuss, J., Dobkins, T.A. (2008) “Increasing fracture path complexity and controlling downward fracture growth in the Barnett shale”. SPE 119896.
Mavko, G., Mukerji, T., and Dvorkin, J. ”The Rock Physics Handbook: Tools for Seismic Analysis in Porous Media”. Cambridge University Press, 1999.
Moos, D. and Zoback, M.D. (1990) “Utilization of observations of wellbore failure to constrain the orientation and magnitude of crustal stresses: Application to continental, deep sea drilling project, and ocean drilling program boreholes”. Journal of Geophysical Research, vol. 95, no. B6, 9305-9325, June 10.
Rickman, R., Mullen, M., Petre, E., Grieser, B., and Kundert, D. (2008) ”A Practical Use of Shale Petrophysics for Stimulation Design Optimization: All Shale Plays Are Not Clones of the Barnett Shale”. SPE 115258.
Sayers, C. (2010) ”Geophysics under stress: geomechanical applications of seismic and borehole acoustic waves”. EAGE/SEG 2010 Distinguished Instructor Short Course. DISC series no. 13.
Terzaghi, K. “Theoretical Soil Mechanics”. John Wiley and Sons, 1943.
Teufel, L.W., (1996) “Influence of Pore Pressure and Production-Induced Changes in Pore Pressure on In-Situ Stress”. Albuquerque, New Mexico, Report to Sandia National Laboratories.
Zoback, M.D., Moos, D., and Mastin, L. (1985) “Wellbore breakouts and in-situ stress”, Journal of Geophysical Research, vol. 90, no. B7, 5523-5530, June 10. | <urn:uuid:a5373454-55c2-46d5-b7ed-8266a6641903> | CC-MAIN-2019-47 | https://www.csegrecorder.com/articles/view/geomechanics-bridging-the-gap-from-geophysics-to-engineering | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496664439.7/warc/CC-MAIN-20191111214811-20191112002811-00221.warc.gz | en | 0.904479 | 5,720 | 3.34375 | 3 |
FOOD AND AGRICULTURE ORGANIZATION OF THE UNITED NATIONS
WORLD HEALTH ORGANIZATION
THE UNITED NATIONS UNIVERSITY
ESN: FAO/WHO/UNU
Provisional Agenda Item 4.1.3
Joint FAO/WHO/UNU Expert Consultation on Energy and Protein Requirements
Rome, 5 to 17 October 1981
THE RELATIONSHIP BETWEEN FOOD COMPOSITION AND AVAILABLE ENERGY
A.R.C. Food Research Institute
1. This paper examines two conventions that are important in the relationship between food composition and available energy. The first concerns the calculation of ‘protein’ values for foods from total nitrogen values, and the second the calculation of the energy value of foods. The second part incorporates the conclusions drawn in the two accompanying working papers on Energy Absorption and Protein Digestion and Absorption.
2.1 The Calculation of the Protein Content of Foods
The convention used in food composition where the total nitrogen in a food (usually based on a Kjeldahl-type of measurement) is multiplied by a factor to calculate ‘protein’ has a long history. It dates from the earliest studies of the composition of foods (McCollum, 1957) when the nature of food components was poorly understood.
It has been retained as a convention based on the fact that most proteins contain around 16% of nitrogen, and that therefore total N × 6.25 will give a reasonable estimate of protein content. In strict terms this should always be cited as ‘Crude Protein’ and ideally accompanied by the actual factor used.
The fact that some proteins contain more or less nitrogen has also been known for some time (Widdowson, 1955), and Jones (1941) measured the nitrogen content of a range of isolated proteins and calculated the most appropriate conversion factors. This approach has been extended at various times since Jones' original work, and the range of nitrogen conversion factors for specific foods has been used in US Department of Agriculture publications and FAO/WHO publications (FAO/WHO, 1973) and the British Food Composition Tables (McCance and Widdowson, 1960; Paul and Southgate, 1978) in a slightly simplified way. The conversion factors used in Paul and Southgate (1978) are shown in Table 1 (based on FAO/WHO, 1973).
Table 1. Factors for converting total nitrogen in foods to protein

| Food | Factor per g N | Food | Factor per g N |
| Wheat: flours (except wholemeal) | 5.70 | Peanuts, Brazil nuts | 5.41 |
| Wheat: macaroni | 5.70 | All other nuts | 5.30 |
| Rice | 5.95 | Milk and milk products | 6.38 |
| Barley, oats, rye | 5.83 | All other foods | 6.25 |
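To illustrate the convention, the short sketch below (in Python, purely for illustration) applies the Table 1 factors to a measured total-nitrogen value; the food labels, the function name and the example figure of 2.0 g N per 100 g are my own choices and are not taken from the sources cited.

```python
# Crude protein = total (Kjeldahl) nitrogen x conversion factor.
# Factors per g N are those of Table 1; the food labels are simplified keys chosen here.
NITROGEN_FACTORS = {
    "wheat flour (except wholemeal)": 5.70,
    "macaroni": 5.70,
    "rice": 5.95,
    "barley, oats, rye": 5.83,
    "peanuts, brazil nuts": 5.41,
    "other nuts": 5.30,
    "milk and milk products": 6.38,
}
DEFAULT_FACTOR = 6.25  # "all other foods"

def crude_protein(total_n_g_per_100g, food):
    """Return 'crude protein' (g per 100 g) from total nitrogen (g per 100 g)."""
    factor = NITROGEN_FACTORS.get(food, DEFAULT_FACTOR)
    return total_n_g_per_100g * factor

# Example: a white flour containing 2.0 g N per 100 g
print(crude_protein(2.0, "wheat flour (except wholemeal)"))  # 11.4 g per 100 g
```

As the text emphasises, the result is 'Crude Protein' in the conventional sense only, not protein in the biochemical sense.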
2.2 Calculation from Non-protein Nitrogen
It is known that many, if not all, foods contain non-protein nitrogen (notably meat, fish and vegetables), and it has been suggested that the protein value of foods should be calculated from a nitrogen value corrected for this non-protein nitrogen. As much of this non-protein nitrogen is often amino-acids, it seemed more logical in the context of food composition tables to calculate ‘protein’ content from amino-acid nitrogen (including protein and free amino acids and peptides). This approach has much to recommend it (Southgate, 1974), but the data on food composition giving the required information are far from complete and the approach could not be applied at the present time.
2.3 Calculation from Amino-acid Composition
This has led to a further suggestion that the conversion factors used to convert the total nitrogen values to ‘protein’ should be derived from amino-acid composition values. This also has much to recommend it on theoretical grounds.
At the present time there is sufficient amino-acid data to attempt these calculations, but the apparent improvement in the accuracy of ‘protein’ values in foods would, I think, be spurious for a number of reasons which relate primarily to the technical problems still associated with the measurement of amino-acids in foods. These relate to hydrolytic losses (and incomplete hydrolysis) of some amino-acids, the need to measure tryptophan, problems in measuring and assigning the ammonia produced on hydrolysis, the proportion of asparagine and glutamine present in the protein, and the recovery of nitrogen over the whole process of analysis. In our review of the literature of amino-acid values (Paul and Southgate, 1978), we frequently found the papers reporting these analyses to be deficient in the detail required to calculate nitrogen conversion factors from amino-acid composition data that would be intrinsically more accurate than the present system.
2.4 ‘Protein’ in Studies of Digestibility and Biological Value
Most studies of this kind are based on total-nitrogen measurements in foods, faeces, urine and carcases, and it is very rare for these studies to be interpretable in any other way than by calculating protein in the conventional way. If better (that is, specific) procedures for measuring protein became available, then future studies could be based on such a system.
2.5.1 The convention of calculating protein by multiplying total-N values by a factor should be retained. It must however be emphasised that this is a convention and does not give values for protein in the biochemical sense. [There is at present no method that is specific for protein in general; conventional colorimetric procedures depend on specific amino-acid residues, and dye-binding methods do not give identical binding with different proteins - all procedures are calibrated against a specific protein]
2.5.2 The calculation can be improved by calculating from protein-nitrogen values [it is however difficult to define procedures for measuring these values] or preferably from a nutritional viewpoint, from amino-acid nitrogen (protein amino-acids + free amino acids and peptides).
2.5.3 Where sound amino-acid composition values are available, amino-acid nitrogen to protein conversion factors could be used in the future.
2.5.4 At present all digestibility and biological value studies are based on this convention and no radical misinterpretation of the data available is likely to arise because of the use of the convention.
2.6.1 The conversion factors used in the 1973 report are still valid.
2.6.2 Compilers of Food Composition Tables and Data Bases should include total-nitrogen values and state very precisely which conversion factors have been used to calculate ‘protein’.
2.6.3 Non-protein nitrogen values for foodstuffs would be a useful addition in food compositional compilations.
2.6.4 In the future it is possible that calculation from amino-acid data would be a better approach to measuring protein in the biochemical sense in foods.
3. Calculation of the Available Energy of Foods
Most calculations of energy value are based on the Atwater system (Merrill and Watt, 1955) or derivatives of this system (Widdowson, 1955; Southgate and Durnin, 1970; Paul and Southgate, 1978). The system was developed largely from the experimental studies of Atwater and his colleagues in the later part of the last century and the early years of the present one. Its use has frequently been the cause of dispute (Maynard, 1944; FAO, 1947; Hollingsworth, 1955; Widdowson, 1955), but no real alternatives have been proposed. As with the calculation of protein from total nitrogen, the Atwater system is a convention and its limitations can be seen in its derivation.
3.1 Derivation of the Atwater System
Available energy (as used by Atwater) is equivalent to the modern usage of the term Metabolisable Energy (ME).
Metabolisable Energy = (Gross Energy in Food) - (Energy lost in Faeces, Urine, Secretions and Gases)
In most human work losses in secretions and gases are ignored. The Gross Energy of a food, as measured by bomb calorimetry, is equal to the sum of the heats of combustion of its components - protein (GE_P), fat (GE_F) and carbohydrate by difference (GE_CHO) in the proximate system:

GE = GE_P + GE_F + GE_CHO
Atwater considered the energy value of faeces in the same way.
By measuring ‘coefficients of availability’ or in modern terminology ‘apparent digestibility’, Atwater derived a system for calculating faecal energy losses.
Digestible Energy = GE_P(D_P) + GE_F(D_F) + GE_CHO(D_CHO)

where D_P, D_F and D_CHO are respectively the apparent digestibility coefficients of protein, fat and carbohydrate, each calculated as (intake - faecal excretion)/intake for the constituent in question.
Urinary losses were calculated from the energy to nitrogen ratio in urine. Experimentally this was 7.9 kcal/g urinary N, and thus his equation for metabolisable energy became:

ME = GE_P(D_P) + GE_F(D_F) + GE_CHO(D_CHO) - 7.9 × (urinary N)
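The calculation can be set out as a short sketch (Python, illustrative only). The gross heats of combustion, digestibility coefficients and example intakes used below are placeholder values chosen for the illustration; they are not the weighted values derived by Atwater, and the nitrogen content of protein is taken as 16% purely by convention.

```python
# Sketch of the Atwater-type metabolisable energy (ME) calculation.
# ME = digestible energy - urinary energy loss (subject assumed to be in N balance).
# All numerical inputs are illustrative placeholders, not Atwater's weighted values.

def metabolisable_energy(protein_g, fat_g, cho_g,
                         ge=(5.65, 9.40, 4.15),             # gross heats of combustion, kcal/g (assumed)
                         digestibility=(0.92, 0.95, 0.97),  # apparent digestibility coefficients (assumed)
                         urinary_kcal_per_g_n=7.9,          # Atwater's figure for urine
                         n_per_g_protein=0.16):             # g N per g protein (1/6.25)
    ge_p, ge_f, ge_cho = ge
    d_p, d_f, d_cho = digestibility
    digestible = (protein_g * ge_p * d_p
                  + fat_g * ge_f * d_f
                  + cho_g * ge_cho * d_cho)
    urinary_loss = protein_g * n_per_g_protein * urinary_kcal_per_g_n
    return digestible - urinary_loss

# Example: 80 g protein, 90 g fat, 300 g available carbohydrate
print(round(metabolisable_energy(80, 90, 300)))  # about 2326 kcal with these assumptions
```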
3.1.1 Gross Energy Values
Atwater collected values from the literature and also measured the heat of combustion of proteins, fats and carbohydrates. These vary slightly depending on sources and Atwater derived weighted values for the gross heat of combustion of the protein, fat and carbohydrate in the typical mixed diet of his time. It has been argued that these weighted values are invalid for individual foods and for diets whose composition in terms of foodstuffs is different from those eaten in the USA in the early 20th century (Maynard, 1944).
3.1.2 Apparent Digestibility Coefficients
Atwater measured a large number of digestibility coefficients for simple mixtures, and in substitution experiments derived values for individual foods.
These he combined in a weighted fashion to derive values for mixed diets. When these were tested experimentally with mixed diets they did not give a good prediction, and Atwater adjusted the coefficients for mixed diets (Merrill & Watt, 1955).
3.1.3 Urinary Correction
The energy/nitrogen ratio in urine shows considerable variation, and the energy/organic matter ratio is less variable (Benedict, cited by Merrill and Watt, 1955). The energy/nitrogen value nevertheless provided Atwater with a workable approach, although it has caused some confusion (Widdowson, 1955) and only applies to subjects in nitrogen balance (Southgate & Barrett, 1966).
3.2 Specific Conversion System
Following Maynard's (1944) objection, Merrill and Watt (1955) returned to Atwater's original approach and derived a system whereby specific calorie conversion factors for different foods were proposed. This takes cognisance of the fact that first the gross energy values of the protein, fats and carbohydrates from different food sources are different, and second, that the apparent digestibility of the components of different foods is different.
This system relies on having measured heats of combustion of a wide range of isolated proteins, fats and carbohydrates. It also depends on data from digestibility studies, where individual foods have been substituted for basal diets in order to measure the apparent digestibility coefficients for those foods. This approach is based on the assumption that there are no interactions between foods in a mixture in the intestine, and from a practical viewpoint, such studies with humans are difficult to control with the required accuracy.
3.3 Assumptions Based on the Use of Carbohydrates by Difference and the Effects of Dietary Fibre
These have been discussed in the earlier paper. In summary, the carbohydrate by difference approach presents several problems. Firstly, it does not distinguish between sugars, starch and the unavailable carbohydrates (dietary fibre).
This affects firstly the gross energy that is assigned to carbohydrate - sucrose has a heat of combustion of 3.95 kcal/g (16.53 kJ/g) and starch 4.15 kcal/g (17.36 kJ/g).
Secondly it does not provide for the fact that sugars and starch are virtually completely digested and absorbed, and thus provide metabolisable energy equivalent to their heat of combustion.
The unavailable carbohydrates (dietary fibre) are degraded to a variable extent in the large bowel. The products of this microbial digestion are fatty acids, CO2, methane and hydrogen. The fatty acids (acetate, butyrate and propionate) are absorbed in the large intestine and provide some metabolisable energy. The extent of degradation depends on the source of the dietary fibre (its composition and state of division), and the individual consuming the dietary fibre. There is insufficient data to give firm guidance on the energy available from this source.
Finally dietary fibre affects faecal losses of nitrogen and fat as discussed earlier. Whether the increased fat loss is due to an effect on small intestinal absorption is not clear. The increased faecal nitrogen losses on high fibre diets are probably due to an increased bacterial nitrogen content of the faeces. Both these effects however lead to reductions in apparent digestibility, and therefore in the Atwater system produce small changes in the proper energy conversion factors for those diets (Southgate & Durnin, 1970).
3.4 Theoretical and Practical Considerations Relating to the Calculation of Energy Values
3.4.1 Variations in Heats of Combustion of Food Constituents

Proteins
The experimental evidence for the magnitude of this variation is very limited, but as the heats of combustion of the individual amino-acids are different it is reasonable to expect variations between different proteins. Sands (1974) reported an observed range from 5.48 for conglutin (from blue lupin) to 5.92 for hordein (barley), which compares with Atwater's range of 5.27 for gelatin to 5.95 for wheat gluten. It is difficult to calculate expected values for a protein from amino-acid data, as some of the heats of combustion are not known accurately. Preliminary calculations on cows' milk suggest a value of around 5.5 kcal/g (23.0 kJ/g).
Fats

Analogously, the experimental evidence is limited, but since the fatty acids differ in their heats of combustion one should expect fats to vary in heats of combustion. These differences are, however, relatively small - for example, breast-milk fat has a calculated heat of combustion of 9.37 kcal/g compared with 9.19 kcal/g for cows' milk fat.
Carbohydrates

Monosaccharides have heats of combustion of around 3.75 kcal/g, disaccharides 3.95 kcal/g and polysaccharides 4.15-4.20 kcal/g. The heat of hydrolysis is very small and these values are essentially equivalent when calculated on a monosaccharide basis. Thus 100 g sucrose gives on hydrolysis 105.6 g monosaccharide, and 100 g starch gives on hydrolysis 110 g glucose.
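The equivalence can be checked numerically; the following short sketch (Python, using only the figures quoted above) expresses the heats of combustion of sucrose and starch per gram of monosaccharide released on hydrolysis.

```python
# Heats of combustion expressed per g of monosaccharide released on hydrolysis,
# using the figures quoted in the text.
mono_heat = 3.75  # kcal/g, monosaccharides
sucrose = {"heat_kcal_per_g": 3.95, "monosaccharide_g_per_100g": 105.6}
starch = {"heat_kcal_per_g": 4.15, "monosaccharide_g_per_100g": 110.0}

for name, d in (("sucrose", sucrose), ("starch", starch)):
    per_g_mono = d["heat_kcal_per_g"] * 100 / d["monosaccharide_g_per_100g"]
    print(f"{name}: {per_g_mono:.2f} kcal per g monosaccharide (cf. {mono_heat} for monosaccharides)")
# sucrose: 3.74; starch: 3.77 - essentially equivalent on a monosaccharide basis
```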
3.4.2 Apparent Digestibility Coefficients
The human digestive tract is a very efficient organ, and the faecal excretion of nitrogenous material and fats is a small proportion (usually less than 10%) of the intake (Southgate and Durnin, 1970). Atwater recognised that the faecal excretion was a complex mixture of unabsorbed intestinal secretions, bacterial material and metabolites, sloughed mucosal cells, mucus, and only to a small extent, unabsorbed dietary components. This may be one reason why he chose to use ‘availability’ rather than digestibility. His view was that these faecal constituents were truly unavailable and that his apparent disregard of the nature of faecal excretion was justifiable in a practical context.
The relationship ‘Intake minus Faecal Excretion divided by Intake’, where faecal excretion is small, will approximate to unity, and thus these ‘coefficients’ have a low variance and have the appearance of constants. This is spurious, since faecal excretion is variable even on a constant diet (Southgate and Durnin, 1970; Cummings et al, 1978), and there is no evidence to suggest that faecal excretion is in fact related to intake in the way implied by these coefficients.
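The point can be illustrated with invented figures: when faecal excretion is a small fraction of intake, the coefficient remains close to unity even when faecal output itself varies two-fold.

```python
# Apparent digestibility = (intake - faecal excretion) / intake.
# Invented figures, chosen only to show why the coefficients look spuriously constant.
intake_g = 100.0
for faecal_g in (4.0, 6.0, 8.0):  # a two-fold range in faecal excretion
    coefficient = (intake_g - faecal_g) / intake_g
    print(f"faecal excretion {faecal_g:.0f} g -> apparent digestibility {coefficient:.2f}")
# 0.96, 0.94, 0.92 - nearly constant despite the large variation in faecal output
```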
3.4.3 Practical Considerations in Calculations of Energy Value of Foods and Diets
The calculation of energy values must be regarded as an alternative to direct measurement, and therefore is likely to be associated with some inaccuracy when compared with direct assessment. These inaccuracies arise for a number of reasons:-
Variations in Food Composition: Foods are biological mixtures and as such show considerable variation in composition, particularly in respect of water and fat content. This means that compositional values quoted for representative samples of foods in food composition tables do not necessarily apply to individual samples of foods (Paul and Southgate, 1978). In studies where great accuracy is required, samples of the food consumed must be analysed.
Measurements of Food Intake: In estimating energy intakes, measurements of food intake are made, and these are known to be subject to considerable uncertainty. Even in studies under very close supervision the errors in weighing individual food items are rarely less than ± 5%. A certain degree of pragmatism must therefore be used when assessing procedures for calculating energy intakes, and many authors impute greater accuracy to quoted calculated energy intakes than is justifiable.
Individual Variation: Variations in individuals are seen in all human studies, and these variations are not allowed for in most calculations.
The theoretical and physiological objections to the assumptions inherent in the Atwater system are likely to result in errors much smaller than these practical matters. Southgate and Barrett (1966) derived conversion factors from experimental studies with young infants, but these produced values for metabolisable energy intake that were not significantly different from those obtained by direct application of the modified Atwater factors (Southgate and Durnin, 1970).
3.5 Alternatives to the Atwater System
At the present time there seem to be two possible approaches to the calculation of energy values.
The studies of Macy, (1942) Levy et al, (1958) and Southgate and Durnin, (1970) could provide an emperical system which would not involve the theoretical objections to the Atwater system. This would possibly be preferable on grounds of scientific aesthetics. The equation derived from the Southgate and Durnin (1970) observations and applicable to the Macy (1942) data has the following form:-
Metabolisable Energy = 0.977 Gross Energy -6.6N - 4UC where N = total nitrogen intake and UC is the intake of unavailable carbohydrates.
The data of Levy et al, (1958) produced an equation of a similar kind:-
ME = 0.976 GE - 7.959 N - 59.8
The residual constant possibly reflects the intake of unavailable carbohydrates in the diets studied by Levy et al (1958).
Gross energy values (if this was adopted) could be determined directly from food composition tables, or could be calculated from compositional data. Since these calculations do not involve physiological variables one would expect greater accuracy. Southgate and Durnin (1970) found that the accuracy of the Atwater system was limited by the accuracy with which the system predicted gross energy intake.
3.5.2 Biochemical Approach
The pathways of energy transduction in the mammalian system are well established and it is intuitively possible to derive a system for calculating energy value for foods and diets which would take account of the metabolic fate of ingested nutrients. Such a system would move away from the ‘black box’ view of the animal which is inherent in the Atwater system, and would meet the suggestion of Keys, (1945).
This approach should also take into account the current views of dietary induced thermogenesis, and provide a better indication of the metabolisable energy that would be available for storage in tissues and maintenance. It would be premature to advocate this approach at the present time until the factors controlling thermogenesis are better understood.
3.6.1 At the present time the conventional use of energy conversion factors provides a method for estimating the available energy intake which is more accurate than estimates of food intake, although there are real theoretical and practical objections to the derivation of such a system of factors.
3.6.2 The accuracy of the system is improved if measurements of sugars and starch are used instead of carbohydrate (by difference).
3.6.3 The unavailable carbohydrates, non-starch polysaccharides, or dietary fibre components of a diet contribute a small amount of energy by virtue of the fatty acids produced from them by the intestinal microflora. In normal Western diets this is negligible, but where the intake of these components is high (say 100g per day), the contibution may be important where energy intakes are marginal. Detailed studies are required to measure this potential contribution. However, high intakes of dietary fibre are associated with increased losses of faecal energy, and for most practical purposes the contribution of dietary fibre to energy intakes can be discounted in calculations.
3.6.4 Organic acids contribute to the energy intake, and where intakes of these are known they should be included.
3.7.1 For practical use in the estimation of the energy value of foods, the factors used by Paul and Southgate (1978) (Table 2) are suggested. However it must be recognised that the calculation of energy values is largely a convention, and other aspects produce greater inaccuracies in the estimates than the factors themselves.
3.7.2 In all studies of energy metabolism where great accuracy is required there is no substitute for direct measurements.
Energy Conversion Factors
|Carbohydrate (available (sugars + starch) expressed as monosaccharides)||3.75||16|
(a) These have been assumed to be completely absorbed and metabolised.
Cummings, J.H., Southgate, D.A.T., Branch, W., Houston, H., Jenkins, D.J.A., and James, W.P.T. (1978) Colonic response to dietary fibre from carrot, cabbage, apple, bran and guargum. Lancet, 1, 5–9.
FAO (1947) Energy-yielding components of food and computation of calorie values. FAO, Washington.
FAO/WHO (1973) Energy and Protein Requirements. FAO Nutrition Meetings Report Series No. 52. WHO Technical Report Series No. 522.
Hollingsworth, D.F., (1955) Some difficulties in estimating the energy value of human diets. Proc. Nutr. Soc. 14 154 – 160.
Jones, B.B. (1941) Factors for converting percentages of nitrogen in foods and feeds into percentage of protein. US. Dept. Agric. Circ. 183 22 (revised).
Keys, A. (1945) The refinement of metabolic calculations for nutritional purposes and the problem of availability. J. Nutr. 29 81–84.
Leyy, L.M., Bernstein, L.M. and Grossman, M.I. (1958). The calorie content of urine of human beings and the estimation of metabolisable energy of foodstuffs.US. Med. Res. & Nutr. Lab. Report 226.
McCance R.A. and Widdowson, E.M. (1960) The composition of Foods 3rd edition. Spec. Rep. Ser. Med. Res. Coun. Lond. No. 297. London, HMSO.
Macy, I.G. (1942) Nutrition and Chemical Growth in Childhood. Springfield. III. CC. Thomas.
McCollum, E.V. (1957) A History of Nutrition. Boston, Houghton Mifflen Co.
Maynard, L.A. (1944) The Atwater system of calculating the caloric value of diets. J. Nutr., 28, 443–452.
Merrill, A.L. and Watt, B.K. (1955) Energy Value of Foods - basis and derivation. U.S. Dept. Agric. Agric. Handbood No. 74.
Paul, A.A. and Southgate, D.A.T. (1978) McCance and Widdowsons' The Composition of Foods 4th edition London HMSO.
Sands, R.E. (1974) Rapid method for calculating energy value of food components. Food Technology (July) 29–40.
Southgate, D.A.T. (1974) Guidelines for the preparation of Tables of Food Composition Basel, S Karger.
Southgate, D.A.T. and Barrett, I.M. (1966) The intake and excretion of calorific constituents of milk by babies. Br. J. Nutr., 20, 363–372.
Southgate, D.A.T. and Durnin, J.V.G.A. (1970) Calorie conversion factors - An experimental reassessment of the factors used in the calculation of the energy value of human diets. Br. J. Nutr., 24, 517–535.
Widdowson, E.M. (1955). Assessment of the energy value of human foods. Proc. Nutr. Soc. 14, 142–154. | <urn:uuid:b8496d7b-9749-4f22-9905-d2ab76e44274> | CC-MAIN-2019-47 | http://www.fao.org/3/M2847E/M2847E00.htm | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668787.19/warc/CC-MAIN-20191117041351-20191117065351-00137.warc.gz | en | 0.913306 | 5,316 | 3.25 | 3 |
On 27 October 2017, at 1410 Eastern Daylight-saving Time (EDT), a captain and first officer signed on for flight duty at Brisbane Airport, Queensland. The planned duty was to operate a JetGo Embraer ERJ135 aircraft, registered VH-ZJG, on four scheduled passenger transport sectors: from Brisbane to Dubbo, New South Wales, Dubbo to Essendon, Victoria, Essendon to Dubbo, and Dubbo to Brisbane.
The aircraft was scheduled to depart Brisbane at 1510. However, the flight crew were advised of an engineering delay of up to 45 minutes for unscheduled maintenance to change a main landing gear tyre. A replacement tyre was not immediately available, which resulted in an extended delay that eventually totalled 4 hours 15 minutes.
The aircraft departed Brisbane at 1925 and, after an uneventful flight, departed Dubbo for Essendon at 2116. The first officer was the pilot flying (PF) and the captain was the pilot monitoring (PM) for the sector from Dubbo to Essendon.
Prior to commencing descent, the flight crew programmed the aircraft’s flight management guidance system and briefed for an instrument landing system (ILS) approach to runway 26. It was the first time either pilot had operated into Essendon at night, and therefore their preferred approach was a runway 26 ILS approach. The flight crew also discussed the possibility of receiving radar vectors from air traffic control (ATC).
ATC informed the flight crew that due to aircraft traffic at neighbouring Melbourne Airport, runway 26 was unavailable. Therefore, ATC provided radar vectors for a visual approach to runway 35. As the aircraft passed abeam Melbourne Airport, the captain had Melbourne and Essendon runways in sight.
At 2220:18, ATC advised the flight crew that they would be positioned for a 5 NM (9.3 km) final approach at 2,100 ft above mean sea level (AMSL). This altitude was the radar lowest safe altitude for that sector of airspace. At 2221:48, ATC instructed the flight crew to descend to 2,100 ft.
The first officer recalled setting 2,100 ft on the aircraft’s altitude preselector. This directed the automatic flight control system (AFCS) to continue descent to 2,100 ft. He also recalled confirming the 2,100 ft set altitude on his primary flight display, as well as the flight director modes of heading and vertical speed mode. The captain recalled verifying the assigned altitude being set and flight director modes. Both flight crew recalled the autopilot was engaged at this time.
At 2223:02, as the aircraft passed about 2,300 ft on descent, ATC requested the flight crew to report sighting runway 35. At this time, the captain had lost sight of the runway. Becoming concerned that the captain could not visually identify the runway, the first officer also focused his attention looking outside the aircraft to the left to help locate the airport.
At about 2223:35, when 7.1 NM (13.1 km) from Essendon Airport and on a heading of 080°, the aircraft descended below the assigned altitude of 2,100 ft. Neither flight crew member detected that the aircraft was now below the radar minimum safe altitude and continuing to descend. When the captain next looked inside the aircraft at his primary flight display, he recalled seeing the altimeter indicating 1,600 ft, and he then called ’height’. The first officer also recalled seeing at the same time that they were below the assigned altitude.
At 2223:52, ATC instructed the flight crew to climb to 2,100 ft. However, that instruction was over transmitted by another aircraft and not heard by the flight crew. At 2223:58, ATC issued another instruction to climb immediately to 2,100 ft, which the flight crew acknowledged.
At 2224:05, a cleared level adherence monitoring (CLAM) alarm activated (Figure 1), further alerting ATC of a difference between the aircraft’s assigned altitude and its actual altitude. ATC immediately issued a terrain safety alert, advising the flight crew that the lowest safe altitude was 2,100 ft.
Recorded radar data showed the aircraft’s lowest altitude was about 1,500 ft during 2224:05 to 2224:10 (Figure 1).
Figure 1: Surveillance display image showing the aircraft (JG044) with a current altitude of 1,500 ft (‘015’), a cleared altitude of 2,100 ft (‘021’), a radar-vectored heading of 070° (‘H070’) and a groundspeed of 160 kt (‘16’). Source: Airservices Australia
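The CLAM function is, in essence, a conformance check between an aircraft's surveillance-derived level and the cleared level entered by the controller. The following Python sketch is an assumed illustration only; the 200 ft tolerance and the function name are hypothetical and do not represent Airservices Australia's implementation.

```python
def clam_alert(actual_level_ft: float, cleared_level_ft: float, tolerance_ft: float = 200) -> bool:
    """Illustrative cleared level adherence monitoring (CLAM) check.

    Returns True when the surveillance-derived level differs from the
    controller-entered cleared level by more than an assumed tolerance.
    """
    return abs(actual_level_ft - cleared_level_ft) > tolerance_ft

# In this occurrence the displayed level was 1,500 ft against a cleared level of 2,100 ft:
print(clam_alert(1500, 2100))  # True
```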
As the aircraft climbed above 2,100 ft, ATC advised the flight crew of their position relative to runway 35 and asked if they had the runway in sight. When they confirmed that they did, ATC asked if they wanted to continue the approach. The flight crew elected to discontinue the approach and ATC subsequently vectored the aircraft for another approach. The aircraft landed without further incident at 2236.
At 2259, the aircraft taxied for departure from Essendon and then completed the service to Dubbo and Brisbane. The flight crew finished duty in Brisbane at 0245.
Essendon Airport is located about 8 km south-east of Melbourne Airport. The proximity of the two airports adds complexity to operations at Essendon.
The airport has two runways aligned 17/35 and 08/26, and it is bounded on two sides by freeways with substantial amber lighting and well-lit residential areas. At night, the lights around the airport present a complex picture. The published aerodrome chart had a caution note describing that amber freeway lighting may confuse flight crews when attempting to identify runway 08/26 lighting.
Runway 35 did not have an instrument approach procedure. Instead, pilots were required to conduct visual approaches to this runway. It was equipped with a precision approach path indicator light (PAPI) array to provide pilots with vertical profile guidance during visual approaches.
At the time of the occurrence, visibility was greater than 10 km, and the wind was a northerly at 14 kt.
Essendon Airport had a curfew prohibiting aircraft movements from 2300 until 0600 for all operations other than emergency services. Operators would incur financial penalties for flights arriving or departing during the curfew period.
To continue the service from Essendon to Dubbo (and then Brisbane), the aircraft had to commence taxiing for departure before the curfew. Missing curfew would result in the aircraft being grounded until 0600 the next morning, disrupting the current service and that of the following day.
The aircraft taxied for the return flight from Essendon at 2259, 1 minute before the commencement of the curfew period. Both flight crew reported feeling significant pressure to complete the service and return the aircraft to Brisbane. Both pilots reported being aware of the potential problem with the curfew prior to departing Brisbane, and the first officer reported considering the potential problem with the curfew during the visual approach into Essendon.
Air traffic control information
Airservices Australia provided an ATC service to the aircraft for the entire flight, including the descent to Essendon. The approach controller who provided radar vectors to the flight crew was also responsible for sequencing a large number of aircraft arrivals into Melbourne at the same time.
In an effort to manage the risk that neither pilot had operated at night into Essendon, the captain’s preferred arrival was to runway 26 as it was equipped with an ILS and was the longer runway. However, due to the congestion of arriving and departing aircraft at Melbourne, ATC advised this request was not available. Although the captain maintained the ability to instruct ATC that he required the ILS approach, he was likely aware that doing so would possibly result in ATC needing the aircraft to enter a holding pattern until the controller could sequence the flow of aircraft traffic at both airports.
When conducting a visual approach to a runway, ATC can provide radar vectors to the pilot until the aircraft is aligned with the runway centreline. A pilot is required to report that they have sight of, and can maintain sight of, the landing runway in order for ATC to clear a pilot to conduct the approach.
The flight crew reported that during the radar vectoring towards Essendon, they felt pressure from ATC to sight runway 35. The ATSB reviewed audio recordings between the approach controller and the flight crew. The flight crew first contacted Melbourne Approach at 2213:58. At 2223:02, the approach controller asked them to report Essendon runway 35 in sight. This was the only recorded request made by the approach controller to the flight crew to sight runway 35.
Automatic flight control system
Flight crews normally manage flight of an ERJ135 using the aircraft’s AFCS. This system consists of dual autopilots, a flight guidance controller (FGC) and flight instrument displays.
To manage the aircraft in all flight phases, pilots select various modes on the FGC. Selected descent modes included flight level change, speed hold and vertical speed.
The pilot can engage the autopilot by pressing a button on the FGC. Intentional disengagement of the autopilot by a pilot generates an audible voice AUTOPILOT alert. Failure and disconnection of an autopilot results in the same audible voice alert and generates a warning message illuminated on a separate indicating system.
In the ‘vertical speed’ (VS) selected descent mode, the AFCS will maintain a selected vertical speed. The rate of vertical speed can be changed as needed by the pilot. With the autopilot engaged, the VS mode would automatically change to altitude capture mode as the aircraft approached a preselected altitude.
An ‘altitude preselect’ (ASEL) mode armed automatically if the aircraft climbed or descended towards a preselected altitude. Altitude preselect mode would then automatically capture and cancel any existing mode at an appropriate point based on preselected altitude error and vertical speed. The system would then automatically switch to altitude hold mode after the aircraft had levelled off at the preselected altitude.
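The vertical-mode behaviour described above can be summarised in a highly simplified sketch. The Python fragment below is not the Embraer AFCS logic; it is an assumed, minimal model of how a vertical-speed descent might transition to altitude capture and altitude hold around a preselected altitude, and how a deviation alert could be raised. The 300 ft capture window and 200 ft alert threshold are illustrative assumptions.

```python
CAPTURE_WINDOW_FT = 300    # assumed proximity at which altitude preselect (ASEL) captures
ALERT_DEVIATION_FT = 200   # assumed deviation from the preselected altitude that raises an alert

def next_vertical_mode(mode: str, altitude_ft: float, preselected_ft: float) -> str:
    """Return the next vertical mode in a simplified VS -> ASEL -> ALT sequence."""
    if mode == "VS" and abs(preselected_ft - altitude_ft) <= CAPTURE_WINDOW_FT:
        return "ASEL"   # altitude capture cancels the vertical speed mode
    if mode == "ASEL" and abs(preselected_ft - altitude_ft) <= 20:
        return "ALT"    # level-off: altitude hold
    return mode

def altitude_alert(altitude_ft: float, preselected_ft: float) -> bool:
    """True when the aircraft has deviated beyond the assumed alert threshold."""
    return abs(altitude_ft - preselected_ft) > ALERT_DEVIATION_FT

print(next_vertical_mode("VS", 2300, 2100))   # 'ASEL' - capture should begin near the preselected altitude
print(altitude_alert(1850, 2100))             # True  - 250 ft below the preselected altitude
```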
The first officer recalled selecting the descent mode to vertical speed at the time ATC commenced issuing radar vectors. The flight crew reported that the autopilot was engaged during the descent and that the AFCS failed to capture the preselected altitude (2,100 ft) as expected. Further, the flight crew recalled that no alert was heard, either for autopilot disconnect or altitude exceedance, which should have sounded when the aircraft was 200 ft below the preselected altitude.
After descending below 2,100 ft, the flight crew reported that the flight director pitch bars, which indicate the direction of the preselected altitude, were providing guidance that the aircraft should climb.
The ATSB requested the aircraft’s flight data recorder. However, at the time of the request, the data for the occurrence flight had been overwritten.
Following the flight, no technical log entry was made regarding a problem with the autopilot capturing the selected altitude. Nevertheless, an engineering inspection of the AFCS was conducted following the aircraft’s arrival back in Brisbane, and no fault was found.
The flight crew advised that they were aware of other recent AFCS problems associated with the aircraft and the operator’s other ERJ135 aircraft. A review of maintenance records for the operator’s ERJ135 fleet identified that several AFCS-related problems had been reported during the period from 3 August. However, none of those problems were similar to what occurred during the occurrence flight. In addition, no subsequent problems that were similar in nature were reported on the occurrence aircraft.
Flight crew information
The captain held an Air Transport (Aeroplane) Pilot Licence (ATPL) and had 10,100 hours total flight experience, including 155 hours on the aircraft type. The first officer held a Commercial (Aeroplane) Pilot Licence and had 2,100 hours total flight experience, including 473 hours on type.
Both flight crew had operated into Essendon on many previous occasions, but neither had operated to that airport at night.
Flight and duty times
The captain had the two previous days (25–26 October) rostered off duty, and had conducted administrative work from 1000 to 1600 on 24 October. The first officer had the four previous days rostered off duty.
On the day of the occurrence, both flight crew signed on to commence duty at 1410 EDT. Due to the delay before the first flight, they ultimately signed off duty at 0245, a duty period of 12.6 hours. However, the captain advised that he commenced administrative duties, unrelated to the subsequent flights, at about 1200 EDT. Therefore, his actual duty time was 14.8 hours.
The captain recalled waking up at about 0700 EDT on the day of the occurrence after a ‘normal’ sleep. He therefore had been awake for 15.4 hours at the time of the occurrence, and 18.8 hours at the end of the extended duty period. The first officer recalled waking up at 0630 EDT on the day of the occurrence after a ‘reasonable’ sleep, and was therefore awake for 15.9 hours at the time of the occurrence and 19.3 hours at the end of the extended duty period.
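The time-awake figures quoted above follow from simple clock arithmetic. The short Python sketch below reproduces the values at the time of the occurrence, using the reported wake times and an occurrence time of 2223 EDT; all input values are taken from this report and the calculation itself is only illustrative.

```python
from datetime import datetime

occurrence = datetime(2017, 10, 27, 22, 23)              # approximate time of the occurrence (EDT)
wake_times = {
    "Captain": datetime(2017, 10, 27, 7, 0),             # reported waking at about 0700 EDT
    "First officer": datetime(2017, 10, 27, 6, 30),      # reported waking at 0630 EDT
}

for pilot, woke in wake_times.items():
    hours_awake = (occurrence - woke).total_seconds() / 3600
    print(f"{pilot}: about {hours_awake:.1f} hours awake at the time of the occurrence")

# Prints roughly 15.4 hours for the captain and 15.9 hours for the first officer.
```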
The operator managed its flight crews’ flight and duty times to comply with a standard industry exemption to Civil Aviation Order (CAO) 48.0, which was issued to the operator by the Civil Aviation Safety Authority (CASA). The exemption stated that duty included any task associated with the business of an operator.
The operator’s rostering personnel managed flight crew flight and duty times in order to comply with the exemption. The operator’s procedures required that all work-related activities for the operator be reported and considered as duty time.
The rostered flight duty limit for a pilot signing on after 1300 local time for a four-sector duty was 12 hours. However, a pilot could elect to extend a duty already started for up to 2 hours as long as they felt mentally and physically fit to continue (and they submitted a report upon completing the duty). Although the captain’s recorded duty time did not exceed 14 hours by the end of the trip, the actual duty time did exceed the limit.
During the delay on the ground in Brisbane, the crew were offered an option to stand down as they were now facing a long duty period. The captain reported that he was told his standing down would mean his four scheduled flights that day would be cancelled as there were no replacement captains available. Both pilots reported feeling fit to continue and elected to continue the flights. However, the captain later reported that he felt some pressure to operate the flights. The cabin crewmember stood herself down and was replaced.
During radar vectoring to runway 35 at Essendon Airport, the aircraft descended below the radar minimum safe altitude of 2,100 ft. The flight crew reported that the autopilot was engaged and the altitude of 2,100 ft was preselected at the time of the occurrence. A subsequent engineering inspection found no fault with the AFCS. Because no flight data was able to be obtained, the ATSB was unable to confirm what the AFCS mode(s) and settings were at the time of the occurrence, or the reason why the aircraft descended below the preselected altitude.
Regardless of the reason for the aircraft descending through the prescribed altitude, flight crew have a vital role in monitoring the aircraft’s flight path, particularly during descent. In this case, the first officer (pilot flying) relied upon automation to capture the assigned altitude and diverted his attention outside of the aircraft to assist the captain (pilot monitoring) in sighting the runway. As a result, neither pilot was monitoring the aircraft’s flight instruments or descent path as it approached and subsequently descended through the assigned level, which was also the minimum safe altitude.
The flight had been significantly delayed from its scheduled time of operation. The flight crew were aware of the reduced time margin for their scheduled return flight to depart Essendon prior to the 2300 curfew. In addition, neither pilot had operated at night into Essendon Airport, and the captain’s requested option of conducting an ILS approach to runway 26 had been declined by ATC due to traffic. The captain’s subsequent difficulty in identifying runway 35 at night, the delayed arrival of the aircraft at Essendon and the proximity of the curfew time probably contributed to the first officer (pilot flying) focussing his attention outside the aircraft at a critical time of flight.
Both flight crew had the previous days off duty and had a reasonable amount of sleep the night before. Although both flight crew had been awake for 15–16 hours at the time of the occurrence, there was insufficient evidence to conclude that they were operating at a level of fatigue known to influence performance. Nevertheless, they would probably have been operating at an elevated risk of fatigue during the subsequent two flights.
These findings should not be read as apportioning blame or liability to any particular organisation or individual.
- During radar vectoring to runway 35 at Essendon, the aircraft descended through the radar lowest safe altitude (2,100 ft). The extent to which there was a problem with the functioning of the aircraft’s automatic flight control system could not be determined.
- Due to the captain (pilot monitoring) having difficulty sighting the runway, as well as perceived pressure to complete the flight, the first officer (pilot flying) focussed his attention outside the aircraft at a critical time during the descent.
- The flight crew did not detect that the aircraft had descended through the assigned level (2,100 ft) until the aircraft reached 1,600 ft.
Whether or not the ATSB identifies safety issues in the course of an investigation, relevant organisations may proactively initiate safety action in order to reduce their safety risk. The ATSB has been advised of the following proactive safety actions in response to this occurrence.
As a result of this occurrence, JetGo advised the ATSB that they had taken the following safety actions:
- The flight crew involved in the incident were subsequently provided with ground and simulator training for operations into Essendon at night.
Flight crew should be mindful that, during higher-workload phases of flight such as approach and landing at an unfamiliar airport, tasks that divert both flight crew members’ attention from monitoring the aircraft’s flight profile and altitude should be kept to a minimum. Further, during a visual approach, the crew must ensure that at least one pilot monitors the aircraft’s flight path profile and energy state.
An increasing trend has been identified where pilots do not effectively manage their aircraft’s flightpath when unexpected events arise during the approach to land.
When compared to other phases of flight, the approach and landing has a substantially increased workload and is traditionally the phase of flight associated with the highest accident rate. Flight crews must continuously monitor aircraft and approach parameters, and the external environment, to ensure they maintain a stable approach profile and make appropriate decisions for a safe landing.
The selection of inappropriate autoflight modes, unexpected developments, or any confusion about roles or procedures can contribute to decisions and actions that increase the safety risk to the aircraft and its passengers.
The ATSB SafetyWatch information on Descending too low on approach provides more resources and information.
- Eastern Daylight-saving Time (EDT): Coordinated Universal Time (UTC) + 11 hours. EDT was the time zone relevant where the occurrence took place and it has been used throughout the report to minimise confusion. The time in Brisbane was Eastern Standard Time, or UTC + 10 hours.
- Pilot Flying (PF) and Pilot Monitoring (PM): procedurally assigned roles with specifically assigned duties at specific stages of a flight. The PF does most of the flying, except in defined circumstances; such as planning for descent, approach and landing. The PM carries out support duties and monitors the PF’s actions and the aircraft’s flight path.
- Instrument Landing System: A landing aid which provides lateral and vertical guidance to flight crew during approach to land.
- Radar vectoring: ATC provision of track bearings and altitudes used to guide and position an aircraft.
- System-detected non-conformance alert that checks the conformance of the actual flight level of a surveillance track with respect to the cleared flight level inputted by the controller.
Date: 27 October 2017
Time: 2223 AEST
Location: 12.8 km SW of Essendon Airport
State: Victoria
Release date: 19 December 2018
Report status: Final
Investigation status: Completed
Investigation level: Short
Investigation phase: Final report: Dissemination
Occurrence type: Flight below minimum altitude
Occurrence category: Incident
Highest injury level: None
Aircraft manufacturer: Embraer - Empresa Brasileira De Aeronautica
Type of operation: Air Transport High Capacity
Damage to aircraft: Nil
Departure point: Dubbo, NSW
SIMBAD is an astronomical database of objects beyond the Solar System. It is maintained by the Centre de données astronomiques de France. SIMBAD was created by merging the Catalog of Stellar Identifications and the Bibliographic Star Index as they existed at the Meudon Computer Centre until 1979, and was then expanded with additional source data from other catalogues and the academic literature. The first on-line interactive version, known as Version 2, was made available in 1981. Version 3, developed in the C language and running on UNIX stations at the Strasbourg Observatory, was released in 1990. Fall of 2006 saw the release of Version 4 of the database, now stored in PostgreSQL, with the supporting software now written in Java. As of 10 February 2017, SIMBAD contained information for 9,099,070 objects under 24,529,080 different names, with 327,634 bibliographical references and 15,511,733 bibliographic citations. The minor planet 4692 SIMBAD was named in its honour.
Related astronomical databases include:
- Planetary Data System – NASA's database of information on small Solar System bodies, maintained by JPL and Caltech.
- NASA/IPAC Extragalactic Database – a database of information on objects outside the Milky Way, maintained by JPL.
- NASA Exoplanet Archive – an online astronomical exoplanet catalogue and data service.
A binary star is a star system consisting of two stars orbiting around their common barycenter. Systems of two or more stars are called multiple star systems; these systems when more distant appear to the unaided eye as a single point of light, are revealed as multiple by other means. Research over the last two centuries suggests that half or more of visible stars are part of multiple star systems; the term double star is used synonymously with binary star. Optical doubles are so called because the two stars appear close together in the sky as seen from the Earth, their "doubleness" depends only on this optical effect. A double star can be revealed as optical by means of differences in their parallax measurements, proper motions, or radial velocities. Most known double stars have not been studied adequately to determine whether they are optical doubles or doubles physically bound through gravitation into a multiple star system. Binary star systems are important in astrophysics because calculations of their orbits allow the masses of their component stars to be directly determined, which in turn allows other stellar parameters, such as radius and density, to be indirectly estimated.
This determines an empirical mass-luminosity relationship from which the masses of single stars can be estimated. Binary stars are detected optically, in which case they are called visual binaries. Many visual binaries have long orbital periods of several centuries or millennia and therefore have orbits which are uncertain or poorly known, they may be detected by indirect techniques, such as spectroscopy or astrometry. If a binary star happens to orbit in a plane along our line of sight, its components will eclipse and transit each other. If components in binary star systems are close enough they can gravitationally distort their mutual outer stellar atmospheres. In some cases, these close binary systems can exchange mass, which may bring their evolution to stages that single stars cannot attain. Examples of binaries are Sirius, Cygnus X-1. Binary stars are common as the nuclei of many planetary nebulae, are the progenitors of both novae and type Ia supernovae; the term binary was first used in this context by Sir William Herschel in 1802, when he wrote: If, on the contrary, two stars should be situated near each other, at the same time so far insulated as not to be materially affected by the attractions of neighbouring stars, they will compose a separate system, remain united by the bond of their own mutual gravitation towards each other.
This should be called a real double star. By the modern definition, the term binary star is restricted to pairs of stars which revolve around a common center of mass. Binary stars which can be resolved with a telescope or interferometric methods are known as visual binaries. For most of the known visual binary stars one whole revolution has not been observed yet, they are observed to have travelled along a curved path or a partial arc; the more general term double star is used for pairs of stars which are seen to be close together in the sky. This distinction is made in languages other than English. Double stars may be binary systems or may be two stars that appear to be close together in the sky but have vastly different true distances from the Sun; the latter are termed optical optical pairs. Since the invention of the telescope, many pairs of double stars have been found. Early examples include Acrux. Mizar, in the Big Dipper, was observed to be double by Giovanni Battista Riccioli in 1650; the bright southern star Acrux, in the Southern Cross, was discovered to be double by Father Fontenay in 1685.
John Michell was the first to suggest that double stars might be physically attached to each other when he argued in 1767 that the probability that a double star was due to a chance alignment was small. William Herschel began observing double stars in 1779 and soon thereafter published catalogs of about 700 double stars. By 1803, he had observed changes in the relative positions in a number of double stars over the course of 25 years, concluded that they must be binary systems. Since this time, many more double stars have been measured; the Washington Double Star Catalog, a database of visual double stars compiled by the United States Naval Observatory, contains over 100,000 pairs of double stars, including optical doubles as well as binary stars. Orbits are known for only a few thousand of these double stars, most have not been ascertained to be either true binaries or optical double stars; this can be determined by observing the relative motion of the pairs. If the motion is part of an orbit, or if the stars have similar radial velocities and the difference in their proper motions is small compared to their common proper motion, the pair is physical.
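Once enough of a visual binary's orbit has been observed to yield a period and a semi-major axis, the total system mass follows from Kepler's third law. The sketch below uses solar units (semi-major axis in astronomical units, period in years, mass in solar masses); the example values are purely illustrative and do not describe a real catalogued system.

```python
def binary_total_mass(semi_major_axis_au: float, period_years: float) -> float:
    """Total system mass in solar masses from Kepler's third law in solar units:
    M1 + M2 = a**3 / P**2, with a in AU and P in years."""
    return semi_major_axis_au ** 3 / period_years ** 2

# Illustrative example: a relative orbit with a 20 AU semi-major axis and a 50-year period
print(binary_total_mass(20.0, 50.0), "solar masses")   # 3.2
```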
One of the tasks that remains for visual observers of double stars is to obtain sufficient observations to prove or disprove gravitational connection. Binary stars are classified into four types according to the way in which they are observed: visually, by direct observation; spectroscopically, by periodic changes in spectral lines; photometrically, by changes in brightness caused by an eclipse; or astrometrically, by measuring a deviation in a star's position caused by an unseen companion.
Minute and second of arc
A minute of arc, arc minute, or minute arc is a unit of angular measurement equal to 1/60 of one degree. Since one degree is 1/360 of a turn, one minute of arc is 1/21600 of a turn; it is for this reason that the Earth's polar circumference is very nearly 21,600 nautical miles. A minute of arc is π/10800 of a radian. A second of arc, arcsecond, or arc second is 1/60 of an arcminute, 1/3600 of a degree, 1/1296000 of a turn, and π/648000 of a radian. These units originated in Babylonian astronomy as sexagesimal subdivisions of the degree. To express smaller angles, standard SI prefixes can be employed. The number of square arcminutes in a complete sphere is 4π(10800/π)² = 466,560,000/π ≈ 148,510,660 square arcminutes. The names "minute" and "second" have nothing to do with the identically named units of time "minute" or "second"; the identical names reflect the ancient Babylonian number system, based on the number 60. The standard symbol for marking the arcminute is the prime, though a single quote is used where only ASCII characters are permitted.
One arcminute is thus written 1′. It is abbreviated as arcmin or amin or, less commonly, the prime with a circumflex over it. The standard symbol for the arcsecond is the double prime, though a double quote is used where only ASCII characters are permitted. One arcsecond is thus written 1″; it is abbreviated as arcsec or asec. In celestial navigation, seconds of arc are rarely used in calculations, the preference being for degrees, minutes and decimals of a minute, for example written as 42° 25.32′ or 42° 25.322′. This notation has been carried over into marine GPS receivers, which display latitude and longitude in the latter format by default. The full moon's average apparent size is about 31 arcminutes. An arcminute is approximately the resolution of the human eye. An arcsecond is the angle subtended by a U.S. dime coin at a distance of 4 kilometres.
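These definitions are easy to verify numerically. The short sketch below (Python, using only the relations stated above) checks the radian equivalents and the number of square arcminutes on the full sphere.

```python
import math

DEG = math.pi / 180        # one degree in radians
ARCMIN = DEG / 60          # one arcminute in radians
ARCSEC = ARCMIN / 60       # one arcsecond in radians

print(math.isclose(ARCMIN, math.pi / 10800))    # True
print(math.isclose(ARCSEC, math.pi / 648000))   # True
print(360 * 60)                                 # 21600 arcminutes in a full turn

# Square arcminutes over the whole sphere: 4*pi steradians, (10800/pi) arcminutes per radian
print(round(4 * math.pi * (10800 / math.pi) ** 2))   # 148510660 (approximately)
```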
A milliarcsecond is about the size of a dime atop the Eiffel Tower. A microarcsecond is about the size of a period at the end of a sentence in the Apollo mission manuals left on the Moon as seen from Earth. A nanoarcsecond is about the size of a penny on Neptune's moon Triton as observed from Earth. Notable examples of size in arcseconds are: Hubble Space Telescope has calculational resolution of 0.05 arcseconds and actual resolution of 0.1 arcseconds, close to the diffraction limit. Crescent Venus measures between 66 seconds of arc. Since antiquity the arcminute and arcsecond have been used in astronomy. In the ecliptic coordinate system and longitude; the principal exception is right ascension in equatorial coordinates, measured in time units of hours and seconds. The arcsecond is often used to describe small astronomical angles such as the angular diameters of planets, the proper motion of stars, the separation of components of binary star systems, parallax, the small change of position of a star in the course of a year or of a solar system body as the Earth rotates.
These small angles may be written in milliarcseconds, or thousandths of an arcsecond. The unit of distance, the parsec, named from the parallax of one arc second, was developed for such parallax measurements, it is the distance at which the mean radius of the Earth's orbit would subtend an angle of one arcsecond. The ESA astrometric space probe Gaia, launched in 2013, can approximate star positions to 7 microarcseconds. Apart from the Sun, the star with the largest angular diameter from Earth is R Doradus, a red giant with a diameter of 0.05 arcsecond. Because of the effects of atmospheric seeing, ground-based telescopes will smear the image of a star to an angular diameter of about 0.5 arcsecond. The dwarf planet Pluto has proven difficult to resolve because its angular diameter is about 0.1 arcsecond. Space telescopes are diffraction limited. For example, the Hubble Space Telescope can reach an angular size of stars down to about 0.1″. Techniques exist for improving seeing on the ground. Adaptive optics, for example, can produce images around 0.05 arcsecond on a 10 m class telescope.
Minutes and seconds of arc are used in cartography and navigation. At sea level, one minute of arc along the equator or a meridian corresponds to approximately one nautical mile (about 1,852 metres).
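That correspondence can be checked with an assumed mean Earth radius of 6,371 km:

```python
import math

EARTH_MEAN_RADIUS_KM = 6371.0                    # assumed mean radius
print(EARTH_MEAN_RADIUS_KM * math.pi / 10800)    # about 1.853 km per arcminute, close to one nautical mile (1.852 km)
```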
The Kelvin scale is an absolute thermodynamic temperature scale using as its null point absolute zero, the temperature at which all thermal motion ceases in the classical description of thermodynamics. The kelvin is the base unit of temperature in the International System of Units; until 2018, the kelvin was defined as the fraction 1/273.16 of the thermodynamic temperature of the triple point of water. In other words, it was defined such that the triple point of water is 273.16 K. On 16 November 2018, a new definition was adopted, in terms of a fixed value of the Boltzmann constant. For legal metrology purposes, the new definition will come into force on 20 May 2019; the Kelvin scale is named after the Belfast-born, Glasgow University engineer and physicist William Thomson, 1st Baron Kelvin, who wrote of the need for an "absolute thermometric scale". Unlike the degree Fahrenheit and degree Celsius, the kelvin is not referred to or written as a degree; the kelvin is the primary unit of temperature measurement in the physical sciences, but is used in conjunction with the degree Celsius, which has the same magnitude.
The definition implies that absolute zero is equivalent to −273.15 °C. In 1848, William Thomson, later made Lord Kelvin, wrote in his paper On an Absolute Thermometric Scale of the need for a scale whereby "infinite cold" was the scale's null point, and which used the degree Celsius for its unit increment. Kelvin calculated that absolute zero was equivalent to −273 °C on the air thermometers of the time. This absolute scale is known today as the Kelvin thermodynamic temperature scale. Kelvin's value of "−273" was the negative reciprocal of 0.00366, the accepted expansion coefficient of gas per degree Celsius relative to the ice point, giving a remarkable consistency to the accepted value.
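Because the kelvin and the degree Celsius have the same magnitude and differ only by a fixed offset of 273.15, conversion is a simple shift; a minimal sketch:

```python
def celsius_to_kelvin(t_celsius: float) -> float:
    return t_celsius + 273.15

def kelvin_to_celsius(t_kelvin: float) -> float:
    return t_kelvin - 273.15

print(celsius_to_kelvin(-273.15))   # 0.0    (absolute zero)
print(celsius_to_kelvin(0.01))      # 273.16 (triple point of water)
```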
In 2018, Resolution A of the 26th CGPM adopted a significant redefinition of SI base units which included redefining the Kelvin in terms of a fixed value for the Boltzmann constant of 1.380649×10−23 J/K. When spelled out or spoken, the unit is pluralised using the same grammatical rules as for other SI units such as the volt or ohm; when reference is made to the "Kelvin scale", the word "kelvin"—which is a noun—functions adjectivally to modify the noun "scale" and is capitalized. As with most other SI unit symbols there is a space between the kelvin symbol. Before the 13th CGPM in 1967–1968, the unit kelvin was called a "degree", the same as with the other temperature scales at the time, it was distinguished from the other scales with either the adjective suffix "Kelvin" or with "absolute" and its symbol was °K. The latter term, the unit's official name from 1948 until 1954, was ambiguous since it could be interpreted as referring to the Rankine scale. Before the 13th CGPM, the plural form was "degrees absolute".
The 13th CGPM changed the unit name to "kelvin". The omission of "degree" indicates that it is not relative to an arbitrary reference point like the Celsius and Fahrenheit scales, but rather an absolute unit of measure which can be manipulated algebraically. In science and engineering, degrees Celsius and kelvins are used in the same article, where absolute temperatures are given in degrees Celsius, but temperature intervals are given in kelvins. E.g. "its measured value was 0.01028 °C with an uncertainty of 60 µK." This practice is permissible because the degree Celsius is a special name for the kelvin for use in expressing relative temperatures, the magnitude of the degree Celsius is equal to that of the kelvin. Notwithstanding that the official endorsement provided by Resolution 3 of the 13th CGPM states "a temperature interval may be expressed in degrees Celsius", the practice of using both °C and K is widespread throughout the scientific world; the use of SI prefixed forms of the degree Celsius to express a temperature interval has not been adopted.
In 2005 the CIPM embarked on a programme to redefine the kelvin using a more experimentally rigorous methodology. In particular, the committee proposed redefining the kelvin such that the Boltzmann constant takes the exact value 1.3806505 × 10⁻²³ J/K. The redefinition, with a slightly updated value of the constant, was ultimately adopted by the 26th CGPM in 2018.
Hipparcos was a scientific satellite of the European Space Agency, launched in 1989 and operated until 1993. It was the first space experiment devoted to precision astrometry, the accurate measurement of the positions of celestial objects on the sky; this permitted the accurate determination of proper motions and parallaxes of stars, allowing a determination of their distance and tangential velocity. When combined with radial velocity measurements from spectroscopy, this pinpointed all six quantities needed to determine the motion of stars; the resulting Hipparcos Catalogue, a high-precision catalogue of more than 118,200 stars, was published in 1997. The lower-precision Tycho Catalogue of more than a million stars was published at the same time, while the enhanced Tycho-2 Catalogue of 2.5 million stars was published in 2000. Hipparcos' follow-up mission, was launched in 2013; the word "Hipparcos" is an acronym for HIgh Precision PARallax COllecting Satellite and a reference to the ancient Greek astronomer Hipparchus of Nicaea, noted for applications of trigonometry to astronomy and his discovery of the precession of the equinoxes.
By the second half of the 20th century, the accurate measurement of star positions from the ground was running into insurmountable barriers to improvements in accuracy for large-angle measurements and systematic terms. Problems were dominated by the effects of the Earth's atmosphere, but were compounded by complex optical terms and gravitational instrument flexures, the absence of all-sky visibility. A formal proposal to make these exacting observations from space was first put forward in 1967. Although proposed to the French space agency CNES, it was considered too complex and expensive for a single national programme, its acceptance within the European Space Agency's scientific programme, in 1980, was the result of a lengthy process of study and lobbying. The underlying scientific motivation was to determine the physical properties of the stars through the measurement of their distances and space motions, thus to place theoretical studies of stellar structure and evolution, studies of galactic structure and kinematics, on a more secure empirical basis.
Observationally, the objective was to provide the positions and annual proper motions for some 100,000 stars with an unprecedented accuracy of 0.002 arcseconds, a target in practice surpassed by a factor of two. The name of the space telescope, "Hipparcos" was an acronym for High Precision Parallax Collecting Satellite, it reflected the name of the ancient Greek astronomer Hipparchus, considered the founder of trigonometry and the discoverer of the precession of the equinoxes; the spacecraft carried a single all-reflective, eccentric Schmidt telescope, with an aperture of 29 cm. A special beam-combining mirror superimposed two fields of view, 58 degrees apart, into the common focal plane; this complex mirror consisted of two mirrors tilted in opposite directions, each occupying half of the rectangular entrance pupil, providing an unvignetted field of view of about 1°×1°. The telescope used a system of grids, at the focal surface, composed of 2688 alternate opaque and transparent bands, with a period of 1.208 arc-sec.
Behind this grid system, an image dissector tube with a sensitive field of view of about 38-arc-sec diameter converted the modulated light into a sequence of photon counts from which the phase of the entire pulse train from a star could be derived. The apparent angle between two stars in the combined fields of view, modulo the grid period, was obtained from the phase difference of the two star pulse trains. Targeting the observation of some 100,000 stars, with an astrometric accuracy of about 0.002 arc-sec, the final Hipparcos Catalogue comprised nearly 120,000 stars with a median accuracy of better than 0.001 arc-sec. An additional photomultiplier system viewed a beam splitter in the optical path and was used as a star mapper, its purpose was to monitor and determine the satellite attitude, in the process, to gather photometric and astrometric data of all stars down to about 11th magnitude. These measurements were made in two broad bands corresponding to B and V in the UBV photometric system.
The positions of these latter stars were to be determined to a precision of 0.03 arc-sec, a factor of 25 less than the main mission stars. Targeting the observation of around 400,000 stars, the resulting Tycho Catalogue comprised just over 1 million stars, with a subsequent analysis extending this to the Tycho-2 Catalogue of about 2.5 million stars. The attitude of the spacecraft about its center of gravity was controlled to scan the celestial sphere in a regular precessional motion maintaining a constant inclination between the spin axis and the direction to the Sun; the spacecraft spun around its Z-axis at the rate of 11.25 revolutions/day at an angle of 43° to the Sun. The Z-axis rotated about the sun-satellite line at 6.4 revolutions/year. The spacecraft consisted of two platforms and six vertical panels, all made of aluminum honeycomb; the solar array consisted of three deployable sections. Two S-band antennas were located on the top and bottom of the spacecraft, providing an omni-directional downlink data rate of 24 kbit/s.
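Returning to the measurement principle described earlier: an angular separation was recovered, modulo the grid period, from the phase difference of the two stars' modulated signals. The Python sketch below is an assumed, highly simplified illustration of that idea; the actual Hipparcos data reduction was far more elaborate.

```python
import math

GRID_PERIOD_ARCSEC = 1.208   # modulating grid period quoted above

def separation_mod_grid(phase_a_rad: float, phase_b_rad: float) -> float:
    """Angular separation (arcseconds, modulo the grid period) from two signal phases."""
    phase_difference = (phase_b_rad - phase_a_rad) % (2 * math.pi)
    return GRID_PERIOD_ARCSEC * phase_difference / (2 * math.pi)

# A quarter-cycle phase difference corresponds to about 0.302 arcseconds
print(separation_mod_grid(0.0, math.pi / 2))
```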
An attitude and orbit-control subsystem ensured correct dynamic attitude control and determination during the operational lifetime.
ArXiv is a repository of electronic preprints approved for posting after moderation, but not full peer review. It consists of scientific papers in the fields of mathematics, physics, astronomy, electrical engineering, computer science, quantitative biology, statistics, mathematical finance and economics, which can be accessed online. In many fields of mathematics and physics, almost all scientific papers are self-archived on the arXiv repository. Begun on August 14, 1991, arXiv.org passed the half-million-article milestone on October 3, 2008, and had hit a million by the end of 2014. By October 2016 the submission rate had grown to more than 10,000 per month. ArXiv was made possible by the compact TeX file format, which allowed scientific papers to be transmitted over the Internet and rendered client-side.
Additional modes of access were soon added: FTP in 1991, Gopher in 1992, the World Wide Web in 1993. The term e-print was adopted to describe the articles, it began as a physics archive, called the LANL preprint archive, but soon expanded to include astronomy, computer science, quantitative biology and, most statistics. Its original domain name was xxx.lanl.gov. Due to LANL's lack of interest in the expanding technology, in 2001 Ginsparg changed institutions to Cornell University and changed the name of the repository to arXiv.org. It is now hosted principally with eight mirrors around the world, its existence was one of the precipitating factors that led to the current movement in scientific publishing known as open access. Mathematicians and scientists upload their papers to arXiv.org for worldwide access and sometimes for reviews before they are published in peer-reviewed journals. Ginsparg was awarded a MacArthur Fellowship in 2002 for his establishment of arXiv; the annual budget for arXiv is $826,000 for 2013 to 2017, funded jointly by Cornell University Library, the Simons Foundation and annual fee income from member institutions.
This model arose in 2010, when Cornell sought to broaden the financial funding of the project by asking institutions to make annual voluntary contributions based on the amount of download usage by each institution. Each member institution pledges a five-year funding commitment to support arXiv. Based on institutional usage ranking, the annual fees are set in four tiers from $1,000 to $4,400. Cornell's goal is to raise at least $504,000 per year through membership fees generated by 220 institutions. In September 2011, Cornell University Library took overall administrative and financial responsibility for arXiv's operation and development. Ginsparg was quoted in the Chronicle of Higher Education as saying it "was supposed to be a three-hour tour, not a life sentence". However, Ginsparg remains on the arXiv Scientific Advisory Board and on the arXiv Physics Advisory Committee. Although arXiv is not peer reviewed, a collection of moderators for each area review the submissions; the lists of moderators for many sections of arXiv are publicly available, but moderators for most of the physics sections remain unlisted.
Additionally, an "endorsement" system was introduced in 2004 as part of an effort to ensure content is relevant and of interest to current research in the specified disciplines. Under the system, for categories that use it, an author must be endorsed by an established arXiv author before being allowed to submit papers to those categories. Endorsers are not asked to review the paper for errors, but to check whether the paper is appropriate for the intended subject area. New authors from recognized academic institutions receive automatic endorsement, which in practice means that they do not need to deal with the endorsement system at all. However, the endorsement system has attracted criticism for restricting scientific inquiry. A majority of the e-prints are submitted to journals for publication, but some work, including some influential papers, remain purely as e-prints and are never published in a peer-reviewed journal. A well-known example of the latter is an outline of a proof of Thurston's geometrization conjecture, including the Poincaré conjecture as a particular case, uploaded by Grigori Perelman in November 2002.
Perelman appears content to forgo the traditional peer-reviewed journal process, stating: "If anybody is interested in my way of solving the problem, it's all there – let them go and read about it". Despite this non-traditional method of publication, other mathematicians recognized this work by offering the Fields Medal and Clay Mathematics Millennium Prizes to Perelman, both of which he refused. Papers can be submitted in any of several formats, including LaTeX, PDF printed from a word processor other than TeX or LaTeX; the submission is rejected by the arXiv software if generating the final PDF file fails, if any image file is too large, or if the total size of the submission is too large. ArXiv now allows one to store and modify an incomplete submission, only finalize the submission when ready; the time stamp on the article is set. The standard access route is through one of several mirrors. Sev
A star is a type of astronomical object consisting of a luminous spheroid of plasma held together by its own gravity. The nearest star to Earth is the Sun. Many other stars are visible to the naked eye from Earth during the night, appearing as a multitude of fixed luminous points in the sky due to their immense distance from Earth; the most prominent stars were grouped into constellations and asterisms, the brightest of which gained proper names. Astronomers have assembled star catalogues that identify the known stars and provide standardized stellar designations. However, most of the estimated 300 sextillion stars in the Universe are invisible to the naked eye from Earth, including all stars outside our galaxy, the Milky Way. For at least a portion of its life, a star shines due to thermonuclear fusion of hydrogen into helium in its core, releasing energy that traverses the star's interior and then radiates into outer space. Almost all naturally occurring elements heavier than helium are created by stellar nucleosynthesis during the star's lifetime, and for some stars by supernova nucleosynthesis when it explodes.
Near the end of its life, a star can also contain degenerate matter. Astronomers can determine the mass, age, metallicity and many other properties of a star by observing its motion through space, its luminosity and its spectrum respectively; the total mass of a star is the main factor determining its evolution and eventual fate. Other characteristics of a star, including diameter and temperature, change over its life, while the star's environment affects its rotation and movement. A plot of the temperature of many stars against their luminosities produces a plot known as a Hertzsprung–Russell diagram. Plotting a particular star on that diagram allows the age and evolutionary state of that star to be determined. A star's life begins with the gravitational collapse of a gaseous nebula of material composed primarily of hydrogen, along with helium and trace amounts of heavier elements; when the stellar core is sufficiently dense, hydrogen becomes converted into helium through nuclear fusion, releasing energy in the process. The remainder of the star's interior carries energy away from the core through a combination of radiative and convective heat transfer processes.
The star's internal pressure prevents it from collapsing further under its own gravity. A star with a mass greater than 0.4 times the Sun's will expand to become a red giant when the hydrogen fuel in its core is exhausted. In some cases, it will fuse heavier elements in shells around the core; as the star expands, it throws a part of its mass, enriched with those heavier elements, into the interstellar environment, to be recycled later as new stars. Meanwhile, the core becomes a stellar remnant: a white dwarf, a neutron star, or, if it is sufficiently massive, a black hole. Binary and multi-star systems consist of two or more stars that are gravitationally bound and move around each other in stable orbits; when two such stars have a close orbit, their gravitational interaction can have a significant impact on their evolution. Stars can form part of a much larger gravitationally bound structure, such as a star cluster or a galaxy. Stars have been important to civilizations throughout the world; they have been used for celestial navigation and orientation.
Many ancient astronomers believed that stars were permanently affixed to a heavenly sphere and that they were immutable. By convention, astronomers grouped stars into constellations and used them to track the motions of the planets and the inferred position of the Sun; the motion of the Sun against the background stars was used to create calendars, which could be used to regulate agricultural practices. The Gregorian calendar, used nearly everywhere in the world, is a solar calendar based on the angle of the Earth's rotational axis relative to its local star, the Sun. The oldest dated star chart was the result of ancient Egyptian astronomy in 1534 BC. The earliest known star catalogues were compiled by the ancient Babylonian astronomers of Mesopotamia in the late 2nd millennium BC, during the Kassite Period; the first star catalogue in Greek astronomy was created by Aristillus in 300 BC, with the help of Timocharis. The star catalogue of Hipparchus included 1,020 stars and was used to assemble Ptolemy's star catalogue.
Hipparchus is known for the discovery of the first recorded nova. Many of the constellations and star names in use today derive from Greek astronomy. In spite of the apparent immutability of the heavens, Chinese astronomers were aware that new stars could appear. In 185 AD, they were the first to observe and write about a supernova, now known as SN 185; the brightest stellar event in recorded history was the SN 1006 supernova, observed in 1006 and written about by the Egyptian astronomer Ali ibn Ridwan and several Chinese astronomers. The SN 1054 supernova, which gave birth to the Crab Nebula, was observed by Chinese and Islamic astronomers. Medieval Islamic astronomers gave Arabic names to many stars that are still used today, and they invented numerous astronomical instruments that could compute the positions of the stars; they built the first large observatory research institutes, mainly for the purpose of producing Zij star catalogues. Among these, the Book of Fixed Stars was written by the Persian astronomer Abd al-Rahman al-Sufi, who observed a number of stars, star clusters and galaxies.
According to A. Zahoor, in the 11th century the Persian polymath scholar Abu Rayhan Biruni described the Milky Way galaxy as a multitude of fragments having the properties of nebulous stars.
Heat stroke is the most serious type of heat injury and a life-threatening medical emergency, often requiring admission to an intensive care unit. It occurs when the body heats up faster than it can cool itself: the sweating mechanism fails and the core temperature climbs to 104 F (40 C) or higher, sometimes reaching 106 F or more within 10 to 15 minutes. Left untreated, heat stroke can damage the brain and other vital organs and can kill. The condition is most common in the summer months, and heat stroke and deaths from excessive heat exposure rise sharply during summers with prolonged heat waves. (The cold-weather counterpart is hypothermia, in which the body loses heat faster than it can produce it; that condition can also be fatal if not brought under control.)

Heat stroke sits at the severe end of a spectrum of heat-related illness. The milder conditions include heat rash (prickly heat), which appears when sweat does not evaporate from the skin; heat edema and heat tetany; heat cramps, muscle pains or spasms during heavy exercise that warn that the body has lost too much salt through sweating; heat syncope, fainting caused by blood pooling in the legs after standing still for a long time in a hot environment; and heat exhaustion, which is caused by a rise in core body temperature coupled with fluid loss (dehydration) and salt loss. Symptoms of heat exhaustion include headache, dizziness, light-headedness or fainting, weakness and moist skin, mood changes, irritability or confusion, and nausea and vomiting. It is important to recognise heat exhaustion early and treat it quickly, because if it is ignored it can progress to heat stroke. Children who have been playing hard in the heat, elderly people and people with high blood pressure are especially prone to heat exhaustion.

Despite the similar name, heat stroke is not the same thing as a cerebrovascular stroke, in which a clot or a burst vessel (arteries weakened over time by high blood pressure are a major cause of the bleeding form) cuts off the blood supply to part of the brain. That kind of stroke had a worldwide prevalence of about 25.7 million people in 2013, was the second-leading global cause of death that year behind heart disease, is among the leading causes of death in the United States, and is a leading cause of long-term disability.

In hot settings you need to be mindful not just of the air temperature but of the heat index, which factors humidity together with temperature to approximate how hot the weather really feels. With a heat index of 130 F or higher, heat stroke is extremely likely.
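The passage above mentions the heat index without giving a formula. The sketch below is illustrative only: it uses the Rothfusz regression, a widely published US National Weather Service approximation that is not part of this article, and the function name and example figures are this sketch's own. The approximation is intended only for air temperatures of roughly 80 F and above.

```python
# Illustrative sketch: the Rothfusz regression, a commonly published
# approximation of the US National Weather Service heat index.
# Valid only for warm conditions (roughly 80 F and above).
def heat_index_f(temp_f: float, rel_humidity_pct: float) -> float:
    """Approximate heat index (deg F) from temperature (deg F) and relative humidity (%)."""
    t, r = temp_f, rel_humidity_pct
    return (-42.379 + 2.04901523 * t + 10.14333127 * r
            - 0.22475541 * t * r - 6.83783e-3 * t * t
            - 5.481717e-2 * r * r + 1.22874e-3 * t * t * r
            + 8.5282e-4 * t * r * r - 1.99e-6 * t * t * r * r)

# 96 F at 65% relative humidity "feels like" roughly 121 F,
# already close to the range where heat stroke becomes very likely.
print(round(heat_index_f(96, 65)))
```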
Heat stroke takes two broad forms. Classic heat stroke (sometimes called non-exertional heat stroke, NEHS) builds up over days of exposure to high temperatures and is most common in infants, the elderly and people weakened by chronic illness; a child or a dog shut in a parked car on a hot day is a typical scenario, because the temperature inside a parked car can rise by 20 F (more than 6.7 C) in ten minutes. Exertional heat stroke strikes during strenuous physical activity, typically in athletes, soldiers and outdoor workers, and is usually produced by a combination of hot and humid conditions, prolonged exertion, clothing that restricts the evaporation of sweat, inadequate acclimatisation to the heat, dehydration, too much body fat and a lack of fitness. Heat stroke has been reported as the third-leading cause of death among high school athletes, and it is one of the top five causes of death in sport and physical activity, alongside cardiac events, brain injury, cervical injury and exertional sickling.

Anything that interferes with the body's ability to shed heat or stay hydrated raises the risk: working or exercising in the heat without drinking enough fluids; exposure to full sun on hot days, especially between 11 am and 3 pm; alcohol, which dehydrates (alcoholics are particularly susceptible); certain medications, particularly drugs that act on the central nervous system or that cause dehydration, increased urination or changes in sweating; sweat gland problems and some skin disorders that interfere with heat loss; and heart disease. Males appear to be more vulnerable than females, and a person who has suffered one heat stroke is at increased risk of another.

The damage is done at the cellular level: when cellular injury is caused by excess body temperature, proteins are denatured and injured cells undergo apoptosis (programmed cell death) or necrosis. In muscle, ATP depletion injures myocytes and releases intracellular constituents, including creatine kinase (CK) and other muscle enzymes, myoglobin and various electrolytes, into the circulation, which is one reason heat injuries are a significant cause of kidney failure. Heat stroke also raises serum catecholamine levels, which may link it to stress-induced cardiomyopathy, and it can trigger blood-clotting disorders such as disseminated intravascular coagulation (DIC). The progression to multiple organ dysfunction can be fatal.

Heat waves make the toll visible on a population scale. The July 1995 heat wave killed more than 600 people in the city of Chicago alone; in the European heat wave of 2003, France suffered the worst losses, with 14,802 people dying from causes attributable to the blistering heat; and almost 2,000 people died in India's heat wave of 2015. Official counts lag the events themselves, because mortality figures are compiled from death certificates that must be verified and tabulated by cause of death, age, sex and race or ethnicity, which is why published heat-related death statistics are typically a year or two old.
Stroke and Tinnitus: Is There A Relation? The relation between stroke and tinnitus is best understood by an examination of what 'stroke' implies in this context. 1-4 Hyponatremia, also a heat hazard, occurs when excessive water consumption causes an imbalance to the body chemistry. org] If it is heat stroke , cool the athlete rapidly using cold water immersion. Its also called Non Exertional Heat Stroke (NEHS). Conditions that interfere with heat loss, including certain skin disorders and drugs that decrease sweating, increase the risk. With heat exhaustion progressing to a heat stroke, the baby will begin to show signs of muscle heat cramps such as stomach and leg cramps, followed by incessant crying. Heat Stroke Symptoms and Side Effects | Livestrong. All dogs can suffer from heat stroke, but the following types of dogs are at greater risks: Dogs weakened by health problems or those that are on medication. However, whenever dealing with a victim of heat stroke, the patient must be referred to professional medical assistance. Treatment of heat related illnesses depend on the condition, but symptoms may include headache, nausea, vomiting, dizziness, fainting, seizures, and coma. While it might not seem serious, heat stroke can cause concerning, dangerous changes in your health. That’s a little different from heat stroke brought on by exercising strenuously in the heat, which can cause your skin to feel slightly moist or dry. These account for 87% of all stroke cases. Long Term Effects Of Heat Stroke Heat stroke can cause several problems including the ones that can last a life time even if you just have the stroke once. Decreased heat loss. Contributing causes of death are defined as other significant conditions that contributed to the death, but did not result in the underlying cause of death 7. HEAT CRAMPS are a warning sign the body has lost too much salt through sweating. During summer time, it is one of the commonest conditions affecting people in India. Long-Term Effects. heat stroke. (While the military routinely uses ice to cool heat stroke victims, some studies have shown this can also cause frostbite. So I believe to have had a heat stroke during a music festival in the summer. Heat stroke can cause death or. Heat stroke develops when the body systems are overwhelmed by heat and begin to stop functioning. This disorder is not the same as fever. Without prompt treatment, heat exhaustion can lead to heatstroke, a life-threatening condition. Body temperature may rise to 106°F or higher within 10-15 minutes. Heat stroke is typically caused by a combination of environmental, physical, and behavioral factors. Elderly people and those with high blood pressure are also prone to heat exhaustion. Heat stroke and deaths from excessive heat exposure are more common during summers with prolonged heat waves. Combinations of heat, overactivity and dehydration can make a mild heat stroke into something serious. Heat stroke is a severe medical emergency. 6°C, failure of thermoregulation and a decreased level of consciousness. Heat Stroke is the third leading cause of death among high school athletes. In pathological terms a stroke implies a condition where oxygen supply to the brain is inadequate due to either a blockage or hemorrhage of a blood vessel. It is a widely-known fact that excessive alcohol causes dehydration. Proteins are denatured, and injured cells undergo apoptosis (programmed cell death) or necrosis. 
Symptoms of heat stroke: · The hallmark symptom of heat stroke is a core body temperature above. 2 In addition to being the first and fifth leading causes of death, heart disease and stroke result in serious illness and disability, decreased quality of life, and hundreds of billions of dollars in economic loss every year. Understand the risk factors for heat casualties (chapter 4 ). Warfarin thins the blood thereby making exposure to sun cause surface blood in the skin and eyes to heat making us more susceptible to heat stroke and retinal damage. Over the past decade, daily record temperatures have occurred twice as often as record lows across the continental United States, up from a near 1:1 ratio in the 1950s. It is used to check for heart problems caused by heat exhaustion. Individuals may be running errands, exercising, or enjoying a day outside when the body begins to overheat after so much fun in the sun. Rapid cooling of the body is an essential aspect of the Heat Stroke. Dehydration: Both alcohol and the sun cause dehydration. Deaths due to Heat Stroke accounted for close to 5% of all the deaths due to the natural causes between 2010 and 2014. The heat makes the wind evil flow upwards in the body which affects the meridians and obstructs orifices (openings of the body). [kendrickfincher. Depending on the severity of heat stroke, a combination of cooling methods, fluid therapy, and medications are likely to be used for treatment. Heat stroke occurs after two other heat-related illnesses. You can prevent heatstroke if you receive medical attention or take self-care steps as soon as you notice problems. A body temperature of 104 F (40 C) or higher is the main sign of heatstroke. Heat stroke is characterized by muscle heat cramps and can be threatening to the infant as they are more susceptible to damage caused by extreme heat. What are some clinical symptoms of heat stroke in cattle? Beef Cattle October 14, 2008 Cattle that have their mouths opened and breathing hard, show signs of lethargy with their heads low and well as increased salivation are some early symptoms. Heat stroke is a medical emergency and can cause organ damage, death or permanent disability if not treated urgently. Heat stroke, a form of hyperthermia, is characterized by a core body temperature Treatment. Despite aggressive lowering of core body temperature and treatment, the pathophysiologic changes associated with heat stroke can lead to multi-organ dysfunction, which can be fatal. "Compared with stroke in older people, stroke in the young is a. Warfarin thins the blood thereby making exposure to sun cause surface blood in the skin and eyes to heat making us more susceptible to heat stroke and retinal damage. If the core temperature rises above 105. Others symptoms of heat illness include a drastic mood change, feeling. Share in the message dialogue to help others and address questions on symptoms, diagnosis, and treatments, from MedicineNet's doctors. Heat stroke is typically sudden, worsens quickly, and may lead to a coma, irreversible brain damage, and death. To help keep your dog safe and cool during the summer, here is the lowdown on signs that he's overheating and how to prevent it: hint, a little water does wonders for keeping your pup cool. However, whenever dealing with a victim of heat stroke, the patient must be referred to professional medical assistance. Dide effects of certain medications (for example, dehydration, increased urination, sweating). Heat Stroke. 
All 3 reactions are caused by exposure to high temperatures often with high humidity. 5 cause of death and a leading cause of disability in the United States, you probably have a friend or family member who has suffered from it. Over the past decade, daily record temperatures have occurred twice as often as record lows across the continental United States, up from a near 1:1 ratio in the 1950s. What is heat exhaustion and what are the symptoms of heat stroke in kids, babies, adults, and pets? The life-threatening condition can develop in MINUTES, here's how to spot the signs Josie Griffiths. 0 °C and disarray. When a person with heat exhaustion does not receive treatment, the illness can worsen and become heat stroke. Heat stroke can result in the brain overheating and cells dying. Sometimes a person experiences symptoms of heat exhaustion before progressing to heat strokes. Heat Stroke –This often happens if heat exhaustion is left untreated. Heat stroke is typically caused by a combination of hot environment, strenuous exercise, clothing that limits evaporation of sweat, inadequate adaptation to the heat, too much body fat, and/or lack of fitness. A lack of sweat—or an abundance of it. Heat stroke Owner Factsheet for cats. We lose much of the heat through convection (wind cooling) and evaporation (sweat), but we “settle” on a temperature up around 39 degrees. The sooner you begin treatment, the better your guinea pig's chance of recovery. With these conditions, the body’s natural cooling mechanisms are affected. Heat exhaustion is an illness caused by dehydration and salt loss, and can lead to heat stroke. Heatstroke occurs because the body cannot lose heat rapidly enough in conditions of extreme heat. Other possible causes of heat stroke include sweat gland problems, heart disease, alcohol use and dehydration. Signs of heat stroke Panting or rapid breathing. View messages from patients providing insights into their medical experiences with Heat Exhaustion - Causes. Heat stroke Owner Factsheet for cats. • Stroke was the second-leading global cause of death behind heart disease in 2013, accounting for 11. Heat cramps - muscle pains or spasms that happen during heavy exercise. Heat stroke (also known as sunstroke) is the most serious form of heat injury and can be deadly. During summer time, it is one of the commonest conditions affecting people in India. The human body uses sweat as a means of cooling off, but in extreme heat, sweating can't always cool the. Heat stroke is a life-threatening condition and you should seek immediate medical. Heat disorders generally have to do with a reduction or collapse of the body’s ability to shed heat by circulatory changes and sweating or a chemical (salt) imbalance caused by too much sweating. Heat stroke is a life-threatening injury requiring neurocritical care; however, heat stroke has not been completely examined due to several possible reasons, such as no universally accepted definition or classification, and the occurrence of heat wave victims every few years. Reduced ability to acclimatize. Heat stroke can cause death or permanent disability if emergency treatment is not provided. Heat stroke can cause brain damage in dogs and even death in a short time. Drinking alcohol. Poor hydration system increases the risk of Heat Stroke. Classic heat stroke builds up over days and is most common in infants and in the elderly. Heat stroke can cause blood disorders and damage to the heart, liver, kidneys, muscles, and nervous system. 
That's why there's about a two-year delay in the mortality statistics found in the Heart Disease and Stroke Statistics Update. Exercise-related heat exhaustion is an illness caused by getting too hot while exercising. Heat-related illnesses such as heat stroke occur when the body struggles to cool itself. Recent research has identified a cascade of. What Is Heat Stroke? When the human body overheats, a condition referred to as heatstroke can be triggered. View messages from patients providing insights into their medical experiences with Heat Exhaustion - Causes. Heat stroke is a medical emergency that can result in brain and/or internal organ damage. Increased sensitivity to heat. Because mortality is considered "hard" data, it's. Hot weather can be uncomfortable as well as dangerous. Your loved one, for example, may not eat food from one side of the plate because he or she doesn’t see it. Working or exercising in hot conditions or weather without drinking enough fluids is the main cause of heat stroke. Heat stroke is a life-threatening injury requiring neurocritical care; however, heat stroke has not been completely examined due to several possible reasons, such as no universally accepted definition or classification, and the occurrence of heat wave victims every few years. Your dog will usually require hospitalization for 24 - 48 hours until deemed stable for discharge. Extreme dehydration can occur, and this heat stroke can easily progress into an illness that lasts longer than required. However, after a mild or medium stroke you might get a complication like a blood clot in one of your legs that causes a pulmonary embolism that kills you,. Heat stroke takes place when your body is no longer able to cool itself down. Be on the lookout for heat-related problems if you take medications that can affect your body's ability to stay hydrated and dissipate heat. Apart from this, Heat Syncope can also be caused as a result of blood collecting in the legs if an individual is standing for an extended period. After Effects Of Heat Stroke. tells SELF. Immediate Care. People have died as a result of heat stroke from organ failure and/or gone into comas due to the brain getting too hot. body temperature is greater than 40. Reports split the deaths into two categories: heat-caused and heat-related. This is a serious medical condition that needs immediate medical condition, or it may lead to permanent disability or death. Heat exhaustion and heatstroke are two potentially serious conditions that can occur if you get too hot. Ideally, place a fan near the wet dog to accelerate the cooling. | <urn:uuid:af90de84-29d9-4f2d-aa1a-7ceda2cb3dad> | CC-MAIN-2019-47 | http://igjb.flcgorizia.it/heat-stroke-causes.html | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670987.78/warc/CC-MAIN-20191121204227-20191121232227-00500.warc.gz | en | 0.950377 | 6,974 | 3.6875 | 4 |
During the Depression and early years of World War II there was widespread concern about the diminishing rate of growth of Australia's population. Despite negligible migration, the population had grown from 6.6 million in 1932 to 7.2 million in 1942. Nevertheless, the population was becoming less youthful: 16 per cent of women had never married, up to 20 per cent of married women were childless, there were fewer births occurring in the first year of marriage, and families were becoming smaller. The net reproduction rate in 1921 was 1.3, but during the Depression it went below the replacement level for the first time, falling to 0.93 in 1934. The number of marriages increased sharply in 1939–42, but this was thought to be a wartime aberration.
Public opinion polls showed that a great majority of people believed Australia needed a much larger population and thought that this would best be achieved by larger families. Politicians, community leaders, journalists and other commentators shared this view. In 1943 both John Curtin and Robert Menzies claimed that the population needed to double in the next 20 years, while Frank Forde went further, suggesting that it needed to reach 30 million in the next 30 years. WC Wentworth pointed out that, with less than 2.5 children per marriage, the population might never reach 8 million. The economist Colin Clark asserted that, unless attitudes to children changed, there was a risk that Australia would either be deluged with migrants or would be taken by force. The budding demographer WD Borrie was more restrained, but he agreed that at the current rate of fertility the population could start to decline within 30 years. Substantial post-war migration would lessen the risk, but since 1900 migration had accounted for only 18 per cent of Australia's population growth. Moreover, Britain and northern Europe also had birth-rate problems, so unless migrants came from southern Europe migration would not necessarily alleviate the problem.
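The warnings above rest on simple compounding arithmetic. The sketch below is illustrative only and is not Borrie's method: it applies a constant net reproduction rate once per generation to the figures quoted in this section (a population of 7.2 million in 1942 and the 1934 rate of 0.93), and the 30-year generation length and the absence of migration are assumptions made here for the sake of the example.

```python
# Illustrative toy projection, not Borrie's method: apply a constant net
# reproduction rate (NRR) once per generation, ignoring migration and the
# age structure. The 7.2 million and 0.93 figures come from the text;
# the 30-year generation length is an assumption for the example.
def project(population_millions: float, nrr: float, generations: int,
            start_year: int = 1942, generation_years: int = 30) -> None:
    year = start_year
    for _ in range(generations + 1):
        print(f"{year}: {population_millions:.1f} million")
        population_millions *= nrr
        year += generation_years

project(7.2, 0.93, generations=3)   # 7.2 -> 6.7 -> 6.2 -> 5.8 million
```

Any rate held below 1.0 shrinks the total a little more each generation, which is why a figure of 0.93 alarmed contemporaries even though the implied decline per generation looks modest.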
In March 1944 the British government announced that there would be a royal commission into the declining birth rate. Questioned in Parliament, Curtin said that consideration would be given to a similar inquiry in Australia. John Cumpston, the Director-General of Health, responded with enthusiasm and within a few days had written a lengthy memorandum on the problem, referring to national psychology, statistical evidence, deliberate prevention of birth, economic influences, family life and housing, and nutritional and pathological influences. The Department of Post War Reconstruction also had an interest in population policy and its links with economic and social policy. HC Coombs met Cumpston and they agreed on a division of labour. In May 1944 the NHMRC appointed a committee to look at the medical aspects of the declining birth rate. Cumpston commissioned reports from Dame Enid Lyons and Lady Cilento on child-bearing, KS Cunningham on national psychology and education for parenthood, and Constance Duncan on measures to improve the circumstances of mothers and young children. Gerald Firth drafted a report on statistical, economic and social aspects, assisted by Borrie and staff in the Bureau of Census and Statistics and the Department of Labour and National Service.
The various reports, with a general summary by Cumpston, were published by the NHMRC in November 1944. Stan Carver, the Statistician, criticised Cumpston for downplaying some of the economic factors, such as economic insecurity, the desire of most people to share rising living standards with their children, and the increasing cost of family maintenance. The report was the most comprehensive study of the Australian population problem since the turn of the century. In the next few years further work was done on nutritional factors and other medical aspects of the birth-rate problem. The Department of Post War Reconstruction looked at such measures as extended social service benefits for large families and the training of domestic helpers, but met with Treasury resistance. Gradually public concern abated. The net reproduction rate continued to improve after the end of the war, rising to 1.41 in 1947 and 1.5 in 1958. With large-scale migration from 1947 onwards, fears of a static or even declining population were forgotten for several decades.
|COPIES OF CABINET RECORDS, 1901–60
Sir Frederick Stewart. The birth-rate and associated matters, 19 September 1941
|Department of Health|
|CORRESPONDENCE FILES, 1925–49
|Decline of the birth-rate: suggested Royal Commission, 1944–49 (4 parts)
Reports, memoranda, and correspondence with organisations and individuals on the decline in the birth rate; the inter-departmental inquiry into reasons for the decline; the report (November 1944) by the NHMRC; and proposals for improved hospital, medical and nursing services for mothers and the training of domestic workers. The correspondents include JHL Cumpston, F McCallum, HC Coombs, GG Firth, SR Carver, C Duncan and B Mayes.
|Committee on medical aspects of the decline in the birth-rate, 1944–47 (2 parts)
Minutes of meetings (June 1944 – February 1945) of the committee of inquiry into medical aspects of the decline in the birth rate (chair: M Allan), notes on housing policy in relation to the birth rate, and correspondence about the inter-departmental inquiry into the birth rate. The correspondents include JHL Cumpston, HC Coombs, SR Carver and B Mayes.
|Department of Immigration|
|CORRESPONDENCE FILES CLASS 5 (BRITISH MIGRANTS), 1945–50
|The birthrate and future of the population, 1945–47
Includes a memorandum (February 1945) on aspects of the decline of the birth rate and the future of the population and a letter (20 May 1947) from R Wilson, the Commonwealth Statistician, on Australian population trends.
|Department of Post War Reconstruction|
|CORRESPONDENCE FILES, 1941–50
|International relations: population problems and migration, 1941–46
Correspondence of JG Crawford, GG Firth, LF Crisp and other officers with WD Borrie of the University of Sydney relating to his research on differentials in family structure, fertility and other population problems.
|1943/446 Pt 1|
|International relations: population problems and migration, 1942–45
Memoranda by WD Borrie on population and post-war development, the role of immigrants in population growth, imperial planning in migration, fertility and family structure in Australia, and other subjects.
|1943/446 Pt 2|
|Population: proposed inquiry, 1944–45
Memoranda and correspondence on the proposed inquiry on the declining birth rate, collaboration with the Department of Health and the Bureau of Census and Statistics, research on the economic and social aspects of the population problem, housing policy and the birth rate, and meetings of the working party. The correspondents include HC Coombs, GG Firth, JHL Cumpston, SR Carver and WD Borrie.
|Population: general suggestions, 1944–45
Letters from individuals, mostly sent to JB Chifley or JJ Dedman, on the population problem, household facilities, the birth rate, child malnutrition, maternity welfare and financial assistance to families.
|1944/167 Pt 1|
|Population material for report, 1944
Reports by GG Firth and his drafts of the report on the economic and social aspects of the birth rate and memoranda and material received from the Bureau of Census and Statistics, the University of Melbourne and other sources. The correspondents include GG Firth, W Prest, WD Borrie and EJR Heyward.
|Population policy, 1944–47
Interim report of the NHMRC on medical aspects of the decline in the birth rate and correspondence on maternal and infant welfare, the training of domestic workers and action to be taken by the Department of Health and the Department of Post War Reconstruction. The correspondents include HC Coombs, GG Firth, KAL Best and HJ Goodes.
|Regional planning: discussions on population planning with Prof. Griffith Taylor, 1948
A memorandum by the Regional Planning Division concerning discussions with the expatriate geographer Griffith Taylor on the optimum population of Australia with reference to geographical distribution and a letter (28 May 1948) by AS Brown to JG Crawford about the visit of Griffith Taylor to Australia.
|Regional distribution of population, 1949
Minutes relating to population and regional development, decentralisation, and the settlement of migrants on the land. The correspondents include AA Calwell, LF Crisp, G Rudduck and THE Heyes.
|Prime Minister's Department|
|CORRESPONDENCE FILES, 1934–50
|Increase of the Australian birthrate, 1938–47 (2 parts)
Correspondence of J Curtin and JB Chifley with organisations and individuals dealing with population questions, maternal and child welfare, birth control and the decline in the birth rate.
In the period 1929–37 the number of departures from Australia exceeded the number of arrivals. There was an increase in immigration in 1938–40, including 6475 Jewish refugees from Germany and Austria, but after 1940 immigration virtually ceased. The staff of the Migration Branch within the Department of the Interior dwindled to four and they responded to talk of large-scale post-war migration with some scepticism. A number of researchers and officials shared their scepticism. WD Borrie pointed out to the Department of Post War Reconstruction that Britain and other countries in northern Europe were likely to face labour shortages after the war and large-scale emigration was not in their interests. WD Forsyth in his book The Myth of Open Spaces (1942) conceded that post-war industrial development might support moderate immigration. He agreed, however, with Borrie that migration from Britain and northern Europe was a thing of the past, while migration from southern and eastern Europe would require careful selection, education and social control. Focusing on the Australian economy, LF Giblin told the Financial and Economic Committee in January 1943 that it would be unwise to contemplate an upsurge in migration until Australia had made up arrears in capital expenditure, housing and the provision of social services.
The decision to begin planning for post-war immigration was prompted by a despatch from the British government in April 1943. It was considering adopting a free passage scheme, similar to one that operated in 1919–22, for ex-servicemen and their dependants who wished to emigrate to the dominions. The Department of the Interior was inclined to limit its response to ex-servicemen, but Roland Wilson, LF Crisp and Paul Hasluck saw the despatch as an opportunity to seek Cabinet directions on post-war immigration generally. On 20 October 1943, on the recommendation of JS Collings, Cabinet set up an inter-departmental committee on immigration, chaired by Joseph Carrodus. It remained in existence until 1946. Much of its work was done by sub-committees, dealing with British migration, child migration, foreign migration, the commencement of migration, and publicity. Their memberships overlapped and key figures were AR Peters and JH Horgan (Interior), LF Crisp and JG Crawford (Post War Reconstruction), WD Forsyth (External Affairs), FH Rowe (Social Services) and HJ Goodes (Treasury).
British migration was relatively uncontentious. In May 1944 Cabinet decided that it would share with the British government the cost of providing free passages for British ex-service personnel and their dependants and assisted passages for other British emigrants. No approved applicant would be required to pay more than £10. Child migration provoked more dissension. Officials of the Department of the Interior favoured giving support to migration organisations such as Barnardos and the Fairbridge Society. Supported by Chifley, the Post War Reconstruction representatives doubted the ability of such bodies to handle large numbers of children, including European war orphans, and argued that government infrastructure needed to be set up. At a meeting of the inter-departmental committee in October 1944, HC Coombs claimed that children were the best type of migrant and suggested a target of 17,000 children per annum for three years. The Commonwealth would meet the cost of bringing the children to Australia and maintaining them. In December 1944 Cabinet accepted the Chifley/Coombs arguments in favour of large-scale child migration. The inter-departmental committee also supported a vigorous policy of bringing in European migrants, including refugees, as it considered that British migration alone would not lead to a much larger population. The question of assistance to non-British migrants was discussed but not resolved. The committee's report was referred to a Cabinet sub-committee, which never met, and no decision was taken.
On 13 July 1945 Arthur Calwell became the first Minister for Immigration. He had a strong interest in the subject and had just written a well-researched pamphlet entitled 'How many Australians tomorrow?'. About 25 officers from the Department of the Interior, headed by AR Peters, formed the nucleus of the new department. In May 1946 Tasman Heyes took over the position of Secretary from Peters. Under his leadership, the Department of Immigration grew rapidly and by 1950 the staff totalled more than 5000. In 1947 Calwell established an Immigration Advisory Council, chaired by Les Haylen, with representatives of trade unions, employer organisations, the Returned and Services League and other bodies. It was provided with a huge amount of information on passage schemes, shipping, the selection of migrants, deportations and other matters, and many policy proposals were referred to it for endorsement.
Calwell made his first statement on immigration in Parliament on 2 August 1945. His emphasis was very much on population: a much larger population was needed to meet challenges to 'our right to hold this land'. Immigration policy should be closely related to social policy (creating security and higher standards of living) and economic policy (creating full employment and markets for Australian goods). Echoing Giblin, he said that Australia's maximum absorption capacity was 2 per cent per annum, or 140,000 people. Taking into account the net population increase, this left a ceiling for migration of 70,000 per annum. This target was frequently quoted by officials, but it was not in fact reached until 1949.
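Calwell's ceiling is simple arithmetic on the figures quoted in this guide. The sketch below is only a rough check, not an official calculation: the population figure comes from earlier in this section, while the natural-increase figure of about 70,000 a year is an inference from the statement rather than a number it quotes.

```python
# Rough check of Calwell's 1945 arithmetic; not an official calculation.
population = 7_200_000                       # early-1940s population, quoted earlier in this guide
absorption_capacity = 0.02 * population      # "2 per cent per annum": about 144,000, rounded to 140,000
natural_increase = 70_000                    # inferred: capacity minus the announced migration ceiling
migration_ceiling = 140_000 - natural_increase   # = 70,000 migrants per annum
print(int(absorption_capacity), migration_ceiling)
```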
In his statement Calwell warned that large-scale migration would probably have to wait for two years. There were many difficulties in putting into effect the ambitious new policy. The quest for large numbers of child migrants came to nothing and was abandoned in 1946. The first post-war contingent of unaccompanied children arrived in August 1947 and other small groups followed, sponsored by churches and migration societies. The Commonwealth and state governments provided financial support but child migration eventually came to an end. The free and assisted passage agreements with the British government were signed on 5 March 1946 and came into effect on 31 March 1947. There were large numbers of prospective British migrants, but a critical shortage of shipping prevented large numbers arriving until the latter months of 1947. The numbers soared in 1949 and 1950, and by 1957 more than 600,000 post-war British migrants had arrived in Australia, of whom about half were assisted. The Assisted Passage Scheme remained in force until 1982.
An interim policy on the admission of non-British migrants was adopted in 1946, but shipping difficulties and numerous excluded categories meant that the numbers were low until late 1947. The first substantial group was 1321 Jewish refugees from Europe and China, who were sponsored by the Executive Council of Australian Jewry. Some politicians and newspapers reacted with a virulent campaign against Calwell, and his support for Jewish migration weakened. He visited several northern European countries in 1947 but, with the exception of the Netherlands, there was little interest in migration agreements. Gradually, more encouragement was given to migrants from southern and eastern Europe and by 1949 large numbers of nominated Italians were arriving. Aware of serious labour shortages in Australia, Calwell secured Chifley's approval to sign an agreement with the International Refugee Organization (IRO) in July 1947. It specified that Australia would take 12,000 refugees and displaced persons per annum, with the IRO meeting the bulk of the transport costs and the Australian government providing employment. At first, the Australian selection teams were only interested in unmarried displaced persons from Baltic countries, but family units were admitted in 1948 and all European nationalities in 1949. Limitations on numbers were abandoned and 75,500 displaced persons arrived in 1949 and 70,000 in 1950. Such a massive influx posed enormous challenges for the Department of Immigration and the Commonwealth Employment Service: setting up reception centres and hostels, devising educational programs, providing training, and allocating the migrants to public works projects, the building, timber and sugar industries, and other employment.
The government's success in bringing about large-scale immigration in 1945–49 was, to some degree, overshadowed by its rigid adherence to the White Australia Policy. The inter-departmental committee on immigration did not discuss coloured immigration, believing that any change in policy was a matter for Cabinet. At no time did Cabinet consider a relaxation of the policy and Calwell, in particular, always held that it was sacrosanct. A relatively small number of Indonesians, Malayans and Chinese had been evacuated to, or stranded in, Australia during the war. Most were repatriated in 1945, but about 800 were still in Australia in 1948. Calwell's efforts to deport them, which often meant breaking up families, aroused strong opposition both in Australia and Asia. In particular, the O'Keefe case in 1949 became a cause célèbre, leading to a High Court case and the swift enactment of the Wartime Refugees Removal Act. Opposition politicians, diplomats, church leaders, academics and editorial writers all argued for a degree of flexibility and sensitivity in administering the White Australia Policy. Calwell's stock response was that any departure from the policy 'would ultimately impair the homogeneity of our population and bring to this country the dissensions and problems that are the inevitable lot of states having mixed populations'. In taking this stand, he was consistently supported by Chifley.
|CURTIN, FORDE AND CHIFLEY MINISTRIES: CABINET MINUTES AND AGENDA, 1941–49
|British and alien migration to Australia, 20 October 1943||538|
|Post-war migration, 10 May 1944||538A|
|Post-war migration: report and recommendations on white alien migration, 10 November 1944||538B|
|Child migration, 6 December 1944||538C|
|Foreign child migration, 5 November 1946||538C(1)|
|Inter-Departmental Committee on Migration: report on the establishment of reciprocity in connection with the social services of Great Britain and the Commonwealth, 18 January 1945, 17 April 1945||538D|
|Publicity necessary to give effect to the government's decisions in regard to migration policy, 18 January 1945||538E|
|Maltese migration: question of assisted passages, 2 February 1945||538F|
|Assisted passage agreement for immigrants from Malta, 9 June 1947||538F(1)|
|Dutch migration from the Netherlands to Australia, 19 December 1946||538I|
|Legal guardianship of child migrants, 2 July 1946||724A|
|Financial assistance to non-governmental migration organisation for the provision of capital facilities for the accommodation and care of migrants, 4 June 1946||1192|
|Immigration policy and procedure, 5 November 1946||1192A|
|Shipping in relation to immigration, 23 August 1946||1239|
|Proposed amendment of Immigration Act, 7 April 1949||1580|
|Commonwealth Immigration Advisory Council|
|COUNCIL MEETINGS: VOLUMES OF AGENDA, NOTES AND MINUTES, 1947–58
Minutes and agenda papers of the Commonwealth Immigration Advisory Council (chair: LC Haylen), including reports on the Assisted Passage Scheme, shipping, legislation, migration from particular countries, reception and training centres for displaced persons, education of non-British migrants, housing, the admission of non-Europeans, deportations, the establishment of migration offices, publicity and statistics.
|Department of External Affairs|
|CORRESPONDENCE FILES, 1943–44
|Migration: Australian policy, 1940–44 (2 parts)
Minutes of the first meeting (9 June 1943) of the Inter-Departmental Committee on Migration, memoranda, correspondence and newspaper cuttings. They include memoranda by Sir John Latham (30 May 1941, 21 September 1943), PMC Hasluck (9 October 1941), Sir Frederic Eggleston (15 March 1943) and WD Forsyth (11 October 1943) on the White Australia Policy and post-war migration.
|Migration: British migration to Australia, 1944
Draft report (17 March 1944) of the sub-committee on assisted migration and correspondence on migration to Australia by British subjects. The correspondents include Sir Iven Mackay, WA Wynes, JA Carrodus and AR Peters.
|Migration: child migration, 1944
Notes of a meeting (24 January 1944) of the sub-committee on child migration and correspondence concerning proposals of the Department of Post War Reconstruction, organisations promoting child migration, and preparation of a Cabinet submission. The correspondents include WD Forsyth, JA Carrodus and S Spence.
|Migration: Sub-Committee 2A: white alien migration, 1943–44
Memoranda by WD Borrie, correspondence, and the draft report (September 1944) of the sub-committee on white alien migration. They include notes (February 1944) by R Wheeler of his interviews in London with representatives of foreign governments about post-war migration to Australia.
|Migration: Sub-Committee 2: absorption and time of resumption, 1942–43
Report (22 December 1943) of the sub-committee on the timing of resumption of migration (chair: JG Crawford) and memoranda by WD Borrie and the Department of Post War Reconstruction.
|Migration: Sub-Committee 3: coloured immigration, 1943–45
Includes memoranda (October–November 1943) by WD Forsyth and the Department of the Interior on the White Australia Policy and immigration policy.
|Migration: refugee migration, 1943–44
A report by Caroline Kelly on European refugees in New South Wales (1938–43), a memorandum (22 March 1944) by JA Carrodus on the proposed Jewish settlement in the East Kimberleys, and correspondence between AR Peters and WD Forsyth about refugees.
|Migration: Europe, 1943
A Department of External Affairs memorandum (29 June 1943) on post-war migration from south-eastern Europe.
|CORRESPONDENCE FILES, 1945
|Migration: Australian policy, 1944–48
A statement (7 March 1945) by the Department of the Interior on post-war immigration policy, notes (13 July 1945) on the United Nations and Australian immigration policy, and correspondence and extracts from overseas newspapers on the White Australia Policy.
|Migration: IDC, 1945–46
Minutes of the 5th–7th meetings of the Inter-Departmental Committee on Migration (chair: AR Peters, THE Heyes) and a letter (9 November 1945) from AR Peters to AA Calwell on the migration of allied servicemen to Australia.
|Migration discussions: Australia, 1944–46
Newspaper cuttings on post-war migration, sources of migrants, the White Australia Policy and related subjects.
|Migration: British migration to Australia, 1945–46
Cables between the Department of External Affairs and the United Kingdom Dominions Office and the draft agreement (5 March 1946) between Australia and the United Kingdom on the Free and Assisted Passage Scheme.
|Migration: child migration to Australia, 1944–45
Notes of a conference (9 January 1945) of Commonwealth and state officers on child migration (chair: JA Carrodus) and correspondence, cables and newspaper cuttings on child migration.
|Department of Immigration|
|CORRESPONDENCE FILES CLASS 2 (RESTRICTED IMMIGRATION), 1945–50
Files of correspondence and other records relating to restricted immigration, including the admission of Asians into Australia and deportations.Series: A433
|Chinese and other coloured immigration (post-war), 1943–44||1944/2/53|
|Australian Board of Missions: resolutions, 1941–45||1946/2/203|
|Indian High Commission: query on immigration into Australia, 1945–47||1947/2/1705|
|White alien immigration: Statistician's figures, 1940–48||1947/2/1794|
|Aliens married to Australian women: deportation, 1948||1948/2/3189|
|Immigration policy (including Wartime Refugees Act): correspondence, 1949||1949/2/10|
|Sir Frederic Eggleston, 1949||1949/2/6244|
|White Australia policy, 1948–51||1950/2/176|
|CORRESPONDENCE FILES CLASS 3 (NON-BRITISH EUROPEAN MIGRANTS),
Correspondence files relating to European migrants, including refugees, displaced persons, enemy aliens, selection, employment, housing and assimilation.Series: A434
|White alien migration: Ministerial decisions on applications for admission, 1945–47||1945/3/1882|
|Refugees in United Kingdom: admission to Australia, 1945–48||1948/3/4074|
|Replies to newspaper criticism of Displaced Persons, 1948–49||1948/3/13193|
|Australian Jewish Welfare Society scheme for admission of 300 refugee children, 1939–46||1949/3/3|
|Land settlement schemes: migrant participation, 1949–50||1949/3/2543|
|Australian Refugee Immigration Committee: policy, 1938–48||1949/3/7286|
|Survey of former Service camps: housing of migrants, 1947||1949/3/25382|
|Displaced Persons employment policy, 1948–49||1950/3/13|
|TW White: notes on overseas migration, 1948||1950/3/9855|
|Social welfare aspects of immigration, 1948–49||1950/3/17477|
|Italian workers for North Queensland sugar areas, 1947–48||1950/3/42901|
|CORRESPONDENCE FILES CLASS 5 (BRITISH MIGRANTS), 1945–50
Correspondence concerning British immigration, including organisations interested in promoting British migration, nominations, training schemes, housing, shipping, employment and individual cases.Series: A436
|Migration policy: Ministerial statement, 2 August 1945||45/5/834|
|Catholic Hierarchy of Australia: proposals concerning migration, 1946–49||49/5/461|
|Conference of Commonwealth and State migration officers, 1947–48||50/5/2178|
|Immigration Advisory Committee, 1946||1945/5/563 Pt 1|
|Overseas children's schemes: policy, 1941–43||1946/5/2949|
|White alien migration: policy, 1943–46||1947/5/16|
|Housing difficulties of migrants: complaints, 1946–48||1947/5/1733|
|Australian Natives Association: information concerning migration, 1945–47||1947/5/2989|
|Priorities for shipment of migrants under Free and Assisted Passage Scheme, 1946–49||1948/5/51|
|State survey of accommodation for migrants, 1946–47||1948/5/70|
|State and district immigration committees: formation, 1948–49||1948/5/5356|
|Industrial absorptive capacity of Australia in relation to migrants, 1946||1949/5/2979|
|Conference of Commonwealth immigration officers, 1947||1949/5/5627|
|Shipping in relation to immigration: IDC, 1945–48||1950/5/2990-1|
|CORRESPONDENCE FILES CLASS 13 (MIGRANTS H-K), 1951–52
|Caroline Kelly, 1943–47
Report (June 1944) by Caroline Kelly of the University of Sydney on child immigration agencies (non-governmental) in New South Wales, and correspondence and notes concerning meetings between Kelly and the sub-committee on child migration and about the respective roles of the government and private agencies in fostering child migration. The correspondents include AR Peters, JH Horgan and WD Forsyth.
|CORRESPONDENCE (POLICY FILES), 1922–55
Files relating to the assimilation, welfare and education of migrants, including legislation, migrant organisations, transport, housing and accommodation, sponsorship, conferences and refugees.Series: A445
|Formula for allocation of migrant passages among the States, 1946–49||124/1/28-29|
|Un-nominated single British migrants: acceptance by States, 1948–49||124/1/34|
|Full-fare passages: selection of UK operatives for Australian factories, 1946–49||124/1/37|
|Agenda and minutes of ID conference on housing and accommodation, 1948–49||125/2/1|
|Fairbridge farm schools, Western Australia, Pinjarra, 1946–51||133/2/12|
|Re-establishment benefits: Empire and Allied Ex-Servicemen's Migration Scheme, 1944–46||178/1/6|
|Displaced persons employment opportunities policy, 1947–48||179/9/3|
|Displaced persons employment policy, 1948–51||179/9/5|
|Conferences on housing for Australians and migrants, 1948–49||202/3/34-35|
|Immigration of Italians to Australia, 1938–53||211/1/6|
|Report by AA Calwell of visit to Europe, 1947||223/2/5|
|Revised policy: admission of non-British Europeans, 1946–48||235/1/1-2|
|New policy for enemy aliens, 1947–49||235/1/25|
|Department of Post War Reconstruction|
|CORRESPONDENCE FILES OF THE ECONOMIC POLICY DIVISION, 1944–50
Correspondence and notes on the placement of displaced persons, migrant labour for the coal industry, the selection of migrants, and the Commonwealth–State Conference on Migration (16 May 1949).
|Migrant labour: general, 1949–50
Correspondence on population estimates and the availability of migrant labour for public works. The correspondents include JJ Dedman and LF Crisp.
|Immigration: conferences and statements of policy, 1949
Agenda papers and summary of proceedings of the conference (19 May 1949) of Commonwealth and state ministers on immigration (chair: AA Calwell), with comments by AS Brown.
|CORRESPONDENCE FILES, 1941–50
|Population problems and immigration, 1941–47 (3 parts)
Cabinet submissions, minutes (1943–46) of the Inter-Departmental Committee on Migration, reports, correspondence and notes on post-war migration policy, re-establishment benefits for British service personnel, and conferences of Commonwealth and state ministers on migration. The correspondents include JS Collings, HC Coombs, JG Crawford, LF Crisp, MW Phillips and RF Archer.
|Immigration: miscellaneous correspondence, 1943–46
Memoranda and correspondence with organisations and individuals concerning child migration, the admission of refugees and post-war migration generally. The correspondents include HC Coombs, JG Crawford, Bishop TB McGuire, Bishop CV Pilcher, W Bromhead and W Pickering.
|Immigration: assisted British immigration, 1943–46
The report (15 March 1944) of the sub-committee on assisted immigration (chair: AR Peters), memoranda and correspondence on British migration to Australia, relations with the Department of the Interior, the formation of the Inter-Departmental Committee on Migration, and the Free and Assisted Passage Scheme. The correspondents include LG Melville, LF Crisp and R Wilson.
|IDC on Child Migration, 1943–45
Notes of meetings and report (17 March 1944) of the sub-committee on child migration (chair: AR Peters), transcript of the conference (9 January 1945) of Commonwealth and state officials on child migration (chair: JA Carrodus), and memoranda and correspondence on child migration. The correspondents include JB Chifley, HC Coombs and LF Crisp.
|Sub-Committee on White Aliens Migration, 1941–45
Report (21 September 1944) of the sub-committee on white alien migration (chair: AR Peters), comments by LF Crisp, and papers on the proposals of Sir John Latham on the White Australia Policy.
|Immigration publicity, 1944–47
Report (28 November 1944) of the Inter-Departmental Committee on Migration concerning publicity for the recommendations of the committee, with comments by UR Ellis.
|Immigration conferences, 1946–47 (2 parts)
Agenda papers and proceedings of conferences of Commonwealth and state ministers and Commonwealth and state officers on immigration and related correspondence.
|Department of Social Services|
|CORRESPONDENCE FILES WITH A (ADMINISTRATION) PREFIX, 1941–74
|Migration: social service benefits, 1943–44
Minutes of the sub-committee on reciprocal social services and correspondence on the work of the sub-committee, entitlements of discharged British service personnel, reciprocity, and the report of Caroline Kelly on child migration agencies. The correspondents include FH Rowe, TH Pitt, JA Carrodus and AR Peters.
|A183 Pt 1|
|Prime Minister's Department|
|CORRESPONDENCE FILES, 1934–50
|Immigration: policy, 1938–50
Cabinet papers and correspondence of J Curtin and JB Chifley, mainly with the United Kingdom Dominions Office, the Australian High Commissioner in London and state premiers, on post-war migration policy, in particular about British migration, the shipping problem, training of displaced persons and deportation of Asian migrants.
|A349/1/2 Pts 4-8|
|Foreign migration: policy, 1940–48
Correspondence mainly relating to the 1947 agreement with the IRO to bring displaced persons to Australia.
|A349/3/1 Pt 3|
|Immigration: employment of foreign migrants, 1948–50
Correspondence between JB Chifley and state premiers regarding migrant labour needed in particular industries or districts.
|Jews: policy, 1938–46
Correspondence of J Curtin and JB Chifley concerning the persecution of Jewish people by the German government, the establishment of the Executive Council of Australian Jewry, and a proposal by the British government that dominions accept Jewish refugees and other displaced persons.
|M349/3/5 Pt 2|
|CORRESPONDENCE FILES, 1901–76
Cabinet submissions, parliamentary questions, conference agenda and correspondence regarding decisions of the 1945 premiers conference, Commonwealth–state conferences, financial procedures for the Assisted Passage Scheme, maintenance allowances for child migrants, reciprocity with the United Kingdom in social service benefits, medical services for migrants, and the assisted passage agreement with Malta. The correspondents include JB Chifley, AA Calwell, HJ Goodes, J Brophy, THE Heyes and AL Nutt.
|1943/3292 Pts 2-4|
|Displaced persons, 1947–51
Parliamentary questions and correspondence concerning accommodation, clothing and social services for displaced persons, the purchase of equipment for immigration depots and transit camps, financial arrangements for the Bonegilla Reception and Training Centre, immigrant education and shipping. The correspondents include HJ Goodes, HC Newman, THE Heyes, RC Mills, FH Rowe and LF Loder.
Appleyard, RT, British Emigration to Australia, Australian National University, Canberra, 1964.
Blakeney, Michael, Australia and the Jewish Refugees 1933–1948, Croom Helm Australia, Sydney, 1985.
Borrie, WD, The European Peopling of Australia: a demographic history 1788–1988, Australian National University, Canberra, 1994.
Borrie, WD, Immigration: Australia's problems and prospects, Angus and Robertson, Sydney, 1949.
Borrie, WD, Population Trends and Policies: a study in Australian and world demography, Australasian Publishing Company, Sydney, 1948.
Borrie, WD, 'Population policy and social services', Economic Papers, no. 7, 1947, pp. 30–42.
Brawley, Sean, The White Peril: foreign relations and Asian immigration to Australasia and North America 1919–1978, UNSW Press, Sydney, 1995.
Calwell, Arthur A, How Many Australians Tomorrow?, Reed & Harris, Melbourne, 1945.
Forsyth, WD, The Myth of Open Spaces: Australian, British and world trends of population and migration, Melbourne University Press, Melbourne, 1942.
Coldrey, Barry, Good British Stock: child and youth migration to Australia, National Archives of Australia, Canberra, 1999.
Gill, Alan, Likely Lads and Lasses: youth migration to Australia 1911–1983, BBM Ltd, Sydney, 2005.
Markus, Andrew, 'Labour and immigration: policy formation 1943–5', Labour History, no. 46, 1984, pp. 21–33.
Markus, Andrew, 'Labour and immigration 1946–9: the Displaced Persons Program', Labour History, no. 47, 1984, pp. 73–90.
Richards, Eric, Destination Australia: migration to Australia since 1901, UNSW Press, Sydney, 2008. | <urn:uuid:3e392de9-f1cb-4592-b7be-2c9ac664c2c8> | CC-MAIN-2019-47 | http://guides.naa.gov.au/land-of-opportunity/chapter27/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670006.89/warc/CC-MAIN-20191119042928-20191119070928-00498.warc.gz | en | 0.925192 | 7,939 | 3.5 | 4 |
1783 The American Revolution ends. Parishes in America that formerly were part of the Church of England under the authority of the Bishop of London are left without bishops or any church structure beyond the parish level.
1789 The Episcopal Church adopts a Constitution, and the Diocese of South Carolina is established.
1790 The Diocese of South Carolina adopts a Constitution, with the first sentence reading, “The Church in the Diocese of South Carolina accedes to the Constitution of the Protestant Episcopal Church in the United States of America,” the historical name for what is now called The Episcopal Church.
1872 The U.S. Supreme Court issues the first of several decisions upholding the hierarchical nature of The Episcopal Church.
2003-2006 Disagreements over issues related to human sexuality and other matters are the topic of debate among church leaders. In a few dioceses, leaders take steps to break away from The Episcopal Church. In South Carolina, resolutions are passed seeking to distance the diocese from decisions made at General Convention.
2006 South Carolina’s diocesan convention chooses Mark Lawrence as bishop-elect. His election does not receive consents from a majority of the Standing Committees of other dioceses across The Episcopal Church (a requirement of church canons), as concerns are raised about whether he intends to remain part of The Episcopal Church. A second election is held, and this time Mark Lawrence states his intent to “remain in The Episcopal Church.” As a result, a majority of other dioceses give their consent for him to become bishop of South Carolina.
October 2010 Bishop Lawrence presides over a diocesan convention where changes are made in the Constitution and Canons of the diocese to remove its accession to the Constitution and Canons of The Episcopal Church.
October 2011 Bishop Lawrence, as president of the diocese’s nonprofit corporation, files amendments to the corporate charter deleting all references to The Episcopal Church and obedience to its Constitution and Canons.
November 2011 Bishop Lawrence and some diocesan bodies issue quitclaim deeds to every parish of the Diocese of South Carolina, in which the diocese disclaims any interest in the parishes’ properties.
October 2012 A Disciplinary Board of The Episcopal Church issues a “Certificate of Abandonment,” saying the actions taken by Bishop Lawrence in 2010-2011 constituted “abandonment of The Episcopal Church by an open renunciation of the Discipline of the Church.” The Presiding Bishop, the Most Reverend Katharine Jefferts Schori, notifies Bishop Lawrence that she has placed a restriction on his ministry, as required by TEC’s Canons, preventing him from engaging in ministerial acts until the House of Bishops can look into the allegations of abandonment and make a decision.
Two days later, an announcement is placed on the Diocese of South Carolina website stating that the leadership of the Diocese had put resolutions in place earlier that would become effective if any disciplinary action was taken by TEC regarding Bishop Lawrence. “As a result of TEC’s attack against our Bishop, the Diocese of South Carolina is disassociated from TEC, that is, its accession to the TEC Constitution and its membership in TEC have been withdrawn,” according to the announcement.
Two days after that, Bishop Lawrence announces to a meeting of all diocesan clergy that he and the diocese are “no longer associated with the Episcopal Church.”
November 17, 2012 Addressing a special meeting of parish leaders who support breaking away from TEC, Bishop Lawrence again announces that he and the diocese have “withdrawn” from The Episcopal Church.
December 2012 The Presiding Bishop announces she has accepted Bishop Lawrence’s renunciation of the ordained ministry of The Episcopal Church on December 5, and declares him removed. Working in consultation with a steering committee of local Episcopalians, the Presiding Bishop calls a special Diocesan Convention for January, so a new bishop and standing committee can be elected for the continuing diocese.
January 2013 On January 4, a lawsuit is filed in South Carolina Circuit Court against The Episcopal Church by two corporations claiming to represent the Diocese of South Carolina and some of its parishes, seeking a declaratory judgment that they are the sole owners of the property, name and seal of the diocese. The complaint was later amended to include a total of 34 parishes.
On January 23, a judge issues a temporary restraining order preventing The Episcopal Church from using the name or symbols of the diocese. The court order later becomes a temporary injunction.
January 26, 2013 A special Diocesan Convention is held at Grace Church, Charleston, with clergy and lay delegates representing the continuing Episcopal Church parishes and missions. To comply with the restraining order, the diocese adopts a temporary name, “The Episcopal Church in South Carolina,” so it can conduct business.
The Right Reverend Charles G. vonRosenberg is elected provisional bishop and immediately invested by the Presiding Bishop. (Provisional bishops have all the authority and duties of other bishops, but typically serve for a limited period of time while a diocese is in period of reorganization.) A new Standing Committee and Diocesan Council are elected.
February 2013 With the consent of all parties, the state court lawsuit filed by the breakaway group is amended on February 28 to add “The Episcopal Church in South Carolina” as a defendant.
March 2013 Bishop vonRosenberg files a complaint in U.S. District Court against Bishop Lawrence, citing violations of the Lanham Act, the primary federal trademark law of the United States, which prohibits trademark infringement and false advertising. The suit, vonRosenberg v. Lawrence, says that by representing himself as bishop of the diocese, Mark Lawrence is engaging in false advertising.
Later in March, TECSC files its response to the breakaways’ lawsuit, saying that Mark Lawrence and the faction that followed him out of The Episcopal Church have no authority over the assets or property of the diocese, and engaged in a plan to damage the diocese.
On March 8-9, the 222nd Annual Diocesan Convention is held at Grace Church, Charleston, with the diocese continuing to operate under the name “The Episcopal Church in South Carolina.” Delegates representing the parishes and missions that remain part of TEC officially adopt amendments returning the Constitution and Canons to their 2007 version, so that the diocese again accedes to the Constitution and Canons of The Episcopal Church. St. Mark’s, Port Royal, is officially admitted as a mission church of the diocese.
August 2013 After several attempts to contact and engage in conversations with clergy in the breakaway group, more than 100 clergy receive official “Notice of Removal” from Bishop vonRosenberg, removing them from the ordained ministry of The Episcopal Church. In hope of an eventual reconciliation, the Bishop exercises his right to “release and remove” the clergy, rather than “depose” them on grounds of abandonment. This action left open a path for them to return to the ordained ministry in the future.
December 2013 In a hearing, TECSC brings forth an affidavit from the Rev. Thomas M. Rickenbaker, a retired priest. Under oath, Fr. Rickenbaker said he was contacted in 2005 as a potential nominee to become Bishop of South Carolina and was asked by search committee members, "What can you do to help us leave The Episcopal Church and take our property with us?" The affidavit supports TECSC's claim that the "withdrawal" in 2012 was the result of a long-planned scheme by several individuals.
At the hearing, Judge Diane S. Goodstein denied a motion to have four individuals – Mark Lawrence; Paul Fuener as a Standing Committee president; Jim Lewis as Canon to the Ordinary of the diocese; and Jeffrey Miller as a Standing Committee president – added to the suit. TECSC said the four were necessary parties because the actions they took to “withdraw” were outside the scope of their legal authority and violated state law.
January 2014 TECSC files an appeal with the SC Court of Appeals seeking access to correspondence between attorney Alan Runyan and Mark Lawrence prior to the 2012 split, when he represented the then-unified diocese and was jointly representing parties on both sides of the case. Mr. Runyan went on to be the lead attorney for the breakaway group. The appeal effectively stayed the case, which was in the discovery phase with multiple depositions being taken on all sides. The appeal was later transferred to the SC Supreme Court.
February 2014 The 223rd Annual Diocesan Convention is held at All Saints, Hilton Head Island, with Bishop Michael Curry as the convention preacher. Five congregations are admitted as missions of the diocese. Further changes to the Constitution and Canons are approved to formally acknowledge the diocese as part of the wider church.
May 2014 The SC Supreme Court dismisses TECSC’s appeal over the disputed correspondence between Mr. Runyan and Mark Lawrence.
June 2014 TECSC files an appeal in the SC Court of Appeals seeking to overturn Judge Goodstein's ruling against adding individual parties to the case, asking that Mark Lawrence and three others be added as parties.
July 3, 2014 The appeal is dismissed and Judge Goodstein rules that the trial must begin on July 8.
September 2014 The Reverend H. Dagnall Free Jr. is welcomed back into good standing as a priest in The Episcopal Church by Bishop vonRosenberg, having gone through a formal reconciliation process that was created in South Carolina in the hope that clergy who were “released and removed” in 2013 would choose to return to the church.
November 2014 The annual convention date having been moved to autumn, the 224th Annual Diocesan Convention is held at Church of the Holy Communion in Charleston. Three more new congregations are formally admitted as mission churches, bringing the total number of churches “in union” with the convention to 30. Bishop James Tengatenga, Chairman of the Anglican Consultative Council and former Bishop of the Diocese of Southern Malawi in Africa, is the preacher at the Convention Eucharist.
January 28, 2015 Oral arguments are heard before the Fourth Circuit Court of Appeals in Richmond, Va., in vonRosenberg v. Lawrence.
February 4, 2015 In the state court case, Judge Goodstein rules in favor of the breakaway group, giving the group the right to hold onto the name and property of the diocese. The ruling mirrors the language of the proposed order that was given to the judge by the plaintiffs after the trial concluded.
March 2015 TECSC and TEC appeal Judge Goodstein’s decision directly to the SC Supreme Court, seeking to bypass the SC Court of Appeals to avoid unnecessary expense and delay. Also in March, Bishop vonRosenberg welcomes a second returning priest, the Rev. H. Jeff Wallace, back into good standing in The Episcopal Church through the reconciliation process created for priests who had been “released and removed” in 2013 following the split.
March 31, 2015 The US Court of Appeals for the Fourth Circuit rules in favor of Bishop vonRosenberg, in the federal false-advertising lawsuit, sending vonRosenberg v. Lawrence back to the US District Court in Charleston for another hearing. The appeals court found that the lower court erred by applying the wrong legal standard in deciding to abstain from the case.
April 15, 2015 The SC Supreme Court agreed to hear the appeal of Judge Goodstein's decision. The court also denies the breakaway group's request for a greatly expedited schedule, and sets September 23 as the date for oral arguments. TECSC's initial brief is filed May 15.
June 2015 Seeking to end the bitter legal battle, TECSC proposes a settlement agreement to all the parties in the breakaway group, offering to let the disputed parishes keep their church properties whether or not they choose to remain part of TEC. In exchange, the proposal would require the breakaway group to return diocesan property, assets and identity to TECSC. Representatives of the breakaway diocesan group tell the media they have rejected the offer. The parishes included in the settlement offer were: All Saints, Florence Christ Church, Mount Pleasant Christ the King, Waccamaw Christ-St. Paul’s, Yonges Island Church of the Cross, Bluffton Epiphany, Eutawville Good Shepherd, Charleston Holy Comforter, Sumter Holy Cross, Stateburg Holy Trinity, Charleston Old St. Andrew’s, Charleston Church of Our Saviour, John's Island Prince George Winyah, Georgetown Redeemer, Orangeburg Resurrection, Surfside St. Bartholomew's, Hartsville St. David's, Cheraw St. Helena's, Beaufort St. James, James Island St. John's, Florence St. John's, John's Island St. Jude's, Walterboro St. Luke's, Hilton Head St. Luke and St. Paul, Charleston St. Matthew's, Darlington St. Matthew’s, Fort Motte St. Matthias, Summerton St. Michael's, Charleston St. Paul's, Bennettsville St. Paul's, Conway St. Paul's, Summerville St. Philip's, Charleston Trinity, Edisto Island Trinity, Pinopolis Trinity, Myrtle Beach
June 23-July 3, 2015 TECSC sends Bishop vonRosenberg and elected lay and clergy Deputies and Alternate Deputies to the 78th General Convention of The Episcopal Church in Salt Lake City, Utah. It is the first deputation to attend since members of the 2012 South Carolina deputation walked out of the 77th General Convention in protest over issues of human sexuality. The 2015 South Carolina deputation is recognized with applause on the HOD floor, and participates fully in the activities of the House of Bishops and House of Deputies, including the election of Michael Curry as Presiding Bishop. Prayers and condolences are offered after the tragic shooting deaths of nine people at Emanuel AME Church in Charleston, SC, on June 17, and the deputation participates in a march against gun violence in downtown Salt Lake City.
September 10, 2015 A special joint meeting of the Standing Committee, Diocesan Council and Trustees is held with Bishop vonRosenberg to discuss the long-term governance needs of the diocese. A blue-ribbon, ad-hoc committee, the Diocesan Future Committee, is formed to study various administrative models, and eventually to make a recommendation that can be brought to Diocesan Convention.
September 21 2015 U.S. District Judge C. Weston Houck issues a stay September 21 in the federal false-advertising lawsuit vonRosenberg v. Lawrence, again declining to hear the case until the final outcome of the state court case. That ruling has since been appealed to the U.S. Court of Appeals for the Fourth Circuit, which has not yet ruled (as of March 24, 2016).
September 23, 2015 The SC Supreme Court hears oral arguments in Columbia on the appeal by TECSC and TEC seeking to overturn the state court decision. The Supreme Court’s decision has not yet been issued (as of September 16, 2016). View video of the hearing here.
November 13-14, 2015 The 225th Annual Diocesan Convention is held at Holy Cross Faith Memorial Episcopal Church, Pawleys Island. Resolutions aimed at racial reconciliation are unanimously adopted. Grace Church is officially designated as the Cathedral of the Diocese (“Grace Church Cathedral”). St. Mark’s in Port Royal is officially recognized as a parish, having been admitted as a mission in March 2013. Bishop Robert Gillies of Aberdeen and Orkney in the Scottish Episcopal Church is the convention preacher.
January 14, 2016 Bishop vonRosenberg announces that he will retire as Provisional Bishop of TECSC after completing his calendar of episcopal visits in June 2016. February 11, 2016 The Standing Committee of TECSC votes to move forward with a discernment and selection process to find a new Provisional Bishop to lead the diocese after Bishop vonRosenberg’s retirement. Meanwhile, the Diocesan Future Committee continues to meet and investigate administrative models so it can make a recommendation on the governance of the diocese.
April 2016 Presiding Bishop Michael Curry visits the diocese, preaching and celebrating at five Charleston Episcopal churches. On April 10 he was part of a celebration at Grace Church Cathedral where the Very Rev. Robert Willis, Dean of Canterbury Cathedral, recognized Grace as the newest cathedral in the Anglican Communion and presented a stone from his cathedral carved with a Canterbury Cross, now affixed near the entrance of Grace.
June 30, 2016 The Standing Committee of TECSC announces the nomination of the Right Reverend Gladstone B. “Skip” Adams III as the next Provisional Bishop for the diocese, calling him to South Carolina as he prepares to retire as Bishop of the Episcopal Diocese of Central New York. (Read more here.) A Special Diocesan Convention is called for September 10 for delegates to gather and elect Bishop Adams. Bishop vonRosenberg remains in office until Bishop Adams can take office in September.
September 10, 2016 At a Special Diocesan Convention, the Right Reverend Gladstone B. “Skip” Adams III is elected by acclamation, and immediately invested and installed as the Provisional Bishop of the diocese. Read more here.
August 2, 2017 The South Carolina Supreme Court issues a decision in the appeal heard in 2015. Read the decision here.
September 2017 TECSC announces that mediation will take place on all issues in both state and federal litigation at the request of U.S. District Court Judge Richard Gergel. Appointed as mediator was Senior U.S. District Judge Joseph F. Anderson Jr. of Columbia. TECSC officials noted that the mediation could only result in action if all parties agreed to it in writing. October 4, 2017 Bishop Adams and members of the legal team for TECSC and The Episcopal Church attended a mediation planning session in Columbia with Senior U.S. District Judge Joseph F. Anderson, Jr. and representatives of the breakaway group involved in federal and state litigation resulting from the 2012 division in the Diocese. They announced mediation would take place for an initial 3 days beginning November 6, 2017.
November 7, 2017 At 10:45 am on the second day of mediation, TECSC officials announced that mediation with the breakaway group and Senior U.S. District Judge Joseph F. Anderson Jr. was recessed until December 4-5, 2017.
November 17, 2017 Ruling in favor of The Episcopal Church in South Carolina, the South Carolina Supreme Court denied motions from a disassociated group and upheld itsAugust 2 decision that property and assets of the Episcopal Diocese of South Carolina, and most of its parishes, must remain with The Episcopal Church.
December 4, 2017 TECSC announced that mediation with Judge Anderson would be in recess until January 11-12, 2018 in Columbia.
January 12, 2018 A fourth day of mediation took place on January 12, 2018 and resulted in this statement by TECSC: "Although not resolved, the parties agreed to move forward with good faith mediation efforts to amicably resolve the case." No further mediation sessions were held.
February 9, 2018 The breakaway group files a petition for a writ of certiorari with the U.S. Supreme Court asking for a review of the August 2017 decision by the South Carolina Supreme Court.
April 17, 2018 A federal judge grants motions to expand a federal false-advertising and trademark infringement lawsuit against the bishop of a group that left The Episcopal Church, adding as defendants the breakaway organization and parishes that followed Bishop Mark Lawrence in separating from The Episcopal Church.
April 26, 2018 The Episcopal Church in South Carolina introduces its new Diocesan Vision Statement, drawing inspiration from historic symbols of our diocese.
May 2, 2018 The Episcopal Church in South Carolina publishes a "Frequently Asked Questions" document to provide information and share hope for a future that remains grounded in the love of God.
May 7, 2018 The Episcopal Church and The Episcopal Church in South Carolina (TECSC) file their response to the petition for certiorari in the U.S. Supreme Court, saying the petition does not present any reviewable questions of federal law, and should be denied.
May 8, 2018 The Episcopal Church and The Episcopal Church in South Carolina (TECSC) petition the 1st Circuit Court of Common Pleas to execute the state Supreme Court's decision and return church properties to the Episcopal Church.
June 11, 2018 The U.S. Supreme Court denies a petition for a writ of certiorari, letting stand the South Carolina Supreme Court's decision of August 2017 regarding church property.
July 13, 2018 Canterbury Cathedral, the Mother Church of the Anglican Communion, commemorates the 90th anniversary of the martyrdom of William Alexander Guerry, 8th Bishop of South Carolina, at its daily Choral Evensong, with clergy and members of Grace Church Cathedral participating in the service. Guerry is remembered in Canterbury’s Chapel of the Saints and Martyrs of Our Time.
July 16-18, 2018 The Episcopal Church in South Carolina hosts three Open Conversations in Conway, Bluffton and Charleston, bringing together hundreds of Episcopalians and Anglicans to talk about reconciliation in the diocese and the churches of eastern South Carolina.
July 26, 2018 1st Judicial Circuit Judge Edgar W. Dickson holds a scheduling conference in Orangeburg with attorneys to begin setting a timetable for resolving how to implement the transfer of diocesan and parish property back to The Episcopal Church under the August 2017 decision by the South Carolina Supreme Court. The judge asks both sides to prepare a list of issues to be resolved. The lists were submitted August 2.
August 20, 2018 U.S. District Court Judge Richard M. Gergel sets March 1, 2019 as the target date for a trial in the federal false-advertising and trademark infringement lawsuit against a breakaway group that left The Episcopal Church.
September 25, 2018 The Episcopal Church in South Carolina announces that representatives of the 29 returning congregations are being invited to the 228th Diocesan Convention November 16-17, 2018 in Charleston. Each returning congregation is invited to send two representatives to this annual business meeting of the diocese.
September 26, 2018 Continuing the Open Conversation series that began in July, TECSC announces a Facebook Live Open Conversation on October 11, 2018 from 6:30-7:45 p.m. so Bishop Skip Adams and four other diocesan leaders can answer questions and hear ideas from people in Episcopal/Anglican churches in eastern South Carolina.
October 11, 2018 A Live Open Conversation is held via video on Facebook to answer questions and hear ideas. You can view the entire video on Facebook, or on YouTube.
November 19, 2018 After an 85-minute hearing in Orangeburg, 1st Circuit Court Judge Edgar W. Dickson told attorneys he will have more questions for them as he prepares to decide how to implement the South Carolina Supreme Court's decision on church property matters. The judge heard only one of the five motions currently before the court in the 85-minute hearing, listening as both sides addressed a “Motion for Clarification” filed by the plaintiffs, the group that left The Episcopal Church.
December 7, 2018 The Episcopal Church in South Carolina (TECSC) and The Episcopal Church ask the U.S. District Court to grant motions for summary judgment and call a halt to the “pervasive” public confusion caused by a group that broke away from the church, yet continues to use Episcopal names and marks.
February 13, ,2019 U.S. District Court Judge Richard M. Gergel sets May 1 as the earliest date when a trial could begin in the federal false-advertising and trademark infringement lawsuit against a breakaway group that left The Episcopal Church. The order represents a two-month extension from the previous schedule the judge set in August, which had called for a trial "on or after" March 1.
March 20, 2019 The Episcopal Church in South Carolina and The Episcopal Church file a petition for Writ of Mandamus, asking the South Carolina Supreme Court to order the Dorchester County Circuit Court to enforce the high court’s 2017 decision and return control of diocesan property and 29 parish properties to The Episcopal Church and its local diocese, TECSC.
June 28, 2019 Saying it is “confident” that the 1st Circuit Court will act "in an expeditious manner" to resolve the case, the South Carolina Supreme Court denies the Petition for Writ of Mandamus from The Episcopal Church in South Carolina and The Episcopal Church, which had asked the court to order enforcement of the 2017 decision to return control of diocesan property and 29 parish properties to The Episcopal Church and TECSC.
July 2, 2019 Circuit Judge Edgar W. Dickson schedules a hearing for Thursday, July 25 at 9:30 am concerning a lawsuit known as the Betterments Act case that was filed against The Episcopal Church in South Carolina and The Episcopal Church by the breakaway group. The hearing is set to take place at the Calhoun County Courthouse in St. Matthews, in the First Judicial Circuit. (The hearing date is later moved to July 23 at 10:30 am.) July 23, 2019 Circuit Judge Edgar W. Dickson orders all parties to enter into mediation in the dispute over implementation of the 2017 South Carolina Supreme Court decision. The action follows a hearing at the Calhoun County Courthouse in St. Matthews. | <urn:uuid:016fbef5-05d5-4714-a4f1-c6875b884462> | CC-MAIN-2019-47 | http://www.episcopalchurchsc.org/historical-timeline.html | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496664567.4/warc/CC-MAIN-20191112024224-20191112052224-00101.warc.gz | en | 0.956875 | 5,347 | 3.21875 | 3 |
The German socialist Karl Marx was once asked what his favorite maxim was. He replied with a line by the Roman playwright Terence: “I am a man, and nothing that concerns a man, is a matter of indifference to me.”
If for the moment we ignore the use of sexist language in this ancient quotation, we get a feel for the profoundly humanitarian spirit of Marx and socialists since him.
Indeed, socialists are very concerned about the injustice and social ills in the world today—hunger, poverty, unemployment, illiteracy, disease, war, the exploitation of workers, the oppression of nations, races, women, and gays, the destruction of the environment, and the threat of nuclear annihilation.
Socialists obviously don’t have a monopoly on compassion, however. What distinguishes socialists from other socially concerned people is that we do not view these problems as normal, natural, eternal, or an inherent feature of the human condition. We believe that these problems are historically and socially created and that they can be solved by human beings through conscious, organized political struggle and change.
Socialist Action argues that the wealth and other advances produced by industry, technology, and science have made it possible to eliminate these problems but that these problems continue because of the dominant economic and political interests and values of society. We assert that capitalism is ultimately the main source of these problems in the United States and the world today.
Capitalism and the exploitation of workers
Under capitalism, the chief means of production—the factories, the railroads, the mines, the banks, the public utilities, the offices, and all of the related technology—are privately owned by a super-rich minority, the capitalist class. The capitalists then compete with each other in the marketplace and run production on the basis of what will bring them the biggest profit.
This drive to successfully compete and to maximize profit leads big business to exploit workers, to pay their employees as little as possible, a mere fraction of the actual value that they produce. It also leads big business to resist the efforts of workers to unionize and to obtain increased pay, reduced working hours, and improved working conditions.
This exploitation of workers results in a gross concentration of wealth, to the benefit of the capitalists and at the expense of working people. Even in the United States, the richest country in the world, where workers admittedly have one of the highest living standards, there is nonetheless a gross concentration of wealth. According to the Federal Reserve Survey of Consumer Finances, the top 1% of American families (834,000 households) own more than the bottom 90% (84 million households).
This social inequality is aggravated by mass unemployment, which is endemic to capitalism. Because the means of production is divided up among the individual capitalists competing with each other, there is no overall coordination or planning of the economy and consequently no consideration to provide jobs to everybody who is able and willing to work.
This anarchy of production for private profit also fuels the erratic boom-and-bust cycle of the capitalist economy. Periodically, the economy experiences crises of overproduction when the capitalists inadvertently glut the market with products that they cannot sell at a profit. The result is recessions and massive layoffs of workers, which ruin lives, idle factories, and deprive society of the benefits of production.
The basic irrationality of capitalism is highlighted by the glaring gap between unmet human needs on one hand and the untapped potential of the existing human and material resources to fulfill these needs on the other. For example, when inventors or scientists or technicians develop new, advanced labor-saving technology, this should be a cause for celebration for workers because it means that the work week could be cut with no cut in weekly pay. Workers could enjoy greater leisure time without a drop in income. Instead, the capitalists use labor-saving technology to lay off workers because, of course, it only makes good sense from the business point of view to cut labor costs in order to increase profits.
In the United States, there is a great need for a massive construction of more schools, hospitals, child-care centers, and recreation centers. There is also a great need to repair the nation’s deteriorating infrastructure, including its roads, bridges, mass transit, and water systems. The capital, raw materials, and labor for such development exists, but the corporate rich do not invest in such projects because they correctly judge that it would not be profitable for them to do so. The potential, overwhelmingly working-class consumers of such services simply would not be able to afford the prices that big business would have to charge in order to make a profit.
Nor does the capitalist government finance such a massive expansion as part of a public works program, for a couple of reasons. First, it would raise the public's expectation, which is fundamentally at odds with capitalist ideology, that society should be responsible for providing for its members. Second, it would raise the possibility that the public would force the government to tax the rich to fund such an expensive program.
The oppression of African-Americans and Hispanics
In addition to exploiting workers, capitalism contributes to the oppression of other groups in society. White racism and the oppression of African-Americans arose with the European slave trade, but they have been perpetuated under capitalism.
After the slaves were freed during the Civil War, the capitalists used racism to justify paying less to Black employees. The capitalists also used racism to pit white workers against Black workers in order to divide the working class and weaken the organized labor movement. Despite the gains of the civil rights movement of the 1950s and ’60s, white racist discrimination in employment and in other areas of life persists.
Furthermore, the second-class status of African-Americans has been deepened by color-blind free market forces, specifically by the recent movement of industry out of the cities, where the Black community is concentrated. The resulting loss of decent-paying working-class jobs has increased Black poverty and devastated Black neighborhoods and families, fueling crime, drug addiction, and hopelessness.
The oppression of Hispanics in the United States is similar to that of African-Americans in that it is based on widespread racist discrimination, combined with a decline in the number of available decent-paying jobs. The Anglo suppression of various aspects of Latin culture and identity worsens the plight of Hispanics in this country.
The oppression of women and GLBT people
While the oppression of women predated the establishment of capitalism, the private profit system has perpetuated their subordination to men. The main basis of women’s oppression in capitalist society is the segregation of women in lower paying jobs in the labor market and the relegation of women to unequally shared child care and housework in the family.
These two spheres of women’s oppression—the labor market and the family—are mutually reinforcing. So long as women are unduly burdened by child care and housework, they will not be able to gain equality with men in employment. So long as women bring home a smaller paycheck, they will not be able to get their male partners to share domestic responsibilities equally.
These unequal labor relations between men and women sustain the sexist ideology that justifies different and unequal gender roles and the rigid, polarized norms for males and females in all aspects of life.
The oppression of gays, lesbians, bisexuals, and transgender people is largely derived from this sexist ideology. GLBT people are stigmatized because they defy the norm of exclusive heterosexuality and because they do not conform to conventional standards of masculinity and femininity.
Imperialism and U.S. foreign policy
On an international level, capitalism has led to the development of imperialism. Since the nineteenth century, the corporate rich of the advanced industrialized capitalist nations of Western Europe, the United States, and Japan have invested capital and exploited cheap labor and natural resources in the colonial world of Africa, Asia, and Latin America.
The economic domination of the imperialist nations has distorted the development of the Third World nations, condemning the masses of their populations to poverty and misery. The rivalry between the imperialist nations has also led to military conflicts, including two world wars, as they competed for new world markets and carved up the world. Since the Second World War, the imperialist nations have been forced to grant most of their former colonies formal political independence, but their economic domination continues.
Since its victory in World War II, the United States has been the leading imperialist power. At various points over the past fifty years, the U.S. government has defended American corporate interests abroad by supporting such repressive, undemocratic governments as the fascist dictatorship of General Francisco Franco in Spain, the apartheid regime in South Africa, the shah of Iran, the Marcos dictatorship of the Philippines, and the recently deposed Suharto dictatorship in Indonesia.
The U.S. government has also gone to war or used other forms of military intervention to defend big business interests, such as in Korea in the ’50s, Cuba in the ’60s, Vietnam in the ’60s and ’70s, Nicaragua in the ’80s, and Iraq in the ’90s. The U.S. imperialists have also overthrown democratically elected reform governments that encroached on U.S. corporate privilege, such as in Iran in 1953, Guatemala in 1954, the Dominican Republic in 1965, and Chile in 1973.
Additionally, the United States dropped the atom bomb in World War II and launched the arms race with the Soviet Union—all to intimidate the Soviet Union and to deter the people of the colonial world from challenging imperialist domination and going the route of socialist revolution.
The socialist solution
Socialist Action argues that the problems of exploitation and oppression in the world today can ultimately be solved by first replacing the capitalist system with a socialist system. The chief means of production should be socialized, that is, taken out of the private hands of the capitalists and put under public ownership, that is, government ownership.
The economy should then be run by councils of democratically elected representatives of workers and consumers at all levels of the economy. Instead of being run on the basis of what will maximize profit for a super-rich minority, the economy should be planned to meet the needs of the people—in employment, education, nutrition, health care, housing, transportation, leisure, and cultural development.
A socialist government could raise the minimum wage to union levels, cut the work week with no cut in weekly pay, and spread around the newly available work to the unemployed. A public works program, such as the one mentioned earlier, could be launched to provide yet more jobs and offer sorely needed social services. The government could provide free health care, from cradle to grave, and free education, from nursery school to graduate school.
A socialist government could also address the special needs and interests of the oppressed. Existing anti-discrimination legislation in employment could be strongly enforced, and pay equity and affirmative action for women and racial minorities could be expanded. Blacks and Hispanics could be granted community control of their respective communities. The racist, class-biased death penalty could be abolished.
The establishment of flexible working hours, paid parental leave, and child-care facilities would provide women with alternatives to sacrificing work for the sake of their children, while the defense of safe, legal, and accessible abortion would spare them unwanted pregnancies. Same-sex marriage could be legalized, and a massive program, like the space program or the Manhattan Project, could be financed to find a vaccine and a cure for AIDS.
Money currently spent on the military could be spent instead on cleaning up the country’s air and waterways and developing environmentally safe technology. A socialist government of the United States would end this country’s oppression of Third World nations because it would not be defending corporate profit there but would be encouraging the workers and peasants of those countries to follow suit and make their own socialist revolutions.
The socialist system that Socialist Action advocates would be a multiparty system, with all of the democratic rights won and enjoyed in the most democratic capitalist nations, including freedom of speech, freedom of the press, freedom of association, freedom of assembly, and freedom of religion. A genuinely socialist system would be far more democratic than the most democratic capitalist system because in a socialist economy the common working people would democratically decide what should be produced and how it should be produced.
Social democracy and Stalinism
Many people often ask Socialist Action if we support the model of socialism offered by the social democratic parties and government administrations in Western European nations. We say “no.”
In those countries, the Labor, Social Democratic, and Socialist parties have helped their working-class constituencies to win important progressive reforms, such as universal suffrage, the eight-hour day, old-age pensions, free health care and education, and social services more extensive than those here in the United States. However, these parties and the trade unions affiliated with them have secured these reforms within the capitalist framework, which they have never fundamentally challenged or sought to replace with socialism. Therefore, the capitalists’ rule of the economy, their exploitation of the working class, and the resulting concentration of wealth continue.
People also ask us if the so-called Communist countries of the former Soviet bloc represented the model of socialism that we support. Again, as with the Social Democrats, our answer is “no.”
In the former Soviet bloc, the capitalist class was expropriated, and the economies were socialized. These socialized economies made possible great progress in raising the living standards of the masses of workers and peasants in the areas of employment, health care, education, and nutrition, and in upgrading the status of women. However, these countries were ruled through the Communist parties by privileged bureaucratic elites that denied socialist democracy and imposed repressive, totalitarian political systems on the people. These dictatorial governments not only violated basic democratic and human rights but mismanaged the planned economies, being responsible for inefficiency, waste, corruption, and stagnation.
The origins of these dictatorial bureaucratic regimes lie with the degeneration of the Russian Revolution in the 1920s and 1930s. One of the two leaders of the Bolshevik Revolution, Leon Trotsky, argued that the Bolshevik model of socialist democracy was never fully implemented and then was completely destroyed under the Stalin dictatorship because of a combination of factors. These factors included the failure of the socialist revolutions to triumph in Europe after the First World War, the resulting isolation of the Russian Revolution, the military attacks on the young Soviet republic by the imperialist nations, the devastation caused by the First World War and the Civil War that followed the revolution, the lack of democratic traditions in czarist Russia, and the general low educational and cultural levels of the masses of workers and peasants.
Currently, in the former Soviet bloc nations, the ruling Stalinist bureaucracies, allied with the Western imperialists and native capitalist “wannabes,” are trying to restore capitalism. So far, the introduction of the free market into the Soviet bloc has resulted in a gigantic drop in productivity and in the living standards of common working people, with increasing unemployment, poverty, and social inequality.
This right-wing attempt to restore capitalism and the corresponding attacks on social services and entitlements, such as free health care and full employment, in the former Soviet bloc have also made it easier for the capitalist governments of Western Europe to attack the various reforms and social services that the labor movements and social democratic parties of those countries have won over the past decades.
Socialist Action hailed the collapse of the repressive Communist Party regimes of the Soviet bloc, but we oppose the restoration of capitalism there. Instead, we call for a defense of the socialized economies and for the workers and their allies to overthrow the ruling Stalinist bureaucracies and establish socialist democracy in their place.
Socialism and human nature
Many critics say that socialism is a great idea in theory but that it is completely unrealistic and utopian because it goes against basic human nature. The critics claim that human beings are just too selfish, too greedy, too competitive, and too aggressive to create and sustain a cooperative and egalitarian society.
Socialists recognize that individual self-interest has always existed and will always exist in human beings. We also acknowledge that there will never be a perfect harmony between the individual and society.
But we argue that individual self-interest need not be the ruling principle of society. History and cross-cultural research suggest that basic human nature consists of many different, divergent, but co-existing capacities, and that human personality and behavior are largely shaped by the social institutions, practices, and ruling ideology of the given society.
The critics of socialism correctly perceive the hyper-individualism of people in capitalist society, but then they incorrectly generalize this historically specific characteristic to human beings across time and place. They cannot imagine or understand that a reorganization of society along socialist lines would elicit, facilitate, and reinforce the basic human capacities for cooperation and solidarity.
The revolutionary potential of workers & the oppressed
Still, the point about self-interest as a motivating factor for human behavior is an important one. Socialists believe that many people of conscience from different classes and backgrounds can be won to a socialist perspective through appeals to reason, morality, and political idealism. However, we believe that the main impetus for a socialist movement to sustain itself and successfully transform society must be collective self-interest and power.
We believe that the working class is the only social force that has both the necessary self-interest and power to lead the struggle for socialism. Socialism is in the interests of the working class because it will allow the workers to reclaim the wealth that they produced but which the capitalists appropriated from them through exploitation. The working class also has the power to overturn capitalism because of its strategic location at the point of production and its corresponding ability to shut down production by simply withdrawing its labor. Thus, a mass socialist movement can only grow out of a revitalized and radicalized labor movement, based on the trade unions and other organizations of the working class.
Similarly, we believe that only the oppressed possess sufficient self-interest to lead the struggles for their own liberation. Therefore, we support the autonomous movements of the oppressed–the Black movement, the Hispanic movement, the women’s movement, and the gay and lesbian movement–to insure that their respective needs and demands are met.
However, we do not believe that the oppressed by themselves possess sufficient power to fully achieve their liberation since their oppression is at least partly rooted in the capitalist system. Because only the organized working class possesses sufficient power to abolish capitalism and its concomitant forms of oppression, the oppressed must win the organized working class to support their respective struggles, as well as ultimately ally themselves with the working class in the struggle for socialism.
Independent mass action
Socialist Action does not believe that socialism can be voted into power through free elections. History has repeatedly shown that when workers and their allies try to use the existing democratic process to advance their interests and replace capitalism with a socialist system, the capitalist class and the armed forces of the capitalist state will smash democracy to save capitalism, as happened, for example, in Chile twenty-five years ago this month.
Socialist Action points out that progressive social change has been made in this country through mass action, not by voting in certain politicians or by working within the system.
American independence from England was gained through a revolution. The passage of the Bill of Rights was prompted by a rebellion of poor farmers. The abolition of slavery and the extension of suffrage to Black men was accomplished through a second revolution, the Civil War. Women won the vote through the women’s suffrage movement.
The labor movement won the twelve-hour day, then the eight-hour day, the right to strike, the right to form unions and bargain collectively, the minimum wage, unemployment compensation, worker’s disability. Social Security, welfare, and increased wages and benefits for union members.
The civil rights movement overthrew the segregationist “Jim Crow” laws of the South and forced the government to outlaw racist discrimination in employment and housing and to implement affirmative action.
The anti-war movement helped force the U.S. to end its imperialist war against the Vietnamese in their just struggle for self-determination.
The feminist movement won anti-discrimination legislation, affirmative action, pay equity in some public institutions, and the legalization of abortion.
The gay and lesbian movement, too, has secured anti-discrimination legislation and greater funding of AIDS research and patient care.
Additionally, the environmental and consumer protection movements have won important reforms that moderate big business’s destruction of the planet and manufacture of unsafe commodities in its relentless pursuit of profits.
Socialist Action advocates the independent political action of the workers and the oppressed to bring about further progressive change. We call for and build mass demonstrations, rallies, pickets, and strikes.
We counter-pose such mass action to reliance on the American two-party system, electoral campaigns, and behind-the scenes lobbying of capitalist politicians. The logic of working within the two-party system of the capitalist political establishment is to subordinate the needs, demands, and priorities of the workers and the oppressed to what is acceptable to the rulers of this country. The inevitable result is the demobilization and cooptation of the struggle for change.
We point out that the impetus for progressive social change has never come from the Democratic and Republican parties but that they can be forced by mass action to implement progressive policies and reforms, at least up to certain limits. However, we argue that socialism can only be achieved by a revolutionary culmination of mass action of the workers and their allies in opposition to the capitalist state and capitalist political parties.
Socialist Action aspires to play a leading role in building a popular mass socialist movement in this country. Our members have participated in the labor movement, the civil rights movement, the anti-Vietnam War movement, the women’s movement, the gay and lesbian movement, the environmental movement, the Central America solidarity movement, and the movement against the Gulf War, among others.
If you want to fight for a society and a world free of all forms of exploitation, oppression, and social injustice, join us! | <urn:uuid:d27af846-d877-46a4-98cb-22bb1838a002> | CC-MAIN-2019-47 | https://socialistaction.org/2011/08/12/what-is-socialism/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670535.9/warc/CC-MAIN-20191120083921-20191120111921-00341.warc.gz | en | 0.959633 | 4,534 | 2.8125 | 3 |
Present since the dawn of civilization, management can be briefly defined as getting the work done with the least possible errors, through other people with the intention of achieving the targets. Different definitions of management have been given by eminent personalities, which are: Management refers to the process of getting activities completed efficiently, with and through other people. (Stephen Robbins) Management is the art of getting things done through other people.
Don’t waste time! Our writers will create an original "Communication And Interpersonal Skills" essay for youCreate order
(Mary Parker Follett) Although defined by many in numerous ways, management is all about planning, leading, organizing and controlling. In order to properly manage a particular task, it is also important to effectively and efficiently use the available resources. Nevertheless, in the 21st century it has been noticed that the different management skills are also of utmost importance for the success of an organization. The three basic types of skills identified by Robert Katz (1970), which make up effective management, are technical, human and conceptual skills. In addition, with the new types of organizations and the new ways of doing business that are taking place in our current era, new trends, ideas, skills and techniques are equally essential. All of which are the result of Management Competencies, that is the combination of skills, behavior and requirements necessary to accomplish a particular task at its best.
According to Andrew May (1999), “Management competencies are used to build a framework for analyzing the resources available to achieve business strategies and forecast areas of control risk, a key factor in business continuity planning”. The different management competencies that are very important in today’s business environment are interpersonal communication skills, leadership and emotional intelligence.
With the aim of achieving success in a particular business, good communication skills are required, regardless of the size of the organization. Communication and interpersonal skills incorporate the following: Planning and structuring, Communicating in person and writing, Feedback, Presentations, Negotiations, persuasion and influence, and Better understanding others. Communication, defined as the transmission of a message whether verbally or non-verbally from one person to another, occupies an essential position in the work field just as in all other areas of life. Very often, communication is related to interpersonal skills which are the life skills we use daily to interact with other people and in groups. Consequently, known as the Interpersonal Communication Skills, this concept which was firstly introduced in the 1950s has been defined as the ability to work well with people and involve your acceptance of others without any discrimination (Berko et al., 1998/1378: 58). In other words, people exchange thoughts, information and meanings through verbal and non-verbal messages through this method. According to Avkiran (2000), Interpersonal Communication Skills are the ability to act in response to the staff’s requirements positively, while developing a non-discriminatory work environment where the staffs are capable of developing their full personal potentials. Interpersonal communication can be applied to: Provide and accumulate information, Manipulate the thoughts and behaviors of others, Build and preserve relationships, Make sense of the world and our experiences in it, Convey personal needs and recognize the requirements of others, Offer and obtain emotional support, Create decisions and resolve problems, Foresee behavior, and Regulate power. It has been noted in many different organizations that though the traditional skills like written and verbal communication, are still important, increasing emphasis is being placed on the capacity to create and nurture partnerships, to develop innovative new programs and to market the products and services that the organization is offering. According to Meyer et al. (1990), the organizational commitment concept which is multidimensional in nature have included three conceptualizations, namely the affective commitment (attachment or recognition), normative commitment (responsibility or obligation to norms) and continuance commitment (sacrifice and investment that increases an individual’s cost of leaving). Consequently it can be assumed that organizational commitment is made up of these three components.
With the purpose of maximizing efficiency and to achieve organizational goals, leadership has always been an important function of management. For Horner (1997), leadership has been defined as the traits, qualities and behaviors of a leader. In short, leadership is mostly concerned with motivation, initiating actions, creating confidence, providing guidance, building morale and work environment as well as co-ordination, in other words a person’s skills, abilities and degree of manipulation to get people moving in a direction, making decisions and do things that normally they would not have chosen to do. It is a known fact that the starting point in understanding responsible business behavior and the different competencies of management remains the leadership, especially relating to the personal attitudes and viewpoints.
In the past, leadership theories focused more the distinguished qualities of the leaders and followers whereas subsequent theories are paying more attention to variables such as skill levels and situational factors. The theories of leadership can be categorized as follows, in spite of the diverse leadership theories that have come into view: TRADITIONAL THEORIES OF LEADERSHIP “Great Man” Theories which presumes that the capability for leadership is innate (great leaders are born, not made). Great leaders are often represented as valiant and mythic, ordained to ascend to leadership when needed through these assumptions. Trait Theories presume that people receive certain traits and qualities making them better suited to leadership and thus often categorize certain behavioral or personality characteristics shared by leaders. Contingency Theories that might verify which particular approach of leadership is best suitable for the situation focuses on particular variables related to the environment and therefore look upon the fact that the leadership style is not same in all situations. Situational Theories recommends that based upon situational variables leaders opt the best course of action that is the most appropriate and effective styles of leadership for decision-making of certain categories. Based upon the idea that great leaders are made, not born, the Behavioral Theories of leadership, which is entrenched in behaviorism, does not focus on the internal states or mental qualities but on the actions of leaders and as such people can be taught through coaching and observation to develop into leaders. Participative Theories suggest that the best leadership style is one that takes the participation of others into account. Participation and contributions from group members are encouraged by these leaders with the aim of helping group members feel more important and committed to the decision-making process. CONTEMPORARY THEORIES OF LEADERSHIP Also known as transactional theories, the Management Theories focus on the function of organization, control and group performance and is based on a system of rewards and punishments. According to Burns (1978), transactional leadership originates from more traditional views of workers and organizations, emphasizing the leader’s position of power to use followers for task completion. Relationship Theories also called transformational theories focus upon the associations created among leaders and followers. Transformational leaders inspire and encourage people by helping group members see the significance and superiority of the task. Focused on the performance of group members these leaders in addition wish for each individual to accomplish his or her potential.
Management and Leadership have been used interchangeably, as they are two thinking describing two different perceptions. Managers relate to goals and objectives in an impersonal manner while being primarily concerned with developing plans and budget, organizing direction, co-coordinating and controlling resources whereas leaders have a high sense of active and personal involvement thus capable of influencing others. Quick and Nelson (1997) have stated that “Whereas leaders agitate for change and new approaches, managers advocate stability and status quo” and have also affirmed that though management and leadership are two different systems, they are also complementary wherein ‘Leadership is a sub-set of good management’. Many people believe that leadership is about positioning a new direction for a group to follow while management directs resources or people in a group according to the established values and principles. With the purpose of better understanding leadership and management, one must consider what happens when you have one with or without the other, that is: Leadership without management Æ’ sets a vision or direction that others follow, without taking into consideration the method through which the new direction is going to be accomplished. Management without leadership Æ’ organizes resources to preserve the status quo or else make sure things take place according to already-established strategies. Combination of leadership and management Æ’ does both – it both sets a fresh path and handles the resources to achieve it. For example a recently elected prime-minister or president. Consequently, leadership is concerning the setting of a new direction for a group whereas management is about controlling and directing according to the established principles.
Leadership qualities are normally assumed to be context-dependent since they show a discrepancy in the different companies, teams and situations. The perfect scenario in theory is for a leader to have unlimited flexibility that is being able to adapt the leadership style according to the situation. However, modern leadership theory has begun to realize that the ideal, flexible leader does not exist as everyone has both strengths and weaknesses and consequently there is a need to make an adjustment while trying to meet the needs of the situation
Emotional Intelligence (EI) is described as the ability to recognize and understand one’s own feelings and emotions as well as those of others and use that information to manage emotions and relationships. It has been noted that people with high EI are usually successful in most of their tasks especially because of their nature to make others feel good. EI is a unique fundamental element of one’s behavior, which can be improved with practice. Used for the first time in 1985 by Wayne Payne, in his doctoral thesis entitled ‘A study of emotion: developing emotional intelligence; self-integration; relating to fear, pain and desire’, emotional intelligence is mostly concerned with perceiving, understanding, reasoning with and managing emotions. Back in the 1990’s, when EI first acquired noteworthy media attention, for many people it was regarded as the explanation for a remarkable discovery. Many studies have confirmed that this relatively new intelligence was significant to the survival of organizations in this new world economy (Bloomsbury, Cherniss & Goleman). The US secretary of Labor’s Commission on Achieving Necessary Skills published a report referring to the important presence of this “soft skill” at the workplace. In order to achieve a high performance at work, according to this report along with good literacy and computational skills workers should also outshine in personal qualities such as self-esteem, responsibility, sociability or honesty (Secretary’s Commission on Achieving Necessary Skills, 1991). The key areas for EI in management competencies are: Reading people Æ’ Interacting, presenting, supporting and cooperation Using emotions Æ’ Leading, deciding, creating and conceptualizing Understanding emotions Æ’ Organizing, executing, analyzing and interpreting Managing emotions Æ’ Adapting, coping, enterprising and performing
There are four core emotional intelligence skills, grouped under two primary competencies, namely personal competence and social competence. Figure 1: Core emotional skills Self-Awareness Æ’ is about how exactly emotions can be identified in the moment and understands the tendencies across time and situation. Self-Management Æ’ describes how the awareness of one’s emotions is used to create the behavior one wants. Social Awareness Æ’ explains the degree to which the emotions of other people are understood. Relationship Management Æ’ gives details on how the previously mentioned skills are used to handle the interactions with other people. According to Dr Singh (2003), EI is the “capability of a person to properly and effectively respond to a huge variety of emotional stimuli being drawn out from the inner-self and immediate surroundings while comprising three psychological dimensions – emotional competency, emotional maturity and emotional sensitivity – which motivates an individual to recognize, interpret and handle diplomatically the dynamics of human behavior”. For Sterrett (2003), EI refers to a series of personal, managerial and social skills needed so as to help an individual succeed at the workplace and in life on the whole. It encompasses competencies such as character, intuition, integrity and good communication and interpersonal skills.
The creator of the field of EI stimulates huge discussion and due to the fact that EI is a young and ever growing field one has to keep an open mind on this topic while being willing to recognize the qualities of each of the models, and apply what them more effectively. So far, the three EI models that have been proposed are:
The Ability Model of Mayer and Salovey (1997) defines EI as the ‘intelligence’ in the traditional sense, that is, a set of mental abilities to do with emotions and also the processing of emotional information which are component of and contribute to reasonable thought and intelligence in general. Such mental abilities are arranged hierarchically from the basic psychological processes to more psychologically integrated and complex practices which can be developed through age and experience. This Ability Model also depicts that the emotionally intelligent individuals are more likely to: Have grown up in bio-socially adaptive households (with more emotionally sensitive parents), Be non-defensive, Be able to reframe emotions effectively, Choose good emotional role models, Be able to communicate and discuss feelings, Develop expert knowledge in particular emotional areas, such as aesthetics and social problem solving.
This model of EI by Goleman (2001) has been planned purposely for workplace applications (Gardner & Stough, 2002). Based on the theory of performance, it involves twenty competencies which help to distinguish individual differences in workplace performance. Clustered into four different general abilities, these competencies are: Self Awareness Æ’ Ability to recognize feelings and precise self-assessment, Self Management Æ’ Capability to handle internal states, desires and resources, Social Awareness Æ’ Ability to read people and groups’ emotions accurately, Relationship Management Æ’ Ability to induce desirable responses in others. Figure 2: Competency-Based Model
In this model, EI is defined as “an array of non-cognitive capabilities, skills and competencies that manipulate one’s ability to be successful in dealing with environmental demands and pressures”. This model consists of fifteen conceptual components that pertain to five specific dimensions which are as follows: Intra-personal skills Æ’ capabilities, competencies and expertise pertaining to the inner self, Inter-personal skills, Adaptability Æ’ how one can successfully manage environmental demands by successfully evaluating and dealing with challenging situations, Stress management Æ’ the ability to cope and manage stress effectively, General mood Æ’ the ability to enjoy life and maintain a positive disposition. Figure 3: Bar-on model of Emotional Intelligence
It has been found that employee behaviors which is focused on the fulfillment of customers’ needs and desires, by mediating a positive climate for services within the organization, will lead to an increase in customer satisfaction levels and consequently to increases in profitability (Keiningham and Vaura, 2001; Olivier, 1996).
Changes in today’s organization’s environment have been provoked by a variety of driving forces from both internal and external surroundings. These driving forces are elaborated below.
The use of information technology is highly important to enhance the whole of any organization and up to now the focus has been largely on the collection, transmission and storage of data. But currently, with the new information revolutions the focus is shifting towards the meaning and purpose of information since it is a known fact that unless organized in meaningful patterns, data is not information. The main task therefore is defining information, creating new ideas and generating latest examples that will help redefine the tasks to be done as well as the different institutions that perform these duties. The challenges are:
The rate of information growth is increasing rapidly. According to the Digital Universe study (2011), “Extracting value from chaos”, this expansion of information and ‘big data’ are changing all characteristics of business and society. In order to make sure that there is a high availability of information and to provide more up-to-date function, there has been duplication of data. This replication has enormously contributed to the expansion of information growth. Every two years the world’s information is doubling and it is assumed that by 2020 the world will make 50 times the amount of information and there will be 1.5 times less IT staff to handle it. New “information taming” technologies such as de-duplication, compression, and analysis tools are lessening the cost of creating, managing, capturing, and accumulating information to one-sixth the bill in 2011 in contrast to 2005. The International Data Corporation (IDC) is investigating the opportunities and development joined to control and take advantage of this unstable expansion of information (www.emc.com).
Essential to communication, information is a critical resource for performing work in organizations. The importance of information changes regularly. Consequently information that is valuable at present might turn out to be less important tomorrow, according to the needs and requirements of the job. The main reason that information is of such importance to organizations and individuals is that it drives communication, decision making, and reactions to the environment.
The strategic use of information plays an important role in determining the success of a business and provides competitive advantages in the marketplace. In this competitive world of ours, there is a must to have the right information at the right time to be able to make decisions. Failure to which might eventually result in making huge loss by the organization. Information helps managers to not only create mission, vision and set goals but also facilitate them in analyzing the environment and viewing different strategic alternatives so as to counteract moves or even providing better products and services than the competitors.
Diversity is a very sensitive subject and it can be harmful to an organization if it is not handled properly. It is imperative for any organization to properly implement programmes for diversity management due to globalization of industry and the pursuit of effective competition, since globalization mixes both economics and societies all over the world. In this modern moment, where people have divergent views on globalization, its effect on diversity is very important.
Margaret Tan-Solano (2001) defined Telecommuting as “the practice of an employee performing his normal office duties from a remote location”. With the arrival of telecommuting, several benefits have been achieved, namely more time to focus on work, as location is no more a constraint, flexible work schedules and increased productivity. It also allows closer proximity to and involvement with family, employee freedom, improves productivity as well as promoting safety.
Strategies are very important since these are the set of decision making procedures for the guidance of organizational behavior. According to managers, strategy means their outsized scale, future oriented procedures with the competitive surroundings to optimize accomplishment of organization aims. An influential weapon for surviving with the conditions of change, which surround an organization today, a strategy is quite complex and costly to implement. In accordance with Drucker, strategies must be regarded as the following five new convictions that rather than being economic, are more political and social.
Performance can be briefly defined as the production of valid results given over a period of time. Very often it is measured against certain predetermined known standards of completeness, accuracy and speed.
Used to describe the international market, global competitiveness often refers to the struggle of different organizations to prevail over the other on a worldwide basis. In this world of competition, it is a known fact that unless an organization measures up to the standards set by the leaders in its field, it cannot expect to survive for long.
Nowadays, businesses have to define their scope in terms of industries and services worldwide. While the national boundaries are creating certain types of obstructions, the political boundaries are also not moving. It has been noted that the national politics are still ruling against economic rationality within transnational economic organizations.
Today’s leaders and managers must deal with continual, rapid change and therefore management techniques must track the business environment continuously, to assess change and adapt. Managing change does not mean controlling it, but rather understanding it, being more sensitive and flexible, and guiding it as much as possible. According to the old paradigms, management was about dominance and control, centralized and hierarchical with rigid budgets, short-term solutions and top-down goal setting. However under the new management’s paradigm in most organizations the focus is more on cooperation and trust with continuous adaptation and long range optimization as well as teamwork and jobs selected to fit people rather than people selected to fit jobs. In today’s fast changing world, management are forced to apply and adapt to certain new standards of management due to the driving forces in order to be more flexible, responsive and adaptable to the demands and expectations of the stakeholder’s demands. Nowadays, managers can no longer refer to an earlier developed plan for direction since they must continuously deal with rapid change. In the 21st century, with the intention of being successful most organization can also strive to move from competition to networking. Competition has been progressive and successful as it literally changed the economic landscape of the world into modern industrial centers with the defining edge of technology. It is important to redefine competition now with the concept of networking and cooperation for the sustainability of business operations worldwide.
Presently change is the norm and unless perceived as the duty of the organization to guide change, the organization will not exist for long. In a period of rapid structural modification, the only individuals who live on are the change leaders. The four requirements of change leadership according to Drucker (1999) are as follows. Policies to create the future. Organized methods to seek and to foresee change. The precise approach to bring in modification, both inside and outside the organization. Strategies to balance amendments and stability. Change and continuity is seen as two extremities rather than mutually exclusive opposites by Drucker. It is essential to have internal and external continuity so as to be a change leader.
According to Boyatzis (2008), although the understanding of competencies themselves has been extended, perhaps the most important contributions in the last thirty years, has come about primarily in the last fifteen years.
We will send an essay sample to you in 2 Hours. If you need help faster you can always use our custom writing service.Get help with my paper | <urn:uuid:b5a80e1e-70a1-4f19-bd1f-4094574bdcfe> | CC-MAIN-2019-47 | https://studydriver.com/communication-and-interpersonal-skills/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670729.90/warc/CC-MAIN-20191121023525-20191121051525-00420.warc.gz | en | 0.955857 | 4,589 | 3.359375 | 3 |
Abu'l Muzaffar Muin ud-din Muhammad Shah Farrukh-siyar Alim Akbar Sani Wala Shan Padshah-i-bahr-u-bar (Persian: ابو المظفر معید الدین محمد شاه فرخ سیر علیم اکبر ثانی والا شان پادشاه بحر و بر), also known as Shahid-i-Mazlum (Persian: شهید مظلوم), or Farrukhsiyar (Persian: فرخ سیر) (20 August 1685 – 19 April 1719), was the Mughal emperor from 1713 to 1719 after he murdered Jahandar Shah. Reportedly a handsome man who was easily swayed by his advisers, he lacked the ability, knowledge and character to rule independently. Farrukhsiyar was the son of Azim-ush-Shan (the second son of emperor Bahadur Shah I) and Sahiba Nizwan.
|9th Mughal Emperor|
|Reign||11 January 1713 – 28 February 1719|
|Born||20 August 1685|
Aurangabad, Mughal Empire
|Died||19 April 1719 (aged 33)|
Delhi, Mughal Empire
Humayun's Tomb, Delhi
Bai Indira Kanwar
Bai Bhip Devi
His reign saw the primacy of the Sayyid brothers, who became the effective power behind the facade of Mughal rule. Farrukhsiyar's frequent plotting led the brothers to depose him.
- 1 Early life
- 2 War of succession
- 3 Reign
- 4 Deposition
- 5 Personal life
- 6 Legacy
- 7 Notes
- 8 References
Muhammad Farrukhsiyar was born on 20 August 1685 (9th Ramzan 1094 AH) in the city of Aurangabad on the Deccan plateau. He was the second son of Azim-ush-Shan. In 1696, Farrukhsiyar accompanied his father on his campaign to Bengal. Mughal emperor Aurangzeb recalled his grandson, Azim-ush-Shan, from Bengal in 1707 and instructed Farrukhsiyar to take charge of the province. Farrukhsiyar spent his early years in the capital city of Dhaka (in present-day Bangladesh); during the reign of Bahadur Shah I, he moved to Murshidabad (present-day West Bengal, India).
In 1712 Azim-ush-Shan anticipated Bahadur Shah I's death and a struggle for power, and recalled Farrukhsiyar. He was marching past Azimabad (present-day Patna, Bihar, India) when he learned of the Mughal emperor's death. On 21 March Farrukhsiyar proclaimed his father's accession to the throne, issued coinage in his name and ordered khutba (public prayer). On 6 April, he learned of his father's defeat. Although the prince considered suicide, he was dissuaded by his friends from Bengal.
War of successionEdit
In 1712 Jahandar Shah (Farrukhsiyar's uncle) ascended the throne of the Mughal empire by defeating Farrukhsiyar's father, Azim-ush-Shan. Farrukhsiyar wanted revenge for his father's death and was joined by Hussain Ali Khan (the subahdar of Bengal) and Abdullah Khan, his brother and the subahdar of Allahabad.
When they reached Allahabad from Azimabad, Jahandar Shah's military general Syed Abdul Ghaffar Khan Gardezi and 12,000 troops clashed with Abdullah Khan and Abdullah retreated to the Allahabad Fort. However, Gardezi's army fled when they learned about his death. After the defeat, Jahandar Shah sent general Khwaja Ahsan Khan and his son Aazuddin. When they reached Khajwah (present-day Fatehpur district, Uttar Pradesh, India), they learned that Farrukhsiyar was accompanied by Hussain Ali Khan and Abdullah Khan. With Abdullah Khan commanding the vanguard, Farrukhsiyar began the attack. After a night-long artillery fight, Aazuddin and Khwaja Ahsan Khan fled and the camp fell to Farrukhsiyar.
On 10 January 1713 Farrukhsiyar and Jahandar Shah's forces met at Samugarh, 9 miles (14 km) east of Agra in present-day Uttar Pradesh. Jahandar Shah was defeated and imprisoned, and the following day Farrukhsiyar proclaimed himself the Mughal emperor. On 12 February he marched to the Mughal capital of Delhi, capturing the Red Fort and the citadel. Jahandar Shah's head, mounted on a bamboo rod, was carried by an executioner on an elephant and his body was carried by another elephant.
Hostility with the Sayyid brothersEdit
Farrukhsiyar defeated Jahandar Shah with the aid of the Sayyid brothers, and one of the brothers, Abdullah Khan, wanted the post of wazir (prime minister). His demand was rejected, since the post was promised to Ghaziuddin Khan, but Farrukhsiyar offered him a post as regent under the name of wakil-e-mutlaq. Abdullah Khan refused, saying that he deserved the post of wazir since he led Farrukhsiyar's army against Jahandar Shah. Farrukhsiyar ultimately gave in to his demand, and Abdullah Khan became prime minister.
According to historian William Irvine, Farrukhsiyar's close aides Mir Jumla III and Khan Dauran sowed seeds of suspicion in his mind that they might usurp him from the throne. Learning about these developments, the other Sayyid brother (Hussain Ali Khan) wrote to Abdullah: "It was clear, from the Prince's talk and the nature of his acts, that he was a man who paid no regard to claims for service performed, one void of faith, a breaker of his word and altogether without shame". Hussain Ali Khan felt it necessary to act in their interests "without regard to the plans of the new sovereign".
Campaign against Ajit SinghEdit
Maharaja Ajit Singh captured Ajmer with the support of the Marwari nobles and expelled Mughal diplomats from his state. Farrukhsiyar sent Hussain Ali Khan to subjuguate him. However, the anti-Sayyid brothers faction in the Mughal emperor's court compelled him to send secret letters to Ajit Singh assuring him of rewards if he defeated Hussain Ali Khan.
Hussain left Delhi for Ajmer on 6 January 1714, accompanied by Sarbuland Khan and Afrasyab Khan. As his army reached Sarai Sahal, Ajit Singh sent diplomats who failed to negotiate a peace. As Hussain Ali Khan advanced to Ajmer via Jodhpur, Jaiselmer and Mairtha, Ajit Singh retreated to the deserts hoping to dissuade the Mughal general from a battle. As Hussain advanced, Ajit Singh surrendered at Mairtha. As a result, Mughal authority was restored in Rajasthan. Ajit Singh gave his daughter, Indira Kanwar, as a bride to Farrukhsiyar. His son, Abhai Singh, was compelled to accompany him to see the Mughal emperor.
Campaign against the JatsEdit
Due to Aurangzeb's 25-year campaign on the Deccan plateau, Mughal authority weakened in North India with the rise of local rulers. Taking advantage of the situation, the Jats advanced. In early 1713, Farrukhsiyar unsuccessfully sent subahdar of Agra Chabela Ram to defeat Churaman (the Jat leader). His successor, Samsamud Daulah Khan, compelled Churaman to negotiate with the Mughal emperor. Raja Bahadur Rathore accompanied him to the Mughal court, where negotiations with Farrukhsiyar failed.
In September 1716 Raja Jai Singh II undertook a campaign against Churaman, who lived in Thun (in present-day Rajasthan, India). By 19 November, Jai Singh II began besieging the Thun fort. In December Churaman's son, Muhkam Singh, marched from the fort and battled Jai Singh II; the Raja claimed victory. With the Mughals running out of ammunition, Syed Muzaffar Khan was ordered to bring gunpowder, rockets and mounds of lead from the arsenal at Agra.
By January 1718, the siege had lasted for more than a year. With rain coming late in 1717, prices of commodities increased and Raja Jai Singh II found it difficult to continue the siege. He wrote to Farrukhsiyar for reinforcement, saying that he had overcome "many encounters" with the Jats. This failed to impress Farrukhsiyar, so Jai Singh II (via his agent in Delhi) informed Syed Abdullah that he would give three million rupees to the government and two million rupees to the minister if he championed his cause to the emperor. With negotiations between Syed Abdullah and Farrukhsiyar successful, he accepted his demands and dispatched Syed Khan Jahan to bring Churaman to the Mughal court. He also gave a farman to Raja Jai Singh II, thanking him for the siege.
On 19 April 1718, Churaman was presented to Farrukhsiyar; they negotiated for peace, with Churaman accepting Mughal authority. Khan Jahan was given the title of Bahadur ("brave"). It was decided that Churaman would pay five million rupees in cash and goods to Farrukhsiyar via Syed Abdullah.
Campaign against Sikhs and execution of Banda BahadurEdit
Baba Banda Singh Bahadur was a Sikh leader who, by early 1700, had captured parts of the Punjab region. Mughal emperor Bahadur Shah I failed to suppress Bahadur's uprising.[full citation needed]
In 1714, the Sirhind faujdar (garrison commander) Zainuddin Ahmad Khan attacked the Sikhs near Ropar. In 1715, Farrukhisyar sent 20,000 troops under Qamaruddin Khan, Abdus Samad Khan and Zakariya Khan Bahadur to defeat Bahadur. After an eight-month siege at Gurdaspur, Bahadur surrendered after he ran out of ammunition. Bahadur and his 200 companions were arrested and brought to Delhi; he was paraded around the city of Sirhind.
Bahadur was put into an iron cage and the remaining Sikhs were chained. The Sikhs were brought to Delhi in a procession with the 780 Sikh prisoners, 2,000 Sikh heads hung on spears, and 700 cartloads of heads of slaughtered Sikhs used to terrorise the population. When Farrukhsiyar's army reached the Red Fort, the Mughal emperor ordered Banda Bahadur, Baj Singh, Bhai Fateh Singh and his companions to be imprisoned in Tripolia. They were pressurised to give up their faith and become Muslims. Although the emperor promised to spare the Sikhs who converted to Islam, according to William Irvine "not one prisoner proved false to his faith". On their firm refusal all were ordered to be executed. Every day, 100 Sikhs were brought out of the fort and murdered in public. This continued for approximately seven days. After three months of confinement, On 19 June 1716 Farrukhsiyar had Bahadur and his followers executed, despite the wealthy Khatris of Delhi offering money for his release. Banda Singh's eyes were gouged out, his limbs were severed, his skin removed, and then he was killed.
In 1717, Farrukhsiyar issued a farman giving the British East India Company the right to reside and trade in the Mughal Empire. They were allowed to trade freely, except for a yearly payment of 3,000 rupees. This was because William Hamilton, a surgeon associated with the company cured Farrukhsiyar of a disease. The company was given the right to issue dastak (passes) for the movement of goods, which was misused by company officials for personal gain.
Final struggle with the SayyidsEdit
By 1715, Farrukhsiyar had given Mir Jumla III the power to sign documents on his behalf: "The word and seal of Mir Jumla are my word and seal". Mir Jumla III began approving proposals for jagirs and mansabs without consulting Syed Abdullah, the prime minister. Syed Abdullah's deputy Ratan Chand accepted bribes for him to do work and was involved in revenue farming, which was forbidden by the Mughal emperor. Taking advantage of the situation, Mir Jumla III told Farrukhsiyar that the Sayyids were unfit to hold office and accused them of insubordination. Hoping to depose the brothers, Farrukhsiyar began making military preparations and increased the number of soldiers under Mir Jumla III and Khan Dauran.
After Syed Hussain learned about Farrukhsiyar's plans, he felt that their position could be cemented by controlling "important provinces". He asked to be appointed viceroy of the Deccan, instead of Nizam ul Mulk; Farrukhsiyar refused, transferring him to the Deccan instead. Fearing attack by Farrukhsiyar's supporters, the brothers began making military preparations. Although Farrukhsiyar initially considered giving the task of crushing the brothers to Mohammad Amin Khan (who wanted the position of prime minister in return), he decided against it because removing him would be difficult.
Arriving at the Deccan, Syed Hussain made a treaty with Maratha ruler Shahu I in February 1718. Shahu was allowed to collect sardeshmukhi in Deccan, and received the lands of Berar and Gondwana to govern. In return, Shahu agreed to pay one million rupees annually and maintain an army of 15,000 horses for the Sayyids. This agreement was reached without Farrukhsiyar's approval, and he was angry when he learned about it: "It was not proper for the vile enemy to be overbearing partners in matters of revenue and government."
State of the Mughal EmpireEdit
Farrukhsiyar appointed Sayid Abdullah Khan as chief minister and placed Muhammad Baqir Mutamid Khan in charge of the Exchequer. The title of bakshi was first conferred on Hussain Ali Khan (with the titles of Umdat-ul-Mulk, Amir-ul-umara and Bahadur Firuz Jung) and then to Chin Qilich Khan and Afrasayab Khan Bahadur.
|North India||South India|
|Province||Governor/Chief Minister||Province||Deputy governor|
|Agra||Shams ud Daula Shah Nawaz Khan||Berar||Iwaz Khan|
|Ajmer||Syed Muzaffar Khan Barha||Bidar||Amin Khan|
|Allahabad||Khan Jahan||Bijapur||Mansur Khan|
|Awadh||Sarbuland Khan||Burhanpur||Shukrullah Khan|
|Bengal||Farkhunda Bakht||Hyderabad||Yusuf Khan|
|Bihar||Syed Hussain Ali Khan||Karnataka||Saadatullah Khan|
|Delhi||Muhammad Yar Khan|
|Kabul||Bahadur Nasir Jang|
|Lahore||Abdus Samad Khan|
|Malwa||Raja Jai Singh of Amber|
|Orissa||Murshid Quli Khan|
To fight the Sayyids, Farrukhsiyar summoned Ajit Singh, Nizam-ul-Mulk and Sarbuland Khan to the court with their troops; the armies' combined strength was 80,000. He did not summon Mir Jumla III and Khan Dauran, since the former failed in a campaign in Bihar and he felt that the latter had conspired with the Sayyid brothers to depose him. However, Syed Abdullah's troop strength was about 3,000. According to Satish Chandra, Farrukhsiyar could have defeated him with the help of the nobles; he did not do it, since he believed it would be difficult to get rid of them afterwards. He appointed Muhammad Murad Kashmiri as the new wazir (prime minister), replacing Syed Abdullah. Kashmiri was a notorious for having sexual relationships with boys; this angered the nobles, who resigned from his court. Ajit Singh, alienated because he was removed from Gujarat for oppression, also sided with the Sayyids. By the end of 1718, when Syed Hussain began his march from the Deccan with 10,000 troops under Peshwa Balaji Vishwanath, Farrukhsiyar could only secure Jai Singh II's support. Syed Hussain's excuse for marching towards Delhi was to bring a son of the Maratha ruler Shahu to him.
With the support of Mohammad Amin Khan, Ajit Singh and Khan Dauran, Syed Hussain fought Farrukhsiyar; after a night-long battle, he was deposed on 28 February 1719. The Sayyid brothers placed Rafi ud-Darajat on the throne. Farrukhsiyar was imprisoned in Tripolia and blinded. During his imprisonment he was served bitter, salty food and deprived of water. He passed the time by reciting verses from the Quran. Although Farrukhsiyar tried to bribe jailer Abdullah Khan Afghan with the command of 7,000 troops if he released him and brought him to Jai Singh II, the bribe was refused. On 19 April 1719 he was strangled by unknown assailants and buried in Humayun's Tomb beside his father, Azim-ush-Shan.
Farrukhsiyar's first wife was Fakhr-un-nissa Begum, also known as Gauhar-un-nissa, the daughter of Mir Muhammad Taqi (known as Hasan Khan and then Sadat Khan). Taqi, from the Persian province of Mazandaran, married the daughter of Masum Khan Safawi; if she was the mother of Fakhr-un-nissa, this would account for her daughter's selection as the prince's wife.
His second wife was Bai Indira Kanwar, the daughter of Maharajah Ajit Singh. She married Farrukhsiyar on 27 September 1715, during the fourth year of his reign, and they had no children. After Farrukhsiyar's deposition and death she left the imperial harem on 16 July 1719, she returned to her father with her property and lived her remaining years in Jodhpur.
Farrukhsiyar's third wife was Bai Bhup Devi, daughter of Jaya Singh (the raja of Kishtwar, who had converted to Islam and received the name of Bakhtiyar Khan). After Jaya Singh's death he was succeeded by his son, Kirat Singh. In 1717, in response to a message from the Shaykh al-Islām, her brother Kirat Singh sent her to Delhi with her brother Mian Muhammad Khan. Farrukhsiyar married her, and she entered the imperial harem on 3 July 1717.
On coins issued during Farrukhsiyar's reign, the following phrase was inscribed: "Sikka zad az fazl-i-Haq bar sim o zar/ Padshah-i-bahr-o-bar Farrukhsiyar" (By the grace of the true God, struck on silver and gold, the emperor of land and sea, Farrukhsiyar). There are 116 coins from his reign on display at the Lahore Museum and the Indian Museum in Kolkata. The coins were minted in Kabul, Kashmir, Ajmer, Allahabad, Bidar and Berar.
| Mughal Emperor
- Sen, Sailendra (2013). A Textbook of Medieval Indian History. Primus Books. p. 193. ISBN 978-93-80607-34-4.
- Irvine, p. 198.
- Irvine, p. 199.
- Asiatic Society of Bengal, p. 273.
- Asiatic Society of Bengal, p. 274.
- Irvine, p. 255.
- Tazkirat ul-Mulk by Yahya Khan p.122
- Irvine, p. 282.
- Irvine, p. 283.
- The Cambridge Shorter history of India p.456
- Irvine, p. 288–290.
- Fisher, p. 78.
- Irvine, p. 290.
- Irvine, p. 322.
- Irvine, p. 323.
- Irvine, p. 324.
- Irvine, p. 325.
- Irvine, p. 326.
- Irvine, p. 327.
- "Marathas and the English Company 1707–1800". San Beck. Retrieved 18 March 2017.
- Richards, p. 258.
- Singha, p. 15.
- Duggal, Kartar (2001). Maharaja Ranjit Singh: The Last to Lay Arms. Abhinav Publications. p. 41. ISBN 9788170174103.
- Johar, Surinder (1987). Guru Gobind Singh. The University of Michigan: Enkay Publishers. p. 208. ISBN 9788185148045.
- Sastri, Kallidaikurichi (1978). A Comprehensive History of India: 1712-1772. The University of Michigan: Orient Longmans. p. 245.
- Singha, p. 16.
- Singh, Gurbaksh (1927). The Khalsa Generals. Canadian Sikh Study & Teaching Society. p. 12. ISBN 0969409249.
- Jawandha, Nahar (2010). Glimpses of Sikhism. Sanbun Publishers. p. 89. ISBN 9789380213255.
- Singh, Teja (1999). A Short History of the Sikhs: 1469-1765. Patiala: Publication Bureau, Punjabi University. p. 97. ISBN 9788173800078.
- Singh, Ganda (1935). Life of Banda Singh Bahadur: Based on Contemporary and Original Records. Sikh History Research Department. p. 229.
- Irvine, p. 319.
- Singh, Kulwant (2006). Sri Gur Panth Prakash: Episodes 1 to 81. Institute of Sikh Studies. p. 415. ISBN 9788185815282.
- Samaren Roy. Calcutta: Society and Change 1690–1990. iUniverse. p. 29. ISBN 978-0-595-34230-3.
- Vipul Singh, Jasmine Dhillon, Gita Shanmugavel. History and Civics. Pearson India. p. 73. ISBN 978-81-317-6320-9.CS1 maint: multiple names: authors list (link)
- Chandra, p. 476.
- Chandra, p. 477.
- Chandra, p. 478.
- Ram Sivasankaran (22 December 2015). The Peshwa: The Lion and the Stallion. Westland. p. 5. ISBN 978-93-85724-70-1.
- Chandra, p. 481.
- Irvine, p. 258.
- Irvine, p. 261.
- Irvine, p. 262.
- Irvine, p. 263.
- Chandra, p. 482.
- Chandra, p. 483.
- William Irvine. p. 379-418.
- Irvine, p. 390.
- Irvine, p. 391.
- Irvine, p. 392.
- Irvine, p. 392–93.
- Irvine, p. 400-1.
- Irvine, p. 400.
- Irvine, p. 401.
- Proceedings - Punjab History Conference - Volumes 29-30. Punjabi University. 1998. p. 85.
- Irvine, p. 398.
- Irvine, p. 399.
|Wikimedia Commons has media related to Farrukhsiyar.|
- Irvine, William, The Later Mughals, Low Price Publications, ISBN 81-7536-406-8
- Journal of the Asiatic Society of Bengal. (Bishop's College Press)47. 18 August 2009.
- Fisher, Michael H. (1 October 2015), A Short History of the Mughal Empire, I.B.Tauris, ISBN 978-0-85772-976-7
- Singha, H.S. (1 January 2005), Sikh Studies, Book 6, Hemkunt Press, ISBN 978-81-7010-258-8
- Chandra, Satish, Medieval India: From Sultanat to the Mughal Empire, Har Anand Publications Pvt Ltd., ISBN 978-81-241-1269-4 | <urn:uuid:68d59e6c-f256-412a-82d6-47154297e770> | CC-MAIN-2019-47 | https://en.m.wikipedia.org/wiki/Farrukhsiyar | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670006.89/warc/CC-MAIN-20191119042928-20191119070928-00500.warc.gz | en | 0.937467 | 5,493 | 2.796875 | 3 |
Table 12-1. rpm -b Command Syntax
When RPM is invoked with the -b option, the process of building a package is started. The rest of the command will determine exactly what is to be built and how far the build should proceed. In this chapter, we'll explore every aspect of rpm -b.
An RPM build command must have two additional pieces of information, over and above "rpm -b":
The names of one or more spec files representing software to be packaged.
The desired stage at which the build is to stop.
As we discussed in Chapter 10, the spec file is one of the inputs to RPM's build process. It contains the information necessary for RPM to perform the build and package the software.
There are a number of stages that RPM goes through during a build. By specifying that the build process is to stop at a certain stage, the package builder can monitor the build's progress, make any changes necessary, and restart the build. Let's start by looking at the various stages that can be specified in a build command.
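In sketch form, a build command looks like the line below; the spec file name cdplayer-1.0.spec is an assumption (any spec file name will do), and the letter following -b selects the stage at which RPM stops (p, c, i, b, or a, each covered in turn below):

    # rpm -b<stage> cdplayer-1.0.spec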
The command rpm -bp directs RPM to execute the very first step in the build process. In the spec file, this step is labeled %prep. Every command in the %prep section will be executed when the -bp option is used.
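A representative invocation is shown below; the spec file name is an assumption, and the lengthy output it produces is summarized in the text rather than reproduced:

    # rpm -bp cdplayer-1.0.spec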
First, RPM confirms that the cdplayer package is the subject of this build. Then it sets the umask and starts executing the %prep section. At this point, the %setup macro is doing its thing. It changes directory into the build area and removes any old copies of cdplayer's build tree.
Next, %setup unzips the sources and uses tar to create the build tree. We've removed the complete listing of files, but be prepared to see lots of output if the software being packaged is large.
cdplayer's build tree and changes ownership and file permissions appropriately. The
exit 0signifies the end of the %prep section, and therefore, the end of the %setup macro. Since we used the -bp option, RPM stopped at this point. Let's see what RPM left in the build area:
cdplayer-1.0, we find the sources are ready to be built:
We can see that %setup's chown and chmod commands did what they were supposed to — the files are owned by root, with permissions set appropriately.
If not stopped by the -bp option, the next step in RPM's build process would be to build the software. RPM can also be stopped at the end of the %build section in the spec file. This is done by using the -bc option:
We see that prior to the make command, RPM changes
cdplayer's top-level directory.
RPM then starts the make, which ends with the
groff command. At this point, the execution of the
%build section has been completed. Since the
-bc option was used, RPM stops at this point.
The next step in the build process would be to install the newly built software. This is done in the %install section of the spec file. RPM can be stopped after the install has taken place by using the -bi option:
After the %prep and %build
sections, the %install section is executed.
Looking at the output, we see that RPM changes directory into
cdplayer's top-level directory and issues the
make install command, the sole command in the
%install section. The output from that point until
exit 0, is from
The line responsible is %doc README. The
%doc tag identifies the file as being
documentation. RPM handles documentation files by creating a
/usr/doc and placing all
documentation in it. The
exit 0 at
the end signifies the end of the %install section.
RPM stops due to the -bi option.
The next step at which RPM's build process can be stopped is after the software's binary package file has been created. This is done using the -bb option:
After executing the %prep, %build, and %install sections, and handling any special documentation files, RPM then creates a binary package file. In the sample output, we see that first RPM performs automatic dependency checking. It does this by determining which shared libraries are required by the executable programs contained in the package. Next, RPM actually archives the files to be packaged, optionally signs the package file, and outputs the finished product.
The last part of RPM's output looks suspiciously like a section in the spec file being executed. In our example, there is no %clean section. If there were, however, RPM would have executed any commands in the section. In the absence of a %clean section, RPM simply issues the usual cd commands and exits normally.
The -ba option directs RPM to perform all the stages in building a package. With this one command, RPM:
Unpacks the original sources.
Applies patches (if desired).
Builds the software.
Installs the software.
Creates the binary package file.
Creates the source package file.
As in previous examples, RPM executes the %prep, %build, and %install sections, handles any special documentation files, creates a binary package file, and cleans up after itself.
The final step in the build process is to create a source package
file. As the output shows, it consists of the spec file and the
original sources. A source package may optionally include one or more
patch files, although in our example,
At the end of a build using the -ba option, the software has been successfully built and packaged in both binary and source form. But there are a few more build-time options that we can use. One of them is the -bl option:
There's one last letter that may be specified with rpm -b, but unlike the others, which indicate the stage at which the build process is to stop, this option performs a variety of checks on the %files list in the named spec file. When l is added to rpm -b, the following checks are performed:
Expands the spec file's %files list and checks that each file listed actually exists.
Determines what shared libraries the software requires by examining every executable file listed.
Determines what shared libraries are provided by the package.
Why is it necessary to do all this checking? When would it be useful? Keep in mind that the %files list must be generated manually. By using the -bl option, the following steps are all that's necessary to create a %files list:
Writing the %files list.
Using the -bl option to check the %files list.
Making any necessary changes to the %files list.
It may take more than one iteration through these steps, but eventually the list check will pass. Using the -bl option to check the %files list is certainly better than starting a two-hour package build, only to find out at the very end that the list contains a misspelled filename.
Looking at this more verbose output, it's easy to see there's a great
deal going on. Some of it is not directly pertinent to checking the
%files list, however. For example, the output
extending from the first line, to the line reading
Package: cdplayer, reflects processing that takes
place during actual package building, and can be ignored.
Following that section is the actual %files list
check. In this section, every file named in the
%files list is checked to make sure it exists. The
ADDING:, again reflects RPM's
package building roots. When using the -bl option,
however, RPM is simply making sure the files exist on the build
system. If the --timecheck option (described a bit
later, on the section called --timecheck
— Print a warning if files to be packaged are over
<secs> old) is
present, the checks required by that option are performed here, as
After the list check, the MD5 checksums of each file are calculated and displayed. While this information is vital during actual package building, it is not used when using the -bl option.
Finally, RPM determines which shared libraries the listed files
require. In this case, there are only two —
libncurses.so.2.0. While not strictly a part of
the list-checking process, displaying shared library dependencies can
be quite helpful at this point. It can point out possible problems,
such as assuming that the target systems have a certain library
installed when, in fact, they do not.
/usr/bin/bogus. In this example we made the name obviously wrong, but in a more real-world setting, the name will more likely be a misspelling in the %files list. OK, let's correct the %files list and try again:
Done! The moral to this story is that using rpm -bl and fixing the error it flagged doesn't necessarily mean your %files list is ready for prime-time: Always run it again to make sure!
Although it sounds dangerous, the --short-circuit option can be your friend. This option is used during the initial development of a package. Earlier in the chapter, we explored stopping RPM's build process at different stages. Using --short-circuit, we can start the build process at different stages.
One time that --short-circuit comes in handy is when you're trying to get software to build properly. Just think what it would be like — you're hacking away at the sources, trying a build, getting an error, and hacking some more to fix that error. Without --short-circuit, you'd have to:
Make your change to the sources.
Use tar to create a new source archive.
Start a build with something like rpm -bc.
See another bug.
Go back to step 1.
Pretty cumbersome! Since RPM's build process is designed to start with the sources in their original tar file, unless your modifications end up in that tar file, they won't be used in the next build.
But there's another way. Just follow these steps:
Place the original source tar file in RPM's
Create a partial spec file in RPM's
SPECSdirectory (Be sure to include a valid Source line).
Issue an rpm -bp to properly create the build environment.
Normally, the -bc option instructs RPM to stop the build after the %build section of the spec file has been executed. By adding --short-circuit, however, RPM starts the build by executing the %build section and stops when everything in %build has been executed.
There is only one other build stage that can be --short-circuit'ed, and that is the install stage. The reason for this restriction is to make it difficult to bypass RPM's use of pristine sources. If it were possible to --short-circuit to -bb or -ba, a package builder might take the "easy" way out and simply hack at the build tree until the software built successfully, then package the hacked sources. So, RPM will only --short-circuit to -bc or -bi. Nothing else will do.
What exactly does an rpm -bi --short-circuit do, anyway? Like an rpm -bc --short-circuit, it starts executing at the named stage, which in this case is %install. Note that the build environment must be ready to perform an install before attempting to --short-circuit to the %install stage. If the software installs via make install, make will automatically compile the software anyway.
RPM blindly started executing the %install stage,
but came to an abrupt halt when it attempted to change directory into
cdplayer-1.0, which didn't exist. After giving a
descriptive error message, RPM exited with a failure status. Except
for some minor differences, rpm -bc would have
failed in the same way.
i486architecture, due to the inclusion of the --buildarch option on the command line. We can also see that RPM wrote the binary package in the architecture-specific directory,
/usr/src/redhat/RPMS/i486. Using RPM's --queryformat option confirms the package's architecture:
For more information on build packages for multiple architectures, please see Chapter 19.
The package was indeed built for the specified operating system. For more information on building packages for multiple operating systems, please see Chapter 19.
The most obvious effect of adding the --sign option
to a build command is that RPM then asks for your private key's
passphrase. After entering the passphrase (which isn't echoed), the
build proceeds as usual. The only other difference between this and a
non-signed build is that the
signature: lines have a non-zero value.
The fact that there is a pgp in --checksig's output indicates that the packages have been signed.
Unlike a normal build, there's not much output. But the --test option has caused a set of scripts to be written and saved for you. The question is: Where are they?
rpmrcfile, the scripts will be written to the directory specified by the
rpmrcentry tmppath. If you haven't changed this setting, RPM, by default, writes the scripts in
/var/tmp. Here they are:
As we can see, this script contains the %prep section from the spec file. The script starts off by defining a number of environment variables and then leads into the %prep section. In the spec file used in this build, the %prep section consists of a single %setup macro. In this file, we can see exactly how RPM expands that macro. The remaining files follow the same basic layout — a section defining environment variables, followed by the commands to be executed.
only one script file, containing the %prep commands, would be written. In any case, no matter what RPM build command is used, the --test option can let you see exactly what is going to happen during a build.
In this example, we see a typical %prep section
being executed. The line "
+ echo Executing:
sweep" indicates the start of
--clean's activity. After changing directory into
the build directory, RPM then issues a recursive delete on the
package's top-level directory.
As we noted above, this particular example doesn't make much sense. We're only executing the %prep section, which creates the package's build tree, and using --clean, which removes it! Using --clean with the -bc option isn't very productive either, as the newly built software remains in the build tree. Once again, there would be no remnants left after --clean has done its thing.
Normally, the --clean option is used once the software builds and can be packaged successfully. It is particularly useful when more than one package is to be built, since --clean ensures that the filesystem holding the build area will not fill up with build trees from each package.
Note also that the --clean option only removes the files that reside in the software's build tree. If there are any files that the build creates outside of this hierarchy, it will be necessary to write a script for the spec file's %clean section.
The --buildroot option can make two difficult situations much easier:
Performing a build without impacting the build system.
Allowing non-root users to build packages.
Let's study the first situation in a bit more detail. Say, for
sendmail is to be packaged.
In the course of creating a
package, the software must be installed. This would mean that
sendmail files, such as
would be overwritten. Mail handling on the build system would almost
certainly be disrupted.
In the second case, it's certainly possible to set permissions such that non-root users can install software, but highly unlikely that any system administrator worth their salt would do so. What can be done to make these situations more tenable?
The --buildroot option is used to instruct RPM to
use a directory other than
/ as a "build root".
This phrase is a bit misleading, in that the build root is
not the root directory under which the software
is built. Rather, it is the root directory for the install phase of
the build. When a build root is not specified, the software being
packaged is installed relative to the build system's root directory
As the somewhat edited output shows, the %prep,
%build, and %install sections
are executed in RPM's normal build directory. However, the
--buildroot option comes into play when the
make install is done. As we can see, the
ROOT variable is set to
/tmp/foonly, which was the value following
--buildroot on the command line. From that point
on, we can see that make substituted the new build
root value during the install phase.
The build root is also used when documentation files are installed.
The documentation directory
/tmp/foonly/usr/doc, and the
README file is placed in it.
The only remaining difference that results from using
--buildroot, is that the files to be included in
the binary package are not located relative to the build system's root
directory. Instead they are located relative to the build root
/tmp/foonly. The resulting binary and source
package files are functionally equivalent to packages built without
the use of --buildroot.
Although the --buildroot option can solve some problems, using a build root can actually be dangerous. How? Consider the following situation:
A spec file is configured to have a build root of
/tmp/blather, for instance.
In the %prep section , there is an rm -rf $RPM_BUILD_ROOT command to clean out any old installed software.
You decide to build the software so that it installs relative to your system's root directory, so you enter the following command: "rpm -ba --buildroot / foo.spec".
The end result? Since specifying "
/" as the
build root sets
/", that innocuous little rm -rf
$RPM_BUILD_ROOT turns into rm -rf
/! A recursive delete, starting at your system's root
directory, might not be a total disaster if you catch it quickly,
but in either case, you'll be testing your ability to restore from
backup… Er, you do have backups, don't
The moral of this story is to be very careful
when using --buildroot. A good rule of thumb is
to always specify a unique build root. For example, instead of
/tmp as a build root (and possibly
losing your system's directory for holding temporary files), use the
/tmp/mypackage, where the directory
mypackage is used only by the package you're
While it's possible to detect many errors in the %files list using rpm -bl, there is another type of problem that can't be detected. Consider the following scenario:
A package you're building creates the file
Because of a problem with the package's makefile,
foois never copied into
An older, incompatible version of
foo, created several months ago, already exists in
RPM creates the binary package file.
Is the incompatible
/usr/bin/foo included in the
package? You bet it is! If only there was some way for RPM to catch
this type of problem…
/usr/doc/cdplayer-1.0-1/READMEis more than 3,600 seconds, or one hour, old. If we take a look at the file, we find that it is:
In this particular case, the warning from
--timecheck is no cause for alarm. Since the
README file was simply copied from the original
source, which was created November 10th, 1995, its date is unchanged.
If the file had been an executable or a library that was supposedly
built recently, --timecheck's warning should be
taken more seriously.
This value can still be overridden by a value on the command line, if
desired. For more information on the use of
rpmrc files, see Appendix B.
Most of the output generated by the -vv option is
preceded by a
D:. In this example,
the additional output represents RPM's internal processing during the
start of the build process. Using the -vv option
with other build commands will produce different output.
This is the entire output from a package build of
cdplayer. Note that warning messages (actually,
anything sent to stdout) are still printed.
The --rcfile option is used to specify a file
containing default settings for RPM. Normally, this option is not
needed. By default, RPM uses
/etc/rpmrc and a
.rpmrc located in your login
This option would be used if there was a need to switch between
several sets of RPM defaults. Software developers and package
builders will normally be the only people using the
--rcfile option. For more information on
rpmrc files, see Appendix B.
As we mentioned in Chapter 10, if the original sources need to be modified, the modifications should be kept as a separate set of patches. However, during development, it makes more sense to not generate patches every time a change to the original source is made.
Or the %clean section, it doesn't matter — the end result is the same.
It should be noted that the package was built substantially later than November of 1995! | <urn:uuid:bebf656f-159e-45b5-9b8f-b2dd9896f574> | CC-MAIN-2019-47 | http://rus-linux.net/MyLDP/BOOKS/max-rpm/ch-rpm-b-command.html | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668910.63/warc/CC-MAIN-20191117091944-20191117115944-00020.warc.gz | en | 0.917592 | 4,581 | 2.96875 | 3 |
Part 4 of 4 in the Calendar Series
It has been said that anything God (YHWH) has given has been somehow corrupted or polluted by man. Try to think of any exception to this observation or statement, if any?
Sadly, even God’s Sacred Calendar is NO exception. After reading Part 3 of this Calendar Series, we now focus on HOW this otherwise inspired, God-given calendar — trusted by God’s chosen people, churches of God, and other groups — has not been spared from massive corruptions. How, and why? Please read on to the end of this article.
The Top 10 Corruptions in the Sacred Calendar
In the same way that the man-made oral traditions of the Jews as recorded in their Talmud assumed a higher authority than the Word of God in the time of Christ (Yahshua), a similar situation sadly exists in our day.
Many of these human traditions are recorded in the gospels where Christ roundly contradicted their false teachings during His lifetime on earth (Matthew 15:3-9; 23:1-39; Mark 7:7-9, 13). Therefore, as corruptions already existed regarding God’s commands in Christ’s day, similar and other corruptions and pollutions sadly happened also with the Sacred Calendar, especially after the return of the Jews from their captivity in pagan Babylon.
What Are Some of These Corruptions?
1. A Change on Which is the First Calendar Month
Contrary to God’s command, the Calendar’s first month of the year was changed to the seventh month of the year (Tishri). But notice God’s original command:
This month [Abib or Aviv] shall be your beginning of months; it shall be the first month of the year to you.
~Exodus 12:2; 13:4
NOTE: Abib or Aviv literally means “green ears.” It should always have reminded them of springtime as the first month, when vegetation begins to bud new green leaves after winter.
2. Change in the Name of the First Calendar Month
Not only was the first month changed, but even the name of the first month was changed from the original Hebrew (Abib) into the Babylonian name “Nisan” which no longer reflects the true meaning of the original name. After the Babylonian captivity, we read about this new name.
In the first month, which is the month of Nisan, in the twelfth year of King Ahasuerus, they cast Pur (that is, the lot), before Haman to determine the day and the month, until it fell on the twelfth month, which is the month of Adar.
~Esther 3:7 (see also Nehemiah 2:1)
NOTE: The change in the name of the first month may have caused them to forget when the first month of the year should be, according to its intended meaning.
3. Change in the Names of ALL the Calendar Months
God called the names of the months of the year according to their given Ordinal Numbers (e.g.: First month, Second Month, Third month, etc., as in Noah’s time). However, after the Babylonian captivity, pagan names (Nisan, Iyar, Sivan, Tammuz, etc.) have been substituted for the God-given names. Please compare these new names with those used by Noah (Genesis 7:11; 8:4, 5, 13, 14).
4. Even a Pagan god’s Name was included for the Fourth Month
One popular, notorious name is Tammuz, which was a pagan god associated with abominable worship practices [from which the Easter Sunrise Service was copied] (Ezekiel 8:14-18).
NOTE: Not only that, but the Tammuz rituals included women worshippers who put the “branch” of Tammuz to their nose [obviously in some form of ardent adoration] (Ezekiel 8:14, 17). Studies show that the Tammuz branch has the form of a cross, like a letter “T” which is common to the Egyptian, Canaanite, Greek, and Roman alphabets and pronounced as “taw” in Hebrew (The New Bible Dictionary, Douglas; Eerdmans Publishing Co., pages 1238, 1346-1347).
5. Changing Passover from the 14th to the 15th of the First Month
Repeatedly, the Bible states that the Passover was to be observed on the 14th day of the first month (Exodus 12:6; Leviticus 23:4; Numbers 9:3; Joshua 5:10; Ezra 6:19).
The 15th was the day after the Passover when Israel left Egypt (Numbers 33:3).
Please also note that while Christ and His disciples already finished observing the Passover on the evening of the 14th, John records when the officers of the Jews led Christ from Caiaphas to the Praetorium (Judgment Hall) early the next morning, some Jews were still to observe their Passover that evening (John 18:28).
NOTE: Their serious error in NOT observing the correct Passover date led them to condemn the true Passover Lamb of God, the Messiah. Will a similar error be committed again with Messiah’s the second coming? With the Rules of Postponement in place, it is very possible!
6. The Jewish Pentecost is fixed on Sivan 6, instead of COUNTING 50 days
And you shall count for yourselves from the day after the Sabbath, from the day that you brought the sheaf of the wave offering: seven Sabbaths shall be completed. Count fifty days to the day AFTER the seventh Sabbath [which clearly and obviously brings us to a Sunday] …
NOTE: Since the Jews don’t want to be inconvenienced with a succession of two Holy Days in a row, they devised their own system [counting from Nisan 16] contrary to God’s Command. This human reasoning is the basis of their devising and adopting the “Rules on Postponement.”
7. Starting the Calendar Year from the Seventh Month’s New Moon
The right attitude should have been a humble, courageous, faithful, and obedient attitude to all of God’s commands, just as God encouraged Joshua (Moses’ successor) to have, as we read below:
This Book of the Law shall not depart from your mouth, but you shall meditate in it day and night, that you may observe to do according to all that is written in it…
Therefore be very courageous to keep and to do all that is written in the Book of the Law… lest you turn aside from it to the right hand or to the left…
Sadly, instead of determining first the exact first New Moon after the Spring Equinox of the Sacred Year, strangely their focus is on the seventh month, just to make very sure that by the process of applying the Rules on Postponement, there will be NO two Holy Days that will be consecutive to each other, making it somewhat inconvenient for them. Which leads us to Point Number 8…
8. The Rules of Postponement
From creation to the days of Noah, to Moses, and to the days of the Messiah (Christ) on earth, there were NO such “Rules of Postponement.” Sadly, certain myths and false information have been published even in some church of God websites falsely claiming that there were already such “Rules” during the time of Christ. And since He allegedly did not make any contrary statements, therefore, these “Rules” were accepted by Him. This is a totally false and blatant LIE.
A very simple historical check will reveal that the “Rules of Postponement” were first invented by Hillel II and made somewhat official only in A.D. 359. However, since it had poor acceptance among rationally thinking Jews, it did not gain popularity. It was not until about seven hundred years later when it was refined and promoted by Maimonides (also called Rambam), a Jewish scholar and physician in the 12th century when it gained more acceptance to this day.
The main purpose of the Rules of Postponement is to adjust the declared date of the New Moon of the Seventh Month (Tishri), so that the 10th day of that month (Day of Atonement) will not fall on either a Friday or a Sunday, and also to make sure that no other Holy Day in the autumn (Feast of Trumpets, First Day of the Feast of Tabernacles, and the Last Great Day) will fall on a Friday or a Sunday. Thus, the Postponement Rules may delay the Feast of Trumpets by as much as two days later! In other words, the objective is to avoid the “inconvenience” of having two Holy Days successively next to each other. (There are several websites commenting on the Rules of Postponement like this one.)
In summary, following the Rules of Postponement is tantamount to moving the commanded seventh day Sabbath to a Sunday or postponed even until a Monday, so that such Holy Days of God will not be an “inconvenience” to its observers. Now is this a religion of obedience, or rather a religion of convenience?
Biblical Evidence that the Rules of Postponement were NOT Observed in the Time of Christ
This evidence is based on the events surrounding the last Feast of Tabernacles in the life of Christ. John 7:2 gives us the original setting. On the Last Great Day of the Feast, He preached about the Holy Spirit (John 7:37-39). After that day’s event, everyone went home (John 7:53).
Early the following morning (the day after the Last Great Day), He was presented with a woman taken in adultery (John 8:2-11). Following this event was a long, heated discussion with the Pharisees (John 8:13). Events of that same day continued to the healing of the man blind from birth (John 9:1-7). Note that Christ’s healing of the blind man was on the Sabbath (John 9:14).
Therefore, we have just established that the Last Great Day (the 22nd day of the month) in that year was on a Friday! And that the following day (the 23rd day of the month) when He healed the blind man (of course, contrary to Jewish false traditions about healing) was a Sabbath Day.
From this landmark Last Great Day (22nd of the month), let us go back eight days, which brings us to the First Day of the Feast of Tabernacles (the 15th day of the month), and it was also on a Friday! And if we count back five days more, it brings us to the Day of Atonement (which was on the 10th day), and it was on a Sunday! And if we count back further 10 days more, it brings us to the First day of the Month of Tishri (the seventh month) which is the Day of Trumpets and it was again on a Friday! (As an illustration, to better understand and follow the stated dates I am talking about, try to go to a June 2018 calendar page.)
From this graphic illustration, you will notice that ALL the Rules of Postponement were broken: Feast of Trumpets on a Friday (followed by a weekly Sabbath), Day of Atonement on a Sunday (following a weekly Sabbath), First Day of the Feast of Tabernacles on a Friday (prior to a weekly Sabbath), and the Last Great Day on a Friday (prior to a weekly Sabbath)!
CONCLUSION: The Rules of Postponement were NOT known nor followed during the time of Christ!
9. Sighting of the New Moon
The movements of the heavenly bodies are very precise — more precise than any clock on earth. Therefore, God’s calendar system is likewise very precise, predictable, and calculable!
Since there was already an existing accurate calendar even in the days of Noah, he did NOT have to sneak out from his room in the ark to “sight” for the new moon each month for all the 12 months he stayed in the ark! Please remember that he and his family stayed in the ark for over a year — 12 calendar months plus 10 days to be exact! (Compare Genesis 7:11 and Genesis 8:13, 14).
During the time of Moses, Joshua, the Judges, and during the time of the Kings of Israel and Judah, there was no command for them to go to the mountains and “sight” for the New Moon. Why? They already had a precisely calculated calendar. What is our solid biblical proof?
David and Jonathan, as a case in point, already knew when the new moon would occur — even without any sighting activity from anyone! Notice these two very telling biblical verses:
Indeed tomorrow is the New Moon, and I should not fail to sit with the king to eat. But let me go, that I may hide in the field until the third day at evening.
~1 Samuel 20:5
Then Jonathan said to David, ‘Tomorrow is the New Moon; and you will be missed, because your seat will be empty.’
~1 Samuel 20:18
Even today, none among us (if ever) need to go out of our way, find a mountain and “sight” for the new moon! Why? We already have a calculated calendar, which tells us the exact date — many days, months, and years in advance. Because the calculated calendar is so precise, it has been automatically programmed into our watches, computers, and other devices. In fact, there are computer programs that can determine the exact time and date a thousand years in advance, or calculate the date and time going back thousands of years!
The Start of the Practice of “Sighting” for the New Moon
So, WHEN did this practice of “sighting” for the New Moon begin? History tells us that it started with the Jews coming from the Babylonian captivity. It was part of the pagan practices, which God had previously warned Israel against following:
And take heed, lest you lift your eyes to heaven, and when you see the sun, the moon, and the stars, all the host of heaven, you feel driven to worship them and serve them…
If there is found among you… a man or a woman … who has gone and … worshiped them, either the sun or moon or any of the host of heaven, which I have not commanded, and it is told you, and you hear of it, then you shall inquire diligently. And if it is indeed true … then you shall bring out to your gates that man or woman who has committed that wicked thing, and shall stone to death… Whoever is deserving of death shall be put to death on the testimony of two or three witnesses…
Even righteous Job was very careful to avoid such practice:
If I have observed the sun when it shines, Or the moon moving in brightness, So that my heart has been secretly enticed…
Brief History of the Crescent Moon Fascination
Fascination with the crescent of the moon has been an ancient pagan practice. There is a Hebrew word for crescent, but it is never used (not even once) when referring to the new moon. Rather, the Hebrew word used is “Rosh Chodesh” which literally means “head of the month” or beginning of the moon cycle. There is NO biblical evidence that the Israelites ever visibly sighted a crescent before the Babylonian captivity. (By depending solely on visual observation of the crescent, we can sometimes be one or two days late in reckoning the start of the month.)
But there is a major religious group in this world today which reckons time through a Lunar Calendar system and gives much importance to the crescent of the moon, even as conspicuously depicted in their symbols, their flag, and on top of all their places of worship.
[Scientifically speaking, it takes anywhere from 18-24 hours after astronomical new moon (conjunction) when the first crescent becomes visible. The start of the biblical new moon (or month) is usually the day that begins with the sunset after lunar conjunction or astronomical new moon. This should normally coincide with the moon’s first visible sliver of light. But if the conjunction is too close to the following sunset, using that sunset to start the month may begin the month a day too early. In other words, that previous month has 30 days, instead of 29. In such cases, the next day’s sunset starts day one of the new month.]
Concerning certain practices, God bluntly told Israel:
YOUR New Moons and YOUR appointed feasts My soul hates; They are a trouble to Me, I am weary of bearing them.
NOTE: Notice how God contrasts “YOUR FEASTS” with “MY FEASTS” (in Leviticus 23:2)
Is It Wrong to “Sight” the Moon or Use Visual Observation?
Some might ask, so is it wrong to sight the moon? No. After the Babylonian captivity, to re-establish and confirm the accuracy of their calendar, both calculation and observation were used during the Second Temple period, where the Calendar Court received two or three witnesses who have seen the new moon. After cross-examining the witnesses plus checking with their calculations, the temple priests proclaimed the new moon throughout the land. But once firmly established, God did not intend this to be a regular ritual.
Today, we realize that there may be some sincere and converted people of God who do not have a correct calendar, a computer, or Internet access. They may sight the new moon (not to worship it), but to accurately determine God’s appointed times, His Feasts. They may also look at the moon to confirm whether it is really a full moon on the opening of the spring and autumn Festivals. These people are not sinning, nor are they in violation of Scriptures.
10. Unbiblical Reckoning of the Start of the Sacred Year
Sadly, the present Hebrew Calendar system does not follow the God-ordained principle of starting the Sacred Year with the New Moon ON or AFTER the Spring Equinox. Rather, it sometimes arbitrarily, or even conveniently starts the year with the new moon closest to the Spring Equinox WITHOUT any biblical basis for doing so. You may also observe that there are years (for example, 2016 and 2019) when the modern Hebrew Calendar correctly starts the year — NOT with the closest new moon to the Spring Equinox — but in fact, with the farthest new moon (but the first new moon after the Spring Equinox.) Talk about inconsistency!
The start of the Sacred (or Hebrew) Calendar Year should never begin before the Spring Equinox because such season before it is still part of winter. The complete cycle (Hebrew tekufah) of the Calendar Year should first be ruled by our circuit around the sun, the “greater light” with a demarcation point at the Spring Equinox. When fully completed, then secondly, the new moon cycle, the “lesser light” is now to be factored in.
Below are actual examples of such violations in the recent past, present, and in the coming years. Please remember that the Spring Equinox in our current era is on March 21. Please note the premature starting dates:
- 2007: Hebrew Calendar declared the start of the year on March 20
- 2010: Hebrew Calendar declared the start of the year on March 16
- 2013: Hebrew Calendar declared the start of the year on March 12
- 2018: Hebrew Calendar declared the start of the year on March 17
- 2021: Hebrew Calendar declared the start of the year on March 14
All these stated Calendar Years began the Sacred Year before the Spring Equinox (March 21). In all of these cases, it appears that the required 13th month was not properly added that year.
Ignoring God’s Principle Which Means “Completion”
The Hebrew word used is the principle of “tekufah” which actually means the COMPLETION of a [time] cycle, as discussed in Part 3 of this article series. This is based on Exodus 34:22, among other verses.
NOTE the following biblically defined guidelines or parameters:
- A Day ends at sunset (NOT at any hour in the afternoon)
- A Week ends at Sabbath sunset (NOT at any hour on a Sabbath afternoon)
- A Month ends at or very soon after new moon conjunction (NOT at any point in the last quarter of the moon cycle)
- Similarly, a Year should end at the new moon ON or AFTER the Spring Equinox (NOT at any point in winter or still approaching the end of the old solar year cycle). By the way, there is absolutely no biblical verse which supports the yearly cycle beginning before the Spring Equinox.
Therefore, to begin the start of the Sacred Calendar Year at any other time before the Spring Equinox VIOLATES the principle of “tekufah” which means COMPLETION of a cycle.
These rules are not new. Even the ancient Babylonian Calendar (and the Persian calendar) supports this rule of always starting their year from the new moon on or after the Spring Equinox. It started reckoning years this way beginning in 499 B.C. They seem to have faithfully retained the Calendar Rules from the original Hebrew Calendar, introduced during the time of Daniel in Babylon (discussed in Part 3 of this article series). Various Hebrew Calendar researchers and experts such as Frank Nelte, Herb Solinsky, and Don Esposito (among others) unanimously affirm these important principles concerning calendar matters. Even the recent discoveries in the 12th cave at Qumran of more Dead Sea Scrolls confirm the fact of the tekufah time reckoning. For more details, you may also check the Patterns of Evidence website.
This Roman Year of 2018 is Unusual in Some Respects
First, there are two Full Moons in both months of January and March. Second, the New Moon in March (the 17th) occurs before the Spring Equinox of March 21.
Therefore, by God’s Calendrical Rules, the true start of the Sacred Calendar Year is on the next new moon, which occurs April 17. In other words, this year is a special 13-month year in the 19-year time cycle. From the Spring Equinox of March 2017 to the Spring Equinox of 2018, there are actually 13 new moons. You can easily verify this with NASA, US Naval Observatory, or TimeandDate.com. The implications for this is that the Spring and Fall Festivals would be a month later than usual. This sometimes happens when an intercalary (or 13th) month is added to keep God’s Festivals in their proper seasons.
God’s Festival Dates for the Year 2018
The Official Data for this year 2018 is from NASA (all based on Jerusalem time):
(VERNAL EQUINOX, March 20, 6:15 p.m. — or March 21, since after sunset)
(AUTUMNAL EQUINOX, September 23, 3:54 a.m.)
(First astronomical new moon after the Spring Equinox: April 16, 3:57 a.m.)
(Seventh astronomical new moon after the Spring Equinox: October 9, 5:47 a.m.)
- [First Day of Sacred Year (First Day of Abib/Nisan): April 17]
- Passover: April 30 (observed the previous evening [April 29])
- [Night to Be Much Observed: April 30 in the evening (FULL MOON)]
- Days of Unleavened Bread (DUB): May 1 to 7
- Pentecost: June 24
- Feast of Trumpets (First Day of Ethanim/Tishri): October 11
- Day of Atonement: October 20
- [Opening Night Service: October 24 in the evening (FULL MOON)]
- Feast of Tabernacles: October 25 to 31
- Last Great Day (The Eighth Day): November 1
[NOTE: Since the cycle of the moon is very precise, a secret counter-checking formula is to count 177 days (29.5 x 6) from the first new moon after the Spring Equinox to the new moon (or first day) of the seventh month. The number 29.5 represents the approximate number of days in a complete moon cycle, while 6 is the number of months from the first month after the Spring Equinox to the seventh month around the Autumn Equinox.]
Possible Objections or Justifications
Did Paul not say, we are to follow the oracles given to the Jews? (Romans 3:1-2)
ANSWER: The word “oracles” is from the Greek word, “logion” [Strong’s #G3051]. It basically means a brief utterance, but it can also include the Mosaic law. However, it did not mention the Jewish Calendar, except perhaps the verses governing its guidelines. But one very clear point to remember here is that it certainly did not include the present Jewish Calendar with its corruptions and Rules of Postponement, which was only introduced centuries later by Hillel II in A.D. 359 and further refined by Maimonides in the 12th century — long after the apostle Paul had died.
Didn’t Christ also say, the Scribes and Pharisees sit in Moses’ seat, and whatsoever they bid you to do, do, but do not do [follow] after their works? (Matthew 23:2-3)
ANSWER: Sadly, the Greek to English translation is not very accurate, somewhat confusing, and above all contradictory. A better translation is based on Matthew’s original Hebrew:
Therefore, whatever he [Moshe (Moses)] commands you to observe; that observe and do — but do not follow the takanot [rules enacted by the Pharisees which change or negate biblical law] and ma’asim [acts of the rabbis which serve as a legal precedent for righteous behavior] of the Prushim [Pharisees] — for they say [they follow Moshe (Moses)] but they do not do [what Moshe (Moses) says to do!]…
~Matthew 23:3, The Chronological Gospels
Obviously, the correct Hebrew version is what Christ really meant, because starting from verse four to the end of chapter 23 (rather than upholding their teachings and practices as what the verse seems to mean), it was instead a barrage of fiery condemnation of the hypocritical words and works of the Scribes and Pharisees.
If we do not reckon the Festivals accurately, like the Feast of Trumpets (symbolizing the return of Christ on earth, Matthew 24:29-31; 1 Thessalonians 4:16-18), we will likely also fail in that very important event in our life — in the same way that the Jews failed to recognize the identity of the true Messiah simply because they changed the true Passover date to the 15th of the month.
Let us not forget that the Great Deceiver [Satan] has not only both blinded and confused, but has also planted lies with regards to the reckoning of God’s Sacred Calendar. His handiwork is manifest in the ignorance, confusion, and lies that presently exists in this world — sadly including the Calendar.
Let us, therefore, be wise, knowing and doing the will of God in our lives now. God has promised that “those who are wise shall shine like the brightness of the firmament, and those who turn many to righteousness like the stars forever and ever” (Daniel 12:3). | <urn:uuid:d95e763e-7f15-4631-9a03-16ad16ae557d> | CC-MAIN-2019-47 | http://www.biblicaltruths.com/the-shocking-corruptions-in-the-present-hebrew-calendar/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671548.98/warc/CC-MAIN-20191122194802-20191122223802-00457.warc.gz | en | 0.947224 | 5,905 | 2.796875 | 3 |
Every visitor who comes to Hatteras or Ocracoke Islands from the north will encounter Oregon Inlet. Separating the small barrier islands of the southern Outer Banks from Bodie Island and the northern Outer Banks communities of Nags Head, Kill Devil Hills, Kitty Hawk and Manteo, Oregon Inlet is arguably the most visited and traversed inlet on the islands.
Always shifting and seemingly always a focal point of local history or controversy, Oregon Inlet is nonetheless a breathtaking area of open water that connects the Atlantic Ocean and the Pamlico Sound, providing an easy route for charter fishermen, commercial fishermen, and day trippers to get out and explore the waters. With lovely ocean or soundside beaches located on either side, and a historic US Coast Guard station, (formerly a Lifesaving station), perched at the edge of the inlet's shores, a visit to Oregon Inlet typically includes some enjoyable beach time, good fishing, and simply incredible views.
History of Oregon Inlet
Like most of the past and present inlets along the Outer Banks, Oregon Inlet was created when a violent hurricane passed through the area in 1846, carving a wide, watery gash between Bodie Island and Pea Island. During the storm, a ship caught in the Pamlico Sound, the Oregon, witnessed the sudden formation of the new inlet, and after the storm had passed and the crew got their bearings, they headed to the mainland to spread the word of the wide new inlet that now separated Hatteras Island from the rest of the Outer Banks. As a result of their initial accounts, the inlet was named after the ship that spread the news, and it has been called Oregon Inlet ever since.
Less than 40 years after its formation, the US Lifesaving Service, the predecessor of the modern-day US Coast Guard, was in the midst of a lengthy process of erecting lifesaving stations all along the Outer Banks, and it decided to build a small station along the borders of Oregon Inlet. The service had received funds to build 29 stations along the East Coast, and this station, its planners reasoned, would protect passing mariners along both the oceanfront and the soundside. With easy access to both bodies of water, enabling rescue boats to reach the ocean quickly and easily, an inlet-bordering station seemed to be a smart move.
The station was built in 1883 and, unfortunately, had to be relocated just five years later. One of the defining characteristics of an inlet is its ever-shifting geography, and as the inlet moved, the original station came closer and closer to falling into the water.
The station was moved in 1888, but it suffered another bout of bad luck when a hurricane passed through and leveled it less than a decade later. Another station was quickly erected further west, and this 1897 station had a much longer and more productive life. The Oregon Inlet station enjoyed decades of service with the US Coast Guard before it was decommissioned with the arrival of more modern stations and, eventually, abandoned in 1988, when the Coast Guard vacated the premises entirely and turned the station and the 10 acres that surrounded it over to Dare County.
With the station no longer of any use to either the Coast Guard or the county, it simply sat at the edge of Oregon Inlet, exposed to the elements, drifting sand, and even passing vandals. For the next 20 years, visitors who crossed the Herbert C. Bonner Bridge en route to Hatteras Island would barely notice the deteriorating structure that was virtually buried by piles of drifting sand.
This all changed in 2000 and 2001, when Dare County gave the property to the state, which in turn assigned it to the North Carolina Aquariums, and a plan took shape to bring the historic 11,361-square-foot wooden station back to life. After years of planning, work began in 2008 to raise the station onto pilings, renovate both the exterior and interior, and return it to its former glory.
The careful renovations were completed by 2011, and today, visitors to the beaches of Oregon Inlet would never guess that the historic structure was ever abandoned and approaching total disrepair.
Oregon Inlet is also famous for giving rise to the NC Ferry System that serves the coastal islands of the Outer Banks. When the inlet was formed, after all, Hatteras Island was already home to a healthy population of tradesmen, lifesaving station employees, and local villagers, and in the very same 1846 storm that created Oregon Inlet, Hatteras Inlet had also opened up at the southern end of the island, just south of Hatteras Village. This meant that the population was essentially stranded, and the only way to get on or off the island was by boat.
During the 1850s and beyond, this wasn't necessarily a concern, since with no roads or vehicles, a boat was the only logical method of transport anyway. However, as the outside world progressed and vehicles became more common, it became clear that Hatteras Islanders and the handful of people who wanted to visit the southern Outer Banks needed a way to get across Oregon Inlet.
As a result, an enterprising local, Captain Toby Tillet, began to make ferry runs across Oregon Inlet with a tug boat and barge that could carry a small number of vehicles across the water. These initial ferry runs began in the 1920s, and by 1934, the North Carolina Highway Commission had taken notice and decided to subsidize these ferry trips, keeping tolls affordable for residents and visitors alike. By 1950, Tillet's entire ferry business was sold to the state, and soon other state-sponsored ferry operations began to pop up along the Outer Banks, most notably across the Croatan Sound, which became the first official NC Ferry route.
Today, the NC ferries transport over 1.1 million vehicles across North Carolina's coastline and bodies of water every year, but the NCDOT's Ferry Department has humble roots, beginning less than a century ago with Tillet's tug boat making runs across Oregon Inlet.
By the late 1950s, ferry traffic to and from Hatteras Island had increased to the point that a bridge was necessary, and the Herbert C. Bonner Bridge officially opened to vehicles in 1962, spanning Oregon Inlet and providing commuters and day trippers with an easy way on and off the island.
In recent years, Oregon Inlet has become a point of mild controversy. In keeping with the nature of inlets, which are continuously expanding or retracting, the inlet has begun to close up, and efforts have been made by the state and federal governments to commence regular dredging to keep the inlet wide open for commercial and charter fishermen. (Dredging is essentially a method of digging up the shallow sand on the ocean floor and tossing the sand particles into the water, where they are carried away into the ocean or the Pamlico Sound.) For some folks, the costly endeavor may simply be delaying the inevitable closing of the inlet, while other parties believe the efforts are a necessary step to keep the local fishing industry intact. With no definitive resolution in sight, annual dredging efforts are still underway, and off-season or even summer season visitors passing over the inlet on the Bonner Bridge may spot these huge dredging ships, spouting tons of sand and water into the air above.
The Geography of Oregon Inlet
Oregon Inlet is one of the larger Outer Banks inlets at 2.5 miles wide, a distance which changes regularly with shifting sandbars and incoming or outgoing tides. The inlet is bordered on the southern side by Pea Island, which has a protective rock barrier both to limit erosion near the Oregon Inlet Coast Guard station and to serve as a sort of bulkhead that keeps the inlet deep and traversable. On the northern side of the inlet lies a wide waterfront beach, bordered by a series of saltwater canals that snake through an expansive soundside marsh roughly half a mile across.
Also on the soundside, visitors will find a number of small sandbars and islands, a handful of which even have deer stands or small rustic cabins that are privately owned and can only be accessed by a boat.
Located close to the Gulf Stream for exceptional offshore fishing, Oregon Inlet is a popular waterway for commercial and charter fishermen alike, who make daily runs under the Bonner Bridge to the open ocean waters.
Things to do along Oregon Inlet
As at all Outer Banks inlets, the most popular activity along the perimeter of Oregon Inlet is fishing, and deep sea fishermen and surf fishermen alike will find plenty to love about this area.
Many of the charter boat businesses on the Outer Banks pass through Oregon Inlet to get to the Gulf Stream, particularly the businesses that cater to the central Outer Banks regions. A number of boats dock along the Wanchese waterfront, just a 15-minute ride away, or even right next to the inlet itself at the Oregon Inlet Marina, which sits on the inlet's northern banks. From here, it takes just moments to pull out of the boat slip and head towards open water, making fishing trips from this marina one of the quickest runs out of the sound and into the open ocean.
Because of this, visitors who travel over the Bonner Bridge in the early morning hours or in the late afternoon will notice a distinctive line of charter boats parading to or from the ocean across Oregon Inlet. Visitors staying in the central and northern Outer Banks who are interested in a half day or full day of charter fishing would be well advised to scope out the charter boats that leave from Oregon Inlet. Located close to the Gulf Stream, and offering an easy trip from the harbor to the inlet, these businesses are generally the most convenient for northern Outer Banks vacationers.
The Oregon Inlet Marina is also home to a number of local tour boats that take visitors out on scenic cruises along the water. Whether vacationers embark on a wildlife tour, a dolphin tour, or simply a little sunset spin around the Pamlico Sound, Oregon Inlet provides an ideal gateway to a number of on-the-water tours and fishing adventures.
Vacationers who bring their own boat along can also be spotted anchored or trolling around the open Oregon Inlet waters, as the pilings of the Bonner Bridge can attract a lot of tasty game species, including sheepshead and even amberjack, both of which love feeding and congregating near structures in deep water.
For visitors who love to fish but don't necessarily need a boat ride to do so, there are also plenty of locations along the inlet to cast a line.
One of the most popular and unique spots along Oregon Inlet to go fishing is the Bonner Bridge itself. The southern end features a small walkway which is protected from vehicular traffic, but allows anglers to drop a line deep into the inlet's waters. In the summertime, it's not unusual to see several dozen anglers congregated next to the bridge, hoping for a big catch.
Also on the southern end of Oregon Inlet, anglers flock to the small soundside beach as well as the bulkheaded inlet and ocean-facing beaches for some fantastic surf fishing. Directly adjacent to the southern border of Oregon Inlet lies a large public parking area, perfect for anglers who want to enjoy a full day or evening of waterfront fishing. Seldom full and with plenty of elbow room even in the prime summer season, this area is ideal for surf fishermen who love a great view, as well as an easy stroll from the car directly to the water.
The northern side of Oregon Inlet attracts surf fishermen as well, as this area features a wide beach that borders the inlet from the ocean to the soundside. Unless an angler is a good hiker, a 4WD vehicle is all but required to access the area and bring all the necessary fishing gear along. An entry ramp is located on the northern side of the Bonner Bridge, bordering the National Park Service station and Oregon Inlet campground. Once on the beach, many anglers will enjoy front-row access to the big game fish that frequent the inlet waters, including bluefish, mackerels, spots, croakers, mullets, cobias, and even the occasional shark.
A beach driving permit is required to drive on the beaches along Oregon Inlet, and it can be purchased from the National Park Service station that borders the campground and access ramp. Visitors are also advised to access the beach only in a 4WD vehicle. While the inlet-bordering beaches are flat and generally consist of hard-packed sand, the ocean paths that lead out to the beach are soft and can only be traversed with a vehicle that has 4WD. In addition, make sure you air down your tires to 15-20 psi to further protect your vehicle from getting stuck in the soft sand.
While fishing is clearly one of the most popular Oregon Inlet activities, it certainly isn't the only way to enjoy the area. In recent years, kayakers and birders have discovered the northern soundside borders of Oregon Inlet and their exceptional maze of canals that wind through the grassy marshlands. Here, birders can spot dozens of white egrets, pelicans, seasonal white swans, and great blue herons, all of which carve out temporary homes along the bait-fish-rich marshy sound waters.
Kayakers will be challenged navigating through the small canals and finding their way in and out of the sound, and birders will find they have incredible up-close-and-personal views of some of the Outer Banks' most famous local species, all from the comfort of a small boat, or simply on foot with a pair of high waders. Birders should also consider a trip to the southern shoreline of the inlet, as the small sandbars that lie just offshore are popular congregation locales for dozens of cormorants and pelicans, especially in the off-season and winter months.
Sightseers, photographers, and history buffs will also appreciate a trip to Oregon Inlet for the incredible views and unique Outer Banks landscapes.
While the historic US Coast Guard station isn't currently open to the public, visitors can still pull into the public parking area and walk the perimeter of the building admiring the intricate handiwork that went into restoring the station to its former glory.
Also on the northern side of the inlet, sunset lovers should check out the small soundside beach located just at the end of the Herbert C. Bonner Bridge. When the sun sets, this parcel of water turns multiple shades of orange, pink and purple, interrupted only by the occasional sandbar peeking out above the inlet waters.
On the northern side, the beaches are an incredible sight as visitors who drive out to the soundside of the inlet will find themselves almost directly under the Bonner Bridge. A strange but exhilarating experience, vacationers can literally watch the traffic pass by as they enjoy their own secluded patch of shoreline. This vantage point is also ideal for photos of the inlet itself, the Bonner Bridge spanning across the inlet, and the small, tucked away Coast Guard station that stands watch over the watery area.
Tips and Tricks for Visiting Oregon Inlet
- The 4WD accessible northern beaches of Oregon Inlet are seasonally open, and generally close from mid-spring until early fall for threatened species breeding seasons. To see if the beach is currently open to vehicles or pedestrians, visit the National Park Service's website. Remember that this closure extends not just to vehicles, but to kayaks and boats too, and all vessels are encouraged to pay attention to any shoreline closure signs when making an expedition around the inlet.
- While the soundside borders of Oregon Inlet are ideal for kayakers or explorers, everyone is advised to steer clear from the rushing waters of the inlet itself. This section of the inlet produces fast currents and deep waters, which flow well out into the sound and the ocean, and should only be traversed by a motorized boat.
- While the views of Oregon Inlet from the top of the Bonner Bridge are incredible to be sure, drivers are reminded to maintain the 55 mph speed limit, and not stop or slow down on the top of the bridge. There is limited sight distance on either side of the top of the bridge, and a slowed or stopped vehicle can cause a major traffic hazard.
- Searching for a fantastic photo opt, or a much needed break from the road? Simply pull over to the public parking area on the northern side of the inlet and stretch your legs. From here, visitors can stroll around the original late 1800s Coast Guard station, take a walk to the ocean or soundside beaches, or hop up to the small walkway that borders the Bonner Bridge. Any section of this area is worthy of exploring, and can provide some fantastic photos to boot.
- Kayakers who want a closer view of the inlet but want an out-of-the-way route can launch from the New Inlet boat launch located approximately 3-4 miles away from the Bonner Bridge on Pea Island. This area was once a legitimate inlet until it was filled in, and the area offers incredible access to the Pamlico Sound and a moderate paddle towards the inlet for both kayaks and small skiffs.
At a little over 150 years old, it's amazing that Oregon Inlet has both a deep and influential history, as well as a devout following among recreational and commercial fishermen alike. In addition, the beaches bordering the inlet as well as the small metal walkway that extends above the water, are incredible fishing grounds as well as home to some of the most unique and incredible views on the Outer Banks.
On your next vacation, whether you're looking for a truly out-of-the-way place to enjoy the beach, or just need an amazingly scenic break from the road, take a stop by Oregon Inlet. A small collection of history, active marinas, and amazing views, Oregon Inlet is an area of the Outer Banks that virtually every visitor can easily take in and admire.
Where is Oregon Inlet?
When was Oregon Inlet created?
Where did the name Oregon Inlet come from?
The legend is that the inlet is named after a sailing vessel from Edenton that first navigated through the newly created channel during an 1846 storm. While this explanation isn’t for certain, the inlet was decidedly not named after the state of Oregon, which wasn’t established until 13 years after Oregon Inlet formed.
How old is Oregon Inlet?
Oregon Inlet is more than 160 years old.
How did Oregon Inlet form?
Oregon Inlet formed during a hurricane in the year 1846, which created a new channel of ocean and sound water in between then-Pea Island and Bodie Island.
How wide is Oregon Inlet?
Oregon Inlet is 2.5 miles wide, which makes it one of the widest inlets on the Outer Banks.
How long is Oregon Inlet?
Oregon Inlet is 12,866 ft. long from Bodie Island to the edge of Hatteras Island.
How do you get across Oregon Inlet?
Visitors travel across Oregon Inlet via the Herbert C Bonner Bridge, or simply the “Bonner Bridge,” which connects Hatteras Island to the rest of the central and northern Outer Banks.
Is there a ferry at Oregon Inlet?
There is no ferry at Oregon Inlet, but there used to be ferry service for visitors and residents heading to Hatteras Island prior to 1963, before the Bonner Bridge was established.
What are the houses in the middle of Oregon Inlet?
There are several small and privately owned islands in the middle of Oregon Inlet, where a handful of houses can be found. At least one of these is a former hunt club dating back more than a century, while several others are more modern residences that were built within the past 20-30 years. The homes and islands are individually owned, and have no electricity except for generator power. They are also only accessible by boat.
What are the islands in the middle of Oregon Inlet?
Many of the islands in Oregon Inlet are known as “dredge spoil” islands, and are sandbars from discarded dredge sand and material that built up over the years. Today, some of the oldest islands can have trees, high sand dunes, vegetation, and native critters including birds, reptiles, and mammals such as foxes.
When was the Bonner Bridge built?
The original Bonner Bridge was built in 1963. The new Bonner Bridge broke ground in March of 2016.
When will the new Bonner Bridge be complete?
The new Bonner Bridge is scheduled to be completed by the fall of 2018.
Is the Bonner Bridge safe?
The Bonner Bridge is safe to drive on, thanks to numerous repairs and inspections over the years, however the replacement bridge being built is certainly needed. The original Bonner Bridge had a 30-year lifespan that it surpassed in 1993.
How long will the new Bonner Bridge last?
The new Bonner Bridge is slated to have a lifespan of 100 years.
How long is the Bonner Bridge?
The original Bonner Bridge is 2.7 miles long. The new Bonner Bridge is slightly longer at 2.8 miles long, not including the on ramps.
How do you access Oregon Inlet?
Visitors can access Oregon Inlet with a kayak or boat via the launching ramps next to the Oregon Inlet Fishing Center on Bodie Island. Beach-goers can head to the inlet via the parking area on the southern side of the Bonner Bridge, or via the beach access / ORV ramps near the Oregon Inlet Campground.
What is the old building at the southern end of Oregon Inlet?
The historic structure at the southern end of Oregon Inlet is an original U.S. Life-Saving station, (and later Coast Guard station), that was built in 1898.
Where is the Oregon Inlet Fishing Charter?
The Oregon Inlet Fishing Center is located next to the northern terminus of the Bonner Bridge.
How do you reserve an Oregon Inlet fishing trip?
Visitors can reserve an inshore or offshore fishing trip by calling the Oregon Inlet Fishing center directly, or by contacting a local charter fishing business that operates out of the fishing center’s docks.
Is there gas and food at the Oregon Inlet Fishing Center?
The Oregon Inlet Fishing Center has gas available for mariners - (not for vehicles) - as well as an on-site convenience store for drinks, snacks, and fishing gear and supplies.
What can you catch when fishing in Oregon Inlet?
Oregon Inlet connects with the sound and ocean waters, and as such, it is a very popular spot for a wide array of inshore saltwater species. Popular catches include bluefish, flounder, drum, croaker, and fish that are attracted to structures, (like the Bonner Bridge pilings), such as sheapshead.
How do you fish in Oregon Inlet?
Visitors can access the Oregon Inlet waters via the two beaches on either side of the inlet, an inshore fishing charter, a privately owned boat or kayak, or by heading to the fishing deck on the southern end of the Bonner Bridge.
Can you fish from the Bonner Bridge?
Fishing is allowed from the southern end of the Bonner Bridge on the designated pedestrian walkway. When the new bridge is complete in 2018, the old structure will be torn down, except for a portion of the southern end, which will continue to be used for fishing.
Where are the beach accesses near Oregon Inlet?
Visitors can park at the southern end of Oregon Inlet next to the original Coast Guard Station to reach the beach on foot. On the northern end of the inlet, visitors can park at the Oregon Inlet Campground, or use ORV Ramp 4 to access the beaches bordering the inlet with a 4WD vehicle.
Can you kayak in Oregon Inlet?
Kayaking is allowed in Oregon Inlet, and a boat ramp / launching site is located next to the Oregon Inlet Fishing Center.
Can you swim in Oregon Inlet?
Visitors should not swim in Oregon Inlet as the currents that flow through the channel are deceptively swift.
Can you travel through Oregon Inlet?
Boats can travel through Oregon Inlet, as the inlet is continually dredged to prevent shoaling.
How often is Oregon Inlet dredged?
Oregon Inlet is continually dredged year-round to keep it open to recreational mariners and emergency vessels.
Is there a Lighthouse at Oregon Inlet?
The Bodie Island Lighthouse is located just a couple miles north of Oregon Inlet.
Can you take photos of Oregon Inlet from the Bonner Bridge?
Visitors are urged to maintain the 55 mph speed limit on the Bonner Bridge, and not slow down at the top to take photos, in order to ensure the safety of other vehicles travelling on the bridge.
Are there shells on the Oregon Inlet beaches?
Shelling can be very good by the Oregon Inlet beaches, and especially near the 4WD accessible beaches on the northern side of the inlet. Shelling is also good at the dredge spoil islands in the middle of the inlet, which can be reached by a boat or kayak.
Can you camp at Oregon Inlet?
A campground that is managed by the National Park Service is located just north of Oregon Inlet. The Oregon Inlet campground is open seasonally, and can accommodate tent and RV campers.
Can you get to the Gulf Stream from Oregon Inlet?
Oregon Inlet is the closest inlet to the Gulf Stream for many vacationers in the central Outer Banks. A number of offshore charter boats use Oregon Inlet to access the Gulf Stream on fishing trips.
Can you drive on the beaches next to Oregon Inlet?
Driving on the beaches at the northern end of Oregon Inlet is seasonally permitted, as the shoreline is part of the Cape Hatteras National Seashore. A beach driving permit from the National Park Service is required, and stretches of shoreline may be seasonally closed for bird nesting activity.
Do you need a permit to drive on the Oregon Inlet beaches?
A permit is required to drive on the beaches that border Oregon Inlet. A beach driving permit can be picked up at the local ranger station just north of the Bonner Bridge, or online at the National Park Service website. | <urn:uuid:f91ae769-0fa9-446b-b703-9a7c55fe09f8> | CC-MAIN-2019-47 | https://www.outerbanks.com/oregon-inlet.html | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665985.40/warc/CC-MAIN-20191113035916-20191113063916-00499.warc.gz | en | 0.95548 | 5,584 | 2.71875 | 3 |
This article needs additional citations for verification. (January 2016) (Learn how and when to remove this template message)
A pyramid scheme is a business model that recruits members via a promise of payments or services for enrolling others into the scheme, rather than supplying investments or sale of products. As recruiting multiplies, recruiting becomes quickly impossible, and most members are unable to profit; as such, pyramid schemes are unsustainable and often illegal.
Concept and basic modelsEdit
In a pyramid scheme, an organization compels individuals who wish to join to make a payment. In exchange, the organization promises its new members a share of the money taken from every additional member that they recruit. The directors of the organization (those at the top of the pyramid) also receive a share of these payments. For the directors, the scheme is potentially lucrative—whether or not they do any work, the organization's membership has a strong incentive to continue recruiting and funneling money to the top of the pyramid.
Such organizations seldom involve sales of products or services with value. Without creating any goods or services, the only revenue streams for the scheme are recruiting more members or soliciting more money from current members. The behavior of pyramid schemes follows the mathematics concerning exponential growth quite closely. Each level of the pyramid is much larger than the one before it. For a pyramid scheme to make money for everyone who enrolls in it, it would have to expand indefinitely. This is not possible because the population of Earth is finite. When the scheme inevitably runs out of new recruits, lacking other sources of revenue, it collapses. Since the biggest terms in a geometric sequence are at the end, most people will be in the lower levels of the pyramid; accordingly, the bottom layer of the pyramid contains the most people. The people working for pyramid schemes try to promote the actual company instead of the product they are selling. Eventually, all of the people at the lower levels of the pyramid don’t make any money; only the people at the top turn a profit.
People in the upper layers of the pyramid typically profit, while those in the lower layers typically lose money. Since most of the members in the scheme are at the bottom, most participants will not make any money. In particular, when the scheme collapses, most members will be in the bottom layers and thus will not have any opportunity to profit from the scheme; still, they will have already paid to join. Therefore, a pyramid scheme is characterized by a few people (including the creators of the scheme) making large amounts of money, while subsequent members lose money. For this reason, they are considered scams.
The "eight ball" modelEdit
Many pyramids are more sophisticated than the simple model. These recognize that recruiting a large number of others into a scheme can be difficult, so a seemingly simpler model is used. In this model each person must recruit two others, but the ease of achieving this is offset because the depth required to recoup any money also increases. The scheme requires a person to recruit two others, who must each recruit two others, and so on.
Prior instances of this scheme have been called the "Airplane Game" and the four tiers labelled as "captain", "co-pilot", "crew", and "passenger" to denote a person's level. Another instance was called the "Original Dinner Party" which labeled the tiers as "dessert", "main course", "side salad", and "appetizer". A person on the "dessert" course is the one at the top of the tree. Another variant, "Treasure Traders", variously used gemology terms such as "polishers", "stone cutters", etc.
Such schemes may try to downplay their pyramid nature by referring to themselves as "gifting circles" with money being "gifted". Popular schemes such as "Women Empowering Women" do exactly this.
Whichever euphemism is used, there are 15 total people in four tiers (1 + 2 + 4 + 8) in the scheme—with the Airplane Game as the example, the person at the top of this tree is the "captain", the two below are "co-pilots", the four below are "crew", and the bottom eight joiners are the "passengers".
The eight passengers must each pay (or "gift") a sum (e.g., $5,000) to join the scheme. This sum (e.g., $40,000) goes to the captain, who leaves, with everyone remaining moving up one tier. There are now two new captains so the group splits in two with each group requiring eight new passengers. A person who joins the scheme as a passenger will not see a return until they advance through the crew and co-pilot tiers and exit the scheme as a captain. Therefore, the participants in the bottom three tiers of the pyramid lose their money if the scheme collapses.
If a person is using this model as a scam, the confidence trickster would take the majority of the money. They would do this by filling in the first three tiers (with one, two, and four people) with phony names, ensuring they get the first seven payouts, at eight times the buy-in sum, without paying a single penny themselves. So if the buy-in were $5,000, they would receive $40,000, paid for by the first eight investors. They would continue to buy in underneath the real investors, and promote and prolong the scheme for as long as possible to allow them to skim even more from it before it collapses.
Although the "captain" is the person at the top of the tree, having received the payment from the eight paying passengers, once they leave the scheme they are able to re-enter the pyramid as a "passenger" and hopefully recruit enough to reach captain again, thereby earning a second payout.
Matrix schemes use the same fraudulent, unsustainable system as a pyramid; here, the participants pay to join a waiting list for a desirable product, which only a fraction of them can ever receive. Since matrix schemes follow the same laws of geometric progression as pyramids, they are subsequently as doomed to collapse. Such schemes operate as a queue, where the person at head of the queue receives an item such as a television, games console, digital camcorder, etc. when a certain number of new people join the end of the queue. For example, ten joiners may be required for the person at the front to receive their item and leave the queue. Each joiner is required to buy an expensive but potentially worthless item, such as an e-book, for their position in the queue.The scheme organizer profits because the income from joiners far exceeds the cost of sending out the item to the person at the front. Organizers can further profit by starting a scheme with a queue with shill names that must be cleared out before genuine people get to the front. The scheme collapses when no more people are willing to join the queue. Schemes may not reveal, or may attempt to exaggerate, a prospective joiner's queue position, a condition that essentially means the scheme is a lottery. Some countries have ruled that matrix schemes are illegal on that basis.
Relation to Ponzi schemesEdit
While often confused for each other, pyramid schemes and Ponzi schemes are different from each other. They are related in the sense that both pyramid and Ponzi schemes are forms of financial fraud. However, pyramid schemes are based on network marketing, where each part of the pyramid takes a piece of the pie / benefits, forwarding the money to the top of the pyramid. They fail simply because there are not sufficient people. Ponzi schemes, on the other hand, are based on the principle of "Robbing Peter to pay Paul"—early investors are paid their returns through the proceeds of investments by later investors. In other words, one central person (or entity) in the middle taking money from one person, keeping part of it and giving the rest to others who had invested in the scheme earlier. Thus, a scheme such as the Anubhav teak plantation scheme (teak plantation scam of 1998) in India can be called a Ponzi scheme. Some Ponzi schemes can depend on multi-level marketing for popularizing them, thus forming a combination of the two.
Connection to multi-level marketingEdit
According to the U.S. Federal Trade Commission legitimate MLM, unlike pyramid schemes:
- "have a real product to sell." "Not all multi-level marketing plans are legitimate. If the money you make is based on your sales to the public, it may be a legitimate multilevel marketing plan. If the money you make is based on the number of people you recruit and your sales to them, it’s probably not. It could be a pyramid scheme."
Pyramid schemes however "may purport to sell a product, but they often simply use the product to hide their pyramid structure". While some people call MLMs in general "pyramid selling", others use the term to denote an illegal pyramid scheme masquerading as an MLM.
The Federal Trade Commission warns, "It’s best not to get involved in plans where the money you make is based primarily on the number of distributors you recruit and your sales to them, rather than on your sales to people outside the plan who intend to use the products." It states that research is your best tool and gives eight steps to follow:
- Find—and study—the company's track record.
- Learn about the product.
- Ask questions.
- Understand any restrictions.
- Talk to other distributors. Beware of shills.
- Consider using a friend or adviser as a neutral sounding board, or for a gut check.
- Take your time.
- Think about whether this plan suits your talents and goals.
Some commentators contend that MLMs in general are nothing more than legalized pyramid schemes. A pyramid scheme will ask their recruiters to sign up to the business with a big front-up cost. These are different laws in every state, and they take different actions toward pyramid schemes, but multi-level-marketing is legal. Multi-level marketing lobbying groups have pressured US government regulators to maintain the legal status of such schemes.
Pyramid schemes are illegal in many countries or regions including Albania, Australia, Austria, Belgium, Bahrain, Brazil, Canada , China, Colombia, Denmark, the Dominican Republic, Estonia, Finland, France, Germany, Hong Kong, Hungary, Iceland, India, Indonesia, Iran, the Republic of Ireland, Italy, Japan, Malaysia, Maldives, Mexico, Nepal, the Netherlands, New Zealand, Norway,Peru, Philippines, Poland, Portugal, Romania, Russian Federation, Serbia, South Africa,, Singapore, Spain, Sri Lanka, Sweden, Switzerland, Taiwan, Thailand, Turkey, Ukraine, the United Kingdom, and the United States.
Pyramid schemes—also referred to as franchise fraud or chain referral schemes—are marketing and investment frauds in which an individual is offered a distributorship or franchise to market a particular product. The real profit is earned, not by the sale of the product, but by the sale of new distributorships. Emphasis on selling franchises rather than the product eventually leads to a point where the supply of potential investors is exhausted and the pyramid collapses.
Notable recent casesEdit
In 2003, the United States Federal Trade Commission (FTC) disclosed what it called an Internet-based "pyramid scam." Its complaint states that customers would pay a registration fee to join a program that called itself an "internet mall" and purchase a package of goods and services such as internet mail, and that the company offered "significant commissions" to consumers who purchased and resold the package. The FTC alleged that the company's program was instead and in reality a pyramid scheme that did not disclose that most consumers' money would be kept, and that it gave affiliates material that allowed them to scam others.
In early 2006, Ireland was hit by a wave of schemes with major activity in Cork and Galway. Participants were asked to contribute €20,000 each to a "Liberty" scheme which followed the classic eight-ball model. Payments were made in Munich, Germany to skirt Irish tax laws concerning gifts. Spin-off schemes called "Speedball" and "People in Profit" prompted a number of violent incidents and calls were made by politicians to tighten existing legislation. Ireland has launched a website to better educate consumers to pyramid schemes and other scams.
On 12 November 2008, riots broke out in the municipalities of Pasto, Tumaco, Popayan and Santander de Quilichao, Colombia after the collapse of several pyramid schemes. Thousands of victims had invested their money in pyramids that promised them extraordinary interest rates. The lack of regulation laws allowed those pyramids to grow excessively during several years. Finally, after the riots, the Colombian government was forced to declare the country in a state of economic emergency to seize and stop those schemes. Several of the pyramid's managers were arrested, and are being prosecuted for the crime of "illegal massive money reception."
The Kyiv Post reported on 26 November 2008 that American citizen Robert Fletcher (Robert T. Fletcher III; aka "Rob") was arrested by the SBU (Ukraine State Police) after being accused by Ukrainian investors of running a Ponzi scheme and associated pyramid scam netting US$20 million. (The Kiev Post also reports that some estimates are as high as US$150 million.)
In the United Kingdom in 2008 and 2009, a £21 million pyramid scheme named 'Give and Take' involved at least 10,000 victims in the south-west of England and South Wales. Leaders of the scheme were prosecuted and served time in jail before being ordered to pay £500,000 in compensation and costs in 2015. The cost of bringing the prosecution was in excess of £1.4 million.
Throughout 2010 and 2011 a number of authorities around the world including the Australian Competition and Consumer Commission, the Bank of Namibia and the Central Bank of Lesotho have declared TVI Express to be a pyramid scheme. TVI Express, operated by Tarun Trikha from India has apparently recruited hundreds of thousands of "investors", very few of whom, it is reported, have recouped any of their investment. In 2013, Tarun Trikha was arrested at the IGI Airport in New Delhi.
BurnLounge, Inc. was a multi-level marketing online music store founded in 2004 and based in New York City. By 2006 the company reported 30,000 members using the site to sell music through its network. In 2007 the company was sued by the Federal Trade Commission for being an illegal pyramid scheme. The company lost the suit in 2012, and lost appeal in June 2014. In June 2015, the FTC began returning $1.9 million to people who had lost money in the scheme.
In August 2015, the FTC filed a lawsuit against Vemma Nutrition Company, an Arizona-based dietary supplement MLM accused of operating an illegal pyramid scheme. In December 2016, Vemma agreed to a $238 million settlement with the FTC, which banned the company from "pyramid scheme practices" including recruitment-focused business ventures, deceptive income claims, and unsubstantiated health claims.
In March 2017, Ufun Store registered as an online business for its members as a direct-sales company was declared operating a pyramid scheme in Thailand. The Criminal Court handed down prison terms totaling 12,265 to 12,267 years to 22 people convicted over the scheme, which conned about 120,000 people out of more than 20 billion baht.
- Smith, Rodney K. (1984). Multilevel Marketing. Baker Publishing Group. p. 45. ISBN 0-8010-8243-9.
- "Pyramid Schemes". Federal Bureau of Investigation. Retrieved 2019-04-18.
- Commission, Australian Competition and Consumer (May 14, 2015). "Pyramid schemes". Australian Competition and Consumer Commission.
- Tracy McVeigh (2001-08-05). "Pyramid selling scam that preys on women to be banned". Guardian. Retrieved 2013-04-02.
- Zuckoff, Mitchell (10 January 2006). Ponzi's scheme - the true story of a financial legend. New York: Random House Trade Paperbacks. ISBN 0812968360.
- "Book reading by Mitchell Zuckoff at olsson's Books and Records, Washington D.C." The Film Archives. Retrieved 27 October 2016.
- Keep, William W; Vander Nat, Peter J. (2014). "Multilevel marketing and pyramid schemes in the United States: An historical analysis" (PDF). Journal of Historical Research in Marketing. 6 (2): 188–210. doi:10.1108/JHRM-01-2014-0002. Retrieved 26 March 2016.
- Valentine, Debra (May 13, 1998). "Pyramid Schemes". Federal Trade Commission. Retrieved 25 March 2016.
- Edwards, Paul (1997). Franchising & licensing: two powerful ways to grow your business in any economy. Tarcher. p. 356. ISBN 0-87477-898-0.
- "Pyramid Schemes". 18 July 2013.
- "Multilevel Marketing". 14 July 2016.
- Clegg, Brian (2000). The invisible customer: strategies for successive customer service down the wire. Kogan Page. p. 112. ISBN 0-7494-3144-X.
- Higgs, Philip; Smith, Jane (2007). Rethinking Our World. Juta Academic. p. 30. ISBN 978-0-7021-7255-7.
- Kitching, Trevor (2001). Purchasing scams and how to avoid them. Gower Publishing Company. p. 4. ISBN 0-566-08281-0.
- Mendelsohn, Martin (2004). The guide to franchising. Cengage Learning Business Press. p. 36. ISBN 1-84480-162-4.
- Blythe, Jim (2004). Sales & Key Account Management. Cengage Learning Business Press. p. 278. ISBN 1-84480-023-7.
- "Multilevel Marketing". Federal Trade Commission. Retrieved 20 June 2018.
There are multi-level marketing plans – and then there are pyramid schemes. Before signing on the dotted line, study the company’s track record, ask lots of questions, and seek out independent opinions about the business.
- Carroll, Robert Todd (2003). The Skeptic's Dictionary: A Collection of Strange Beliefs, Amusing Deceptions, and Dangerous Delusions. Wiley. p. 235. ISBN 0-471-27242-6.
- Coenen, Tracy (2009). Expert Fraud Investigation: A Step-by-Step Guide. Wiley. p. 168. ISBN 978-0-470-38796-2.
- Salinger (Editor), Lawrence M. (2005). Encyclopedia of White-Collar & Corporate Crime. 2. Sage Publishing. p. 880. ISBN 0-7619-3004-3.CS1 maint: extra text: authors list (link)
- Seth, Shobhit. "What is a Pyramid Scheme?". Investopedia. Retrieved 2019-03-29.
- Commission, Australian Competition and Consumer (2015-05-14). "Pyramid schemes". Australian Competition and Consumer Commission. Retrieved 2019-03-29.
- "Pyramid Schemes". Findlaw. Retrieved 2019-03-29.
- "How lobbying dollars prop up pyramid schemes". TheVerge. Retrieved 2019-08-20.
- "Pyramid schemes". Australian Competition and Consumer Commission. Retrieved 2012-04-08.
- "Competition and Consumer Act 2010 - Schedule 2, Division 3 - Pyramid schemes". Austlii.edu.au. Retrieved 2013-04-02.
- Trade Practices Amendment Act (No. 1) 2002 Trade Practices Act 1974 (Cth) ss 65AAA - 65AAE, 75AZO
- "Pyramid scams". www.antifraudcentre-centreantifraude.ca. March 11, 2015.
- "Regulation on Prohibition of Pyramid Selling". www.fdi.gov.cn. Retrieved 2019-01-23.
- "Colombia scam: 'I lost my money'". BBC News. November 18, 2008. Retrieved April 12, 2010.
- "Proyecto De Ley Que Prohíbe La Venta Bajo El Esquema Piramidal" (PDF) (in Spanish).
- "Tarbijakaitseseadus" [The Consumer Protection Law of Estonia] (in Estonian). §12³(8) #14. Retrieved October 11, 2010.
- "Rahankeräyslaki 255/2006 - Ajantasainen lainsäädäntö - FINLEX ®".
- "Pyramid Schemes Prohibition Ordinance". Retrieved 23 January 2019.
- "Skema Piramida Dilarang, Nasabah Terlindungi". Finansialku.com. Retrieved 28 October 2019.
- "Key GoldQuest members arrested in Iran Airport". Presstv.ir. Retrieved 2013-04-02.
- "Pyramid Schemes". Competition and Consumer Protection Commission. Retrieved 12 June 2015.
- "L 173/2005". www.parlamento.it.
- 無限連鎖講の防止に関する法律 (in Japanese)
- "Sentence by the High Council of the Netherlands regarding a pyramid scheme". Zoeken.rechtspraak.nl. Retrieved 2013-04-02.
- Laws and Regulations Covering Multi-Level Marketing Programs and Pyramid Schemes Consumer Fraud Reporting.com
- "Lovdata.no". Lovdata.no. Retrieved 2013-04-02.
- NYtimes.com, "Investors in Philippine Pyramid Scheme Lose over $2 Billion"
- Explozia piramidelor Ziarul Ziua, 12.07.2006
- , "Trading law"
- Whitecollarcrime.co.za, Pyramid Schemes
- http://www.asianlii.org/sg/legis/consol_act/mmapsac190538/mmapsac190538.html MULTI-LEVEL MARKETING AND PYRAMID SELLING (PROHIBITION) ACT
- Pyramid Schemes Illegal Under Section 83c of the Banking Act of Sri Lanka Department of Government Printing, Sri Lanka
- lagen.nu - Tillkännagivande (2008:487) med anledning av marknadsföringslagen (2008:486)
- TDSA.gov Archived 2009-02-20 at the Wayback Machine, ข้อมูลเพิ่มเติมในระบบธุรกิจขายตรงและธุรกิจพีระมิด by Thai Direct Selling Association (in Thai)
- "Saadet zinciri operasyonu: 60 gözaltı". Cnnturk.com. 2010-05-14. Retrieved 2013-04-02.
- Rada at first reading passes bill on banning financial pyramid schemes in Ukraine, Interfax-Ukraine (20 November 2013)
- "Pyramid scheme fraud". Action Fraud. City of London Police. Retrieved 5 July 2016.
- "FBI — Common Fraud Schemes". Fbi.gov. Retrieved 2013-04-02.
- "Finance and Development". Finance and Development | F&D. Retrieved 2018-07-04.
- FTC Charges Internet Mall Is a Pyramid Scam Federal Trade Commission
- "SS: Some WinCapita Ponzi scheme victims to soon receive compensation". Yle Uutiset. Retrieved 2018-07-04.
- Gardaí hold firearm after pyramid scheme incident Archived 2008-03-23 at the Wayback Machine Irish Examiner
- "National Consumer Agency Ireland". Consumerconnect.ie. Retrieved 2013-04-02.
- Colombians riot over pyramid scam. Colombia: BBC news. Nov 13, 2008.
- "Caught! American arrested in Kyiv on suspicion of fraud - Nov. 26, 2008". KyivPost. 2008-11-26. Retrieved 2018-07-04.
- "Prosecuting members of a £21m pyramid scheme cost taxpayers £1.4m". 14 September 2015 – via www.bbc.co.uk.
- ACCC obtains restraining orders against operators of alleged pyramid selling scheme TVI Express Archived 2011-06-02 at the Wayback Machine Australian Competition and Consumer Commission
- BoN warns public about TVI Express Bank of Namibia
- BoL warns public about TVI Express Archived 2011-10-03 at the Wayback Machine Bank of Lesotho
- South African Sunday Times on TVI Express South African Sunday Times
- Consumer Watchdog Botswana on TVI Express Consumer Watchdog Botswana
- "Trikha family to be investigated in online TVI Express scam". MailOnline India. 2013-04-20. Retrieved 2013-07-24.
- Hull, Tim (2 June 2014). "9th Circuit Affirms BurnLounge Judgment". Courthouse News. Retrieved 19 June 2014.
- "Vemma Agrees to Ban on Pyramid Scheme Practices to Settle FTC Charges". FTC.gov. Retrieved 22 December 2016.
- "Vemma reaches $238 million settlement with FTC". TruthInAdvertising.org. Retrieved 22 December 2016.
- Limited, Bangkok Post Public Company. "Ufun fraudsters sentenced to thousands of years". https://www.bangkokpost.com. Retrieved 2018-05-20. External link in
- Media related to Pyramid and Ponzi schemes at Wikimedia Commons | <urn:uuid:21f781d0-3f36-4636-b2af-08affcd008d0> | CC-MAIN-2019-47 | https://en.m.wikipedia.org/wiki/Pyramid_scheme | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669454.33/warc/CC-MAIN-20191118053441-20191118081441-00539.warc.gz | en | 0.894685 | 5,501 | 3.03125 | 3 |
Mention the word 'enzyme,' and most people immediately think about the little tablets we swallow before eating to make sure all the food we wolf down is properly digested so we don't end up with gas, bloating, indigestion or something even more painful, like gastroesophageal reflux disease (GERD), where stomach acid and food are regurgitated back up through the esophagus.
Enzymes are indeed vitally important for proper and efficient food digestion. But that's only the tip of the iceberg as far as what they do. Enzymes are catalysts produced by the body that accelerate and enhance thousands of biochemical reactions system wide. Almost all the chemical reactions that occur in every cell in our bodies depend on them. They're essential for metabolizing fats, carbohydrates and proteins, and without them, none of the vitamins, minerals or hormones in our bodies could do any work. In fact, without enzymes, all the chemical reactions in the body would be too slow for metabolic processes to occur, and we'd die.
Nobody knows for sure how many different enzymes there are in the human body. Approximately 3,000 have been identified, but it's believed that as many as 50,000 enzymes carry out thousands of metabolic functions controlling our organs—brain, heart, kidneys, lungs, liver, pancreas, spleen, etc.—as well as aiding immune system function and inflammatory responses.
Enzymes are broadly defined as metabolic, digestive or food enzymes depending on where they are active (in the digestive tract or elsewhere) and their source (produced within the body or consumed as part of the diet).
Metabolic enzymes play vital roles inside every living cell, handling everything from cell growth and repair—critical for maintaining organ function—to cell death and scavenging debris from the blood.
Digestive enzymes break down the food we eat—all that meat and vegetables, salad and sandwiches—into nutrients that can be absorbed and utilized by our cells. Created mainly in the pancreas and small intestine, digestive enzymes like proteases, amylases and lipases respectively work to break proteins down into amino acids, carbohydrates into sugars, and fats into fatty acids and cholesterol.
Food enzymes come from raw foods we eat, as well as supplements. They are extremely important because natural enzyme production in the human body starts to decrease somewhere in our twenties and continues to decline by approximately 13 percent every decade. The stomach also produces less hydrochloric acid as we get older, making the digestive enzymes we do produce less effective. In addition, factors such as low-grade inflammation from food allergies, pancreas problems and chronic stress also lead to enzyme deficiencies.
Eating raw foods helps relieve the body of having to make all the necessary enzymes for digestion and general health, and helps make up for enzyme deficiencies as we age. But, unfortunately, because of the increasingly depleted conditions of our soils and the use of pesticides and chemical fertilizers, raw foods are no longer a reliable source of enzymes.
"In order for enzymes to be really active and abundant in the body, you need all the cofactors found in the soil (such as minerals) to be present," says Dr Ellen Cutler of Mill Valley, California, an integrative natural health specialist and author of Micro Miracles: Discover the Healing Power of Enzymes. "Nowadays, people are pretty much enzyme deficient, which is why we see young kids with GERD. I never saw such things in my practice
According to Cutler, if we don't have sufficient enzyme production in our body and aren't getting enough enzymes from external sources, our immune surveillance, tissue repair and hormone production systems become compromised. As a result, a general overall fatigue sets in and other kinds of health issues occur. Typical symptoms of enzyme deficiency are inflammation, joint and myofascial pain, and bacterial overgrowth in the gastrointestinal tract resulting in problems such as bloating and indigestion, GERD, irritable bowel syndrome (IBS), brain fog, headaches, skin rashes, acne and mood swings.
"Enzymes saved my life," she says. "I was diagnosed with ulcerative colitis in my late 20s. I also suffered from severe chronic bloating and constipation. When I tried digestive enzymes, within two weeks I had very few symptoms. I had more energy, my hair and nails were healthier, I needed less sleep, and my immune system was considerably healthier. It was miraculous."
Systemic, proteolytic, fibrinolytic
Considering the incredible number of enzymes in the body and the number of biological systems that depend on them, it's not surprising that things can get a little complicated in the enzyme world. Enzymes are delicately engineered to have effects on specific chemical bonds, so the same enzyme can play a role in many different bodily functions—just making the same atomic-level snip or stitch in each one.
Many practitioners, including Cutler, recommend both digestive and systemic enzymes for people who want to address health issues. Systemic enzymes—also known as metabolic or proteolytic enzymes— are simply certain digestive enzymes that are taken between meals and away from food—at which point they are able to act on other parts of the body.
Cutler says that when they're ingested at least an
hour away from food consumption (early morning, after eating and late at night before bed is best), proteolytic enzymes can go to work system wide, combating inflammation, joint pain, bacterial overgrowth, immune system issues and other
Proteolytic enzymes are a subset of digestive enzymes ('proteolytic' means they specifically digest protein—and not just protein from foods). When proteolytic enzymes are taken systemically, they enter the bloodstream and start to break down abnormal proteins in circulation. Notably this includes viruses, which the enzymes recognize as foreign proteins to be eliminated, and a protein called fibrin that's involved in blood clotting and also linked to heart disease and other chronic illnesses.
Each fibrin molecule is shaped like a long thread. When an injury occurs, fibrin aggregates at the site, going to work with platelets and red blood cells to clot and seal the wound. After a few days of successful repair work, enzymes are normally directed to dissolve the excess fibrin in the muscles, blood and nerves.
But fibrin production can also get out of control. If the pain from the wound is blocked by taking painkillers such as nonsteroidal anti-inflammatory drugs (NSAIDs, like ibuprofen), the signal to stop fibrin production is overridden. If the body's enzyme reserves are depleted or there are other health issues, sometimes the enzymes that would ordinarily clean up the excess fibrin never get deployed. When this happens, excess fibrin in the body builds up, contributing to inflammation and other fibrin-related problems such as fibromyalgia, atherosclerosis (fibrin-based plaque buildup in the arteries) and endometriosis.
Enzymes that specifically target fibrin throughout the body are known as 'fibrinolytic.' "Nattokinase, seaprose (seaprose-S), serrapeptase (serrapeptidase or serratiopeptidase) are all huge fibrinolytic anti-inflammatories," says Kevin Nelson, a chiropractor in Minnesota with a PhD in holistic nutrition who specializes in gut health and systemic enzyme therapy. "They eat nonliving tissue and deal with scars and plaques in your gut and in your nerves and blood."
So far, enzyme therapy using proteolytic and fibrinolytic enzymes has proven clinically effective in the treatment of a wide assortment of conditions. Fibrinolytic enzymes produced by the bacteria in fermented foods have shown particular promise for slowing down the accumulation of fibrin in the blood vessels,1 which has been implicated in blood clots, myocardial infarction and other cardiovascular diseases. One such enzyme, nattokinase, is a potent fibrinolytic that can speed up the breakdown of fibrin in the body after an oral dose.2
The proteolytic enzyme serrapeptase has been used to treat infections from joint replacements, an often devastating complication.3 Seaprose-S reduces inflammatory venous disease, which leads to varicose veins and venous ulcers.4
Bromelain, a proteolytic enzyme derived from the pineapple plant, is widely used in for its anti-inflammatory and fibrinolytic effects.5 In the treatment of osteoarthritis, bromelain has also been demonstrated to show anti-inflammatory and analgesic
properties while being safer than standard NSAIDs and other painkillers.6
Systemic enzyme therapy has been found clinically effective in treating rheumatic disorders.7 And enzyme supplementation has shown promise in helping to mitigate digestive disorders related to pancreatic insufficiency and lactose intolerance.8
Enzyme therapy may even hold promise in treating various cancers. According to Nelson, fibrinolytic enzymes are effective for cancers because they eat away at the tough, protective fibrin coating of cancer cells, leaving them vulnerable to attack by the patient's immune system.
According to Cutler, proteases can be very useful for people suffering from cancer. "People who have cancer tend to have more coagulative blood," she says. "And enzymes address that. It's also helpful to take if you're going through radiation or chemotherapy—usually enzymes are not contraindicated."
Systemic enzyme therapy has also been clinically proven to decrease the side-effects caused by tumors and their treatment in patients with breast or colorectal cancer, including nausea, gastrointestinal problems, fatigue and weight loss.9 Most clinical studies of systemic enzyme therapy have investigated a combination of papain, trypsin and chymotrypsin, which has been shown to reduce the side-effects caused by radiotherapy and chemotherapy. With some types of tumors, systemic enzyme therapy may even prolong survival.10
Who needs enzymes?
According to many practitioners, just about everybody in the Western world needs enzymes. We eat highly processed foods that are enzyme deficient. We cook most of our food, and all the enzymes that are actually left in our food are deactivated at a water temperature of about 120°F or an air temperature of about 150°F. What raw food we do consume is raised in depleted soils contaminated with pesticides and pollutants. On top of all that, we've stressed the ability of our pancreas to produce enzymes by eating refined sugars and carbs to excess. Is it any wonder the Western world suffers from so many illnesses?
In addition, millions suffer from chronic pain that is driven by inflammation. According to Tina Marcantel, RN, NMD, a naturopathic doctor in Gilbert, Arizona, proteolytic enzymes assist in mitigating chronic pain, speeding healing and increasing the body's defense mechanisms by modulating the immune system. She says they also help maintain blood circulation throughout the body and reduce inflammation—the primary cause of pain in arthritis, sciatica, chronic back pain and sports injuries like muscle sprains.
Nelson agrees that enzymes are one of the major "go-to" remedies for people suffering from chronic pain. "Look, everything starts with digestion," he says. "You've got to fix that first. If you're in chronic pain you've got to heal the gut. If you eat something and then get symptoms afterward, that's because the food is basically sitting in your gut, undigested and rotting. And if you have an inflamed gut, you can end up with a leaky gut—and the inflammation goes anywhere in the body it wants to. Big protein molecules called circulating immune complexes end up swarming around in your body, and then they just get stuck somewhere—in your low back, in your left elbow, in your gallbladder, in your right knee—wherever they want to land. Then, all of a sudden, you've got a problem."
Most of this can be avoided in the first place by taking digestive enzymes with your food. If you're already in chronic pain, however, the approach is different. Using digestive enzymes to aid digestion is still vital. But if you're in chronic pain, you also need to start taking them systemically as well.
"Protease and serrapeptase [both proteolytic enzymes] are especially effective because they help break up inflammatory mediators," says Cutler. "It's harder to deal with systemic things like fatigue or headache or back pain and inflammation. With digestive issues, you'll see an improvement almost immediately. When things have gotten chronic, it takes longer. Everybody has some inflammation, and enzymes help reduce pathogens that the body has an auto-aggressive response to. They even help people with airborne allergies like hay fever. But the digestion of the foods is just as important. People just have no idea how important proper food digestion is."
Another enormous piece in the chronic pain picture is medications. Cutler says any medication is going to put stress on the liver and kidneys and impact proper digestion of food. In addition to that, most medications have additives and fillers in them that would surprise you—including gluten. "Frankly, I'm pretty obsessive when it comes to using enzymes," she adds. So are my patients. And I hear miracle stories all the time."
What enzymes do what?
The following are the main enzymes or enzyme classes used in most commercial enzyme products.
Amylase, secreted by the salivary glands and the pancreas. Aids in digestion, breaking down carbohydrates into simple sugars
Bromelain, a proteolytic enzyme that aids in digestion by breaking down food proteins, improves absorption of nutrients, aids circulation, treats inflammation and attacks arterial plaques that contribute to heart attacks
Catalase breaks down hydrogen peroxide within cells into water and oxygen
Cathepsin aids in digestion by breaking down meat
Cellulase aids in digestion by breaking down fiber and cellulose in fruits, vegetables, grains and seeds
Glucoamylase breaks down the sugar in grains (maltose)
Invertase helps the body utilize sucrose
Lactase breaks down lactose, the complex sugar in milk products, excellent for people who are lactose intolerant
Lipase breaks down fats into fatty acids and vitamins A, D, E, and F, aids metabolism and helps with cardiovascular
Nattokinase, another proteolytic enzyme that inhibits fibrinolytic activity, with positive effects for thrombosis, myocardial infarction and other cardiovascular diseases
Pancreatin, a mixture of amylase, lipase and protease enzymes used to treat conditions in which pancreatic secretions are deficient, such as surgical pancreatectomy, pancreatitis and
Papain aids in digestion, breaks down food proteins into smaller peptide chains
Pectinase aids in digestion by breaking down fruits and other pectin-rich foods such as carrots, beets, potatoes and tomatoes
Protease aids in digestion by breaking down protein into amino acids, acts on pathogens such as bacteria, viruses and cancer cells
Seaprose-S, another proteolytic that has an anti-inflammatory effect in conditions including arthritis, edema, pleurisy (inflammation of the lung lining) and peritonitis (inflammation of the lining of the abdomen) as well as inflammatory venous disease, which leads to varicose veins and ulcers
Serrapeptase can be used to treat infections and helps to mitigate against blood clots, plaque build-up and the side-effects of radiation and chemotherapy
Enzymes against arthritis
Tina Marcantel, a nurse and naturopathic doctor in Gilbert, Arizona, had a 34-year-old woman called Mary come into her office presenting with warmth, redness and swelling of the joints of the hands, fingers and feet. She had been diagnosed with rheumatoid arthritis by a rheumatologist and had been taking NSAIDs (aspirin and ibuprofen) for approximately five months without receiving sufficient pain relief. Her doctor assured her she could be put on stronger drugs for pain in the future.
"I started Mary on systemic proteolytic enzymes administered orally, one to two hours away from meals, for approximately three weeks," says Marcantel. "Then I reduced the enzymes by half for a maintenance dose to control pain and inflammation."
She also scheduled Mary for 10 weeks of acupuncture for pain control (one treatment/week) and performed a food sensitivity panel and eliminated all the foods from her diet that may have contributed to the inflammation.
At the end of her treatment plan, Mary no longer used aspirin or ibuprofen regularly. The amount of pain she was experiencing decreased by 70 percent, and the swelling in her joints decreased as well.
After six months, she came back to the office. She'd been on the maintenance dosage of systemic enzymes, doing nothing else for her condition, and said her pain was slowly starting to increase.
Marcantel doubled her enzyme intake back to the original level for four weeks, then cut back again to the maintenance dosage. At that point Mary happily reported her pain levels had diminished by as much as 80 percent and that she had also experienced relief from chronic sinusitis due to seasonal allergies—a condition that had plagued her for years.
A guide to taking enzymes
How much to take:
Taking a few commercial digestive enzymes with a meal is fine. But doctors Ellen Cutler and Kevin Nelson both maintain that you have to have a potent enough dosage for enzyme therapy to be effective.
For digestion: at least 50,000 units per dose.
For systemic uses: at least 150,000 units per dose and a proteolytic blend. Take three times a day, an hour away from food consumption (early morning, after eating and late night before bed).
How to read labels
The internationally recognized and accepted standard for measurement used on enzyme bottle labels is Food Chemicals Codex (FCC) units. These can be expressed in different activity units for each type of enzyme, and so they can be pretty confusing, particularly as some labels instead use milligrams for measurement. Here's a list of the various types of activity units you will find:
U (an enzyme unit)
HUT (hemoglobin units, tyrosine basis)
USP (United States Pharmacopeia)
FU (fibrinolytic units) refers to the ability of nattokinase to break down the blood clotting enzyme, fibrin
MCU (milk clotting units) based on how fast the enzyme digests milk protein
GDU (gelatin digesting units) based on how fast the enzyme digests gelatin
PU (papain units)
SKB (named after the creators of the test, Sandstedt, Kneen and Blish) measures the activity of amylases to break down
DU (used in brewing) equivalent to SKB
LU (lipase units)
FIP units (test methods of the Fédération Internationale Pharmaceutique)
Both Cutler and Nelson advise that reading labels is vital:
• Avoid enzyme products that do not list the amount of units for each enzyme and instead simply mention that they are a 'blend' of one or more enzymes.
• Ideally, opt for plant enzymes, which are far gentler on the body than animal-based enzymes. Says Nelson: "A lot of animal-based enzymes come from old dead horses and cows, which not only have lower amounts of enzymes in their meat, they also have high amounts of adrenaline running through them."
• Keep an eye out for excipients, especially in enzyme blends. "Magnesium stearate and sorbitol sometimes bother people's stomachs," says Cutler. "And the colorings, MSG/natural flavors, sorbitol and other stuff can really inhibit the product's effectiveness."
• Use with care if you are on blood thinners or have a bleeding disorder since proteolytic enzyme formulas work as natural blood thinners. For most other people, overdosing with enzymes is usually not a concern.
• If you start getting nose bleeds, diarrhea or any other kinds of symptoms or discomfort, decrease your dosage until the symptoms stop and your digestive system settles down. | <urn:uuid:7b6a855e-f482-4ab3-ad8a-17df29625d9c> | CC-MAIN-2019-47 | https://www.beta.wddty.com/magazine/2018/may/enzyme-blends-ending-pain-healing-your-gut-and-even-fighting-cancer.html | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668787.19/warc/CC-MAIN-20191117041351-20191117065351-00140.warc.gz | en | 0.954879 | 4,224 | 3.8125 | 4 |
Beyond the Great Mountains: A Visual Poem about China. Young, Ed (2005). San Francisco: Chronicle Books. ISBN: 0811843432
This highly conceptual picture book offers “a visual poem about China.” Opening the book (from the bottom rather than the side), the reader sees a strip of each page, since each is about one-half inch longer than the preceding one. The entire poem is visible immediately, with one line per page. Besides the line of verse, a typical page includes one or two ancient Chinese characters in red, with the English equivalent in black. Observers will note the resonance between the forms of the Chinese characters and the images in the illustrations, bold collages of cut and torn papers. The final end papers offer a chart of ancient and modern Chinese characters.
Chinese and English Nursery Rhymes
As Mother Goose has known for centuries, rhyme and rhythm are fun! And what could be a more enjoyable way for children and their parents to learn about different cultures and languages than through familiar rhymes and songs? In Chinese and English Nursery Rhymes, an innovative collection pairs favorite rhymes, one from China and one in English, to show how the things that kids love are the same, no matter where in the world they live.
Chinese Mythology. Helft, Claude (2007). New York: Enchanted Lion Books. ISBN: 9781592700745
In Chinese Mythology, Claude Helft, who has worked as both an author and a publisher, seeks to enchant readers young and old. An avid traveler, she brings her rich experience of the world into each book she writes. Illustrator Chen has been creating beautiful, entertaining, and deeply moving picture books over the past twelve years. To depict the culture of ancient China, he has relied upon myth and legend in many of his books.
Confucius: Golden Rule. Freedman, Russell (2002). New York: Arthur A. Levine Books. ISBN: 9780439139571
Confucius: Golden Rule delves deep into Chinese history in his intelligent, comprehensive biography of the 5th-century B.C. philosopher Confucius, whose teachings have influenced the development of modern government and education in both China and the West. Freedman draws on stories, legends and collected dialogues from The Analects of Confucius, written by his students, to reveal a man of deep perceptions as well as great humor. The author reports that, when a disciple told the scholar that he did not know how to describe his teacher to a local governor, Confucius said, “Why didn’t you tell him that I’m a man driven by such passion for learning that in my enthusiasm I often forget to eat, in my joy I forget to worry, and I don’t even notice the approach of old age.” Skillfully and smoothly weaving Chinese history, culture and language into the narrative, Freedman also explains Confucian philosophy succinctly, without dumbing it down
Everyday Life: Through Chinese Peasant Art. Morrisey, Tricia (2009). Global Directions/Things Asian Press. ISBN: 9781934159187.
Everyday Life introduces children to the vibrant world created by Shanghai’s Jinshan artists. From a watermelon harvest to an autumn festival to a child’s winter game, vivid, friendly peasant art brings everyday life in rural China into our lives. Simple, rhythmic poems, presented in English, Simplified Chinese, and Pinyin, beautifully accent each painting. Everyday Life’s colorful, bustling illustrations will capture a child’s imagination, while descriptive bilingual text invites English and Chinese readers to enjoy the sweetness of each page.
Eyewitness China. Sebag-Montefiore , Poppy (2007) . London; New York: DK , ISBN: 9780756629755.
Eyewitness China, which includes a fully illustrated pull-out wall chart and CD with additional images, investigates China’s present-day culture and highlights everything from life in a rural village to changing fashions and technological innovations. With hundreds of real-life photographs, discover the secrets of traditional Chinese medicine, find out how China is surging ahead in international sports, trace each dynasty with the help of a comprehensive timeline, and much, much more!
Good Morning China. Yi, Hu Yong (2007). New Milford, CT.: Roaring Brook Press. ISBN: 9781596432406.
Enhanced with a foldout and animated illustrations, a day-in-the-life of a community in China is captured in this beautiful presentation of its people, places, and special happenings. Pictures and easy-to-read text portray the activities and routines of Chinese people on a typical morning in the park, with a fold-out page showing everyone in the park.
Lang Lang: Playing with Flying Keys
He started learning to play the piano when he was three years old in Shenyang, China. Today he is one of the world’s most outstanding pianists. In this engrossing life story, adapted by Michael French, Lang Lang not only recounts the difficult, often thrilling, events of his early days, but also shares his perspective on his rapidly changing homeland. He thoughtfully explores the differences between East and West, especially in the realm of classical music and cultural life. Shining through his rags-to-riches story of a child prodigy who came of age as a renowned musician.
Legend of the Chinese Dragon (English and Mandarin Chinese Edition). Sellier, Marie (2008). New York: NorthSouth Books. ISBN: 9780735821521.
In ancient China, the different tribes lived under the protection of benevolent spirits that took the form of animals–fish, ox, bird, horse, and serpent. But, as often happens, the tribes grew envious of each other and began to fight amongst themselves in the names of their spirits. The children decided to declare a war on war by creating a creature that combined the best of all the spirits and would protect all the people. To this day, the dragon is a symbol of peace and plays an especially important role in the celebration of the Chinese New Year.
Little Leap Forward: A Boy in Living in Beijing. Yue, Guo (2008). Cambridge, MA: Barefoot Books. ISBN: 9781846861147.
Little Leap Forward: A Boy in Living in Beijing with their large, loving families, Little Leap Forward and Little-Little are the best of friends. One day clever Little-Little captures a small yellow bird that he gives to Little Leap Forward. Though Little Leap Forward plays his flute and tries to get Little Cloud to sing, she remains silent. When the terrible disruptions of the Cultural Revolution begin, Little Leap Forward senses the fear and sadness of his friends and family. And as their lives become more and more constricted, he begins to understand why he must release his precious bird if he wants to hear her sing.
Life is fascinating to children who come to this world with fresh eyes, curious minds and a strong passion to explore, especially to Wen, a little girl who grew up in China. China is a large country with a long history, a lot of traditional values and countless social rules. How did Wen make room for herself to grow in a society where girls inquisitive nature was suppressed and lively personalities were shaped to confine to the rigid social expectations? Read the story, which is based on the author s early childhood memories.
Liu and the Bird: A Journey in Chinese Calligraphy. Louis, Catherine. (2006). New York: North-South Books. ISBN: 9780735820500.
One night as Liu sleeps, she hears the voice of her grandfather in her dreams. Inspired to visit him, she sets out on a journey across fields and mountains, facing harsh conditions and not always knowing the way. She finds her grandfather waiting for her, and he urges her to tell the story of her travels, making an interesting connection that makes you feel like perhaps you’ve actually just read Liu’s story as recorded by her own hands.
Lon Po Po: A Red-Riding Hood Story from China. Young, Ed. (1989). New York: Philomel Books. ISBN: 0399216197.
Three little girls spare no mercy to Lon Po Po, the granny wolf, in this version of Little Red Riding Hood where they tempt her up a tree and over a limb, to her death. The girls’ frightened eyes are juxtaposed against Lon Po Po’s menacing squint and whirling blue costume in one of the books numerous three-picture sequences, which resemble the decorative panels of Chinese tradition. Through mixing abstract and realistic images with complex use of color and shadow, artist and translator Young has transformed a simple fairy tail into a remarkable work of art and earned the 1990 Caldecott Medal in doing so.
The Magic Horse of Han Gan
Well-known-painter Hong introduces Han Gan, a ninth-century Chinese artist, in this beautifully illustrated, picture-book fantasy. Young Han Gan, who loves to draw, grows up to gain wide recognition for his original style and for his sole subject: horses that are always tethered: “My horses are so alive they might leap right off the paper.” A warrior challenges his claim, commissioning a steed that will spring to lifew. Han Gan meets the challenge, but his magnificent creation so abhors war’s violence that it races back to the two-dimensional world of painting.
Mao and Me. Chen, Jiang Hong (2008). New York : Enchanted Lion Books. ISBN: 9781592700790.
When the Cultural Revolution began, the author was a three-year-old living in a northern city. Cared for by his grandparents, he and his two sisters led a quiet, orderly life. His older sister, whom he describes as a deaf mute, taught her siblings to sign and Hong to draw. One day they heard on the radio that Mao had declared a Cultural Revolution, and life began to change. The text tells a straightforward story of the years between 1966 and 1976, while the illustrations shed a strong light on these years through the eyes of one child.
Maples in the Mist: Poems for Children from the Tang Dynasty. Ho, Minfong (1996). New York: Lothrop, Lee & Shepard . SBN: 068812044X.
A beautiful anthology of 16 short, unrhymed poems written 1000 years ago in China. Although the poems Ho has chosen reflect timeless themes and her translations are fresh and informal, most are too introspective for a young Western audience. An attentive fourth-grader might relate to “On the Pond,” in which two boys foolishly leave a trail betraying their mischief, or “Goose,” a straightforward observation of a paddling goose, humorously illustrated.
Moonbeams, Dumplings & Dragon Boats: A Treasury of Chinese Holiday Tales, Activities & Recipes. Simonds, Nina (2002). San Diego: Harcourt, Inc. ISBN: 0152019839.
So, each of a quartet of holidays includes a brief background and introduces a bevy of crafts, recipes and legends. “The Story of the Kitchen God” kicks off the section on the Chinese New Year (and the reason behind serving the traditional tanggua, or candied melons); a recipe for Five-Treasure Moon Cakes stuffed with apricot preserves, pitted dates, sweet coconut and raisins helps youngsters celebrate the Mid-Autumn Moon Festival.
A New Year’s Reunion
Little Maomao s father works in faraway places and comes home just once a year, for Chinese New Year. At first Maomao barely recognizes him, but before long the family is happily making sticky rice balls, listening to firecrackers, and watching the dragon dance in the streets below. Papa gets a haircut, makes repairs to the house, and hides a lucky coin for Maomao to find. Which she does! But all too soon it is time for Papa to go away again. This poignant, vibrantly illustrated tale, which won the prestigious Feng Zikai Chinese Children s Picture Book Award in 2009, is sure to resonate with every child who misses relatives when they are away and shows how a family s love is strong enough to endure over time and distance.
My Little Book of Chinese Words. Bradley, Mary Chris (2008). NorthSouth Books . ISBN: 9780735821743.
This handsome picture book focuses on the visual aspect of Chinese characters. Words are introduced on the verso with the modern Chinese character and a smaller ancient character in the upper left corner of the page, so one is immediately aware of the evolution of the visual form of the word. On the right, a full-page illustration is rendered in a way that echoes the strokes of the character. For example, the picture for the character “high” shows a pagodalike building similar to the form of the calligraphy. The words are grouped so that terms such as “see,” “look at,” and “ear” follow “eye,” whose written character is part of these other related characters, indicating the relationship of the basic word to the others.
School is in session! But this is no ordinary kindergarten class. Meet sixteen young giant panda cubs at the China Conservation and Research Center for the Giant Panda at the Wolong Nature Preserve. The cubs are raised together from infancy in a protected setting, where they grow strong. Under the watchful eyes of the scientists and workers, the cubs learn skills that will help prepare them to be released into the wild.
Revolution Is Not a Dinner Party. Compestine, Ying Chang (2007). New York: H. Holt . ISBN: 9780805082074.
This autobiographical novel chronicles four years in the life of Ling, the daughter of bourgeois parents, during China’s Cultural Revolution in the waning years of Mao Tse-tung’s government. Ling’s father is a Western-educated surgeon, and her mother is a practitioner of traditional Chinese medicine and a homemaker. Her family’s comfortable life in Wuhan slowly crumbles (her father is jailed) in the face of political unrest, but somehow Ling’s spirit survives, and she finds strength in the face of oppression and hardship. Long is a compelling reader of this riveting account. She uses a slight Chinese accent to portray the adults, but she voices Ling in an American accent, which is probably easier for young listeners to grasp.
Six Words, Many Turtles, and Three Days in Hong Kong. McMahon, Patricia (1997). Boston: Houghton Mifflin. ISBN: 0395686210
This attractive photo-essay opens with a double-page spread of Hong Kong in the early morning mist and closes with a shot of the city at sunset. Readers are introduced to eight-year-old Tsz Yan and her family. The “six words” of the title refer to the English writing homework that the girl works on throughout the story. The “many turtles” are what she thinks school children look like with their backpacks. The “three days” are Friday, Saturday, and Sunday, thus giving readers a glimpse of the child’s life at school and at home. The colorful and exciting photos are definitely the strength of the book, and are, for the most part, logical adjuncts to the text. Unfortunately, the lack of captions may cause confusion.
The Chinese Thought of It , Ye, Ting-xing (2009). Annick Press. ISBN: 9781554511952.
Acupuncture, gunpowder and the secrets to spinning silk are innovations that we have come to associate with China. But did you know that the Chinese also invented the umbrella? And toilet paper, initially made from rice straw clumped together, was first used in China! Through the ages, the Chinese have used the resources available to them to improve their lives. Their development of the compass and the paddleboat helped facilitate the often difficult tasks of travel and trade, and many foods associated with health and wellness — from green tea to tofu — have their origins in China.
The Five Chinese Brothers. Bishop, Claire Huchet (1996). New york: Putnam Juvenile. ISBN: 9780698113572
The classic story about five clever brothers, each with a different extraordinary ability is “a dramatic retelling of an old Chinese tale.” (The New York Public Library). ” . . . when Bishop makes the tall brother stretch, the sea-swallower work, or the robust one hold his breath, young children will laugh and laugh.
The Magic Horse of Han Gan. Chen, Jiang Hong (2006). New York: Enchanted Lion Books. ISBN: 1592700632.
Well-known-painter Hong introduces Han Gan, a ninth-century Chinese artist, in this beautifully illustrated, picture-book fantasy. Young Han Gan, who loves to draw, grows up to gain wide recognition for his original style and for his sole subject: horses that are always tethered: “My horses are so alive they might leap right off the paper.” A warrior challenges his claim, commissioning a steed that will spring to life. Han Gan meets the challenge, but his magnificent creation so abhors war’s violence that it races back to the two-dimensional world of painting.
Niemann, Christopher (2008). The Pet Dragon: A Story about Adventure, Friendship, and Chinese Characters. New York: Greenwillow Books. ISBN: 9780061577765
Lin, a young Chinese girl, receives a baby dragon for a gift. The two of them play together until they accidentally break a vase. Lin’s father is so angry that he insists the little creature be caged. The dragon escapes, and Lin goes to look for it. With the help of an old woman, a witch, she finds it living with the other dragons in the clouds, and grown up. The dragon returns Lin to her home, and her father agrees that they can visit often.
The Seven Chinese Brothers. Mahy, Margaret. (1990). New York: Scholastic Inc. ISBN: 9780590420570.
The seven brothers walk, talk, and look alike, but each has his own special power. When the third brother runs afoul of the emperor and is sentenced to be beheaded, the fourth brother, who has bones of iron, takes his place. The emperor then tries drowning and burning but each time a different brother foils his scheme. Mahy retells this traditional Chinese tale in graceful, witty prose. She uses classic storytelling elements.
Seven Magic Brothers
A long, time ago seven very special brothers were born. Each had a special power — one was immensely strong, one had powerful hearing, one was impossible to burn, one could drink an ocean in a single gulp, one could grow as tall as a mountain, one could not be cut, and one could dig through the earth faster than a mole. One day, the immensely strong brother saves the emperor from a falling boulder as the emperor and his court parades by the boy’s farm. But instead of being grateful, the evil emperor puts him in jail, jealous of his powers. The brother is to be killed the next morning. Using their special powers, the other brothers work together to save him. Time and again the angry emperor is fooled and humiliated, and in the end receives the punishment he deserves for his cruelty.
Standards for Students
A handbook for educating students based on the ancient Confucian classics. Virtues such as filiality, have been practiced in the Orient for millennia. The other five concepts discussed are: respect for elders, carefulness, trustworthiness, kindness and friendliness to all, and drawing near to good people. Once all these virtuous qualities are instilled in children, they are ready to start studying. Therefore, this book is ideal for children, parents and teachers. Bilingual Chinese/English format (with chu yin fu hao for pronunciation).
The Very Hungry Caterpillar/English/Chinese. Carle, Eric (1994). Mantra Lingua. ISBN: 9781852691264.
A caterpillar eats his way through different foods until he is full and weaves a cocoon transforming into a beautiful butterfly. Charming colorful illustrations of foods along with the fat caterpillar and catchy little holes in the foods where the caterpillar “had his snack” make this book a hit with young children.
Tofu Quilt. Russell, Ching Yeung (2009). New York: Lee & Low Books. ISBN: 978160060423.
This collection of free-verse poems is based on Russell’s childhood and her journey to becoming an author. Yeung Ying leaves Hong Kong to spend the summer with her Uncle Five and his children in mainland China. When she recites classical Chinese poems for him, he rewards her with a special treat—a bowl of custard known as dan lai. She loves this treat so much that she vows to be a good student and become a writer.
Voices of the Heart. Young, Ed (1997). New York: Scholastic Press . ISBN:0590501992.
The splendid scarlet-and-gold jacket will entice readers into this sumptuous picture book, but once in, they might well find themselves confused. At the beginning, Young lists 26 emotions with their modern Chinese characters. He then devotes a page to each emotion, breaking each character into its parts and creating a collage out of the parts and the figure of a heart to express the feeling of the emotion. For example, “Contentment” is defined as “a peaceful heart.” The parts of the character are symbols for a claw, work, and a hand; put together they mean “After a day of hard work, the heart feels peace of mind. It is content.”
What the Rat Told Me. Louis, Catherine (2008). New York: NorthSouth Books. ISBN: 9780735822207.
When the Great Emperor of Heaven invites the animals to visit him at sunrise, the rat promises to wake the cat at dawn. Instead, the rascal lets the cat sleep, rides atop the ox, and leaps off to be the first in line for the viewing, followed by the ox, the tiger, and nine other animals. The Emperor greets and assigns each creature a year in the 12-year cycle of the Chinese zodiac. When the cat discovers the rat’s ruse, their friendship dissolves, hence cats chase rats to this day.
Where the Mountain Meets the Moon. Lin, Grace. (2009 ). New York: Little, Brown and Co. ISBN: 9780316114271.
Living in the shadow of the Fruitless Mountain, Minli and her parents spend their days working in the rice fields, barely growing enough to feed themselves. Every night, Minli’s father tells her stories about the Jade Dragon that keeps the mountain bare, the greedy and mean Magistrate Tiger, and the Old Man of the Moon who holds everyone’s destiny. Determined to change her family’s fortune, Minli sets out to find the Old Man of the Moon, urged on by a talking goldfish who gives her clues to complete her journey.
Yeh-Shen: A Cinderella Story from China. Louie, Ai-Ling. New York: Philomel Books. ISBN: 039920900X.
This version of the Cinderella story, in which a young girl overcomes the wickedness of her stepsister and stepmother to become the bride of a prince, is based on ancient Chinese manuscripts written 1000 years before the earliest European version.
Ke Huang is from China and compiled this resource list as a doctoral student at the University of Arizona focusing on the portrayal of Chinese and Chinese-Americans in children’s literature. | <urn:uuid:b02734c8-301b-4e04-bddc-cbe38e3c15d9> | CC-MAIN-2019-47 | http://wowlit.org/links/booklists/chinese-language-and-culture-kit-book-list/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670987.78/warc/CC-MAIN-20191121204227-20191121232227-00501.warc.gz | en | 0.936611 | 5,025 | 2.78125 | 3 |
In the broadest definition, a sensor is a device, module, or subsystem whose purpose is to detect events or changes in its environment and send the information to other electronics a computer processor. A sensor is always used with other electronics. Sensors are used in everyday objects such as touch-sensitive elevator buttons and lamps which dim or brighten by touching the base, besides innumerable applications of which most people are never aware. With advances in micromachinery and easy-to-use microcontroller platforms, the uses of sensors have expanded beyond the traditional fields of temperature, pressure or flow measurement, for example into MARG sensors. Moreover, analog sensors such as potentiometers and force-sensing resistors are still used. Applications include manufacturing and machinery and aerospace, medicine and many other aspects of our day-to-day life. A sensor's sensitivity indicates how much the sensor's output changes when the input quantity being measured changes. For instance, if the mercury in a thermometer moves 1 cm when the temperature changes by 1 °C, the sensitivity is 1 cm/°C.
Some sensors can affect what they measure. Sensors are designed to have a small effect on what is measured. Technological progress allows more and more sensors to be manufactured on a microscopic scale as microsensors using MEMS technology. In most cases, a microsensor reaches a higher speed and sensitivity compared with macroscopic approaches. A good sensor obeys the following rules:: it is sensitive to the measured property it is insensitive to any other property to be encountered in its application, it does not influence the measured property. Most sensors have a linear transfer function; the sensitivity is defined as the ratio between the output signal and measured property. For example, if a sensor measures temperature and has a voltage output, the sensitivity is a constant with the units; the sensitivity is the slope of the transfer function. Converting the sensor's electrical output to the measured units requires dividing the electrical output by the slope. In addition, an offset is added or subtracted.
For example, -40 must be added to the output. For an analog sensor signal to be processed, or used in digital equipment, it needs to be converted to a digital signal, using an analog-to-digital converter. Since sensors cannot replicate an ideal transfer function, several types of deviations can occur which limit sensor accuracy: Since the range of the output signal is always limited, the output signal will reach a minimum or maximum when the measured property exceeds the limits; the full scale range defines the minimum values of the measured property. The sensitivity may in practice differ from the value specified; this is called a sensitivity error. This is an error in the slope of a linear transfer function. If the output signal differs from the correct value by a constant, the sensor has an offset error or bias; this is an error in the y-intercept of a linear transfer function. Nonlinearity is deviation of a sensor's transfer function from a straight line transfer function; this is defined by the amount the output differs from ideal behavior over the full range of the sensor noted as a percentage of the full range.
Deviation caused by rapid changes of the measured property over time is a dynamic error. This behavior is described with a bode plot showing sensitivity error and phase shift as a function of the frequency of a periodic input signal. If the output signal changes independent of the measured property, this is defined as drift. Long term drift over months or years is caused by physical changes in the sensor. Noise is a random deviation of the signal. A hysteresis error causes the output value to vary depending on the previous input values. If a sensor's output is different depending on whether a specific input value was reached by increasing vs. decreasing the input the sensor has a hysteresis error. If the sensor has a digital output, the output is an approximation of the measured property; this error is called quantization error. If the signal is monitored digitally, the sampling frequency can cause a dynamic error, or if the input variable or added noise changes periodically at a frequency near a multiple of the sampling rate, aliasing errors may occur.
The sensor may to some extent be sensitive to properties other than the property being measured. For example, most sensors are influenced by the temperature of their environment. A hysteresis error causes the output value to vary depending on the previous input values. If a sensor's output is different depending on whether a specific input value was reached by increasing vs. decreasing the input the sensor has a hysteresis error. All these deviations can be classified as random errors. Systematic errors can sometimes be compensated for by means of some kind of calibration strategy. Noise is a random error that can be reduced by signal processing, such as filtering at the expense of the dynamic behavior of the sensor; the resolution of a sensor is the smallest change it can detect in the quantity that it is measuring. The resolution of a sensor with a digital output is the resolution of the digital output; the resolution is related to the precision with which the mea
Microsoft Robotics Developer Studio
Microsoft Robotics Developer Studio is a Windows-based environment for robot control and simulation. It is aimed at academic and commercial developers and handles a wide variety of robot hardware, it requires the Microsoft Windows 7 operating system. RDS is based on CCR: a. NET-based concurrent library implementation for managing asynchronous parallel tasks; this technique involves using message-passing and a lightweight services-oriented runtime, DSS, which allows the orchestration of multiple services to achieve complex behaviors. Features include: a visual programming tool, Microsoft Visual Programming Language for creating and debugging robot applications, web-based and windows-based interfaces, 3D simulation, easy access to a robot's sensors and actuators; the primary programming language is C#. Microsoft Robotics Developer Studio includes support for packages to add other services to the suite; those available include Soccer Simulation and Sumo Competition by Microsoft, a community-developed Maze Simulator, a program to create worlds with walls that can be explored by a virtual robot, a set of services for OpenCV.
Most of the additional packages are hosted on CodePlex. Course materials are available. There are four main components in RDS: CCR DSS VPL VSE CCR and DSS are available separately for use in commercial applications that require a high level of concurrency and/or must be distributed across multiple nodes in a network; this package is called the DSS Toolkit. The tools that allow to develop an MRDS application contain a graphical environment command line tools allow you to deal with Visual Studio projects in C#, 3D simulation tools. Visual Programming Language is a graphical development environment that uses a service and activity catalog, they can interact graphically, a service or an activity is represented by a block that has inputs and outputs that just need to be dragged from the catalog to the diagram. Linking can be done with the mouse, it allows you to define if signals are simultaneous or not, permits you to perform operations on transmitted values... VPL allows you to generate the code of new "macro" services from diagrams created by users.
It is possible in VPL to customize services for different hardware elements. RDS 3D simulation environment allows you to simulate the behavior of robots in a virtual world using NVIDIA PhysX technology that includes advanced physics. There are several simulation environments in RDS; these environments were developed by SimplySim Apartment Factory Modern House Outdoor Urban Many examples and tutorials are available for the different tools, which permits a fast understanding of MRDS. Several applications have been added to the suite, such as Maze Simulator, or Soccer Simulation, developed by Microsoft; the Kinect sensor can be used on a robot in the RDS environment. RDS includes a simulated Kinect sensor; the Kinect Services for RDS are licensed for both non-commercial use. They depend on the Kinect for Windows SDK. Princeton University's DARPA Urban Grand Challenge autonomous car entry was programmed with MRDS. MySpace uses MRDS's parallel computing foundation libraries, CCR and DSS, for a non-robotic application in the back end of their site.
Indiana University uses MRDS in a non-robotic application to coordinate a high-performance computing network. In 2008 Microsoft launched a simulated robotics competition named RoboChamps using MRDS, four challenges were available: maze, sumo and Mars rover; the simulated environment and robots used by the competition were created by SimplySim and the competition was sponsored by KIA Motors The 2009 robotics and algorithm section of the Imagine Cup software competition uses MRDS visual simulation environment. The challenges of this competition were developed by SimplySim and are improved versions of the RoboChamps challenges; the complication and overhead required to run MRDS prompted Princeton Autonomous Vehicle Engineering to convert their Prospect 12 system from MRDS to IPC++. The main RDS4 website hasn't been updated since 6/29/2012. Robotics Studio 1.0 -- Release Date: December 18, 2006 Robotics Studio 1.5 -- Release Date: May 2007 Robotics Studio 1.5 "Refresh" -- Release Date: December 13, 2007 Robotics Developer Studio 2008 Standard Edition, Academic Edition and Express Edition -- Release Date: November 18, 2008 Robotics Developer Studio 2008 R2 Standard Edition, Academic Edition and Express Edition -- Release Date: June 17, 2009 Robotics Developer Studio 2008 R3—Release Date: May 20, 2010.
With R3, Robotics Developer Studio 2008 is now free and the functionality of all editions and CCR & DSS Toolkit has been combined into the single free edition. R3 is no longer compatible with. NET Compact Framework development and it no longer supports Windows CE. Robotics Developer Studio 4 -- Release Date: March 8, 2012; this release adds full support for the Kinect sensor via the Kinect for Windows SDK V1. A Reference Platform Design is included in the documentation, with the first implementation being the Eddie robot from Parallax, it updates RDS to. NET 4.0 and XNA 4.0. ABB Group Robotics - ABB Connect for Microsoft R
Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily motion or state but originate from the face or hand. Current focuses in the field include emotion recognition from hand gesture recognition. Users can use simple gestures to interact with devices without physically touching them. Many approaches have been made using cameras and computer vision algorithms to interpret sign language. However, the identification and recognition of posture, gait and human behaviors is the subject of gesture recognition techniques. Gesture recognition can be seen as a way for computers to begin to understand human body language, thus building a richer bridge between machines and humans than primitive text user interfaces or GUIs, which still limit the majority of input to keyboard and mouse and interact without any mechanical devices. Using the concept of gesture recognition, it is possible to point a finger at this point will move accordingly.
This could make conventional input on devices such and redundant. Gesture recognition features: More accurate High stability Time saving to unlock a deviceThe major application areas of gesture recognition in the current scenario are: Automotive sector Consumer electronics sector Transit sector Gaming sector To unlock smartphones Defence Home automation Automated sign language translationGesture recognition technology has been considered to be the successful technology as it saves time to unlock any device. Gesture recognition can be conducted with techniques from image processing; the literature includes ongoing work in the computer vision field on capturing gestures or more general human pose and movements by cameras connected to a computer. Gesture recognition and pen computing: Pen computing reduces the hardware impact of a system and increases the range of physical world objects usable for control beyond traditional digital objects like keyboards and mice; such implementations could enable a new range of hardware.
This idea may lead to the creation of holographic display. The term gesture recognition has been used to refer more narrowly to non-text-input handwriting symbols, such as inking on a graphics tablet, multi-touch gestures, mouse gesture recognition; this is computer interaction through the drawing of symbols with a pointing device cursor. In computer interfaces, two types of gestures are distinguished: We consider online gestures, which can be regarded as direct manipulations like scaling and rotating. In contrast, offline gestures are processed after the interaction is finished. Offline gestures: Those gestures that are processed after the user interaction with the object. An example is the gesture to activate a menu. Online gestures: Direct manipulation gestures, they are used to rotate a tangible object. Touchless user interface is an emerging type of technology in relation to gesture control. Touchless user interface is the process of commanding the computer via body motion and gestures without touching a keyboard, mouse, or screen.
For example, Microsoft's Kinect is a touchless game interface. Touchless interface in addition to gesture controls are becoming popular as they provide the abilities to interact with devices without physically touching them. There are a number of devices utilizing this type of interface such as, laptops and television. Although touchless technology is seen in gaming software, interest is now spreading to other fields including and healthcare industries. Soon to come, touchless technology and gesture control will be implemented in cars in levels beyond voice recognition. See BMW Series 7. There are a vast number of companies all over the world who are producing gesture recognition technology, such as: White Paper: Explore Intel's user experience research, which shows how touchless multifactor authentication can help healthcare organizations mitigate security risks while improving clinician efficiency and patient care; this touchless MFA solution combines facial recognition and device recognition capabilities for two-factor user authentication.
The aim of the project is to explore the use of touchless interaction within surgical settings, allowing images to be viewed and manipulated without contact through the use of camera-based gesture recognition technology. In particular, the project seeks to understand the challenges of these environments for the design and deployment of such systems, as well as articulate the ways in which these technologies may alter surgical practice. While our primary concerns here are with maintaining conditions of asepsis, the use of these touchless gesture-based technologies offers other potential uses. Elliptic Labs software suite delivers gesture and proximity functions by re-using the existing earpiece and microphone used only for audio. Ultrasound signals sent through the air from speakers integrated in smartphones and tablets bounce against a hand/object/head and are recorded by microphones integrated in these devices. In this way, Elliptic Labs' technology recognizes your hand gestures and uses them to move objects on a screen to the way bats use echolocation to navigate.
While these companies stand at the forefront of touchless technology for the future in this time, there are many other companies and products that are trending as well and may add value
Arduino is an open-source hardware and software company and user community that designs and manufactures single-board microcontrollers and microcontroller kits for building digital devices and interactive objects that can sense and control both physically and digitally. Its products are licensed under the GNU Lesser General Public License or the GNU General Public License, permitting the manufacture of Arduino boards and software distribution by anyone. Arduino boards are available commercially as do-it-yourself kits. Arduino board designs use a variety of controllers; the boards are equipped with sets of digital and analog input/output pins that may be interfaced to various expansion boards or breadboards and other circuits. The boards feature serial communications interfaces, including Universal Serial Bus on some models, which are used for loading programs from personal computers; the microcontrollers are programmed using a dialect of features from the programming languages C and C++. In addition to using traditional compiler toolchains, the Arduino project provides an integrated development environment based on the Processing language project.
The Arduino project started in 2003 as a program for students at the Interaction Design Institute Ivrea in Ivrea, aiming to provide a low-cost and easy way for novices and professionals to create devices that interact with their environment using sensors and actuators. Common examples of such devices intended for beginner hobbyists include simple robots and motion detectors; the name Arduino comes from a bar in Ivrea, where some of the founders of the project used to meet. The bar was named after Arduin of Ivrea, the margrave of the March of Ivrea and King of Italy from 1002 to 1014; the Arduino project was started at the Interaction Design Institute Ivrea in Italy. At that time, the students used a BASIC Stamp microcontroller at a cost of $50, a considerable expense for many students. In 2003 Hernando Barragán created the development platform Wiring as a Master's thesis project at IDII, under the supervision of Massimo Banzi and Casey Reas. Casey Reas is known with Ben Fry, the Processing development platform.
The project goal was to create simple, low cost tools for creating digital projects by non-engineers. The Wiring platform consisted of a printed circuit board with an ATmega168 microcontroller, an IDE based on Processing and library functions to program the microcontroller. In 2003, Massimo Banzi, with David Mellis, another IDII student, David Cuartielles, added support for the cheaper ATmega8 microcontroller to Wiring, but instead of continuing the work on Wiring, they renamed it Arduino. The initial Arduino core team consisted of Massimo Banzi, David Cuartielles, Tom Igoe, Gianluca Martino, David Mellis, but Barragán was not invited to participate. Following the completion of the Wiring platform and less expensive versions were distributed in the open-source community, it was estimated in mid-2011 that over 300,000 official Arduinos had been commercially produced, in 2013 that 700,000 official boards were in users' hands. In October 2016, Federico Musto, Arduino's former CEO, secured a 50% ownership of the company.
In April 2017, Wired reported that Musto had "fabricated his academic record.... On his company's website, personal LinkedIn accounts, on Italian business documents, Musto was until listed as holding a PhD from the Massachusetts Institute of Technology. In some cases, his biography claimed an MBA from New York University." Wired reported that neither University had any record of Musto's attendance, Musto admitted in an interview with Wired that he had never earned those degrees. Around that same time, Massimo Banzi announced that the Arduino Foundation would be "a new beginning for Arduino." But a year the Foundation still hasn't been established, the state of the project remains unclear. The controversy surrounding Musto continued when, in July 2017, he pulled many Open source licenses and code from the Arduino website, prompting scrutiny and outcry. In October 2017, Arduino announced its partnership with ARM Holdings; the announcement said, in part, "ARM recognized independence as a core value of Arduino... without any lock-in with the ARM architecture.”
Arduino intends to continue to work with all technology architectures. In early 2008, the five co-founders of the Arduino project created a company, Arduino LLC, to hold the trademarks associated with Arduino; the manufacture and sale of the boards was to be done by external companies, Arduino LLC would get a royalty from them. The founding bylaws of Arduino LLC specified that each of the five founders transfer ownership of the Arduino brand to the newly formed company. At the end of 2008, Gianluca Martino's company, Smart Projects, registered the Arduino trademark in Italy and kept this a secret from the other cofounders for about two years; this was revealed when the Arduino company tried to register the trademark in other areas of the world, discovered that it was registered in Italy. Negotiations with Gianluca and his firm to bring the trademark under control of the original Arduino company failed. In 2014, Smart Projects began refusing to pay royalties, they appointed a new CEO, Federico Musto, who renamed the company Arduino SRL and created the website arduino.org, copying the graphics and layout of the original arduino.cc.
This resulted in a rift in the Arduino development team. In January 2015, Arduino LLC filed a lawsuit against Arduino SRL. In May 2015, Arduino LLC created the worldwide tr
Rafael Lozano-Hemmer is a Mexican-Canadian electronic artist who works with ideas from architecture, technological theater and performance. He holds a Bachelor of Science in physical chemistry from Concordia University in Montreal. Lozano-Hemmer lives and works in Montreal and Madrid. Rafael Lozano-Hemmer was born in Mexico City in 1967, he emigrated to Canada in 1985 to study at the University of Victoria in British Columbia and at Concordia University in Montreal. The son of Mexico City nightclub owners, Lozano-Hemmer was drawn to science but could not resist joining the creative activities that his friends did, he worked in a molecular recognition lab in Montreal and published his research in Chemistry journals. Though he did not pursue the sciences as a direct career, it has influenced his work in many ways, providing conceptual inspiration and practical approaches to create his work. Lozano-Hemmer's work can be considered a blend of interactive art and performance art, using both large and small scales and outdoor settings, a wide variety of audiovisual technologies.
Lozano-Hemmer is best known for creating and presenting theatrical interactive installations in public spaces across Europe and America. Using robotics, real-time computer graphics, film projections, positional sound, internet links, cell phone interfaces and ultrasonic sensors, LED screens and other devices, his installations seek to interrupt the homogenized urban condition by providing critical platforms for participation. Lozano-Hemmer's smaller-scaled sculptural and video installations explore themes of perception and surveillance; as an outgrowth of these various large scale and performance-based projects Lozano-Hemmer documents the works in photography editions that are exhibited. In 1999, he created Alzado Vectorial, where internet participants directed searchlights over the central square in Mexico City; the work was repeated in Vitoria-Gasteiz in 2002, in Lyon in 2003, in Dublin in 2004 and in Vancouver in 2010. In 2007, he became the first artist to represent Mexico at the Venice Biennale, with a solo show at the Palazzo Soranzo Van Axel.
In 2006, his work 33 Questions Per Minute was acquired by The Museum of Modern Art in New York. Subtitled Public is held in the Tate Collection in the United Kingdom. Several of Lozano-Hemmer's installations include the use of words and sentences to add additional meaning; these texts are used to elaborate upon a deeper meaning that involves a viewer's actions, to change or create an effect upon the atmosphere and perception. Some of the text based installations, such as Third Person and Subtitled Public, place words upon the viewer himself; because of the random nature of these texts, the viewer has no control over what they are labeled as, incurring a sense of helplessness, experience the pleasant and unpleasant connotations that are associated with the words placed upon themselves. The text based installations such as 33 Questions Per Minute and There is No Business Like No Business are reliant upon the willing participation of the viewer; these two forms of text installations are externally reflective, while the first two are internally reflective.
33 Questions Per Minute is an installation consisting of several screens programmed to generate possible questions and display them at a rate of 33 per minute. The computer generating the questions can generate 55 billion unique questions, taking over 3,000 years to display them all. In addition to viewing the automatically displaying the questions, members of the public can submit their own questions into the system, their participation shows up on the screens and is registered by the program. Third Person is the second piece of the ShadowBox series of interactive displays with a built-in computerized tracking system; this piece shows the viewer's shadow, composed hundreds of tiny words that are in fact all the verbs of the dictionary conjugated in the third person. The portrait of the viewer is drawn in real time by active words, which appear automatically to fill his or her silhouette. There Is No Business Like No Business is a blinking neon sign, whose speed is directly proportional to the number of times that the word "economy" has appeared in online news items within the past 24 hours.
Subtitled Public consists of an empty exhibition space where visitors are detected by a computerized surveillance system. When people enter the space, the system generates a subtitle for each person and projects it onto him or her: the subtitle is chosen at random from a list of all verbs conjugated in the third person; the only way of getting rid of a subtitle is to touch another person, which leads to the two subtitles being exchanged. In 1994, Lozano-Hemmer coined the term "relational architecture" as the technological actualization of buildings and the urban environment with alien memory, he aimed to transform the dominant narratives of a specific building or urban setting by superimposing audiovisual elements to affect it, effect it and re-contextualize it. From 1997 to 2006, he built ten works of relational architecture beginning with Displaced Emperors and ending with Under Scan. Lozano-Hemmer says, "I want buildings to pretend to be something other than themselves, to engage in a kind of dissimulation"Solar Equation was a large-scale public art installation that consists of a faithful simulation of the Sun, scaled 100 million times smaller than the real thing.
Commissioned by the Light in Winter Festival in Melbourne, the piece featured the world's largest spherical balloon, custom-manufactured for the project, tethered over Federation Square and animated using five projectors. The solar animation on the balloon was generated by live mathematical
Dance Dance Revolution
Dance Dance Revolution known as Dancing Stage in earlier games in Europe, Central Asia, Middle East, South Asia and Oceania, some other games in Japan, is a music video game series produced by Konami. Introduced in Japan in 1998 as part of the Bemani series, released in North America and Europe in 1999, Dance Dance Revolution is the pioneering series of the rhythm and dance genre in video games. Players stand on a "dance platform" or stage and hit colored arrows laid out in a cross with their feet to musical and visual cues. Players are judged by how well they time their dance to the patterns presented to them and are allowed to choose more music to play to if they receive a passing score. Dance Dance Revolution has been met with critical acclaim for its originality and stamina in the video game market. There have been dozens of arcade-based releases across several countries and hundreds of home video game console releases, promoting a music library of original songs produced by Konami's in-house artists and an eclectic set of licensed music from many different genres.
The DDR series has inspired similar games such as Pump it Up by Andamiro and In the Groove by Roxor Games. The series' current version is Dance Dance Revolution A20, released in 2019; the core gameplay involves the player stepping their feet to correspond with the arrows that appears on screen and the beat. During normal gameplay, arrows scroll upwards from the bottom of the screen and pass over a set of stationary arrows near the top; when the scrolling arrows overlap the stationary ones, the player must step on the corresponding arrows on the dance platform, the player is given a judgement for their accuracy of every streaked notes. Additional arrow types are added in mixes. Freeze Arrows, introduced in DDRMAX, are long green arrows that must be held down until they travel through the Step Zone; each of these arrows awards an "O. K.!" if pressed or an "N. G." when the arrow is released too quickly. An "N. G." decreases the life bar and, starting with DDR X breaks any existing combo. DDR X introduced Shock Arrows, walls of arrows with lightning effects which must be avoided, awarding an "O.
K.!" if avoided or an "N. G." if any of the dancer's panels are stepped on. An "N. G." for shock arrows has the same consequences found with freeze arrows, but hitting a shock arrow additionally hides future steps for a short period of time. Hitting the arrows in time with the music fills the "Dance Gauge", or life bar, while failure to do so drains it. If the Dance Gauge is exhausted during gameplay, the player will fail the song, the game will be over. Otherwise, the player is taken to the Results Screen, which rates the player's performance with a letter grade and a numerical score, among other statistics; the player may be given a chance to play again, depending on the settings of the particular machine. The default limit is of three songs, though operators can set the limit between five. Aside from play style Single, Dance Dance Revolution provides two other play styles: Versus, where two players can play Single and Double, where one player uses all eight panels. Prior to the 2013 release of Dance Dance Revolution, some games offer additional modes, such as Course mode and Battle mode.
Earlier versions have Couple/Unison Mode, where two players must cooperate to play the song. This mode become the basis for "TAG Play" in newer games. Depending on the edition of the game, dance steps are broken into various levels of difficulty by colour. Difficulty is loosely separated into 3–5 categories depending on timeline: DDR 1st Mix established the three main difficulties and it began using the foot rating with a scale of 1 to 8. In addition, each difficulty rating would be labeled with a title. DDR 2nd Mix Club Version 2 increased the scale to 9, which would be implemented in the main series beginning in DDR 3rd Mix. DDR 3rd Mix renamed the Maniac difficulty to "SSR" and made it playable through a special mode, which can only be accessed via input code and is played on Flat by default; the SSR mode was eliminated in 3rdMix Plus, the Maniac routines were folded back into the regular game. In addition to the standard three difficulties, the first three titles of the series and their derivations featured a "Easy" mode, which provided simplified step charts for songs.
In this mode, one cannot access other difficulties, akin to the aforementioned SSR mode. While this mode is never featured again, it would become the basis for the accessible Beginner difficulty implemented in newer games. DDR 4th Mix removed the names of the song and made it simple by removing those names and organizing the difficulty by order. DDR 4th Mix Plus renamed several song's Maniac charts as Maniac-S and Maniac-D, while adding newer and harder stepcharts for the old ones as the "second" Maniac; these new charts were used as the default Maniac stepchart in DDR 5th Mix while the older ones were removed. Beginning in DDRMAX, a "Groove Radar" was introduced, showing how difficult a particular sequence is in various categories, such as the maximum density of steps, so on; the step difficulty was removed in favor of the Groove Radar. DDRMAX2 re-added the foot ratings and resto
Scott Snibbe is an interactive media artist and entrepreneur. He is one of the first artists to work with projector-based interactivity, where a computer-controlled projection onto a wall or floor changes in response to people moving across its surface, with his well-known full-body interactive work Boundary Functions, premiering at Ars Electronica 1998. In this floor-projected interactive artwork, people walk across a four-meter by four-meter floor; as they move, Boundary Functions uses a camera and projector to draw lines between all of the people on the floor, forming a Voronoi Diagram. This diagram has strong significance when drawn around people's bodies, surrounding each person with lines that outline his or her personal space - the space closer to that person than to anyone else. Snibbe states that this work "shows that personal space, though we call it our own, is only defined by others and changes without our control". Snibbe has become more broadly known for creating some of the first interactive art apps for iOS devices.
His first three apps—Gravilux, Bubble Harp, Antograph—released in May, 2010 as ports of screen-based artwork from the 1990s Dynamic Systems Series, all rose into the top ten in the iTunes Store's Entertainment section, have been downloaded over 400,000 times. Snibbe collaborated with Björk to produce Biophilia, the first full-length app album, released for iPad and iPhone in 2011. Snibbe received undergraduate and master's degrees in computer science and fine art from Brown University, where he studied with Dr. Andries van Dam and Dr. John Hughes. Snibbe studied animation at the Rhode Island School of Design with Amy Kravitz. After making several hand-drawn animated shorts, he turned to interactive art as his primary artistic medium, his first public interactive work, Motion Phone won an award from Prix Ars Electronica in 1996 and established him as a contributor to the field. Snibbe's work has been shown at the Whitney Museum of American Art, San Francisco Museum of Modern Art, The Kitchen, the NTT InterCommunication Center and the Institute of Contemporary Arts.
His work is shown and collected by science museums, including the Exploratorium, the New York Hall of Science, the Museum of Science and Industry, the Cité des Sciences et de l'Industrie, the London Science Museum, the Phaeno Science Center. He was featured on a December 18, 2011 episode of CNN's The Next List, he has received grants from the Rockefeller Foundation the National Endowment for the Arts, National Video Resources and awards from the Prix Ars Electronica Festival, the de:Trickfilmfestival Stuttgart, the Black Mariah Film Festival, the Student Academy Awards. Snibbe has taught media art and computer science at UC Berkeley, California Institute of the Arts, the San Francisco Art Institute, he worked as a Computer Scientist at Adobe Systems from 1994–1996, on the special effects and animation software Adobe After Effects, named on six patents for work in animation and motion tracking. He was an employee at Interval Research from 1996-2000 where he worked on Computer Vision, Computer Graphics and Haptics research projects receiving several patents in those fields.
Snibbe is the founder of Snibbe Interactive, which distributes and develops immersive interactive experiences for use in museums and branding. In 2009, Snibbe presented Sona Research's first research paper "Social Immersive Media" at the CHI 2009 conference, coining the term Social Immersive Media to describe interface techniques to create effective immersive interactive experiences focused on social interaction, winning the best paper of conference award. In November, 2013 Snibbe and Jaz Banga debated Laura Sydell and Christopher M. Kelty in an Oxford style debate entitled, Patent Pending: Does the U. S. Patent System stifle innovation? Interactive Art for the Screen Motion Sketch, 1989 Motion Phone, 1994 Bubble Harp, 1997 Gravilux, 1997 Myrmegraph, 1998 Emptiness is Form, 2000iPhone and iPad Apps Gravilux, 2010 Bubble Harp, 2010 Antograph, 2010 Tripolar, 2011 OscilloScoop, 2011Interactive Projections Boundary Functions, 1998 Shadow, 2002 Deep Walls, 2002 Shy, 2003 Impression, 2003 Depletion, 2003 Compliant, 2003 Concentration, 2003 Cause and Effect, 2004 Visceral Cinema: Chien, 2005 Shadow Bag, 2005 Central Mosaic, 2005 Outward Mosaic, 2006 Make Like a Tree, 2006 Falling Girl, 2008Electromechanical Sculpture Mirror, 2001 Circular Breathing, 2002 Blow Up, 2005Internet Art It's Out, 2001 Tripolar, 2002 Fuel, 2002 Cabspotting, 2005Public Art Installations You Are Here, New York Hall of Science, 2004 Women Hold up Half the Sky, Mills College, 2007 Transit, Los Angeles International Airport, 2009Performance In the Grace of the World, Saint Luke's Orchestra, 2008Film Lost Momentum, 1995 Brothers, 1990 Interactive art Electronic art Computer art Software art Abstract film Paul, Christiane.
Digital Art. London: Thames & Hudson. ISBN 0-500-20367-9. Wilson, Steve Information Arts: Intersections of Art and Technology ISBN 0-262-23209-X Bullivant, Lucy. Responsive Environments: architecture and design. London:Victoria and Albert Museum. ISBN 1-85177-481-5. Fiona Whitton, Tom Leeser, Christiane Paul. Visceral Cinema: Chien. Los Ang | <urn:uuid:0ee54b3d-4f6c-4086-b9a5-5d9d0e4405b5> | CC-MAIN-2019-47 | https://wikivisually.com/wiki/Physical_computing | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668539.45/warc/CC-MAIN-20191114205415-20191114233415-00540.warc.gz | en | 0.933535 | 7,555 | 3.578125 | 4 |
Mains electricity (as it is known in the UK and some parts of Canada; US terms include grid power, wall power, and domestic power; in much of Canada it is known as hydro) is the general-purpose alternating-current (AC) electric power supply. It is the form of electrical power that is delivered to homes and businesses, and it is the form of electrical power that consumers use when they plug domestic appliances, televisions and electric lamps into wall outlets.
The two principal properties of the electric power supply, voltage and frequency, differ between regions. A voltage of (nominally) 230 V and a frequency of 50 Hz is used in Europe, most of Africa, most of Asia, much of South America and Australia. In North America, the most common combination is 120 V and a frequency of 60 Hz. Other voltages exist, and some countries may have, for example, 230 V but 60 Hz. This is a concern to travellers, since portable appliances designed for one voltage and frequency combination may not operate with, or may even be destroyed by another. The use of different and incompatible plugs and sockets in different regions and countries provides some protection from accidental use of appliances with incompatible voltage and frequency requirements.
In the US, mains electric power is referred to by several names including "household power", "household electricity", "house current", "powerline", "domestic power", "wall power", "line power", "AC power", "city power", "street power".
In the UK, mains electric power is generally referred to as "the mains". More than half of power in Canada is hydroelectricity, and mains electricity is often referred to there as "hydro". This is also reflected in names of current and historical electricity monopolies such as Hydro-Québec, BC Hydro, Manitoba Hydro, Newfoundland and Labrador Hydro, and Ontario Hydro.
For a list of voltages, frequencies, and wall plugs by country, see Mains electricity by country
Worldwide, many different mains power systems are found for the operation of household and light commercial electrical appliances and lighting. The different systems are primarily characterized by their
- Plugs and sockets (receptacles or outlets)
- Earthing system (grounding)
- Protection against overcurrent damage (e.g., due to short circuit), electric shock, and fire hazards
- Parameter tolerances.
All these parameters vary among regions. The voltages are generally in the range 100–240 V (always expressed as root-mean-square voltage). The two commonly used frequencies are 50 Hz and 60 Hz. Single-phase or three-phase power is most commonly used today, although two-phase systems were used early in the 20th century. Foreign enclaves, such as large industrial plants or overseas military bases, may have a different standard voltage or frequency from the surrounding areas. Some city areas may use standards different from that of the surrounding countryside (e.g. in Libya). Regions in an effective state of anarchy may have no central electrical authority, with electric power provided by incompatible private sources.
Many other combinations of voltage and utility frequency were formerly used, with frequencies between 25 Hz and 133 Hz and voltages from 100 V to 250 V. Direct current (DC) has been almost completely displaced by alternating current (AC) in public power systems, but DC was used especially in some city areas to the end of the 20th century. The modern combinations of 230 V/50 Hz and 120 V/60 Hz, listed in IEC 60038, did not apply in the first few decades of the 20th century and are still not universal. Industrial plants with three-phase power will have different, higher voltages installed for large equipment (and different sockets and plugs), but the common voltages listed here would still be found for lighting and portable equipment.
Common uses of electricity
Electricity is used for lighting, heating, cooling, electric motors and electronic equipment. The US Energy Information Administration (EIA) has published:
Estimated US residential electricity consumption by end use, for the year 2016
|Televisions and related equipment1||83||6%|
|Furnace fans and boiler circulation pumps||32||2%|
|Computers and related equipment2||32||2%|
- 1 Includes televisions, set-top boxes, home theatre systems, DVD players, and video game consoles
- 2 Includes desktop and laptop computers, monitors, and networking equipment.
- 3 Does not include water heating.
- 4 Includes small electric devices, heating elements, exterior lights, outdoor grills, pool and spa heaters, backup electricity generators, and motors not listed above. Does not include electric vehicle charging.
Electronic appliances (such as those in the televisions, computer and related equipment categories above, representing 9% of the total), typically use an AC to DC converter or AC adapter to power the device. This is often capable of operation over the approximate range of 100 V to 250 V and at 50 Hz to 60 Hz. The other categories are typically AC applications and usually have much more restricted input ranges. A study by the Building Research Establishment in the UK states that "The existing 230 V system is well suited to the future of electricity whether through design or Darwinian processes. Any current perceived weakness is generally a result of cost reduction and market forces rather than any fundamental technical difficulties. Questions as to whether there are alternatives to the existing 230 V AC system are often overshadowed by legacy issues, the future smart agenda and cost in all but specific situations. Where opportunities do exist they are often for specific parts of the overall load and often small parts in terms of total demand."
- The line wire (in IEC terms 'line conductor') also known as phase, hot or active contact (and commonly, but technically incorrectly, as live), carries alternating current between the power grid and the household.
- The neutral wire (IEC: neutral conductor ) completes the electrical circuit—remaining at a voltage in proximity to 0 V—by also carrying alternating current between the power grid and the household. The neutral is connected to the ground (Earth), and therefore has nearly the same electrical potential as the earth. This prevents the power circuits from increasing beyond earth voltage, such as when they are struck by lightning or become otherwise charged.
- The earth wire, ground or, in IEC terms, Protective Earth (PE) connects the chassis of equipment to earth ground as a protection against faults (electric shock), such as if the insulation on a "hot" wire becomes damaged and the bare wire comes into contact with the metal chassis or case of the equipment.
- Mixed 230 V / 400 V three-phase (common in northern and central Europe) or 230 V single-phase based household wiring
In northern and central Europe, residential electrical supply is commonly 400 V three-phase electric power, which gives 230 V between any single phase and neutral; house wiring may be a mix of three-phase and single-phase circuits, but three-phase residential use is rare in the UK. High-power appliances such as kitchen stoves, water heaters and maybe household power heavy tools like log splitters may be supplied from the 400 V three-phase power supply.
Various earthing systems are used to ensure that the ground and neutral wires have zero voltage with respect to earth, to prevent shocks when touching grounded electrical equipment. In some installations, there may be two line conductors which carry alternating currents in a single-phase three-wire. Small portable electrical equipment is connected to the power supply through flexible cables (these exist with either two or three insulated conductors) terminated in a plug, which is inserted into a fixed receptacle (socket). Larger household electrical equipment and industrial equipment may be permanently wired to the fixed wiring of the building. For example, in North American homes a window-mounted self-contained air conditioner unit would be connected to a wall plug, whereas the central air conditioning for a whole home would be permanently wired. Larger plug and socket combinations are used for industrial equipment carrying larger currents, higher voltages, or three phase electric power. These are often constructed with tougher plastics and possess inherent weather-resistant properties needed in some applications.
Circuit breakers and fuses are used to detect short circuits between the line and neutral or ground wires or the drawing of more current than the wires are rated to handle (overload protection) to prevent overheating and possible fire. These protective devices are usually mounted in a central panel—most commonly a distribution board or consumer unit—in a building, but some wiring systems also provide a protection device at the socket or within the plug. Residual-current devices, also known as ground-fault circuit interrupters and appliance leakage current interrupters, are used to detect ground faults - flow of current in other than the neutral and line wires (like the ground wire or a person). When a ground fault is detected, the device quickly cuts off the circuit.
Most of the world population (Europe, Africa, Asia, Australia, New Zealand) and much of South America use a supply that is within 6% of 230 V. In the UK and Australia the nominal supply voltage is 230 V +10%/−6% to accommodate the fact that most transformers are in fact still set to 240 V. The 230 V standard has become widespread so that 230 V equipment can be used in most parts of the world with the aid of an adapter or a change to the equipment's plug to the standard for the specific country. The United States and Canada use a supply voltage of 120 volts ± 6%. Japan, Taiwan, Saudi Arabia, North America, Central America and some parts of northern South America use a voltage between 100 V and 127 V. Brazil is unusual in having both 110 V and 220 V systems at 60 Hz and also permitting interchangeable plugs and sockets. Saudi Arabia has mixed voltage systems, in residential and light commercial buildings the Kingdom uses 127 volts, with 220 volts in commercial and industrial applications, the government approved plans in August 2010 to transition the country to a totally 230/400 volts system.
A distinction should be made between the voltage at the point of supply (nominal voltage at the point of interconnection between the electrical utility and the user) and the voltage rating of the equipment (utilization voltage). Typically the utilization voltage is 3% to 5% lower than the nominal system voltage; for example, a nominal 208 V supply system will be connected to motors with "200 V" on their nameplates. This allows for the voltage drop between equipment and supply. Voltages in this article are the nominal supply voltages and equipment used on these systems will carry slightly lower nameplate voltages. Power distribution system voltage is nearly sinusoidal in nature. Voltages are expressed as root mean square (RMS) voltage. Voltage tolerances are for steady-state operation. Momentary heavy loads, or switching operations in the power distribution network, may cause short-term deviations out of the tolerance band and storms and other unusual conditions may cause even larger transient variations. In general, power supplies derived from large networks with many sources are more stable than those supplied to an isolated community with perhaps only a single generator.
Choice of voltage
The choice of supply voltage is due more to historical reasons than optimization of the electric power distribution system—once a voltage is in use and equipment using this voltage is widespread, changing voltage is a drastic and expensive measure. A 230 V distribution system will use less conductor material than a 120 V system to deliver a given amount of power because the current, and consequently the resistive loss, is lower. While large heating appliances can use smaller conductors at 230 V for the same output rating, few household appliances use anything like the full capacity of the outlet to which they are connected. Minimum wire size for hand-held or portable equipment is usually restricted by the mechanical strength of the conductors. Electrical appliances are used extensively in homes in both 230 V and 120 V system countries. National electrical codes prescribe wiring methods intended to minimize the risk of electric shock and fire.
Many areas, such as the US, which use (nominally) 120 V, make use of three-wire, split-phase 240 V systems to supply large appliances. In this system a 240 V supply has a centre-tapped neutral to give two 120 V supplies which can also supply 240 V to loads connected between the two line wires. Three-phase systems can be connected to give various combinations of voltage, suitable for use by different classes of equipment. Where both single-phase and three-phase loads are served by an electrical system, the system may be labelled with both voltages such as 120/208 or 230/400 V, to show the line-to-neutral voltage and the line-to-line voltage. Large loads are connected for the higher voltage. Other three-phase voltages, up to 830 volts, are occasionally used for special-purpose systems such as oil well pumps. Large industrial motors (say, more than 250 hp or 150 kW) may operate on medium voltage. On 60 Hz systems a standard for medium voltage equipment is 2400/4160 V (2300/4000 V in the US) whereas 3300 V is the common standard for 50 Hz systems.
Following voltage harmonisation, electricity supplies within the European Union are now nominally 230 V ±10% at 50 Hz. For a transition period (1995–2008), countries that had previously used 220 V changed to a narrower asymmetric tolerance range of 230 V +6%/−10% and those (like the UK) that had previously used 240 V changed to 230 V +10%/−6%. No change in voltage is required by either system as both 220 V and 240 V fall within the lower 230 V tolerance bands (230 V ±6%). Some areas of the UK still have 250 volts for legacy reasons, but these also fall within the 10% tolerance band of 230 volts. In practice, this allows countries to continue to supply the same voltage (220 or 240 V), at least until existing supply transformers are replaced. Equipment (with the exception of filament bulbs) used in these countries is designed to accept any voltage within the specified range. In the United States and Canada, national standards specify that the nominal voltage at the source should be 120 V and allow a range of 114 V to 126 V (RMS) (−5% to +5%). Historically 110 V, 115 V and 117 V have been used at different times and places in North America. Mains power is sometimes spoken of as 110 V; however, 120 V is the nominal voltage.
In 2000, Australia converted to 230 V as the nominal standard with a tolerance of +10%/−6%, this superseding the old 240 V standard, AS2926-1987. As in the UK, 240 V is within the allowable limits and "240 volt" is a synonym for mains in Australian and British English. In Japan, the electrical power supply to households is at 100 V. Eastern and northern parts of Honshū (including Tokyo) and Hokkaidō have a frequency of 50 Hz, whereas western Honshū (including Nagoya, Osaka, and Hiroshima), Shikoku, Kyūshū and Okinawa operate at 60 Hz. The boundary between the two regions contains four back-to-back high-voltage direct-current (HVDC) substations which interconnect the power between the two grid systems; these are Shin Shinano, Sakuma Dam, Minami-Fukumitsu, and the Higashi-Shimizu Frequency Converter. To accommodate the difference, frequency-sensitive appliances marketed in Japan can often be switched between the two frequencies.
The world's first public electricity supply was a water wheel driven system constructed in the small English town of Godalming in 1881. It was an alternating current (AC) system using a Siemens alternator supplying power for both street lights and consumers at two voltages, 250 V for arc lamps, and 40 V for incandescent lamps.
The world's first large scale central plant—Thomas Edison’s steam powered station at Holborn Viaduct in London—started operation in January 1882, providing direct current (DC) at 110 V. The Holborn Viaduct station was used as a proof of concept for the construction of the much larger Pearl Street Station in Manhattan, the world's first permanent commercial central power plant. The Pearl Street Station also provided DC at 110 V, considered to be a "safe" voltage for consumers, beginning September 4, 1882.
AC systems started appearing in the US in the mid-1880s, using higher distribution voltage stepped down via transformers to the same 110 V customer utilization voltage that Edison used. In 1883 Edison patented a three–wire distribution system to allow DC generation plants to serve a wider radius of customers to save on copper costs. By connecting two groups of 110 V lamps in series more load could be served by the same size conductors run with 220 V between them; a neutral conductor carried any imbalance of current between the two sub-circuits. AC circuits adopted the same form during the War of Currents, allowing lamps to be run at around 110 V and major appliances to be connected to 220 V. Nominal voltages gradually crept upward to 112 V and 115 V, or even 117 V. After World War II the standard voltage in the U.S. became 117 V, but many areas lagged behind even into the 1960s. In 1967 the nominal voltage rose to 120 V, but conversion of appliances was slow. Today, virtually all American homes and businesses have access to 120 and 240 V at 60 Hz. Both voltages are available on the three wires (two "hot" legs of opposite phase and one "neutral" leg).
In 1899, the Berliner Elektrizitäts-Werke (BEW), a Berlin electrical utility, decided to greatly increase its distribution capacity by switching to 220 V nominal distribution, taking advantage of the higher voltage capability of newly developed metal filament lamps. The company was able to offset the cost of converting the customer's equipment by the resulting saving in distribution conductors cost. This became the model for electrical distribution in Germany and the rest of Europe and the 220 V system became common. North American practice remained with voltages near 110 V for lamps.
In the first decade after the introduction of alternating current in the US (from the early 1880s to about 1893) a variety of different frequencies were used, with each electric provider setting their own, so that no single one prevailed. The most common frequency was 133⅓ Hz. The rotation speed of induction generators and motors, the efficiency of transformers, and flickering of carbon arc lamps all played a role in frequency setting. Around 1893 the Westinghouse Electric Company in the United States and AEG in Germany decided to standardize their generation equipment on 60 Hz and 50 Hz respectively, eventually leading to most of the world being supplied at one of these two frequencies. Today most 60 Hz systems deliver nominal 120/240 V, and most 50 Hz nominally 230 V. The significant exceptions are in Brazil, which has a synchronized 60 Hz grid with both 127 V and 220 V as standard voltages in different regions, and Japan, which has two frequencies: 50 Hz for East Japan and 60 Hz for West Japan.
To maintain the voltage at the customer's service within the acceptable range, electrical distribution utilities use regulating equipment at electrical substations or along the distribution line. At a substation, the step-down transformer will have an automatic on-load tap changer, allowing the ratio between transmission voltage and distribution voltage to be adjusted in steps. For long (several kilometres) rural distribution circuits, automatic voltage regulators may be mounted on poles of the distribution line. These are autotransformers, again, with on-load tapchangers to adjust the ratio depending on the observed voltage changes. At each customer's service, the step-down transformer has up to five taps to allow some range of adjustment, usually ±5% of the nominal voltage. Since these taps are not automatically controlled, they are used only to adjust the long-term average voltage at the service and not to regulate the voltage seen by the utility customer.
The stability of the voltage and frequency supplied to customers varies among countries and regions. "Power quality" is a term describing the degree of deviation from the nominal supply voltage and frequency. Short-term surges and drop-outs affect sensitive electronic equipment such as computers and flat panel displays. Longer-term power outages, brown-outs and black outs and low reliability of supply generally increase costs to customers, who may have to invest in uninterruptible power supply or stand-by generator sets to provide power when the utility supply is unavailable or unusable. Erratic power supply may be a severe economic handicap to businesses and public services which rely on electrical machinery, illumination, climate control and computers. Even the best quality power system may have breakdowns or require servicing. As such, companies, governments and other organizations sometimes have backup generators at sensitive facilities, to ensure that power will be available even in the event of a power outage or black out.
Power quality can also be affected by distortions of the current or voltage waveform in the form of harmonics of the fundamental (supply) frequency, or non-harmonic (inter)modulation distortion such as that caused by RFI or EMI interference. In contrast, harmonic distortion is usually caused by conditions of the load or generator. In multi-phase power, phase shift distortions caused by imbalanced loads can occur.
- "Access to electricity (% of population)". Data. The World Bank. Retrieved 5 October 2019.
- , How is electricity used in U.S. homes?, US Energy Information Administration, 21 April 2015, (retrieved 26 July 2015)
- , The Future of Electricity in Domestic Buildings – a review, Andrew Williams, 28 November 2015, (retrieved 26 July 2015)
- Electrical Inspection Manual, 2011 Edition], Noel Williams & Jeffrey S Sargent, Jones & Bartlett Publishers, 2012, p.249 (retrieved 3 March 2013 from Google Books)
- 17th Edition IEE Wiring Regulations: Explained and Illustrated], Brian Scaddan, Routledge, 2011, p.18 (retrieved 6 March 2013 from Google Books)
- Halliday, Chris; Urquhart, Dave. "Voltage and Equipment Standard Misalignment" (PDF). powerlogic.com.
- CENELEC Harmonisation Document HD 472 S1:1988
- British Standard BS 7697: Nominal voltages for low voltage public electricity supply systems — (Implementation of HD 472 S1)
- ANSI C84.1: American National Standard for Electric Power Systems and Equipment—Voltage Ratings (60 Hertz) Archived 27 July 2007 at the Wayback Machine, NEMA (costs $95 for access)
- CSA CAN3-C235-83: Preferred Voltage Levels for AC Systems, 0 to 50 000 V
- Hossain, J.; Mahmud, A. Renewable Energy Integration: Challenges and Solutions. Springer. p. 71. ISBN 9789814585279. Retrieved 13 January 2018.
- "Godalming: Electricity". Exploring Surrey's Past. Surrey County Council. Retrieved 6 December 2017.
- Electricity Supply in the United Kingdom (PDF), The Electricity Council, 1987, Archived from the original on 1 April 2017CS1 maint: BOT: original-url status unknown (link)
- "Milestones:Pearl Street Station, 1882". Engineering and Technology History Wiki. United Engineering Foundation. Retrieved 6 December 2017.
- Thomas P. Hughes, Networks of Power: Electrification in Western Society 1880-1930, The Johns Hopkins University Press,Baltimore 1983 ISBN 0-8018-2873-2 pg. 193 | <urn:uuid:13094085-64fd-4b16-9f1f-fc3ad8f2f244> | CC-MAIN-2019-47 | https://en.wikipedia.org/wiki/Mains_(electric_power) | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669454.33/warc/CC-MAIN-20191118053441-20191118081441-00539.warc.gz | en | 0.92497 | 4,927 | 3.6875 | 4 |
Mesoamerican pyramids or pyramid-shaped structures form a prominent part of ancient Mesoamerican architecture. Although similar to each other in some ways these New World structures with their flat tops and their stairs bear only a weak architectural resemblance to Egyptian pyramids; the Mesoamerican region's largest pyramid by volume – the largest pyramid in the world by volume – is the Great Pyramid of Cholula, in the east-central Mexican state of Puebla. The builders of certain classic Mesoamerican pyramids have decorated them copiously with stories about the Hero Twins, the feathered serpent Quetzalcoatl, Mesoamerican creation myths, ritualistic sacrifice, etc.written in the form of hieroglyphs on the rises of the steps of the pyramids, on the walls, on the sculptures contained within. The Aztecs, a people with a rich mythology and cultural heritage, dominated central Mexico in the 14th, 15th and 16th centuries, their capital was Tenochtitlan on the shore of Lake Texcoco – the site of modern-day Mexico City.
They were related to the preceding cultures in the basin of Mexico such as the culture of Teotihuacan whose building style they adopted and adapted. El Tepozteco Malinalco Santa Cecilia Acatitlan Templo Mayor Tenayuca Tenochtitlan The Maya are a people of southern Mexico and northern Central America with some 3,000 years of history. Archaeological evidence shows the Maya started to build ceremonial architecture 3,000 years ago; the earliest monuments consisted of simple burial mounds, the precursors to the spectacular stepped pyramids from the Terminal Pre-classic period and beyond. These pyramids relied on intricate carved stone. Many of these structures featured a top platform upon which a smaller dedicatory building was constructed, associated with a particular Maya deity. Maya pyramid-like structures were erected to serve as a place of interment for powerful rulers. Maya pyramidal structures occur in a great variety of forms and functions, bounded by regional and periodical differences. Aguateca Altun Ha Calakmul Caracol Chichen Itza Cholula Comalcalco Copan Dos Pilas Edzna El Mirador El Tigre La Danta Kaminaljuyu Lamanai La Venta Los Monos Lubaantun Moral_Reforma Nim Li Punit Palenque: Temple of the Inscriptions Tazumal Tikal: Tikal Temple I.
The region is inhabited by the modern descendants of the Purépecha. Purépechan architecture is noted for "T"-shaped step pyramids known as yácatas. Tzintzuntzan The Teotihuacan civilization, which flourished from around 300 BCE to 500 CE, at its greatest extent included most of Mesoamerica. Teotihuacano culture collapsed around 550 and was followed by several large city-states such as Xochicalco and the ceremonial site of Tula. El Castillo & High Priest's Temple in Chichen Itza Pyramids of the Sun, the Moon and Temple of the Feathered Serpent in Teotihuacan Xochicalco Tula Talud-tablero The site called Tula, the Toltec capitol, in the state of Mexico is one of the best preserved five-tier pyramids in Mesoamerican civilization; the ground plan of the site has two pyramids, Pyramid B and Pyramid C. The best known Classic Veracruz pyramid, the Pyramid of Niches in El Tajín, is smaller than those of their neighbours and successors but more intricate. El Tajín The Zapotecs were one of the earliest Mesoamerican cultures and held sway over the Valley of Oaxaca region from the early first millennium BCE to about the 14th century.
Monte Albán Mitla The following sites are from northern Mesoamerica, built by cultures whose ethnic affiliations are unknown: This astronomical and ceremonial center was the product of the Chalchihuite culture. Its occupation and development had a period of 800 years; this zone is considered an important archaeological center because of the astonishing, accurate functions of the edifications. The ones that stand out the most are: The Moon Plaza, The Votive Pyramid, the Ladder of Gamio and The labyrinth. In The Labyrinth you can appreciate with precision and accuracy, the respective equinoxes and the seasons. A great quantity of buildings were constructed on artificial terraces upon the slopes of a hill; the materials used here include stone clay. The most important structures are: The Hall of Columns, The Ball Court, The Votive Pyramid, The Palace and the Barracks. On the most elevated part of the hill is The Fortress; this is composed of a small pyramid and a platform, encircled by a wall, more than 800m long and up to six feet high.
La Quemada was occupied from 800 to 1200. Their founders and occupants have not been identified with certainty but belonged to either the Chalchihuites culture or that of the neighbouring Malpaso culture. List of Mesoamerican pyramids Egyptian pyramids Mesoamerican architecture Pyramid Platform mound South American pyramids Step pyramid Triadic pyramid Ziggurat Meso-American pyramids Photos and descriptions of Yaxha, Edzna, El Mirador and other Meso-American pyramids
Atlantis is a fictional island mentioned within an allegory on the hubris of nations in Plato's works Timaeus and Critias, where it represents the antagonist naval power that besieges "Ancient Athens", the pseudo-historic embodiment of Plato's ideal state in The Republic. In the story, Athens repels the Atlantean attack unlike any other nation of the known world giving testament to the superiority of Plato's concept of a state; the story concludes with Atlantis falling out of favor with the deities and submerging into the Atlantic Ocean. Despite its minor importance in Plato's work, the Atlantis story has had a considerable impact on literature; the allegorical aspect of Atlantis was taken up in utopian works of several Renaissance writers, such as Francis Bacon's New Atlantis and Thomas More's Utopia. On the other hand, nineteenth-century amateur scholars misinterpreted Plato's narrative as historical tradition, most notably in Ignatius L. Donnelly's Atlantis: The Antediluvian World. Plato's vague indications of the time of the events—more than 9,000 years before his time—and the alleged location of Atlantis—"beyond the Pillars of Hercules"—has led to much pseudoscientific speculation.
As a consequence, Atlantis has become a byword for any and all supposed advanced prehistoric lost civilizations and continues to inspire contemporary fiction, from comic books to films. While present-day philologists and classicists agree on the story's fictional character, there is still debate on what served as its inspiration; as for instance with the story of Gyges, Plato is known to have borrowed some of his allegories and metaphors from older traditions. This led a number of scholars to investigate possible inspiration of Atlantis from Egyptian records of the Thera eruption, the Sea Peoples invasion, or the Trojan War. Others have rejected this chain of tradition as implausible and insist that Plato created an fictional nation as his example, drawing loose inspiration from contemporary events such as the failed Athenian invasion of Sicily in 415–413 BC or the destruction of Helike in 373 BC; the only primary sources for Atlantis are Plato's dialogues Critias. The dialogues claim to quote Solon, who visited Egypt between 590 and 580 BC.
Written in 360 BC, Plato introduced Atlantis in Timaeus: For it is related in our records how once upon a time your State stayed the course of a mighty host, starting from a distant point in the Atlantic ocean, was insolently advancing to attack the whole of Europe, Asia to boot. For the ocean there was at that time navigable. For all that we have here, lying within the mouth of which we speak, is evidently a haven having a narrow entrance. Now in this island of Atlantis there existed a confederation of kings, of great and marvelous power, which held sway over all the island, over many other islands and parts of the continent; the four people appearing in those two dialogues are the politicians Critias and Hermocrates as well as the philosophers Socrates and Timaeus of Locri, although only Critias speaks of Atlantis. In his works Plato makes extensive use of the Socratic method in order to discuss contrary positions within the context of a supposition; the Timaeus begins with an introduction, followed by an account of the creations and structure of the universe and ancient civilizations.
In the introduction, Socrates muses about the perfect society, described in Plato's Republic, wonders if he and his guests might recollect a story which exemplifies such a society. Critias mentions a tale he considered to be historical, that would make the perfect example, he follows by describing Atlantis as is recorded in the Critias. In his account, ancient Athens seems to represent the "perfect society" and Atlantis its opponent, representing the antithesis of the "perfect" traits described in the Republic. According to Critias, the Hellenic deities of old divided the land so that each deity might have their own lot; the island was larger than Ancient Libya and Asia Minor combined, but it was sunk by an earthquake and became an impassable mud shoal, inhibiting travel to any part of the ocean. Plato asserted that the Egyptians described Atlantis as an island consisting of mountains in the northern portions and along the shore and encompassing a great plain in an oblong shape in the south "extending in one direction three thousand stadia, but across the center inland it was two thousand stadia."
Fifty stadia from the coast was a mountain, low on all sides... broke it off all round about... the central island itself was five stades in diameter. In Plato's metaphorical tale, Poseidon fell in love with Cleito, the daughter of Evenor and Leucippe, who bore him five pairs of male twins; the eldest of these, was made rightful king of the entire island and the ocean, was given the mountain of his birth and the surrounding area as his fiefdom. Atlas's twin Gadeirus, or Eumelus in Greek, was given the extremity of the island to
An amusement park is a park that features various attractions, such as rides and games, as well as other events for entertainment purposes. A theme park is a type of amusement park that bases its structures and attractions around a central theme featuring multiple areas with different themes. Unlike temporary and mobile funfairs and carnivals, amusement parks are stationary and built for long-lasting operation, they are more elaborate than city parks and playgrounds providing attractions that cater to a variety of age groups. While amusement parks contain themed areas, theme parks place a heavier focus with more intricately-designed themes that revolve around a particular subject or group of subjects. Amusement parks evolved from European fairs, pleasure gardens and large picnic areas, which were created for people's recreation. World's fairs and other types of international expositions influenced the emergence of the amusement park industry. Lake Compounce opened in 1846 and is considered the oldest continuously-operating amusement park in North America.
The first theme parks emerged in the mid-twentieth century with the opening of Santa Claus Land in 1946, Santa's Workshop in 1949, Disneyland in 1955. The amusement park evolved from three earlier traditions: traveling or periodic fairs, pleasure gardens and exhibitions such as world fairs; the oldest influence was the periodic fair of the Middle Ages - one of the earliest was the Bartholomew Fair in England from 1133. By the 18th and 19th centuries, they had evolved into places of entertainment for the masses, where the public could view freak shows, acrobatics and juggling, take part in competitions and walk through menageries. A wave of innovation in the 1860s and 1870s created mechanical rides, such as the steam-powered carousel, its derivatives, notably from Frederick Savage of King's Lynn, Norfolk whose fairground machinery was exported all over the world; this inaugurated the era of the modern funfair ride, as the working classes were able to spend their surplus wages on entertainment.
The second influence was the pleasure garden. An example of this is the world's oldest amusement park, opened in mainland Europe in 1583, it is located north of Copenhagen in Denmark. Another early garden was the Vauxhall Gardens, founded in 1661 in London. By the late 18th century, the site had an admission fee for its many attractions, it drew enormous crowds, with its paths noted for romantic assignations. Although the gardens were designed for the elites, they soon became places of great social diversity. Public firework displays were put on at Marylebone Gardens, Cremorne Gardens offered music and animal acrobatics displays. Prater in Vienna, began as a royal hunting ground, opened in 1766 for public enjoyment. There followed coffee-houses and cafés, which led to the beginnings of the Wurstelprater as an amusement park; the concept of a fixed park for amusement was further developed with the beginning of the world's fairs. The first World fair began in 1851 with the construction of the landmark Crystal Palace in London, England.
The purpose of the exposition was to celebrate the industrial achievement of the nations of the world and it was designed to educate and entertain the visitors. American cities and business saw the world's fair as a way of demonstrating economic and industrial success; the World's Columbian Exposition of 1893 in Chicago, Illinois was an early precursor to the modern amusement park. The fair was an enclosed site, that merged entertainment and education to entertain the masses, it set out to bedazzle the visitors, did so with a blaze of lights from the "White City." To make sure that the fair was a financial success, the planners included a dedicated amusement concessions area called the Midway Plaisance. Rides from this fair captured the imagination of the visitors and of amusement parks around the world, such as the first steel Ferris wheel, found in many other amusement areas, such as the Prater by 1896; the experience of the enclosed ideal city with wonder, rides and progress, was based on the creation of an illusory place.
The "midway" introduced at the Columbian Exposition would become a standard part of most amusement parks, fairs and circuses. The midway contained not only the rides, but other concessions and entertainments such as shooting galleries, penny arcades, games of chance and shows. Many modern amusement parks evolved from earlier pleasure resorts that had become popular with the public for day-trips or weekend holidays, for example, seaside areas such as Blackpool, United Kingdom and Coney Island, United States. In the United States, some amusement parks grew from picnic groves established along rivers and lakes that provided bathing and water sports, such as Lake Compounce in Connecticut, first established as a picturesque picnic park in 1846, Riverside Park in Massachusetts, founded in the 1870s along the Connecticut River; the trick was getting the public to the resort location. For Coney Island in Brooklyn, New York, on the Atlantic Ocean, a horse-drawn streetcar line brought pleasure seekers to the beach beginning in 1829.
In 1875, a million passengers rode the Coney Island Railroad, in 1876 two million visited Coney Island. Hotels and amusements were built to accommodate both the upper classes and the working class at the beach; the first carousel was installed in the 1870s, the first roller coaster, the "Switchback Railway", in 1884. In England, Blackpo
Antonio Zamperla S.p. A. is an Italian design and manufacturing company founded in 1966. It is best known for creating family rides, thrill rides and roller coasters worldwide; the company makes smaller coin-operated rides found inside shopping malls. Zamperla builds roller coasters, like the powered Dragon Coaster, Mini Mouse, Zig Zag, Volare. In 2006, Zamperla announced a motorcycle-themed roller coaster. Rights to some of S. D. C.'s rides were handed to Zamperla after the company went bankrupt in 1993. In 2005 the founder of the company, Mr. Antonio Zamperla, became the first Italian to be inducted into the IAAPA Hall of Fame by virtue of his significant contribution to the entire industry, joining other pioneers such as Walt Disney, George Ferris and Walter Knott. Unlike companies such as Intamin, Vekoma, or Bolliger & Mabillard that concentrate on larger and faster roller coasters, Zamperla focuses on more family-friendly roller coasters that can be mass-produced, taken down, transported to different locations.
They are a major manufacturer of flat rides with such names as: Balloon Race, Bumper cars, Disk'O, Ferris wheel, Water Flume Ride, Galleon/Swinging Ship, Sky Drop, Windshear, Energy Storm, Z-Force, Rotoshake, Turbo Force, Power Surge, Mini Jet. The company is organized in different departments, the Art Department that works on the study and creation of different themings of the rides, the Technical Department that designs the engineering of the attractions, the Production Department that handles their realization, the Sales Department, the Customer Care and the Park Development Department that works on the design and creation of an amusement park. In 2010 Antonio Zamperla S.p. A. was selected by CAI to restore and renovate the Coney Island area in New York City. The company managed Coney Island's Luna Park and installed only Zamperla rides, representing a perfect test bed for new attractions before to launch them. In 2003, Zamperla transformed the Trump Organization's Wollman Rink, within New York City's Central Park, into Victorian Gardens, a traditional-style amusement park with rides like the "Family Swinger", "Samba Balloon", "Aeromax", "Convoy", "Rocking Tug", "Kite Flyer".
Another famous Zamperla project is Kernwasser, north of Düsseldorf, a former nuclear power station, turned into an amusement park called Wunderland Kalkar. As of 2019, Zamperla has built 342 roller coasters around the world. Official website
A roller coaster is a type of amusement ride that employs a form of elevated railroad track designed with tight turns, steep slopes, sometimes inversions. People ride along the track in open cars, the rides are found in amusement parks and theme parks around the world. LaMarcus Adna Thompson obtained one of the first known patents for a roller coaster design in 1885, related to the Switchback Railway that opened a year earlier at Coney Island; the track in a coaster design does not have to be a complete circuit, as shuttle roller coasters demonstrate. Most roller coasters have multiple cars in which passengers are restrained. Two or more cars hooked together are called a train; some roller coasters, notably wild mouse roller coasters, run with single cars. The oldest roller coasters are believed to have originated from the so-called "Russian Mountains", specially constructed hills of ice located in the area, now Saint Petersburg, Russia. Built in the 17th century, the slides were built to a height of between 21 and 24 m, had a 50-degree drop, were reinforced by wooden supports.
In 1784, Catherine the Great is said to have constructed a sledding hill in the gardens of her palace at Oranienbaum in St. Petersburg; the name Russian Mountains to designate a roller coaster is preserved in many languages, but the Russian term for roller coasters is американские горки, which means "American mountains." The first modern roller coaster, the Promenades Aeriennes, opened in Parc Beaujon in Paris on July 8, 1817. It featured wheeled cars securely locked to the track, guide rails to keep them on course, higher speeds, it spawned half a dozen imitators. However, during the Belle Epoque they returned to fashion. In 1887 French entrepreneur Joseph Oller, co-founder of the Moulin Rouge music hall, constructed the Montagnes Russes de Belleville, "Russian Mountains of Belleville" with 656 feet of track laid out in a double-eight enlarged to four figure-eight-shaped loops. In 1827, a mining company in Summit Hill, Pennsylvania constructed the Mauch Chunk Switchback Railway, a downhill gravity railroad used to deliver coal to Mauch Chunk, Pennsylvania – now known as Jim Thorpe.
By the 1850s, the "Gravity Road" was selling rides to thrill seekers. Railway companies used similar tracks to provide amusement on days. Using this idea as a basis, LaMarcus Adna Thompson began work on a gravity Switchback Railway that opened at Coney Island in Brooklyn, New York, in 1884. Passengers climbed to the top of a platform and rode a bench-like car down the 600-foot track up to the top of another tower where the vehicle was switched to a return track and the passengers took the return trip; this track design was soon replaced with an oval complete circuit. In 1885, Phillip Hinkle introduced the first full-circuit coaster with a lift hill, the Gravity Pleasure Road, which became the most popular attraction at Coney Island. Not to be outdone, in 1886 Thompson patented his design of roller coaster that included dark tunnels with painted scenery. "Scenic Railways" were soon found in amusement parks across the county. By 1919, the first underfriction roller coaster had been developed by John Miller.
Soon, roller coasters spread to amusement parks all around the world. The best known historical roller coaster, was opened at Coney Island in 1927; the Great Depression marked the end of the golden age of roller coasters, theme parks, in general, went into decline. This lasted until 1972 when the instant success of The Racer at Kings Island began a roller coaster renaissance which has continued to this day. In 1959, Disneyland introduced a design breakthrough with Matterhorn Bobsleds, the first roller coaster to use a tubular steel track. Unlike wooden coaster rails, tubular steel can be bent in any direction, allowing designers to incorporate loops and many other maneuvers into their designs. Most modern roller coasters are made of steel, although wooden coasters and hybrids are still being built. There are several explanations of the name roller coaster, it is said to have originated from an early American design where slides or ramps were fitted with rollers over which a sled would coast. This design was abandoned in favor of fitting the wheels to the sled or other vehicles, but the name endured.
Another explanation is that it originated from a ride located in a roller skating rink in Haverhill, Massachusetts in 1887. A toboggan-like sled was raised to the top of a track; this Roller Toboggan took off down rolling hills to the floor. The inventors of this ride, Stephen E. Jackman and Byron B. Floyd, claim that they were the first to use the term "roller coaster"; the term jet coaster is used for roller coasters in Japan, where such amusement park rides are popular. In many languages, the name refers to "Russian mountains". Contrastingly, in Russian, they are called "American mountains". In the Scandinavian languages and German, the roller coaster is referred as "mountain-and-valley railway". German knows the word "Achterbahn", stemming from "Figur-8-Bahn", like Dutch "Achtbaan", relating to the form of the number 8; the cars on a typical roller coaster are not self-powered. Instead, a standard full circuit coaster is pulled up with a chain or cable along the lift hill to the first peak of the coaster track.
The potential energy accumulated by the rise in height is transferred to kinetic energy as the cars race down the first downward slope. Kinetic energy is converted back into potential energy as the train moves up again to the second peak; this hill is necessa
Jakarta the Special Capital Region of Jakarta, is the capital and largest city of Indonesia. Located on the northwest coast of the world's most populous island, Java, it is the centre of economics and politics of Indonesia, with a population of 10,075,310 as of 2014. Jakarta metropolitan area has an area of 6,392 square kilometers, known as Jabodetabek, it is the world's second largest urban agglomeration with a population of 30,214,303 as of 2010. Jakarta is predicted to reach 35.6 million people by 2030 to become the world's biggest megacity. Jakarta's business opportunities, as well as its potential to offer a higher standard of living, attract migrants from across the Indonesian archipelago, combining many communities and cultures. Established in the 4th century as Sunda Kelapa, the city became an important trading port for the Sunda Kingdom, it was the de facto capital of the Dutch East Indies. Jakarta is a province with special capital region status, but is referred to as a city; the Jakarta provincial government consists of five administrative cities and one administrative regency.
Jakarta is nicknamed the Big Durian, the thorny strongly-odored fruit native to the region, as the city is seen as the Indonesian equivalent of New York. Jakarta is an alpha world city and is the seat of the ASEAN secretariat, making it an important city for international diplomacy. Important financial institutions such as Bank of Indonesia, Indonesia Stock Exchange, corporate headquarters of numerous Indonesian companies and multinational corporations are located in the city; as of 2017, the city is home for two Fortune 500 and four Unicorn companies. In 2017, the city's GRP PPP was estimated at US$483.4 billion. Jakarta has grown more than Kuala Lumpur and Beijing. Jakarta's major challenges include rapid urban growth, ecological breakdown, gridlock traffic and congestion and inequality, potential crimes and flooding. Jakarta is sinking up to 17 cm per year, coupled with the rising of sea level, has made the city more prone to flooding. Jakarta has been home to multiple settlements: Sunda Kelapa, Batavia, Jakarta.
Its current name "Jakarta" derives from the word Jayakarta, derived from Sanskrit language. It was named after troops of Fatahillah defeated and drove away Portuguese invaders from the city in 1527. Before it was named "Jayakarta", the city was known as "Sunda Kelapa". In the colonial era, the city was known as Koningin van het Oosten in the 17th century for the urban beauty of downtown Batavia's canals and ordered city layout. After expanding to the south in the 19th century, this nickname came to be more associated with the suburbs, with their wide lanes, green spaces and villas. During Japanese occupation the city was renamed as Jakarta Tokubetsu Shi; the north coast area of western Java including Jakarta, was the location of prehistoric Buni culture that flourished from 400 BC to 100 AD. The area in and around modern Jakarta was part of the 4th century Sundanese kingdom of Tarumanagara, one of the oldest Hindu kingdoms in Indonesia; the area of North Jakarta around Tugu became a populated settlement at least in the early 5th century.
The Tugu inscription discovered in Batutumbuh hamlet, Tugu village, North Jakarta, mentions that King Purnawarman of Tarumanagara undertook hydraulic projects. Following the decline of Tarumanagara, its territories, including the Jakarta area, became part of the Hindu Kingdom of Sunda. From the 7th to the early 13th century, the port of Sunda was under the Srivijaya maritime empire. According to the Chinese source, Chu-fan-chi, written circa 1225, Chou Ju-kua reported in the early 13th century Srivijaya still ruled Sumatra, the Malay peninsula and western Java; the source reports the port of Sunda as strategic and thriving, mentioning pepper from Sunda as among the best in quality. The people worked in agriculture and their houses were built on wooden piles; the harbour area became known as Sunda Kelapa and by the 14th century, it was a major trading port for the Sunda kingdom. The first European fleet, four Portuguese ships from Malacca, arrived in 1513, while looking for a route for spices.
The Sunda Kingdom made an alliance treaty with the Portuguese by allowing them to build a port in 1522 to defend against the rising power of Demak Sultanate from central Java. In 1527, Fatahillah, a Javanese general from Demak attacked and conquered Sunda Kelapa, driving out the Portuguese. Sunda Kelapa was renamed Jayakarta, became a fiefdom of the Banten Sultanate, which became a major Southeast Asia trading centre. Through the relationship with Prince Jayawikarta of Banten Sultanate, Dutch ships arrived in 1596. In 1602, the English East India Company's first voyage, commanded by Sir James Lancaster, arrived in Aceh and sailed on to Banten where they were allowed to build a trading post; this site became the centre of English trade in Indonesia until 1682. Jayawikarta is thought to have made trading connections with
Dolphin is a common name of aquatic mammals within the order Cetacea, arbitrarily excluding whales and porpoises. The term dolphin refers to the extant families Delphinidae, Platanistidae and Pontoporiidae, the extinct Lipotidae. There are 40 extant species named as dolphins. Dolphins range in size from the 1.7 m long and 50 kg Maui's dolphin to the 9.5 m and 10 t killer whale. Several species exhibit sexual dimorphism, they have two limbs that are modified into flippers. Though not quite as flexible as seals, some dolphins can travel at 55.5 km/h. Dolphins use their conical shaped teeth to capture fast moving prey, they have well-developed hearing, adapted for both air and water and is so well developed that some can survive if they are blind. Some species are well adapted for diving to great depths, they have a layer of blubber, under the skin to keep warm in the cold water. Although dolphins are widespread, most species prefer the warmer waters of the tropic zones, but some, like the right whale dolphin, prefer colder climates.
Dolphins feed on fish and squid, but a few, like the killer whale, feed on large mammals, like seals. Male dolphins mate with multiple females every year, but females only mate every two to three years. Calves are born in the spring and summer months and females bear all the responsibility for raising them. Mothers of some species fast and nurse their young for a long period of time. Dolphins produce a variety of vocalizations in the form of clicks and whistles. Dolphins are sometimes hunted in places such as Japan, in an activity known as dolphin drive hunting. Besides drive hunting, they face threats from bycatch, habitat loss, marine pollution. Dolphins have been depicted in various cultures worldwide. Dolphins feature in literature and film, as in the film series Free Willy. Dolphins are sometimes trained to perform tricks; the most common dolphin species in captivity is the bottlenose dolphin, while there are around 60 captive killer whales. The name is from Greek δελφίς, "dolphin", related to the Greek δελφύς, "womb".
The animal's name can therefore be interpreted as meaning "a'fish' with a womb". The name was transmitted via the Latin delphinus, which in Medieval Latin became dolfinus and in Old French daulphin, which reintroduced the ph into the word; the term mereswine has historically been used. The term'dolphin' can be used to refer to, under the parvorder Odontoceti, all the species in the family Delphinidae and the river dolphin families Iniidae, Pontoporiidae and Platanistidae; this term has been misused in the US in the fishing industry, where all small cetaceans are considered porpoises, while the fish dorado is called dolphin fish. In common usage the term'whale' is used only for the larger cetacean species, while the smaller ones with a beaked or longer nose are considered'dolphins'; the name'dolphin' is used casually as a synonym for bottlenose dolphin, the most common and familiar species of dolphin. There are six species of dolphins thought of as whales, collectively known as blackfish: the killer whale, the melon-headed whale, the pygmy killer whale, the false killer whale, the two species of pilot whales, all of which are classified under the family Delphinidae and qualify as dolphins.
Though the terms 'dolphin' and 'porpoise' are sometimes used interchangeably, porpoises are not considered dolphins and have different physical features such as a shorter beak and spade-shaped teeth. Porpoises share a common ancestry with the Delphinidae. A group of dolphins is called a "school" or a "pod". Male dolphins are called "bulls", females "cows" and young dolphins are called "calves". The dolphin families and species include:
Parvorder Odontoceti (toothed whales)
Family Platanistidae: Ganges and Indus river dolphin, Platanista gangetica, with two subspecies – Ganges river dolphin, Platanista gangetica gangetica; Indus river dolphin, Platanista gangetica minor
Family Iniidae: Amazon river dolphin, Inia geoffrensis; Orinoco river dolphin, Inia geoffrensis humboldtiana; Araguaian river dolphin, Inia araguaiaensis; Bolivian river dolphin, Inia boliviensis
Family Lipotidae: Baiji, Lipotes vexillifer
Family Pontoporiidae: La Plata dolphin, Pontoporia blainvillei
Family Delphinidae (oceanic dolphins)
Genus Delphinus: Long-beaked common dolphin, Delphinus capensis; Short-beaked common dolphin, Delphinus delphis
Genus Tursiops: Common bottlenose dolphin, Tursiops truncatus; Indo-Pacific bottlenose dolphin, Tursiops aduncus; Burrunan dolphin, Tursiops australis, a newly discovered species from the sea around Melbourne in September 2011
Genus Lissodelphis: Northern right whale dolphin, Lissodelphis borealis; Southern right whale dolphin, Lissodelphis peronii
Genus Sotalia: Tucuxi, Sotalia fluviatilis; Costero, Sotalia guianensis
Genus Sousa: Indo-Pacific humpback dolphin, Sousa chinensis; Chinese white dolphin, Sousa chinensis chinensis; Atlantic humpback dolphin, Sousa teuszii
Genus Stenella: Atlantic spotted dolphin, Stenella frontalis; Clymene dolphin, Stenella clymene; Pantropical …
For their discovery of the two distinct classes of lymphocytes, B and T cells – a monumental achievement that provided the organizing principle of the adaptive immune system and launched the course of modern immunology
The 2019 Albert Lasker Basic Medical Research Award honors two scientists for discoveries that have launched the course of modern immunology. Max D. Cooper (Emory University School of Medicine) and Jacques Miller (Emeritus, The Walter and Eliza Hall Institute of Medical Research) identified two distinct classes of lymphocytes, B and T cells, a monumental achievement that provided the organizing principle of the adaptive immune system. This pioneering work has fueled a tremendous number of advances in basic and medical science, several of which have received previous recognition by Lasker Awards and Nobel Prizes, including those associated with monoclonal antibodies, generation of antibody diversity, MHC restriction for immune defense, antigen processing by dendritic cells, and checkpoint inhibition therapy for cancer.
When Miller began his research—around 1960—scientists had uncovered some features of the adaptive immune system, which protects our bodies from microbial invaders, underlies immunological memory, and distinguishes self from foreign tissue. They knew that antibodies, soluble proteins whose quantities surge after infection, perform jobs that differ from tasks that rely on live, intact cells such as rejection of transplanted grafts.
Key immune system activities were known to occur in the spleen and other lymphoid tissues. The thymus, in contrast, teemed with lymphocytes—cells thought by then to initiate immunological functions—but lymphocytes from the thymus could not transfer immune responses to other animals. Furthermore, removing the organ from adult animals exerted no harmful effects. The thymus, according to prevailing wisdom, was dispensable, an evolutionary relic.
Miller had no inherent interest in this useless body part, but observations about mouse lymphocytic leukemia drew him toward it when he began his Ph.D. work at the Institute of Cancer Research, London. A recently discovered virus caused this cancer when administered at birth, but not later, and preliminary experiments suggested that it targeted the thymus. To test whether it could multiply only in the newborn thymus, Miller removed the organ in newborn mice and then injected the virus.
He didn’t have a chance to find out the answer.
The pups remained healthy at first, but after weaning, they developed diarrhea and began wasting away. They were deficient in blood, lymph node, and spleen lymphocytes—and also in plasma cells, which produce antibody. The animals mounted a poor antibody response after bacterial immunization. Moreover, they failed to reject skin transplanted from unrelated mice and even rats, as would animals with an intact immune system. Two-thirds of the mice died prematurely.
Neonatal thymectomy was crippling immune activities, Miller concluded. Soon after publication of his initial results, other groups reported similar findings from studies on rats and rabbits. Miller subsequently established that the thymus is sometimes essential in adult animals too—to replenish the immune system if it is damaged by irradiation.
The thymus, he reasoned, might make cells that circulate and gain immune capabilities, poised to attack foreign threats throughout the body. To test this idea, he implanted thymuses into neonatally thymectomized mice and then introduced skin from different donors. The animals rejected grafts from unrelated individuals, but tolerated them from the strain that furnished the thymus. The thymus, therefore, was supplying immunological function. Furthermore, in the recipient's spleen—where lymphocytes multiply after stimulation—a significant fraction of dividing cells came from the donor thymus. As Miller had conjectured, cells from the thymus could migrate and mature.
Miller published the work in 1961 and 1962, and he presented his results at several meetings. Many scientists initially resisted his interpretations, including leading immunologists such as Peter Medawar (1915-1987) and MacFarlane Burnet (1899-1985). How could a disposable organ serve a vital purpose?
The strongest specific objection was that the lab in which Miller worked was a converted horse stable. Perhaps it was crawling with pathogens that put the animals in a stressed state, and the thymectomy pushed them into immunosuppression.
To address this possibility, Miller went to the U.S. National Institutes of Health, the only place in the world that had germ-free mice at the time. He repeated the experiments with those animals and confirmed that neonatally thymectomized mice accept foreign skin grafts. The immunological effects mapped to thymus removal, not to a pre-weakened immune system.
In the meantime, Cooper, a pediatrician, had become interested in conditions that make children unusually susceptible to infections and, in 1963, he joined the laboratory of the late Robert A. Good at the University of Minnesota Medical School to explore his ideas. Some of the syndromes seemed to represent discrete, isolated segments of the adaptive immune system. For instance, many children with Bruton type agammaglobulinemia make no detectable antibody, yet can perform cell-mediated immune tasks. Conversely, children with other immunodeficiency conditions display the opposite physiological signature.
Such observations suggested the existence of two distinct arms of adaptive immunity in humans. By this time, however, the prevailing model, derived from Miller’s and other animal studies, suggested that a single pathway leads from thymus-generated lymphocytes to plasma cells. In this scenario, lymphocytes emerge from the thymus and disseminate to other tissues, such as the spleen, where stimulation with foreign substances triggers some of them to become antibody factories. The single-pathway model did not easily square with the clinical observations, which showed that cell-mediated activities could wither without wiping out antibody production. If lymphocytes give rise to plasma cells, it was not straightforward to explain how plasma cells and their products could abound when lymphocytes were scarce.
Results published far from the immunology mainstream pointed toward the chicken as a way to resolve this conundrum. In 1956, the late Bruce Glick (then a graduate student at Ohio State University) had reported in the Journal of Poultry Science that removal of a lymphoid organ—the bursa of Fabricius— soon after hatching could subdue antibody production. Perhaps, Cooper reasoned, he could use the chicken to simulate the apparent split reflected in the human diseases and tease apart potential contributions of the bursa and the thymus.
Cooper’s initial attempts to eliminate thymus function by removing the chicken thymus yielded no effects, and he speculated that newly hatched chickens might carry thymus-derived cells that had escaped from their birthplace before hatching, thus obscuring defects that should, theoretically, result from thymectomy. To address this possibility, he removed the thymus or the bursa of Fabricius from newborn chickens and then irradiated the animals to destroy residual immune cells that might have been floating around the body. Then he let the animals recover.
Irradiated chickens without a bursa did not produce antibodies after injection with bacteria or a foreign protein, and they lacked plasma cells. Their thymuses developed normally and lymphocytes flourished. In contrast, irradiated chickens with no thymus had few lymphocytes, but more than half made antibodies when stimulated. Subsequent studies showed that these animals could not reject skin grafts normally or execute other cell-mediated immune reactions.
In 1965 and 1966, Cooper published these findings and proposed that the bursa of Fabricius is essential for antibody production and the thymus is essential for cell-mediated immune responses. The names of bursa- and thymus-derived cells eventually shrank to B- and T-cells, respectively. The two-pathway model provided a new lens through which to view human immunodeficiency diseases and brought fresh insight to a vast range of basic and clinical issues.
Even if these findings extended to mammals, which possess no obvious bursa equivalent, they raised perplexing questions. For instance, thymectomy plus irradiation of chickens softened antibody output—and neonatal thymectomies in mice stymied it. The split between the two pathways was not clean.
Miller (having joined the faculty at The Walter and Eliza Hall Institute of Medical Research) and his Ph.D. student, Graham Mitchell, decided to probe how the thymus might influence antibody production in mice. After obliterating thymus activity, they introduced thymus-derived cells, genetically distinct bone marrow-derived cells, or both before provoking antibody manufacture.
Neither cell type on its own reconstituted antibody production in the spleen. Only animals that received cells derived from the thymus and bone marrow achieved this feat. A collaboration was occurring between the two cell types.
Additional experiments revealed that the bone marrow-derived cells were the ones that spit out antibody, but only with assistance from the thymus-derived cells. Not only had Miller demonstrated that the dual system operates in mammals as well as birds, he also had exposed what we now know as helper T cells. This work ignited an explosion of interest in lymphocyte interactions.
Although mouse bone marrow contained cells that could give rise to antibody producers, the bone marrow might have received them from elsewhere, so the search for the mammalian bursa equivalent continued. In 1974, Cooper, in collaboration with Martin Raff and John Owen (University College London) as well as independent investigators in Melbourne and Geneva, established that mammalian B cell precursors are generated in the blood-forming tissues—the liver in the fetus and the bone marrow after birth. This finding dovetailed with previous work on antibody development by Cooper. He had shown that two classes of antibody, IgM and IgG, each known for specific activities, come from a single precursor B cell that switches from IgM to IgG production.
Many laboratories subsequently elucidated the details of the system that Miller and Cooper unveiled, which operates in all jawed vertebrates. Cooper (at the University of Alabama, Birmingham, and then Emory University School of Medicine) went on to discover that jawless vertebrates—represented by the lamprey and hagfish—deploy a similarly organized scheme to recognize a vast array of foreign molecules, but with molecules that are structurally unrelated to the antibodies and receptors used by B and T cells.
By delineating the adaptive immune system’s two major branches, each of which performs distinct functions, Cooper and Miller opened a new era of cellular immunology. Virtually all fundamental discoveries in the field over the last 50 years can be traced to their pioneering work. Moreover, their historic findings have powered novel therapeutic strategies that have harnessed immune cells and their products to combat a vast range of illnesses—from cancer to autoimmune disorders to immunodeficiency conditions and far beyond.
by Evelyn Strauss
Cooper, M.D., Peterson, R.D.A., and Good, R.A. (1965). Delineation of the thymic and bursal lymphoid systems in the chicken. Nature. 205, 143-146.
Kincade, P.W., Lawton, A.R., Bockman, D.E., and Cooper, M.D. (1970). Suppression of immunoglobulin G synthesis as a result of antibody-mediated suppression of immunoglobulin M synthesis. Proc. Natl. Acad. Sci. USA. 67, 1918-1925.
Cooper, M.D., Lawton, A.R., and Bockman, D.E. (1971). Agammaglobulinaemia with B lymphocytes. Specific defect of plasma-cell differentiation. Lancet. 2, 791-794.
Owen, J.J.T., Cooper, M.D., and Raff, M.C. (1974). In vitro generation of B lymphocytes in mouse foetal liver – a mammalian “bursa equivalent”. Nature. 249, 361-363.
Pancer, Z., Amemiya, C.T., Ehrhardt, G.R.A., Ceitlin, J., Gartland, G.L., and Cooper, M.D. (2004). Somatic diversification of variable lymphocyte receptors in the agnathan sea lamprey. Nature. 430, 174-180.
Cooper, M.D. (2015). The early history of B cells. Nat. Rev. Immunol. 15, 191-197.
Miller, J.F.A.P. (1961). Immunological function of the thymus. Lancet. 2, 748-749.
Miller, J.F.A.P. (1962). Effect of neonatal thymectomy on the immunological responsiveness of the mouse. Proc. Roy. Soc. 156B, 410-428.
Miller, J.F.A.P. (1962). Immunological significance of the thymus of the adult mouse. Nature. 195, 1318-1319.
Miller, J.F.A.P., and Mitchell, G.F. (1968). Cell to cell interaction in the immune response. I. Hemolysin-forming cells in neonatally thymectomized mice reconstituted with thymus or thoracic duct lymphocytes. J. Exp. Med. 138, 801-820.
Miller, J.F.A.P. (2011). The golden anniversary of the thymus. Nat. Rev. Immunol. 11, 489-495.
Miller, J. (2019). How the thymus shaped immunology and beyond. Immunol. Cell Biol. 97, 299-304.
Watts, G. (2011). Jacques Miller: immunologist who discovered role of the thymus. Lancet. 178, 1290.
Gitlin, A.D., and Nussenzweig, M.C. (2015). Fifty years of B lymphocytes. Nature. 517, 139-141.
In this ongoing era of molecular biology, it is easy to forget that the functional unit on which biology is built is the cell. Working independently, the two exceptional scientists that we honor today discovered that the adaptive immune system is composed of two distinct cell types, B cells and T cells. Through their studies of lymphocyte development, Max Cooper and Jacques Miller have provided the framework on which modern immunology is built.
Miller and Cooper's combined discoveries have also provided the foundation for most of today's immune therapies. Monoclonal antibody production would not be possible without the antibody repertoire created and stored within the B cell lineage. One such antibody will be recognized with today's Clinical Lasker Prize. Similarly, the success of T cell-based therapies, such as the recent development of CAR-T cells to treat cancer, depends on the immunologic properties that T cells acquire as they undergo lineage-specific development in the thymus.
The discovery of B and T cells also provided the foundation through which molecular biologists began to elucidate answers to the fundamental questions concerning how the immune repertoire is generated and how the immune system distinguishes self from non-self. The subsequent discoveries of V(D)J recombination, of dendritic cell antigen presentation, and of MHC-restriction of cell-mediated immunity have gone on to win recognition in the form of Lasker and Nobel prizes for providing the molecular details concerning how immune specificity is created and maintained. But none of these discoveries would have been possible without the immune framework involving the independent development of B and T cells discovered by Cooper and Miller.
So how did these two scientists discover that the adaptive immune system was composed of two independent but complementary cell types? When Cooper and Miller were starting their research careers in the early 1960s, antibodies were known to provide the protective immunity elicited by vaccines. Antibodies were produced during an immune response by terminally differentiated cells called plasma cells, but from where these cells arose was uncertain. The antibody theory of immunity was also recognized to be incomplete. Certain immunologic processes such as organ graft rejection did not appear to be the result of antibody production. At the time that Cooper and Miller began, lymphocytes were considered a single cell type that originated in the spleen and lymph nodes and circulated throughout the body through the blood and lymphatics. The thymus, although known to contain lymphocytes, was considered an evolutionary remnant of no discernible significance. In fact, this matter was considered settled when it became routine, during the 1950s and 1960s, to remove the thymus during pediatric heart surgeries. Despite not having a thymus, these patients suffered no apparent immune consequences as they grew up.
It was Miller who first challenged the notion that the thymus was a vestigial organ. At the time, he was studying the factors that regulated the development of lymphocytic leukemia in mice. As part of his studies, he removed the thymus from mice at weaning and found that the incidence of viral and carcinogen-induced lymphocytic leukemia was dramatically reduced. He realized that this could be either because the thymus was the site of production of a humoral factor that contributed to lymphocytic leukemia or because the lymphocytes in the thymus were at a unique stage of development that rendered them susceptible to developing into a leukemia. To extend his studies, Miller decided to remove the thymus even earlier, at the time of birth, and then test the mice's susceptibility to virally-induced leukemia. He never got to complete this experiment. Within weeks of weaning, the mice whose thymus was removed at birth developed a severe reduction in peripheral lymphocyte numbers, a systemic wasting syndrome, and an increase in opportunistic infections. Today, we would recognize this constellation of symptoms as acquired immunodeficiency syndrome or AIDS, a disease that results from the loss of T cells. However, at the time, Miller's results were completely unexpected. As Miller further characterized mice whose thymus was removed at birth, he found that they lacked the ability to reject foreign skin grafts, a hallmark of cell-mediated immunity, but they also displayed some defects in antibody production. This work established the thymus as an immunologic organ, critical for the development of cell-mediated immunity and apparently participating in the ability of animals to produce effective antibodies. Over the next decade, Miller would demonstrate that the thymus was the site of the development of T cells and would help to define the critical role of T cells in the successful generation of a cell-mediated immune response.
In contrast to T cells, the demonstration that B cells were a distinct lineage of immune cells did not result initially from studies in mice. Unique among vertebrates, birds were discovered to have a second thymus-like organ called the bursa of Fabricius. In the 1950s, Bruce Glick and his colleagues at Ohio State reported that when chickens had the bursa removed, their ability to produce antibodies was reduced, but other types of immune response were not examined. Shortly thereafter, Robert Good’s lab in Minnesota was undertaking experiments like Miller’s to understand the pathogenesis of lymphocytic leukemia in birds. In contrast to Miller’s results, removing the thymus in a chicken had no effects on the lymphocytic leukemia caused by avian leukosis virus. Instead, the group found that it was removing the bursa that reduced the incidence of leukemia. However, as they and others carried out more detailed immunologic analysis of chickens that had had either their thymus or bursa removed, the results proved highly variable. That is when Cooper enters our story.
It was Cooper who speculated that the variability resulted from the fact that when a chicken hatches, many lymphocytes have already completed their development in the thymus and bursa and can be observed seeding the peripheral lymphoid organs. To examine this possibility, Cooper eliminated the existing peripheral lymphocytes in newly hatched chicks through sublethal irradiation and then surgically removed the bursa of Fabricius or the thymus. The results were clear: the animals lacking a bursa failed to produce either plasma cells or antibodies and did not respond to an immunologic challenge with antibody production. In contrast, the chickens lacking a thymus had the same defects in cell-mediated immunity as those observed in the mice lacking a thymus reported by Miller. Based on this work, Cooper proposed that the adaptive immune system was divided into two separate lymphoid lineages: the antibody-producing lineage, derived in chickens from the bursa of Fabricius and named B cells, and the lineage responsible for cell-mediated immunity, arising from the thymus and given the name T cells.
Even though mammals lacked a bursa of Fabricius, Cooper hypothesized that an equivalent mammalian B cell lineage must exist. He based his belief on the existence of human patients he was caring for who had a disease called Bruton’s agammaglobulinemia. Like Cooper’s bursectomized chickens, patients with Bruton’s agammaglobulinemia failed to produce plasma cells or antibodies but retained the ability to reject foreign tissue grafts.
In subsequent years, both Cooper and Miller played pivotal roles in identifying that B cells arise from hematopoietic stem cells in the liver and bone marrow in both birds and mammals. The rapid amplification of B cell numbers provided by subsequent development in the chicken bursa of Fabricius after hatching is apparently not needed in mammals. Instead it was found that the antibodies that protect newborn mammals from infection in the first 6 months of life come from maternal antibodies that cross the placenta in the third trimester of pregnancy.
Following their successful demonstration that B and T cells were independently produced during mammalian development, Miller and Cooper each extended their findings by demonstrating that many protective immune responses result from the independent development of specificity within the B and T cell lineages. They helped demonstrate that cooperation between these two arms of the lymphoid immune system was often required to mount a successful immune response and that it is the combined properties of B and T cells that provide us with lifelong immunity from foreign pathogens.
Please join me in giving a round of applause for this year's winners of The Albert Lasker Basic Medical Research Award, Max Cooper and Jacques Miller, whose foundational discoveries have informed and shaped the course of modern immunology.
I am truly honored to receive the Albert Lasker Award with Jacques Miller for our discovery of the T and B cell lineages and their pivotal roles in cellular and humoral immunity.
While growing up in rural Mississippi, nothing could have seemed more remote than a career in biomedical immunology. My first childhood encounters with cellular immunity came while fishing and exploring nearby streams; every summer I developed severe poison ivy, known now as a classic cellular immune reaction. My first memorable encounter with humoral immunity came from receiving rabies vaccination with painfully escalating inflammatory lesions that followed the 14 daily injections of rabies virus grown in rabbit brain. (This was well before Jeff Ravetch showed that this “Arthus phenomenon” was triggered by interaction between the constant regions of antibodies and their receptors.)
Ralph Platou, my pediatrics chief at Tulane, encouraged me to pursue an academic career and helped me obtain further training at the Hospital for Sick Children in London. A growing interest in congenital immune deficiencies and allergic diseases led to an immunology and allergy fellowship at University of California in San Francisco. My new mentor requested that on the way there I learn the immunofluorescence microscopy technique for use in studying cell-mediated immunity in a keratoconjunctivitis model. John Holborow, a British immunologist, agreed to teach me the immunofluorescence technique, but when I stated why I wished to learn it, he gently informed me that it was only useful for studying humoral immunity. Embarrassed by my naivety, I vowed if nothing else I would learn the difference between humoral and cell-mediated immunity.
A realistic opportunity to fulfill this vow came later in Minnesota, when I joined Robert Good’s research group soon after Jacques’s discovery of the critical role of the thymus in immune system development. Parenthetically, leading immunologists at the time were still vigorously debating whether antibodies were actually responsible for cell-mediated immunity.
My “eureka moment” was actually a “eureka week” as the results unfolded from our experiments coupling irradiation of newly hatched chicks with removal of their thymus or bursa of Fabricius. The complete elimination of B lineage cells and their antibody products in bursectomized and irradiated chicks, together with their restoration by non-irradiated bursal cells, clearly delineated the bursa-dependent differentiation pathway from the thymus-dependent pathway that is responsible primarily for cellular immunity.
The pieces of the puzzle provided by these results together with information derived from studies of immune system development in immunodeficient patients and thymectomized mice, alongside those of bone marrow stem cells, allowed us to draw a provisional map of how the T and B lymphocyte lineages are derived from hematopoietic stem cells.
Over the following decades this basic organizing principle has proven to be true for immune system development in all living vertebrate species. It has been extensively amplified and elucidated through the work of many immunologists to yield better understanding and treatment of a variety of diseases, prime examples of which we will hear more about today.
I am extremely humbled, honored and delighted to receive the 2019 Lasker Basic Medical Research Award and to share this prestigious award with my long-time colleague Max Cooper. We never worked together, but I have met him on many occasions and have read his published work with great interest. We have also shared both the 1990 Sandoz Prize for Immunology and the 2018 Japan Prize for Medicine and Medicinal Science.
My curiosity for how the body responds to infection began as a child. Although born in France, I spent part of my childhood in China and Switzerland, and escaped with my parents to Australia from China in 1941, because of the Japanese threat during World War II. That was the year my very beautiful eldest sister died from tuberculosis. She had contracted it at a boarding school in 1936 and, although my younger sister and I often played with her, even when she was coughing blood stained sputum, we never developed the disease. Her doctor was overheard stating to my mother that nothing was known about how the body resisted infection. That statement, and the fact that I grew up when World War II was raging in Europe and Asia, made me decide to study Medicine, and thereafter to be involved in Medical Research.
After my residency as an intern at the Royal Prince Alfred Hospital in Sydney, Australia, I was awarded a Fellowship that enabled me in 1958 to study for a doctorate in London at the Cancer Research Institute. It was as a result of my studies on mouse lymphocytic leukemia, a cancer which in mice begins in the thymus before spreading elsewhere, that I made the serendipitous discovery of the immunological function of the thymus—a long-neglected organ. This stresses how important serendipity is in making really novel discoveries. A whole new world opened up before me and my work became more and more exciting, as happened subsequently during the identification of T and B cells, aided by my first PhD student Graham Mitchell at the Walter and Eliza Hall Institute of Medical Research in Melbourne.
It is still now very exciting to me, to see that the thymus, once believed to be a useless vestigial organ, populated with cells which in 1963 were considered by Nobel Laureate Sir Peter Medawar “as an evolutionary accident of no very great significance”, is producing T cells involved essentially across the entire spectrum of tissue physiology and pathology. These cells act not just in reactions considered to be bona fide immunological, but also, to cite some examples, in metabolism, in tissue repair, in dysbiosis and in pregnancy. I also find it most rewarding to see that basic research on thymus function, first published in one of my papers in 1961, and on T and B cells a few years later, has sown the seeds that spawned the new era of immunotherapy which can now claim a seat in the therapeutic pantheon of oncology, next to and perhaps about to supersede surgery, radiotherapy and chemotherapy.
Before I end, let me warmly thank the Lasker Foundation for celebrating basic medical research and for having chosen Max Cooper and myself for this prestigious award.
Seated, left to right James Rothman, Yale University ● Xiaowei Zhuang, Harvard University ● Joseph Goldstein, Chair of the Jury, University of Texas Southwestern Medical Center ● Lucy Shapiro, Stanford University ● J. Michael Bishop, University of California, San Francisco ● Erin O’Shea, Howard Hughes Medical Institute
Standing, left to right Richard Locksley, University of California, San Francisco ● Jeremy Nathans, Johns Hopkins School of Medicine ● Michael Brown, University of Texas Southwestern Medical Center ● K. Christopher Garcia, Stanford University ● Christopher Walsh, Harvard University ● Marc Tessier-Lavigne, Stanford University ● Robert Lefkowitz, Duke University School of Medicine ● Craig Thompson, Memorial Sloan-Kettering Cancer Center ● Richard Lifton, Rockefeller University ● Harold Varmus, Weill Cornell Medical College ● Laurie Glimcher, Dana-Farber Cancer Institute ● Jeffrey Friedman, Rockefeller University ● Charles Sawyers, Memorial Sloan-Kettering Cancer Center | <urn:uuid:5cd45a09-4e66-4c4a-ad56-01d6a61a7486> | CC-MAIN-2019-47 | http://www.laskerfoundation.org/awards/show/b-and-t-cells-organizing-principle-adaptive-immune-system/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669225.56/warc/CC-MAIN-20191117165616-20191117193616-00338.warc.gz | en | 0.955825 | 6,209 | 3 | 3 |
Statue of Liberty (Seattle)
The Statue of Liberty, or Lady Liberty, is a replica of the Statue of Liberty, installed at Seattle's Alki Beach Park, in the U.S. state of Washington. It was installed in 1952 by the Boy Scouts of America and underwent a significant restoration in 2007 after repeated vandalism had damaged the sculpture. The sculpture was donated to the city by the Boy Scouts of America in 1952, as part of the Strengthen the Arm of Liberty campaign. It was installed in February 1952 at a site near the landing spot of the Denny Party, who named the first settlement there "New York Alki" before moving to modern-day Downtown Seattle; the site was near a location proposed for a "grand monument" in the 1911 city plan outlined by Virgil Bogue. The original statue was constructed using stamped copper sheets and was damaged by vandals; the entire statue was knocked off its base by vandals in 1975, requiring $350 in repairs funded by the city's parks department. A miniature version of the statue, left inside the larger statue's pedestal base, was re-discovered with a ripped arm that mirrored the acts of an earlier vandal.
It was the site of a temporary memorial after the September 11 attacks, with flowers and flags left around the statue. The statue was used as the backdrop to several protests against the U.S. invasion of Iraq and the subsequent Iraq War. The Northwest Programs for the Arts announced plans in 2004 to re-cast the entire sculpture in bronze and began soliciting donations to fund the project; the statue's crown was stolen during the campaign, which received a $15,000 grant from the city's neighborhoods department to complete the project. The old statue was removed in July 2006 and sent to a foundry in Tacoma to be re-cast in bronze and painted copper green; the $140,000 restoration project was completed the following year and the statue was re-installed at Alki Beach on September 11, 2007. The statue is 7.5 feet tall, about 5 percent of the original's height, and faces north towards Elliott Bay. A new, 4.5-foot pedestal was designed for the statue, sitting in a new plaza built by the city's parks department and dedicated in September 2008.
See also: List of public art in Seattle; Replicas of the Statue of Liberty. External link: Statue of Liberty – Alki, Washington at Waymarking.com
Wing Luke Museum of the Asian Pacific American Experience
The Wing Luke Museum of the Asian Pacific American Experience is a history museum of the culture and history of Asian Pacific Americans located in Seattle, Washington's Chinatown-International District, founded in 1967. It is a Smithsonian Institution affiliate, the only pan-Asian Pacific American community-based museum in the US. In February 2013 it was recognized as one of two dozen affiliated areas of the U.S. National Park Service. The Wing Luke Museum's collections have over 18,000 items, including artifacts, documents and oral histories. Parts of the museum's collections are viewable through its online database. There is an oral history lab inside the museum for staff and public use. The Wing houses temporary and permanent exhibitions related to Asian American history and cultures. The museum represents over 26 ethnic groups and uses a community-based exhibition model to create exhibits. As part of the community-based process, the museum conducts outreach into communities to find individuals and organizations to partner with.
The museum forms a Community Advisory Committee to determine the exhibit's direction. Staff at the museum conduct research, gather materials, and record relevant oral histories under the guidance of the CAC; the CAC determines the exhibit's overall design and content. This process can take 12 to 18 months. In 1995, the Wing Luke Museum received the Institute for Museum and Library Services National Award for Museum Service for its exhibit process. Award-winning exhibits by the museum include Do You Know Bruce?, a 2014 exhibit on Bruce Lee. The Association of King County Historical Organizations awarded Do You Know Bruce? the 2015 Exhibit Award. The museum is named for Seattle City Council member Wing Luke, the first Asian American elected to public office in the Pacific Northwest. Luke suggested the need for a museum in the Chinatown-International District in the early 1960s to preserve the history of the changing neighborhood. After Luke died in a small plane crash in 1965, friends and supporters donated money to start the museum he envisioned.
The Wing Luke Memorial Museum, as it was first named, opened in 1967 in a small storefront on 8th Avenue. The museum focused on Asian folk art, but soon expanded its programming to reflect the diversity of the local community; the museum exhibited work by emerging local artists, and by the 1980s pan-Asian exhibits made by community volunteers became central to the museum. In 1987 the Wing Luke Museum moved to a larger home on 7th Avenue and updated its name to Wing Luke Asian Museum. It achieved national recognition in the 1990s under the direction of local journalist Ron Chew, a pioneer of the community-based model of exhibit development that placed personal experiences at the center of exhibit narratives. Today the museum continues to present exhibits and programs that promote social justice, multicultural understanding and tolerance. In 2008 the museum moved to a larger building at 719 South King Street, in the renovated 1910 East Kong Yick Building; the museum continued addressing civil rights and social justice issues, while preserving historic spaces within the building including the former Gee How Oak Tin Association room, the Freeman SRO Hotel, a Canton Alley family apartment, and the Yick Fung Mercantile.
In 2010 the museum changed its name to the Wing Luke Museum of the Asian Pacific American Experience, informally "The Wing." The East Kong Yick Building, where the museum is located, along with the West Kong Yick Building, were funded by 170 Chinese immigrants in 1910. In addition to storefronts, the East Kong Yick Building contained the Freeman Hotel, used by Chinese and Filipino immigrants until the 1940s; the museum's galleries now share the building with re-creations of the Gee How Oak Tin Association's meeting room and apartments that were inside the hotel. The museum preserves the contents of a general store, Yick Fung Co., which the owner donated in its entirety. The museum is in Seattle's Chinatown-International District next to Canton Alley, a residential and communal area. The Wing runs Chinatown Discovery Tours, a tour service founded in 1985 that takes visitors to significant sites within the neighborhood.
A park is an area of natural, semi-natural or planted space set aside for human enjoyment and recreation or for the protection of wildlife or natural habitats. Urban parks are green spaces set aside for recreation inside cities. National parks and Country parks are green spaces used for recreation in the countryside. State parks and Provincial parks are administered by sub-national government agencies. Parks may consist of grassy areas, rocks and trees, but may contain buildings and other artifacts such as monuments, fountains or playground structures. Many parks have fields for playing sports such as soccer and football, and paved areas for games such as basketball. Many parks have trails for walking and other activities; some parks are built adjacent to bodies of water or watercourses and may comprise a beach or boat dock area. Urban parks have benches for sitting and may contain picnic tables and barbecue grills. The largest parks can be vast natural areas of hundreds of thousands of square kilometers, with abundant wildlife and natural features such as mountains and rivers.
In many large parks, camping in tents is allowed with a permit. Many natural parks are protected by law, and users may have to follow restrictions. Large national and sub-national parks are overseen by a park ranger or a park warden. Large parks may have areas for canoeing and hiking in the warmer months and, in some northern hemisphere countries, cross-country skiing and snowshoeing in colder months. There are amusement parks which have live shows, fairground rides and games of chance or skill. English deer parks were used by the aristocracy in medieval times for game hunting; they had walls or thick hedges around them to keep game animals in and people out. It was forbidden for commoners to hunt animals in these deer parks; these game preserves evolved into landscaped parks set around mansions and country houses from the sixteenth century onwards. These may have served as hunting grounds but they proclaimed the owner's wealth and status. An aesthetic of landscape design began in these stately home parks where the natural landscape was enhanced by landscape architects such as Capability Brown.
As cities became crowded, the private hunting grounds became places for the public. With the Industrial Revolution, parks took on a new meaning as areas set aside to preserve a sense of nature in the cities and towns. Sporting activity came to be a major use for these urban parks. Areas of outstanding natural beauty were set aside as national parks to prevent their being spoiled by uncontrolled development. Park design is influenced by the intended purpose and audience, as well as by the available land features. A park intended to provide recreation for children may include a playground. A park intended for adults may feature walking paths and decorative landscaping. Specific features, such as riding trails, may be included to support specific activities; the design of a park may determine who is willing to use it. Walkers may feel unsafe on a mixed-use path dominated by fast-moving cyclists or horses. Different landscaping and infrastructure may affect children's rates of use of parks according to sex.
Redesigns of two parks in Vienna suggested that the creation of multiple semi-enclosed play areas in a park could encourage equal use by boys and girls. Parks are part of the urban infrastructure: for physical activity, for families and communities to gather and socialize, or for a simple respite. Research reveals that people who exercise outdoors in green space derive greater mental health benefits. Providing activities for all ages and income levels is important for the physical and mental well-being of the public. Parks can benefit pollinators, and some parks have been redesigned to accommodate them better; some organisations, such as the Xerces Society, are promoting this idea. City parks play a role in improving cities and improving the futures for residents and visitors – for example, Millennium Park in Chicago, Illinois or the Mill River Park and Greenway in Stamford, CT. One group that is a strong proponent of parks for cities is the American Society of Landscape Architects; they argue that parks are important to the fabric of the community on an individual scale and broader scales such as entire neighborhoods, city districts or city park systems.
Parks need to feel safe for people to use them. Research shows that perception of safety can be more significant in influencing human behavior than actual crime statistics. If citizens perceive a park as unsafe, they might not make use of it at all. A study done in four cities identified a number of features that influence this perception. Elements in the physical design of a park, such as an open and welcoming entry, good visibility, appropriate lighting and signage, can all make a difference. Regular park maintenance, as well as programming and community involvement, can contribute to a feeling of safety. While Crime Prevention Through Environmental Design has been used in facility design, use of CPTED in parks has not been. Iqbal and Ceccato performed a study in Stockholm, Sweden to determine if it would be useful to apply to parks; their study indicated that while CPTED could be useful, due to the
Gas Works Park
Gas Works Park, in Seattle, Washington, is a 19.1-acre public park on the site of the former Seattle Gas Light Company gasification plant, located on the north shore of Lake Union at the south end of the Wallingford neighborhood. The park was added to the National Register of Historic Places on January 2, 2013, more than a decade after being nominated. Gas Works park contains remnants of the sole remaining coal gasification plant in the United States; the plant operated from 1906 to 1956 and was bought by the City of Seattle for park purposes in 1962. The park opened to the public in 1975; the park was designed by Seattle landscape architect Richard Haag, who won the American Society of Landscape Architects Presidents Award of Design Excellence for the project. The plant's conversion into a park was completed by Daviscourt Construction Company of Seattle, it was named Myrtle Edwards Park, after the city councilwoman who had spearheaded the drive to acquire the site and who died in a car crash in 1969.
In 1972, the Edwards family requested that her name be taken off the park because the design called for the retention of much of the plant. In 1976, Elliott Bay Park, just north of Seattle's Belltown neighborhood, was renamed Myrtle Edwards Park. Gas Works Park incorporates numerous pieces of the old plant; some stand as ruins, while others have been reconditioned and incorporated into a children's "play barn" structure, constructed in part from what was the plant's exhauster-compressor building. A web site affiliated with the Seattle Times newspaper says, "Gas Works Park is the strangest park in Seattle and may rank among the strangest in the world." Gas Works Park features an artificial kite-flying hill with an elaborately sculptured sundial built into its summit. The park was for many years the exclusive site of a summer series of "Peace Concerts"; these concerts are now shared out among several Seattle parks. The park has for many years hosted one of Seattle's two major Fourth of July fireworks events.
The park is the traditional end point of the Solstice Cyclists and the start point for Seattle's World Naked Bike Ride. The park constituted one end of the Burke-Gilman bicycle and foot trail, laid out along the abandoned right-of-way of the Seattle, Lake Shore and Eastern Railway. However, the trail has now been extended several kilometers northwest, past the Fremont neighborhood toward Ballard; the soil and groundwater of the site was contaminated during operation as a gasification plant. The 1971 Master Plan called for "greening" the park through bio-phytoremediation. Although the presence of organic pollutants had been reduced by the mid-1980s, the US Environmental Protection Agency and Washington State Department of Ecology required additional measures, including removing and capping wastes, air sparging in the southeast portion of the site to try to remove benzene, a theoretical source of pollutants reaching Lake Union via ground water. There are no known areas of surface soil contamination remaining on the site today, although tar still oozes from some locations within the site and is isolated and removed.
Despite its somewhat isolated location, the park has been the site of numerous political rallies. These included a seven-month continuous vigil under the name PeaceWorks Park, in opposition to the Gulf War; the vigil began at a peace concert in August 1990 and continued until after the end of the shooting war. Among the people who participated in the vigil at one point or another were former congressman and future governor Mike Lowry, then-city-councilperson Sue Donaldson, 1960s icon Timothy Leary, beat poet Allen Ginsberg. Gas Works Park has been a setting for films such as Singles and 10 Things I Hate About You, it has been featured twice on the travel-based television reality show The Amazing Race: once as the finish line for Season 3 and another time as the starting line for Season 10. The building is a Washington State Landmark. Gas Works Park occupies a 20.5 acres promontory between the northwest and northeast arms of Lake Union. Little is known of pre–Euro-American site history, but there were Native American settlements around Lake Union.
Native names for Lake Union include Kah-chug, Tenas Chuck, Xa’ten. In the mid-19th century Thomas Mercer named it "Lake Union" in expectation of future canals linking it to Puget Sound and to Lake Washington. Dense forests still came down to the water's edge and the lake drained into Salmon Bay through a stream "full of windfalls and brush, impassable for a canoe". Lake Union in the 1860-70s was a popular vacation spot with Seattleites for summer house-boating and picnicking. Several sawmills were operating on Lake Union's shore by the 1850s, taking advantage of the dense forests. Beginning in 1872, Seattle Coal and Transportation Company ferried coal from its Renton Hill mines across the lake for portage across to Puget Sound. In the 1880s came the Denny sawmill at the south end of Lake Union, brick manufacturing, ship building, a tannery, iron works. Canals with small locks were cut in 1885 from Lake Washington to Lake Union, from Lake Union to Salmon Bay; these were suitable for transporting logs, but not for shipping.
The arrival of the Seattle, Lake Shore and Eastern Railway in 1887 ensured that Lake Union would continue to be a focus for industrial development. In 1900 the Seattle Gas Light Company began to purchase lots on this promontory and its coal gas plant went into operation in 1906. At the time the neighborhood was known as Edgewater. Seattle Gas Light Company purchased lots on the north shore promontory from 1900 to 1909. Despite t
Seward Park (Seattle)
Seward Park is a municipal park which covers 300 acres. It is located in southeast Seattle, Washington, U. S. A in the neighborhood of the same name; the park occupies all of a forested peninsula that juts into Lake Washington. It contains one of the last surviving tracts of old-growth forest within the city of Seattle; the park is named for U. S. Secretary of State William Seward. One approaches the park from the north by Lake Washington Boulevard S, from the south by Seward Park Avenue S. or from the west by S Orcas Street. The main parking lot and a tennis court are located in the southwest corner; the most used trail is a car-free loop around the park. It is flat and 2.4 mi. The perimeter trail was repaved in 2007. Other trails run through the interior, including a few car-accessible roads that lead to amenities including an amphitheater and picnic area. Seward Park features numerous small beaches, the largest one on its southwest side, as well as a playground and an arts center; the 300 acres of Seward Park have about a 120 acres surviving remnant of old growth forest, providing a glimpse of what some of the lake shore looked like before the city of Seattle.
With trees older than 250 years and many less than 200, the Seward Park forest is young. The area has been inhabited since the end of the last glacial period; the People of the Large Lake had resource sites. The Duwamish called Bailey Peninsula "Noses" for rocky points, or "noses", at the north and south ends evident before the completion of the Lake Washington Ship Canal in 1916 lowered the level of Lake Washington; the marshy isthmus was called cka’lapsEb, Lushootseed for “neck”. The purchase of the park was suggested as early as 1892, but was sidelined due to its distance from what was the city. However, the Olmsted Brothers assimilated it into its plan for Seattle parks, the city of Seattle bought Bailey Peninsula in 1911 for $322,000, named the park after William H. Seward, former United States Secretary of State, of Alaska Purchase fame. At the entrance to the park, in a wooded island filled with flowers between the circular entrance and exit road, there is a little-known monument: a taiko-gata stone lantern, a gift of friendship from the City of Yokohama, Japan, to the City of Seattle, given in 1930 in gratitude to Seattle's assistance to Yokohama after the 1923 Great Kantō earthquake.
Since at least early July, 2004, the park has become a home to wild rabbits and a growing colony of feral Peruvian conures, who were released into the wild by their owners. They fly between Maple Leaf in northeast Seattle; the park is home to two nesting pairs of bald eagles, who can be seen flying over Lake Washington and diving to the water's surface to catch fish and ducks. Renovation on the Tudor-style house at the entrance to Seward Park—originally the Seward Park Inn, a Seattle city landmark—was completed early in 2008 and is now the Seward Park Environmental & Audubon Center. Programming at the Center and in the park includes school, community, arts in the environment, special events; the Center includes exhibits, an extensive library, a laboratory, a small gift shop. Seward Park offers at least five unique experiences; the first Seward Park experience is the beach on Andrews' Bay. Flanked by a broad lawn and with full facilities, it is one of Seattle's several lakeshore beaches. On the other side of this beach is the second experience: a playground, tennis courts, several large parking lots.
This is the most social part of Seward Park, the lot features neighboring residents sometimes throw impromptu parties in this area of the park. The third experience is in the "upper lots," which provide parking for a large picnic area and an outdoor amphitheatre. Civic events are held in the amphitheatre, which has beautiful views of the old-growth forest, it has become a well-known spot to celebrate, the diversity of both Seattle and its South End; these parking lots can host impromptu parties. The fourth experience is the old growth forest itself. Granite trail markers help hikers navigate; the fifth experience is the paved perimeter of the park, a favorite place for neighbors and visitors alike to walk, run and blade. The perimeter reminds its user of the vast metropolis, Seattle, since it affords to the south of the park a view of Mount Rainier dominating South Lake Washington, as well as Boeing plants. Sites and works regarding William H. Seward "Seward Park". Seattle Parks and Recreation.
Not recorded, 2006-08-10. Retrieved not recorded, 2006-08-21. "Seward Park History". Seattle Parks and Recreation. 2003-06-30. Retrieved not recorded, 2006-08-21. Sherwood, Don. "Seward Park". PARK HISTORY: Sherwood History Files. Seattle Parks and Rec
A naval mine is a self-contained explosive device placed in water to damage or destroy surface ships or submarines. Unlike depth charges, mines are deposited and left to wait until they are triggered by the approach of, or contact with, any vessel. Naval mines can be used offensively, to hamper enemy shipping movements or lock vessels into a harbour. Mines can be laid in many ways: by purpose-built minelayers, refitted ships, submarines, or aircraft—and by dropping them into a harbour by hand, they can be inexpensive: some variants can cost as little as US$2000, though more sophisticated mines can cost millions of dollars, be equipped with several kinds of sensors, deliver a warhead by rocket or torpedo. Their flexibility and cost-effectiveness make mines attractive to the less powerful belligerent in asymmetric warfare; the cost of producing and laying a mine is between 0.5% and 10% of the cost of removing it, it can take up to 200 times as long to clear a minefield as to lay it. Parts of some World War II naval minefields still exist because they are too extensive and expensive to clear.
It is possible for some of these 1940s-era mines to remain dangerous for many years to come. Mines have been employed as offensive or defensive weapons in rivers, estuaries and oceans, but they can be used as tools of psychological warfare. Offensive mines are placed in enemy waters, outside harbours and across important shipping routes with the aim of sinking both merchant and military vessels. Defensive minefields safeguard key stretches of coast from enemy ships and submarines, forcing them into more defended areas, or keeping them away from sensitive ones. Minefields designed for psychological effect are placed on trade routes and are used to stop shipping from reaching an enemy nation, they are spread thinly, to create an impression of minefields existing across large areas. A single mine inserted strategically on a shipping route can stop maritime movements for days while the entire area is swept. International law requires nations to declare when they mine an area, to make it easier for civil shipping to avoid the mines.
The warnings do not have to be specific. Precursors to naval mines were first invented by Chinese innovators of Imperial China and were described in thorough detail by the early Ming dynasty artillery officer Jiao Yu, in his 14th century military treatise known as the Huolongjing. Chinese records tell of naval explosives in the 16th century, used to fight against Japanese pirates; this kind of naval mine was loaded in a wooden box, sealed with putty. General Qi Jiguang made several timed, to harass Japanese pirate ships; the Tiangong Kaiwu treatise, written by Song Yingxing in 1637 AD, describes naval mines with a rip cord pulled by hidden ambushers located on the nearby shore who rotated a steel wheellock flint mechanism to produce sparks and ignite the fuse of the naval mine. Although this is the rotating steel wheellock's first use in naval mines, Jiao Yu had described their use for land mines back in the 14th century; the first plan for a sea mine in the West was by Ralph Rabbards, who presented his design to Queen Elizabeth I of England in 1574.
The Dutch inventor Cornelius Drebbel was employed in the Office of Ordnance by King Charles I of England to make weapons, including a "floating petard" which proved a failure. Weapons of this type were tried by the English at the Siege of La Rochelle in 1627. American David Bushnell developed the first American naval mine for use against the British in the American War of Independence, it was a watertight keg filled with gunpowder, floated toward the enemy, detonated by a sparking mechanism if it struck a ship. It was used on the Delaware River as a drift mine. In 1812 Russian engineer Pavel Shilling exploded an underwater mine using an electrical circuit. In 1842 Samuel Colt used an electric detonator to destroy a moving vessel to demonstrate an underwater mine of his own design to the United States Navy and President John Tyler. However, opposition from former President John Quincy Adams scuttled the project as "not fair and honest warfare." In 1854, during the unsuccessful attempt of the Anglo-French fleet to seize the Kronstadt fortress, British steamships HMS Merlin, HMS Vulture and HMS Firefly suffered damage due to the underwater explosions of Russian naval mines.
Russian naval specialists set more than 1,500 naval mines, or infernal machines, designed by Moritz von Jacobi and by Immanuel Nobel, in the Gulf of Finland during the Crimean War of 1853-1856. The mining of Vulcan led to the world's first minesweeping operation; during the next 72 hours, 33 mines were swept. The Jacobi mine was designed in 1853 by Jacobi, a German-born Russian engineer. The mine was tied to the sea bottom by an anchor, and a cable connected it to a galvanic cell which powered it from the shore; its explosive charge was equal to 14 kilograms of black powder. In the summer of 1853, production of the mine was approved by the Committee for Mines of the Ministry of War of the Russian Empire. In 1854, 60 Jacobi mines were laid in the vicinity of Forts Pavel and Alexander to deter the British Baltic Fleet from attacking them. On the insistence of Admiral Fyodor Litke, the Jacobi mine phased out its direct competitor, the Nobel mine. The Nobel mines had been bought from the Swedish industrialist Immanuel Nobel, who had entered into collusion with the Russian head of the navy, Alexander Sergeyevich Menshikov.
Despite their high cost t
Denny Park (Seattle)
Denny Park is a park located in the South Lake Union neighborhood of Seattle, Washington. It occupies the block bounded by John Street and Denny Way on the north and south and Dexter and 9th Avenues N. on the west and east. Denny Park is Seattle's oldest park. In 1861 pioneer David Denny donated the land to the city as Seattle Cemetery. In 1883, the graves were removed and the cemetery was converted to a park, the city's first. By 1904, the surrounding area had become residential, and the park was improved with formally designed planting beds, play equipment, a sand lot and a play field. The Denny School, an elementary school, stood southeast of the park from 1884 to 1928, and children were, from the earliest days, regular users of the park. The park stood on the north slope of Denny Hill. Between 1900 and 1931, the landscape of central Seattle was reshaped by a series of regrading projects. Denny Regrade No. 1, around 1910, lowered the land to the west of the park by some 60 feet. Some surviving Seattle pioneers demanded that the park remain unchanged.
The result was that access to the park from the downtown side became impossible by car because of the grade. In Denny Regrade No. 2, around 1930, the park was once again planted in a formal style. In 1948, over the strenuous objections of the Denny family, a Parks and Recreation building was built within the park to house this growing city department. For several years, before all of the space was required for Parks personnel, the lower level of the new building housed the Washington Society for Crippled Children. By 1964, Parks Department personnel occupied the building. Today, Denny Park is undergoing extensive renovation; Phase 1 opened on May 2, 2009, and future phases include a history element highlighting early Seattle. A diverse coalition of park supporters called Friends of Denny Park has invested many hours in bringing the need for a safe and vital park environment to the attention of the City; the group is working, in partnership with City departments, to revitalize the park to serve the children and other constituents of two of Seattle's fastest-growing neighborhoods, South Lake Union and Denny Triangle.
If you’re a fan of chocolate, every occasion is right for this treat. Feeling down? Eat some chocolate. Celebrating? Eat some more. Want to show your Valentine how special you think your sweetheart is? Offer up chocolate!
People have been enjoying chocolate as a food, drink and medicine for thousands of years. The ancient Maya and Aztecs called chocolate kakaw and used it as medicine.
They also made chocolate offerings to their gods. Chocolate comes from the cacao (Kuh-KOW) tree, and its scientific name reflects its history. Theobroma cacao translates to “cacao, food of the gods.” These days, chocolate is not just for deities. The treat has become popular around the world.
It’s so popular, in fact, that people spend more than $90 billion on it each year. And that number just keeps growing.
People may eat chocolate because of its taste. And many adults justify that treat because they’ve heard it has health benefits. But chocolate’s popularity also has downsides. Scientists are scrambling to help farmers produce enough cacao to meet the growing demand for chocolate. One big challenge: Plant diseases threaten crop yields. So scientists are working hard to understand the cacao tree in hopes of protecting it.
Cacao is a tropical tree. And it's unusual: Its fruits, called pods, grow directly on the tree trunk. Inside the pod’s citrusy flesh are large, brown seeds. They are what hold the starting material for one of the world’s tastiest treats.
Those seeds, like the tree, are called cacao. After being harvested, they’re heaped in piles or poured into boxes to ferment. During this process, microbes break down the flesh. As they digest its sugars and other chemicals, they give off heat. That heat breaks down cells within the bean. This lets chemical reactions take place that produce the flavors we recognize in chocolate. After four to seven days, the seeds are laid in the sun to dry. Now they’re ready to be roasted and ground to make cocoa.
Cocoa is the basic ingredient in chocolate. When cocoa is mashed into a thick brown paste, it’s called chocolate liquor. Milk and dark chocolate both contain chocolate liquor. (Hardened cocoa liquor is also what chefs call unsweetened baking chocolate.) Cocoa butter is the fat in chocolate liquor. That fat can be separated out and used to make white chocolate. Chocolate candy also includes sugar, vanilla, lecithin and sometimes milk. (Lecithin is an emulsifier — a chemical that helps fatty and non-fatty ingredients stay smoothly mixed. That helps to stabilize the final product.)
The sweet confection we enjoy today is nothing like the original forms of cocoa. The Maya mixed cocoa, water and chili pepper to make a spicy, bitter drink. It wasn't until Spanish explorers sent cocoa back to Europe that candy makers came up with our modern, sweet version.
The ancient Maya and Aztec people also mixed cacao seeds with various herbs to make medicines. They used these to treat symptoms such as diarrhea, fever and cough. Cacao has a long history as a medicine — but scientists have only recently begun to investigate its benefits.
Cocoa for health
Cocoa contains antioxidants. These molecules stop chemical reactions that involve oxidation, which can damage the body’s cells. DNA — the molecule that gives instructions to each of our cells — is especially vulnerable. Damaged DNA can eventually lead to cancer. So antioxidants are an important part of our diet.
Many kinds of dark chocolate are high in antioxidants. Milk chocolate and white chocolate are not.
“Many research studies have shown that antioxidants protect DNA,” notes Astrid Nehlig. She is the director of research at the National Institute for Health and Medical Research in Strasbourg, France. Fruit, chocolate and coffee are all high in antioxidants. The DNA in people who consume a lot of these foods is less likely to break. And when it does, the body is more likely to repair it, Nehlig says.
Flavanols (FLAV-uh-nahls) are another important group of cocoa compounds. Arteries carry blood from the heart to organs and other tissues from head to foot. Flavanols can dilate — or widen — those arteries. That helps blood flow better, Nehlig explains. Many studies have shown that eating cocoa products can keep blood pressure low and help improve heart health, thanks in part to its flavanols.
In fact, improved blood flow seems to be the one benefit of cocoa products that holds up in study after study. Last year, medical researchers in Australia pored over 35 different studies. This type of investigation is called a meta-analysis. It looks at the big picture.
Eating cocoa compounds indeed appears to improve blood flow, these researchers reported. In April 2017, they shared their findings in the Cochrane Database of Systematic Reviews.
Improved blood flow helps the brain work better, notes Nehlig. So more blood reaching the brain means more energy for brain cells. Some research has found that learning and memory improve when people eat cocoa flavanols. But researchers do not yet know exactly how these compounds work, she adds.
It’s also unclear whether chocolate provided the flavanols in question. Researchers usually don’t give study participants chocolate. Instead, they give them cocoa flavanol supplements. So what? Certain candy-making processes may remove or alter the flavanols. That could erase their brain-sparing benefits. So it’s too soon to know whether cocoa flavanols might limit memory problems (including dementia) later in life.
Flavanols and other cocoa compounds may improve health in other ways, too. Studies suggest that molecules in cocoa can reduce symptoms ranging from anxiety to allergies. As encouraging as all such studies sound, it’s important to keep in mind that they often have come from labs funded by the chocolate industry.
Not all chocolate is a ‘health’ snack
Ready to rush out and buy chocolate for your health? First, consider this: Not all chocolate contains these potentially beneficial compounds.
Much of the chocolate sold today is low in cocoa flavanols, says Susan Miszewski. She works for Mars Symbioscience in Germantown, Md. “It is a common myth that chocolate with a high percent of cocoa solids (such as 70 percent dark chocolate) has higher levels of cocoa flavanols,” she says. Most chocolates do contain some of these flavanols. But that amount varies a lot, Mars scientists and their university colleagues have shown. So when it comes to these health boosters, she says, “chocolate is not a reliable source.”
The normal steps in harvesting a cocoa bean and processing it into a candy bar destroy flavanols along the way, she notes. Those processes include fermenting and roasting the seeds. Lowering the acidity of the cocoa also can break down flavanols. The acidity falls when chocolate makers add a base — or alkaline material — to cocoa. This alkalization makes chocolate less bitter, she says. But all of that processing means the final piece of chocolate has likely lost many of its initial health-boosting flavanols.
And don’t forget: Even chocolate that still contains flavanols is chock full of calories, sugar and fat.
Scientists at Mars, the candy company that makes M&M’s, know the potential health benefits flavanols can offer. So they have developed a process to preserve them in cocoa, Miszewski says. Cocoa prepared in this new way isn't used in candy bars, though. So it won't make your sugary snack suddenly healthy. But Mars is using this material to develop potentially health-enhancing flavanol-rich products.
Mars also has joined forces with other companies, universities and the U.S. Department of Agriculture to map the cacao genome. That’s the complete set of genes that make up the biological instruction manual in each of the plant’s cells. The resulting Cacao Genome Database is freely available for anyone to use.
Why build such a database? Between 40 million and 50 million people worldwide work in jobs related to cacao, cocoa and chocolate. So understanding and protecting the cacao tree is globally important.
Long ago, Central and South American rainforests were the only source of cacao trees. As Spaniards and other Europeans colonized new parts of the world, they found the trees and eventually moved some of these trees with them. Now cacao grows across the world’s tropics. A whopping 70 percent of cacao today comes from Africa, where the climate is just right. Ghana and the Ivory Coast grow most of that — 60 percent of the world's total.
Growing cacao is a good way for farmers to earn money and support their families. But the demand for cacao keeps increasing. That makes it difficult for farmers to grow enough. Diseases also threaten their trees. Cacao-killing diseases would do more than take away a tasty treat — they also could plunge millions of people into poverty. Scientists are using the Cacao Genome Database in hopes of heading off both of these problems.
Mark Guiltinan and Siela Maximova are plant biologists at Pennsylvania State University in University Park. They use the database all the time. “If there is a gene we are interested in, we can look it up in the database and use that information to design future experiments,” explains Guiltinan.
His group may look for a gene variant that makes these plants better at fighting off disease, for example. Or they may seek a gene to produce more flavorful fruit. They may even search for genes that allow the trees to grow more quickly. Such genes may already be known to exist in other, better-studied plants. Examples might include soybeans or corn. "We can find genes in cacao that are similar . . . to some of these," Guiltinan says. And, he points out, "Often they share a similar function."
He and Maximova can then test whether these genes actually work the same way in cacao trees as in other plants. The scientists don’t have time to wait for each seed to grow into a new tree. Instead, they have developed a way of cloning trees using the parts of one tree’s flowers (See "How to grow a cacao tree in a hurry".)
Then they check to see if the new trees make more pods, for example, or resist certain diseases. One such disease is called frosty pod. It makes cacao pods turn white and rot on the tree. Another is called witches’ broom. This infection spreads throughout the tree and can eventually kill it. Fungi cause both diseases.
Guiltinan and Maximova use their gene identification process to find trees that have several of the traits they want. The scientists then make many copies of those strong, healthy trees.
So far, the researchers have helped to clone 100 varieties of cacao tree. The result? About 100 million cacao trees that come from these newfound varieties have been planted in fields across Indonesia. And, Guiltinan adds, those trees are living up to their potential. They are growing more pods and resisting disease.
“Cacao plants are great for the environment, protecting soil, water and habitat,” he notes. And the income from farming cacao can be a big help to people in developing countries, he adds.
The quest for strong and healthy cacao trees will be important for meeting the world’s growing demand for chocolate and cocoa. As more people learn about cocoa’s potential health benefits, that demand might grow even more. Scientists hope their work will ensure a long-lasting supply of cacao.
That should be good news for cacao growers — and chocolate lovers — everywhere.
agriculture The growth of plants, animals or fungi for human needs, including food, fuel, chemicals and medicine.
allergy The inappropriate reaction by the body’s immune system to a normally harmless substance. Untreated, a particularly severe reaction can lead to death.
antioxidant Any of many chemicals that can shut down oxidation — a biologically damaging reaction. They do this by donating an electron to a free radical (a reactive molecular fragment) without becoming unstable. Many plant-based foods are good sources of natural antioxidants, including vitamins C and E.
anxiety A nervous reaction to events causing excessive uneasiness and apprehension. People with anxiety may even develop panic attacks.
blood pressure The force exerted against vessel walls by blood moving through the body. Usually this pressure refers to blood moving specifically through the body’s arteries. That pressure allows blood to circulate to our heads and keeps the fluid moving so that it can deliver oxygen to all tissues. Blood pressure can vary based on physical activity and the body’s position. High blood pressure can put someone at risk for heart attacks or stroke. Low blood pressure may leave people dizzy, or faint, as the pressure becomes too low to supply enough blood to the brain.
cacao The name of a tropical tree and of the tree’s seeds (from which chocolate is made).
cancer Any of more than 100 different diseases, each characterized by the rapid, uncontrolled growth of abnormal cells. The development and growth of cancers, also known as malignancies, can lead to tumors, pain and death.
cell The smallest structural and functional unit of an organism. Typically too small to see with the unaided eye, it consists of a watery fluid surrounded by a membrane or wall. Depending on their size, animals are made of anywhere from thousands to trillions of cells. Some organisms, such as yeasts, molds, bacteria and some algae, are composed of only one cell.
chemical A substance formed from two or more atoms that unite (bond) in a fixed proportion and structure. For example, water is a chemical made when two hydrogen atoms bond to one oxygen atom. Its chemical formula is H2O. Chemical can also be used as an adjective to describe properties of materials that are the result of various reactions between different compounds.
chemical reaction A process that involves the rearrangement of the molecules or structure of a substance, as opposed to a change in physical form (as from a solid to a gas).
climate The weather conditions prevailing in one area, in general, or over a long period.
clone An exact copy (or what seems to be an exact copy) of some physical object. (in biology) An organism that has exactly the same genes as another, like identical twins. Often a clone, particularly among plants, has been created using the cell of an existing organism. Clone also is the term for making offspring that are genetically identical to some “parent” organism.
cocoa A powder derived from the solids (not the fats) in beans that grow on the Theobroma cacao plant, also known as the cocoa tree. Cocoa is also the name of a hot beverage made from cocoa powder (and sometimes other materials) mixed with water or milk.
compound (often used as a synonym for chemical) A compound is a substance formed when two or more chemical elements unite (bond) in fixed proportions. For example, water is a compound made of two hydrogen atoms bonded to one oxygen atom. Its chemical symbol is H2O.
database An organized collection of information.
dementia A type of mental disorder caused by disease or injury that causes people to gradually lose all or part of their memory. It may start out temporary and build to a permanent condition where the ability to reason also is impaired.
developing country A relatively poor nation with little industry and a lower standard of living than industrial countries, such as the United States, Germany and Japan.
diet The foods and liquids ingested by an animal to provide the nutrition it needs to grow and maintain health. (verb) To adopt a specific food-intake plan for the purpose of controlling body weight.
digest (noun: digestion) To break down food into simple compounds that the body can absorb and use for growth. Some sewage-treatment plants harness microbes to digest — or degrade — wastes so that the breakdown products can be recycled for use elsewhere in the environment.
dilate To temporarily swell or expand in size.
DNA (short for deoxyribonucleic acid) A long, double-stranded and spiral-shaped molecule inside most living cells that carries genetic instructions. It is built on a backbone of phosphorus, oxygen, and carbon atoms. In all living things, from plants and animals to microbes, these instructions tell cells which molecules to make.
emulsify To blend two materials together that do not ordinarily want to mix and stay blended. An additive that helps keep such blended materials from separating again is known as an emulsifier.
environment The sum of all of the things that exist around some organism or the process and the conditions that those things create for that organism or process. Environment may refer to the weather and ecosystem in which some animal lives, or, perhaps, the temperature, humidity and placement of components in some electronics system or product.
fat A natural oily or greasy substance occurring in plants and in animal bodies, especially when deposited as a layer under the skin or around certain organs. Fat’s primary role is as an energy reserve. Fat is also a vital nutrient, though it can be harmful to one’s health if consumed in excessive amounts.
fermentation (v. ferment) The metabolic process of converting carbohydrates (sugars and starches) into short-chain fatty acids, gases or alcohol. Yeast and bacteria are central to the process of fermentation. Fermentation is a process used to liberate nutrients from food in the human gut. It also is an underlying process used to make alcoholic beverages, from wine and beer to stronger spirits.
flavanol A group of plant-derived compounds. Some of these are antioxidants, meaning they can fight cellular damage from oxidation — often resulting in heart-healthy benefits. Among the best known of these antioxidant flavanols is epicatechin, found in some teas and cocoa-based products.
fruit A seed-containing reproductive organ in a plant.
gene (adj. genetic) A segment of DNA that codes, or holds instructions, for a cell’s production of a protein. Offspring inherit genes from their parents. Genes influence how an organism looks and behaves.
genome The complete set of genes or genetic material in a cell or an organism. The study of this genetic inheritance housed within cells is known as genomics.
habitat The area or natural environment in which an animal or plant normally lives, such as a desert, coral reef or freshwater lake. A habitat can be home to thousands of different species.
infection A disease that can spread from one organism to another. It’s usually caused by some sort of germ.
Maya A native American culture developed by people who lived between 2500 B.C. and 1500 A.D. in what is now parts of southern Mexico (its Yucatan Peninsula) and Central America. At its height (between about 250 and 900 A.D.), the density of people in some Mayan cities was equal to that in Medieval Europe.
meta-analysis An investigation of data from a broad range of studies in a given area of research. It often comes from essentially pooling together data from a series of small studies, none of which on their own might have had the statistical power to make broad generalizations from their findings. Such studies also suffer from a weakness: The studies they draw upon may not be similar enough to safely mash-up. It might be like looking for the effects of apples by combining studies on apples and oranges. Or anticipating effects in children from studies that had focused almost entirely on the elderly. Strong meta-analyses are those which comb through data from very similar types of studies.
microbe Short for microorganism. A living thing that is too small to see with the unaided eye, including bacteria, some fungi and many other organisms such as amoebas. Most consist of a single cell.
molecule An electrically neutral group of atoms that represents the smallest possible amount of a chemical compound. Molecules can be made of single types of atoms or of different types. For example, the oxygen in the air is made of two oxygen atoms (O2), but water is made of two hydrogen atoms and one oxygen atom (H2O).
organ (in biology) Various parts of an organism that perform one or more particular functions. For instance, an ovary is an organ that makes eggs, the brain is an organ that makes sense of nerve signals and a plant’s roots are organs that take in nutrients and moisture.
oxidation A process that involves one molecule’s theft of an electron from another. The victim of that reaction is said to have been “oxidized,” and the oxidizing agent (the thief) is “reduced.” The oxidized molecule makes itself whole again by robbing an electron from another molecule. Oxidation reactions with molecules in living cells are so violent that they can cause cell death. Oxidation often involves oxygen atoms — but not always.
pH A measure of a solution’s acidity. A pH of 7 is perfectly neutral. Acids have a pH lower than 7; the farther from 7, the stronger the acid. Alkaline solutions, called bases, have a pH higher than 7; again, the farther above 7, the stronger the base.
rainforest Dense forest rich in biodiversity found in tropical areas with consistent heavy rainfall.
resistance (as in disease resistance) The ability of an organism to fight off disease.
tissue Made of cells, any of the distinct types of materials that make up animals, plants or fungi. Cells within a tissue work as a unit to perform a particular function in living organisms. Different organs of the human body, for instance, often are made from many different types of tissues.
trait A characteristic feature of something. (in genetics) A quality or characteristic that can be inherited.
variant A version of something that may come in different forms. (in biology) Members of a species that possess some feature (size, coloration or lifespan, for example) that make them distinct. (in genetics) A gene having a slight mutation that may have left its host species somewhat better adapted for its environment.
Journal: K. Ried, P. Fakler and N.P. Stocks. Effect of cocoa on blood pressure. Cochrane Database of Systematic Reviews 2017. Issue 4. Article No. CD008893. Published online April 25, 2017. doi: 10.1002/14651858.CD008893.pub3.
Journal: H Schmitz & H.-Y. Shapiro. The future of chocolate. Scientific American. Vol. 24, published online May 28, 2015. doi: 10.1038/scientificamericanfood0615-28.
Journal: R. Latif. Health benefits of cocoa. Current Opinion in Clinical Nutrition & Metabolic Care. Vol. 16, published online October 2013. doi: 10.1097/MCO.0b013e328365a235.
Journal: A. Nehlig. The neuroprotective effects of cocoa flavanol and its influence on cognitive performance. British Journal of Clinical Pharmacology. Vol. 75, published online July 10, 2012. doi: 10.1111/j.1365-2125.2012.04378.x.
Journal: S.R. Bauer et al. Cocoa consumption, cocoa flavanols, and effects on cardiovascular risk factors: An evidence-based review. Current Cardiovascular Risk Reports. Vol. 5, published online February 2, 2011. doi: 10.1007/s12170-011-0157-5.
Journal: S.N. Maximova et al. Field performance of Theobroma cacao L. plants propagated via somatic embryogenesis. In Vitro Cellular & Developmental Biology - Plant. Vol. 44, published online October 23, 2008. doi: 10.1007/s11627-008-9130-5.
The Battle of Phú Lộc took place from 28 August to 10 December 1974, when North Vietnamese forces captured a series of hills and installed artillery that closed Phu Bai Air Base and interdicted Highway 1. The hills were recaptured by the South Vietnamese in costly fighting that depleted their reserve forces.
The Hải Vân Ridge formed the Thừa Thiên-Quảng Nam Province boundary from the sea to Bạch Mã Mountain, which was occupied by the People's Army of Vietnam (PAVN) in October 1973. The ridge continued west past Bạch Mã until it descended into the valley of the Sông Tả Trạch at Ruong Ruong, where the PAVN had established a forward operating base. Local Route 545 twisted through the mountains north from Ruong Ruong, joining Highway 1 just south of Phu Bai. As it crossed over the western slopes of the Hải Vân Ridge, Route 545 passed between two lower hills, Núi Mô Tau on the west and Núi Bong on the east. Núi Mô Tau and Núi Bong were only about 300 meters and 140 meters high, respectively, but the Army of the Republic of Vietnam (ARVN) positions on them, and on neighboring hills, formed the main outer ring protecting Phu Bai and Huế from the south. Outposts were placed on hills 2,000 to 5,000 meters farther south, including hills as identified by their elevations of 144, 224, 273 and 350 meters.
At first, the I Corps commander, General Ngô Quang Trưởng, viewed the see-saw contest for the hills south of Núi Mô Tau as hardly more than training exercises and of no lasting tactical or strategic importance. That assessment was supportable so long as the PAVN was unable to extend its positions to within range of Phu Bai. Once this extension occurred, protecting Huế's vital air and land links with the south became a matter of great urgency.
During inconclusive engagements in the spring of 1974, the ARVN 1st Division managed to hold on to Núi Mô Tau and Núi Bong, losing Hill 144 between the two but regaining it on 7 April. Hills 273 and 350 were lost; then Hill 350 was recaptured by the 3rd Battalion, 3rd Infantry, in a night attack on 4 June. By this time, I Corps units were restricted by reductions in artillery ammunition. Tight restrictions had been imposed by General Trưởng on the number of rounds that could be fired in counterbattery, preparatory, and defensive fires. These conditions impelled the infantry commanders to seek means other than heavy artillery fires to soften objectives before the assault. In recapturing Hill 350, the 3rd Infantry worked around behind the hill and blocked the PAVN's access to defenses on the hill. Within a few days, PAVN soldiers on the hill were out of food and low on ammunition. When the ARVN commander, monitoring the PAVN's tactical radio net, learned this, he ordered the assault. No artillery was used; mortars and grenades provided the only fire support for the ARVN infantrymen. But they took the hill on the first assault, even though the PAVN defenders fired a heavy concentration of tear gas against them. ARVN casualties were light, while the PAVN 5th Regiment lost heavily in men and weapons.
As the ARVN 1st Division pressed southward against the PAVN 324B Division's battalions trying to hold hard-won outposts in the hills, another new PAVN corps headquarters was organized north of the Hải Vân Pass and placed in command of the 304th, 324B and 325th Divisions. Designated the 2nd Corps, it was a companion to the new 1st Corps in Thanh Hóa Province of North Vietnam, the 3rd Corps south of the Hải Vân and the 301st Corps near Saigon. In the Thừa Thiên campaign, the 324B Division eventually assumed control of five regiments: its own 803rd and 812th and three independent PAVN infantry regiments, the 5th, 6th, and 271st.
In early June 1974, after releasing the 1st Airborne Brigade to the reserve controlled by the Joint General Staff, General Trưởng made major adjustments in command and deployments north of the Hải Vân Pass. The Marine Division was extended to cover about 10km of Thừa Thiên Province and was reinforced with the 15th Ranger Group of three battalions and the 1st Armored Brigade and had operational control of Quảng Trị Province's seven Regional Force (RF) battalions. The division commander, Brig. Gen. Bui The Lan, positioned his forces with the 258th Marine Brigade, with one M48 tank company attached, defending from the sea southwest to about 5km east of Quảng Trị. The 369th Marine Brigade held the center sector, Quảng Trị city and Highway 1. Southwest of the 369th was the attached 15th Ranger Group along the Thạch Hãn River and the 147th Marine Brigade was on the left and south of the 15th Rangers. When he had to extend his forces southward to cover the airborne sector, General Lan used a task force of the 1st Armored Brigade, two Marine battalions, and an RF battalion, keeping three tank companies on the approaches to Huế. The Airborne Division retained the responsibility for the Song Be approach, placing its two remaining brigades, the 2nd and 3rd, to the west. The 2nd Brigade had two RF battalions and one company of M41 tanks attached. The PAVN 4th Regiment was the principal unit in the 2nd Brigade's sector, while the 271st Regiment opposed the 3rd Airborne Brigade to the south near Firebase Bastogne. The four regiments and two attached RF battalions of the 1st Infantry Division were deployed in a long arc from the Airborne Division's left through the hills to Phú Lộc District, with the 54th Infantry Regiment protecting Highway 1 from the Truoi Bridge, just north of Núi Bong, to the Hải Vân Pass.:125–7
Hills 144, 273, 224, 350 and Núi Bong, and Núi Mô Tau, overlooking the lines of communication through Phú Lộc District and providing observation and artillery sites in range of Phu Bai, were generally along the boundary between Phú Lộc and Nam Hoa Districts of Thừa Thiên Province. Having recaptured Hill 350 on 4 June, the ARVN 1st Division continued the attack toward Hill 273. A fresh battalion, the 1st Battalion, 54th Infantry, took the hill on 27 June, incurring light casualties and by the next day, the 1st Division controlled all of the important high ground south of Phu Bai.:129
On 29 June General Trưởng directed his deputy north of the Hải Vân Pass, General Lâm Quang Thi, to constitute a regimental reserve for the expected PAVN counterattacks against the newly won hills. General Thi accordingly replaced the 54th Infantry with the 3rd Infantry in July, the 54th becoming the corps reserve north of the Hải Vân. General Trưởng had good reason to be concerned, as the PAVN was preparing for increased and prolonged operations in Thừa Thiên Province, as revealed by aerial photography of PAVN rear areas on 30 June. A 150,000-gallon fuel tank farm, connected to the pipeline through the A Sầu Valley, was photographed under construction in far western Quảng Nam Province, only 25km south of the PAVN base in Ruong Ruong. The Ruong Ruong region, also called the Nam Dong Secret Zone, was seen growing in logistic capacity. Local Routes 593 and 545 were shown to be repaired and in use, and a tank park and two new truck parks were discernible.:129
The 324B NVA Division took a while to get organized for renewed attacks in southern Thừa Thiên. Its battalions had taken severe beatings, and a period of re-equipping was necessary. In the meantime, action shifted to the old Airborne Brigade sector in northern Thừa Thiên where the 6th and 8th Marine Battalions, attached to the 147th Brigade, came under heavy attack. Attacks continued through July, and some Marine outposts, targets for 130 mm gunfire, had to be given up. No important changes in dispositions took place, however. Mid-July passed in southern Thừa Thiên without much activity. But on 25 July, as the 2nd Infantry Regiment, 3rd Division, was trying to regroup following a devastating engagement in the Battle of Duc Duc, General Trưởng ordered the 54th Infantry from Thừa Thiên to Quảng Nam for attachment to the 3rd Division. The 1st Division, with only three regiments, was left with a 60km front including Highway 1 and no reserve north of the Hải Vân Pass. Since this situation was hazardous, General Trưởng on 3 August ordered General Thi to reconstitute a reserve using the 15th Ranger Group, at that time attached to the Marine Division on the Thạch Hãn River.:129
On 5 August the 121st RF Battalion replaced the 60th Ranger Battalion on the Quảng Trị front. Shortly afterward the 61st and 94th Ranger Battalions pulled out, relieved respectively by the 126th RF Battalion and the 5th Marine Battalion. But events in Quảng Nam forced General Trưởng to change his plans for the 15th Group; because Thượng Đức had just fallen, he needed the 3rd Airborne Brigade in Quảng Nam. So, as soon as the Marines and RF replaced the battalions of the 15th Group, the relief of the 3rd Airborne Brigade began in the Song Bo corridor. However General Thi was still without a reserve north of the Hải Vân Pass, and fresh opportunities for the new PAVN 2nd Corps appeared in Phú Lộc District.:129
While General Trưởng was shifting forces to save Quảng Nam, the PAVN 2nd Corps was moving new battalions near Hill 350. First to deploy, in late July, was the 271st Independent Regiment, previously under the control of the 325th Division. In mid-August, the 812th Regiment, 324B Division, began its march from A Lưới in the northern A Sầu Valley; covering the entire 50km on foot, the regiment arrived undetected on 26 August.:129
On 28 August attacks on ARVN positions in the Núi Mô Tau-Hill 350 area began. Over 600 artillery rounds hit Núi Mô Tau where the 2nd Battalion, 3rd Infantry, was dug in. The ARVN battalion held the hill against the assault of the PAVN infantrymen, but an adjacent position, manned by the 129th RF Battalion, collapsed, and the battalion was scattered. To the east, on Núi Bong and Hills 273 and 350, the other two battalions of the 3rd Infantry were bombarded by 1,300 rounds and driven from their positions by the PAVN 6th and 812th Regiments. Meanwhile, the 8th Battalion, 812th Regiment overran Hill 224. Thus, in a few hours, except for Núi Mô Tau, all ARVN accomplishments of the long summer campaign in southern Thừa Thiên were erased. The ARVN 51st Infantry Regiment was rushed into the line, but the momentum of the PAVN attack had already dissipated. The casualties suffered by the 324B Division were high, but it now controlled much of the terrain overlooking the Phú Lộc lowlands and Phu Bai.:129–30
Heavy fighting throughout the foothills continued into the first week of September with strong PAVN attacks against the 3rd Battalion, 51st Regiment, and the 1st and 2nd Battalions of the 3rd Regiment. The PAVN 6th and 803rd Regiments lost nearly 300 men and over 100 weapons in these attacks, but the ARVN 3rd Infantry was no longer combat effective due to casualties and equipment losses. Immediate reinforcements were needed south of Phu Bai. Accordingly, General Trưởng ordered the 54th Infantry Regiment back to Thừa Thiên Province, together with the 37th Ranger Battalion, which had been fighting on the Duc Duc front. General Thi took personal command of the ARVN forces in southern Thừa Thiên and moved the 7th Airborne Battalion from north of Huế and the 111th RF Battalion, securing the port at Tân Mỹ, to Phu Bai. These deployments and the skillful use of artillery concentrations along enemy routes of advance put a temporary damper on PAVN initiatives in the foothills.:130
In an apparent diversion to draw ARVN forces northward away from Phú Lộc, the PAVN on 21 September strongly attacked the 5th and 8th Marine and the 61st Ranger Battalions holding the Phong Điền sector north of Huế. Although some 6,600 rounds, including hundreds from 130 mm field guns, and heavy rockets, struck the defenses, the South Vietnamese held firmly against the ground attacks that followed. Over 240 PAVN infantrymen from the 325th Division were killed, mostly by ARVN artillery, in front of the 8th Marines, and General Thi made no deployments in response to the attack. The next week, however, renewed assaults by the PAVN 803rd Regiment carried it to Núi Mô Tau, and by the end of September, the 324B Division had consolidated its control over the high ground south of Phu Bai from Núi Mô Tau east to Núi Bong and Hill 350. The PAVN 2nd Corps immediately began to exploit this advantage by moving 85 mm field gun batteries of its 78th Artillery Regiment into position to fire on Phu Bai, forcing the Republic of Vietnam Air Force (RVNAF) to suspend operations at the only major airfield north of Hải Vân Pass.:130
The attack to retake the commanding ground began on 22 October with a diversionary assault on Hill 224 and Hill 303. The ARVN 1st Infantry Regiment was to follow with the main attack against the PAVN 803rd Regiment on Núi Mô Tau. Bad weather brought by Typhoon Della reduced air support to nothing, and little progress was made by the ARVN infantry. Nevertheless, the attack on Núi Mô Tau, with a secondary effort against elements of the PAVN 812th Regiment on Núi Bong, began on 26 October. The ARVN 54th Infantry, with the 2nd Battalion, 3rd Infantry attached, made slight progress on Núi Mô Tau, and the 3rd Battalion, 1st Infantry, met strong resistance near Núi Bong. But the ARVN artillery was taking its toll on the PAVN defenders, who were also suffering the effects of cold rains sweeping across the steep, shell-torn slopes. Heavy, accurate artillery fire forced the PAVN 6th Battalion, 6th Infantry, to abandon its trenches on Hill 312, east of Hill 350, and the 803rd Regiment's trenches, bunkers, and communications were being torn up by the ARVN fire placed on Núi Mô Tau. Toward the end of October, the PAVN 803rd and 812th Regiments were so depleted that the PAVN 2nd Corps withdrew them from the battle and assigned the defense of Núi Mô Tau and Núi Bong to the 6th Regiment and 271st Regiment respectively.:130
As heavy rains continued, movement and fire support became increasingly difficult, and the ARVN offensive in southern Thừa Thiên slowed considerably. PAVN artillery continued to inhibit the use of Phu Bai Air Base, and 1st Division infantrymen around Núi Bong suffered daily casualties to PAVN mortars and field guns. On 24 November, Maj. Gen. Nguyen Van Diem, commanding the 1st Division, secured permission to pull his troops away from Núi Bong and concentrate his forces against Núi Mô Tau. For a new assault on Núi Mô Tau, General Trưởng authorized the reinforcement of the 54th Infantry Regiment by the 15th Ranger Group, drawn out of the Bo River Valley west of Huế; the 54th would make the main attack. The 54th Infantry commander selected his 3rd Battalion to lead, followed by the 2nd Battalion and the 60th and 94th Ranger Battalions. When the 3rd Battalion had difficulty reaching the attack position, it was replaced on 27 November by the 1st Battalion. The weather was terrible that day, but the two Ranger battalions made some progress and established contact with the PAVN on the eastern and southeastern slopes of the hill. On 28 November, with good weather and long-awaited support from the RVNAF, the 1st Battalion, 54th Infantry, began moving toward the crest of Núi Mô Tau. On the hill the PAVN was approaching a desperate state; one battalion of the 5th Regiment was moving to reinforce, but washouts on Route 545 between Ruong Ruong and Thon Ben Tau south of Núi Mô Tau had all but eliminated resupply. Despite difficulties, however, the PAVN continued to resist strongly on both hills. On 1 December, Colonel Vo Toan, the commander of the ARVN 1st Infantry, returned to his regiment from a six-month absence at South Vietnam's Command and General Staff College. His timely arrival was probably responsible for injecting new spirit and more professional leadership into the attack, which had bogged down so close to its objective. But help also arrived for the defenders; the PAVN 812th Regiment, refilled and somewhat recovered from its earlier combat, returned to Núi Mô Tau, replacing the badly battered 6th Regiment. Over on Núi Bong, however, the remnants of the PAVN 271st Independent Regiment were without help. On 3 December, the 1st Reconnaissance Company and the 1st and 3rd Battalions, 1st Infantry Regiment, were assaulting a dug-in battalion only 50 meters from the crest. But the expected victory slipped from their grasp. Intense fire drove the South Vietnamese back, and although the 1st Infantry retained a foothold on the slopes, it was unable to carry the crest.:130–1
The attack by the 54th Infantry and the 15th Ranger Group had more success. On 10 December, the 1st Battalion of the 54th took one of the twin crests of Núi Mô Tau and captured the other the following day. As bloody skirmishing continued around the mountain for weeks, the PAVN executed another relief, replacing the 812th Regiment with the 803rd. Although the PAVN remained entrenched on Núi Bong, its access to lines of communication and the base in Ruong Ruong was frequently interdicted by ARVN units operating in its rear. Furthermore, the PAVN 78th Artillery Regiment was forced to remove its batteries because resupply past the ARVN position around Núi Mô Tau became too difficult. The RVNAF, meanwhile, resumed flying into Phu Bai on 13 December.:131
By making timely and appropriate economy-of-force deployments, often accepting significant risks, General Trưởng was able to hold the PAVN main force at bay around Huế, but the ring was closing on the city. Reinforced PAVN battalions, equipped with new weapons and with ranks filling with fresh replacements from the north, were in close contact with ARVN outposts along the length of the front. Behind these battalions, new formations of tanks were being assembled and large logistical installations were being constructed, heavily protected by antiaircraft weapons and supplied by newly improved roads.:131
During the 1975 Spring Offensive, on 21 March, elements of the PAVN 325th Division overran the 61st Ranger positions on Hill 560 to the southeast, while elements of the 324th Division attacked Hills 224 and 303 and Núi Bong and Núi Mô Tau, capturing them by 22 March. From these positions the PAVN proceeded to shell Highway 1 and then moved down from the hills to cut the road, severing the land route between Huế and Da Nang.
This article incorporates public domain material from websites or documents of the United States Army Center of Military History.
This project will guide a new radio-controlled pilot through the steps to build a lightweight aircraft that is durable, easy and inexpensive to build. It will introduce model aircraft building techniques that have become popular among enthusiasts, using foam board as the base of construction, and we will also incorporate traditional balsa construction.
Since foam board aircraft have only become popular in recent years, it might help to explain why. Foam board is relatively inexpensive compared to balsa sheets of the same size. It is lightweight, durable and easy to work with; it can be shaped, sanded, bent, folded, and cut with simple tools, and a variety of adhesives can be used to bond it to itself and to other materials. For this project, we will be using balsa to increase the structural integrity of the mostly foam airframe and wings.
The use of knives, scissors, hot-glue guns, and adhesives in general are potentially dangerous. Use caution and be aware of your surroundings when using any tool or machine.
Bill of Materials
balsa sheets and square stock (sizes below):
- 3/32"x3"x36" - three sheets
- 1/16"x3"x30"- two sheets
- 3/16"x3"x30" - 1 sheet
- 3/16" square
- 1/4" square
- 1/8" square
two 30"x30"x3/16" foam boards
hot glue gun
cyanoacrylate glue (CA glue)
2-part epoxy(fast and slow setting)
clear packing tape roll
hobby knife (x-acto)
Assorted rulers, tape measure, etc
320 grit sandpaper
cups for mixing epoxy
small paintbrush for spreading epoxy
Choose a plane
We will be building the Papillon, a lightweight motor glider with a simple design aesthetic and easy construction that was marketed by Kyosho in the 1970s and 1980s. Because of its relatively large wingspan and its classification as a glider, it should provide a great platform for the beginning R/C pilot. Gliders tend to have very gentle flight characteristics due to their low wing loading; this design is meant to be as light as possible, and with modern radio and electronic equipment it will be even lighter than originally intended. The Papillon was designed for only 2-channel control (rudder and elevator); for power, an unthrottled .049 nitro engine was mounted in the nose. It would simply pull the plane to altitude, and once it ran out of fuel the plane could continue flying by "catching" thermals. The firewall section could easily be modified to accommodate an electric motor, which would provide throttle control and longer run times than the glow counterpart. I will be building per the plans, however, and installing a Cox .049 nitro engine. I found the plans online, along with hundreds of other free plans.
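Since low wing loading is the main reason gliders like this fly so gently, a quick worked example may help; wing loading is simply the aircraft's weight divided by its wing area, and the numbers below are illustrative guesses, not measurements taken from the Papillon plan:

\text{wing loading} = \frac{W}{S}

For instance, a hypothetical 450 g ready-to-fly weight spread over 28 dm^2 of wing works out to 450 / 28 \approx 16 g/dm^2, which is floaty, slow-flying glider territory; a typical sport model might carry roughly three to five times that. This is why keeping the build light is worth the extra care.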
Electric conversions are becoming very common today and as such, there is a great deal of information that can be found with a little digging. I suggest searching in the forums on rcuniverse.com or the like if this is the direction you would like to go. The equipment has become much less expensive in the last five years, so finding older, lightly used motors and batteries can help save you from spending a lot for your first plane.
We will do some simple unpowered glide tests after construction is complete to inspect the low speed flight handling and do some initial control trimming. This may be the best way to get the beginner pilot acclimated to the controls, as the speeds and altitudes are usually so low that damage is unlikely to occur in the event of an accident.
Step 1: Fuselage Stock
We will begin by building the fuselage, the body of the airplane. For this, we will transfer the profile of the fuselage onto the foam board and fold the board in half so that we can cut out both sides of the airframe in the same step. This will ensure that both sides are nearly identical and save us the time of cutting each out individually. Once cut out, we will remove the paper and reinforce the foam with balsa sheeting.
Make a photocopy of the fuselage's profile section directly from the plan. Since the plane is fairly long, this will take a couple of sheets of paper. Glue or tape the profile to the foam board, trying to conserve as much space as possible. Plain white glue is sufficient for this step, but taping it down in sections is preferred so we can use the profile again later.
Cut a rectangular section about twice as wide as the profile.
We will make a cut down the length of the board's center, but not through the sheet. The idea here is that the blade penetrates about 1/2 to 3/4 of the thickness of the board to facilitate folding. Refer to photo 1 for clarification - notice the plan is attached to the board and the board is folded along its center.
Note: We will be using this technique throughout the project - from now on I will refer to it as a partial cut.
Step 2: Trimming the Firewall
Apply a small amount of white glue to the inside of the board to bond the two halves together. Once dry, carefully and cleanly cut the area where the firewall is illustrated on the profile. In the photo, a red line is drawn where you should make your cut. Try to keep the blade perpendicular to the surface of the board and use a straightedge or metal rule to guide you. We must remove this material so that the balsa ply firewall we build later can bond directly to the balsa sheeting we add to the outside of the foam sides.
Step 3: Cutting Out the Foam Sides
Begin cutting the outline of the fuselage on the inside of any outer sheeting the plan illustrates. Look at photo 3; here I'm comparing the fuselage on the plan to the profile I used to create the foam sides. Notice that the top and bottom sheeting is not included on the profile; we must allow for clearance here so that when we add that sheeting it doesn't make the fuselage taller than specified. Once all the excess material has been removed, trace over the bulkhead lines with a ballpoint pen. This will leave indents in the foam for later reference. Use a square to draw lines across the top and bottom of the sheets so that the same procedure can be done on the other side. See photo 4.
Step 4: Removing Paper From Foam Board
To separate the two halves and remove the paper, run hot water over the sheets while gently bending them. The paper can then be peeled off without too much trouble. Try not to kink the foam in the process, but it's not a detriment to the structure if you do.
Aside: Although not necessary, I chose to remove the paper to reduce weight at the cost of rigidity. The paper adds some stiffness to the foam board, which may be advantageous; some planes that use foam board exclusively may not mention removing the paper. Just be aware that removing it will make the foam flimsy. The next step, however, will increase the stiffness of the sides significantly when we bond balsa sheeting to strategic areas of the foam sides.
Step 5: Balsa Sheeting
Tracing balsa profile.
The area on the top edge of the fuselage between F-5 and F-6 is the wing saddle. This area must be flat up to the forward dowel pin, illustrated on the plan. Position the profile section you used earlier on a piece of 3/32" sheet so that the wing saddle area is on the edge of the sheet. Since we want the sheeting to strengthen the forward section of the airframe without adding too much weight, we'll have it extend under the wing saddle on top and taper to a point midway across the wing chord on the bottom. In photo 5 I have the ruler spanning from the top of bulkhead F-6 to the midpoint on the bottom; transfer these measurements to the balsa when you trace the profile.
Cut the balsa along the diagonal line using a razor saw or sharp knife - use the rule to guide the blade so the cut is clean. Flip the excess material over so you can use the same angle for the other side, this way we can ensure the angles are the same. Apply a drop of CA glue to each end of the piece you traced the profile to and attach it to the remaining material, careful to align the edges. This is the same operation we did for the foam boards which will help us make exact duplicates we can separate later.
Cut out the profile
Carefully cut along the lines you traced. It is best to leave a bit of extra material so you can sand to the final shape with a sanding block. Once you're satisfied with the way the profile looks, slide a knife blade between the sheets and gently pry them apart. Since only a small amount of glue was used, there should be a minimal amount of damage to the wood. We now have two exact copies of the profiles. Use the profile plan to transfer measurements of the firewall location to the balsa - this will aid with aligning the foam board.
We also need some balsa strips to extend to the tail end of the fuselage. Measure out two 1/2"-wide strips on another sheet of 3/32" balsa, extending the entire length of the sheet, and cut them off. Use these strips to make the extensions to the tail as I show in the photos. The strips will have to be forced into alignment with the fuselage since it is curved, so make sure you hold them in position before making any cuts; it works best to lay the strips on the profile plan to make cut marks. Once one set has been cut, use it to make a matching set for the other side of the fuselage.
Note: The horizontal stabilizer passes through the lower strip and must be trimmed. It is best to do it now, before the halves are complete. You'll have noticed that the strips will try to overlap at the tail section, but once clearance is made for the horizontal stabilizer, no further trimming will be necessary. Refer to photos 9 -12.
Step 6: Attach Balsa Support Sheeting
Mix a batch of two-part epoxy adhesive per directions, enough to coat one side of each of the forward balsa supports. Mixing a small amount of isopropyl alcohol (90% or higher) into the epoxy will bring it to brushing consistency. Use a brush to apply a coat to one side of the balsa sheets, being careful not to put any on the marks where the firewall will be attached. Position the sheets on the foam sides and place them on a sheet of wax paper to prevent squeeze-out from adhering to the work surface. A stack of books is a great way to distribute an even load onto the sheets - but be careful the balsa doesn't shift while stacking.
It is best to let the epoxy set for a couple hours before handling again. Once cured, the tail support strips can be prepared. Recall that the bottom strips must be forced into position, though; to do this you will need a surface that you can push pins into - something like a cork board for posting notes.
Position the foam sheet to take advantage of the edge of the board; if there is no frame on your board clamp a length of square stock balsa to butt the top edge of the turtle deck to (the turtle deck is the top of the fuselage behind the wings). Apply the epoxy to both the foam and the balsa and use T-pins to hold in position. Allow to fully cure before handling.
Step 7: Firewall, Bulkheads F-5 and F-6, Fuselage Floor
Tip: After cutting out the firewall and bulkheads, ensure they are square by measuring diagonally from corner to corner. If they are square, the two measurements will be the same. If they are different, check your measurements again and trim as necessary.
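For example (my own arithmetic, not a dimension taken from the plan): a true 2-1/8" square has diagonals of 2.125" x 1.414, or just over 3". If one diagonal reads 2-7/8" and the other 3-1/8", the piece is out of square and needs trimming.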
The firewall will be constructed from two layers of 3/32" balsa and epoxy. Cut two 2-1/8" squares from leftover material. Apply epoxy to both faces you intend to join, and orient the grain of the two layers perpendicular to each other to increase strength by limiting bending. Clamp with spring clamps or place under a hefty weight, and watch for squeeze-out. Once cured, choose one side as a reference for trimming - use sandpaper and a sanding block with a square to make sure all sides are perpendicular to each other. The bottom edge of the firewall needs to be tapered to fit the floor of the fuselage. Mark a line on one edge 2" from the opposite side; this is the "front". Cut along this line holding the saw blade at an angle so that the cut tapers to the edge on the other side. Some trimming will be necessary to achieve the proper angle - remove material from the "back" side, since it is still a bit longer than necessary. Refer to the photos. Continue trimming until the angle matches the plan, then sand smooth.
The width still needs to be refined to the proper size. To maintain the proportions of the plan, trim 1/8" off one side, making the firewall a total of 2" wide.
Attach 1/4" square stock around the bottom and sides on the front. Sand the bottom to match the angle.
For glow engine installation only
Mark holes for mounting the glow engine to the firewall. In the event that your engine has no mounting plate, construct a beam-style mount to be added later. Information on this mounting style can be found with a quick search - be aware that there are multiple versions and each has its own advantages and drawbacks. The Cox Black Widow I'm using has a mounting plate and integral tank, so the bolt pattern must be transferred to the firewall and the holes drilled prior to installing the firewall (for convenience's sake). The holes can be drilled later, but it is easiest while the firewall can still be laid flat.
Fuselage floor (Doubler)
The floor of the nose section is one length of 3/32" sheet extending from the firewall back to F-6, making it 10-1/16" long. The length must be accurate to account for the curvature of the bottom of the nose. The width is 2-3/16". Once it is cut out, measure 1/4" in from each side down the length of the sheet and mark a line; this will serve as a guide for attaching the sides. Double-check that the ends are perfectly square using the diagonal measurements.
Bulkheads F-5 and F-6
The two bulkheads will be made from 3/32" sheet. These will be narrower than the firewall due to the foam board, at 1-5/8" wide. The height will be the same for each: 2-1/2". Use 1/4" square stock around the perimeter of both bulkheads, attached with CA glue.
Step 8: Joining Fuselage Sides
Start by sanding the edges of the fuselage sides with a sanding block. Knock down any high spots where the balsa sides join. Try to focus on making the foam and balsa even and the edges square so they both make contact when joined to the doubler. Sand the balsa sheeting to remove excess dried epoxy and smooth the grain of the wood.
Place the profile plan over the inside of the fuselage halves. Transfer marks for the bulkheads with a marker or pen. Apply epoxy to the firewall section of the right fuselage side and the right edge of the firewall and bulkheads F-5/6. Only one side will be joined first so that they can be checked for squareness. These three bulkheads are critical to maintaining the geometry of the fuselage, so be as accurate as possible. Use weights and T-pins or other objects to hold them in place while the epoxy dries. The use of a fast setting epoxy may be helpful here so you can hold the bulkhead in position. If necessary, each bulkhead can be joined one at a time to aid set up.
Once the epoxy cures, check the bulkheads for squareness again. Make any adjustments if necessary by pressing lightly to deform the foam board. Apply epoxy to the opposite edges and place the left side of the fuselage in position. Use T-pins to anchor the bulkheads in alignment with their corresponding marks. Turn the fuselage onto its side and place an evenly distributed load on the other side. A book works well here also. Once cured, measure diagonally from the corner of F-5 to the opposite corner of F-6 and repeat on the other corners (see photo). Ensure these measurements are the same to check squareness.
The front of the floor must be notched to fit between the fuselage sides at the firewall. Wet the outside with water to make the wood easier to bend to the shape of the fuselage bottom. Attach with quick-setting epoxy applied to the bottom of F-5/6 and CA glue on the bottom of the firewall (to hold it in position), and place some weights to hold the rear portion in contact - be sure to check alignment. Let the epoxy cure, then carefully position the sides even with the floor and use CA glue to adhere them in place.
Join the tail section by first trimming the contacting surfaces. Pinch the halves together and estimate the angle between them and trim to suit. The ends need to be even with each other to prevent skewing of the fuselage. Once trimmed, join with epoxy.
The floor of the tail section and turtle deck will be made from 1/16" sheeting. The top and bottom can be cut at this point, but only the bottom will be attached. Measure a 2-1/8" x 13-5/8" rectangle and draw a line down its center. Mark the bottom center of F-6 and align the center of the sheet that was just cut with this mark and the tail. Trace the outside edge of the fuselage onto the sheet and trim. Glue along the edges with CA. Sand edges and transition from 3/32" to 1/16" with 320 grit sand paper and sanding block.
Cut two 1/4" dowels to 3-1/8". Mark their positions on both sides of the fuselage and bore a 1/4" hole at each location; I have positioned the rear dowel so that it can be glued to bulkhead F-6. This can be accomplished with a drill or by carefully cutting with a hobby knife. Install with epoxy.
Step 9: Horizontal and Vertical Stabilizers
Transfer measurements of both horizontal and vertical stabilizers onto a piece of 3/16" balsa; if necessary the rudder and elevator control surfaces can be separated from the stabilizers to fit on the wood sheet. Note that the plan only shows one half of the horizontal stab. Measure inside each edge 3/16" so that both are scaled smaller - we will be attaching 3/16" square stock around the perimeter for strength using CA glue.
Cut out the rudder and elevator and use the same procedure.
Once the glue dries, use a sanding block to knock the excess material off and round the edges.
Step 10: Wings
The original wings were constructed using the built-up method, a time-consuming and material-costly process. Instead we will use the Armin wing method, detailed in the Flite Test tutorial here: flitetest
Since Flite Test has a tutorial on this method, I won't reiterate it here. The steps are very simple and use only hot glue and foam board. I constructed the wing to the same dimensions as the plan, so you would want to make a 6" chord wing using the method in the linked video. Only a small adjustment to the dimensions is necessary, and I did not include aileron control surfaces since I wanted the model to be rudder/elevator only. Use the remainder of the 1/4" dowel that was used for the hold-down pins as the spar. I made the dowel extend halfway into each wing, leaving the outer section without a spar. This is still plenty strong and ensures the wing stays together in the center. I also did not use packing tape as a covering, since I am not sure of its fuel resistance; for electric flight, packing tape is fine. Using a straight dowel, however, will eliminate any dihedral and reduce stability. To keep the dihedral, I bent the dowel with water and a bending jig set to the proper 6 degrees: pour boiling water over the dowel and hold it in position with a gloved hand until it sets.
When joining the wing halves use a combination of epoxy and foaming polyurethane glue (Gorilla glue). Apply Gorilla glue to the spar before inserting into each wing. Place epoxy in the wing joint and elevate the opposite wing tip to the prescribed 6". After curing, wrap the joint section with packing tape to reinforce the area. Cap the wing tips with 3/32" balsa and epoxy.
For an even stronger wing joint, a layer of fiberglass cloth and epoxy can be used; I chose to forgo this, however, for weight savings. Although the materials are lightweight on their own, the wing is nonetheless heavy for its size. A similarly sized wing built with traditional methods may be lighter, but the gains in durability, material cost, and build time make up for the weight.
Step 11: Turtle Deck, Tail Appendages, Skid Plate, Forward Hatch
Attach the turtle deck that was cut out earlier with CA glue, being careful to align its center mark with the mark on top of bulkhead F-6 and with the tail. A line should extend down the center to show where to attach the vertical stabilizer.
Cut a slot on both sides of the fuselage where the horizontal stabilizer resides. Test fit the horizontal stabilizer and make marks to set its alignment. Apply a small amount of CA to a corner to hold it in place. Using a tape measure, check the distance from the nose sheeting to the outer corner of the trailing edge of the tail. Check the measurement on the other side and compare; these two should match before cementing in place. Once satisfied with the alignment, apply more CA to hold it firmly. Cut a strip of 1/4" x 1/8" balsa to attach where the stabilizer and fuselage meet.
Add a spar to the trailing edge of the rudder, made from 3/16" x 1/4" balsa; make sure the bottom edge of the spar is tapered to allow elevator throw. Apply CA to the bottom edge of the rudder and carefully align it on the fuselage. Be sure to check that the vertical stabilizer is perpendicular to the horizontal stabilizer, using a square or by measuring from the top of the rudder to each outer corner of the elevator. Glue a 1/4" triangular piece of balsa to both sides of the rudder base and to the bottom side of the elevator base.
The skid plate is made from 3/32" balsa sheet. Follow the contour illustrated on the plan for the skid plate formers (two on each side) and glue into position. Sheeting is 3/32" laid with the grain oriented transversely to the roll axis of the airframe. Wet the sheeting to help conform to the formers before gluing with CA.
The final sheeting in front of the wing saddle doubles as a hatch to access the engine mounting bolts and electrics. I created a frame of 3/32" balsa to glue on the top edge and then cut another sheet to fit over the forward area to be held down with screws or magnets.
Step 12: Covering, Rudder and Elevator Attachment, Radio Equipment, Balancing
The covering I chose for this plane is a fuel-proof, heat-shrink plastic film called MonoKote (from Top Flite). For an all-electric aircraft, simple packing tape can be used to cover the entire airframe. I won't show the steps for this process, as it is lengthy and there is a great deal of existing information that will be far more helpful to a beginning builder. Follow this link for a tutorial on covering with tape. Follow this link for a tutorial on covering with MonoKote plastic film.
To attach the control surfaces to the tail appendages, I used the old fashioned method of sewing them on. There is a great deal of info on this method as well with a simple search. Places like rcgroups, rcuniverse and others have loads of discussions on this as well as several other popular methods for attaching the surfaces. I'm leaving it up to the builder to decide which method best suits their skill level or ability. Just know that there are advantages and disadvantages to them all, so do your research.
Installing the radio equipment and balancing the model are done at the same time, since these components make up the bulk of the aircraft's weight. First, mark the center of gravity (CG) on the bottom side of the wings, based on the plan. Set all the equipment inside the fuselage and attach the wings. Try to have everything sitting where you want it to be in flight. Slowly lift the model with your fingertips positioned at the CG marks and note whether the nose or tail drops. If one end of the plane moves more than about 1/4" in either direction of the pitch axis, reposition the electrical components and test again. Although having the model hang perfectly level is good, a general rule is to have the nose hang slightly low, as this is a safer attitude for flight. Having the tail hang low, on the other hand, is unacceptable and must be corrected. The model should also be balanced along the roll axis by lifting with one finger under the tail and one at the center of the nose. Repeat several times and note whether there is a tendency to roll to one side or the other. Make sure the wing is centered on the fuselage. It may be necessary to add a small amount of weight to a wing tip; start with coins taped to the wing tip opposite the roll to see if there is a change. If a significant amount of weight is needed to balance the model, recheck the positioning of your fingers and the seating of the wing in the saddle - it should not take much.
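As a rough cross-check only - this is a general rule of thumb, not a number from the plan: on a simple rectangular wing the CG usually falls around 25-30% of the chord back from the leading edge, which on this 6" chord works out to roughly 1-1/2" to 1-3/4". Always defer to the CG location marked on the plan.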
Step 13: Conclusion
Flight testing is necessary before any powered flights can occur. Be sure that the aircraft's battery and the transmitter battery are fully charged. It is best to conduct these tests in a flat area with tall grass to soften the airplane's landings. It will take some practice to get any distance; start with the model held low, as if launching from a seated position, and launch into a slight breeze to help increase lift. Having a partner to help is good, as it gives the pilot time to transition from launching to controlling the model. At this stage, however, allowing the model to glide back to earth uncontrolled helps with studying any bad tendencies it may have in slow flight. The weight of this model (with the Armin wing) will severely limit its gliding ability, so keep that in mind and don't get overzealous by launching with the intent of gaining altitude. Build up to a standing launch and get used to what the model may try to do. Many tutorials exist for first-time R/C pilots, but one of the best learning tools is to find a veteran pilot and have that person fly it under power for the first time. They will be able to comment on any poor flight characteristics better than anyone. Join an R/C flying club and get to know some members; most are very helpful, and many clubs offer incentive programs that make getting started affordable for beginner pilots.
Good luck and happy flying.
This documentation covers the latest release of the legacy Fedora 3.x line.
Fedora defines a generic digital object model that can be used to persist and deliver the essential characteristics for many kinds of digital content including documents, images, electronic books, multi-media learning objects, datasets, metadata and many others. This digital object model is a fundamental building block of the Content Model Architecture and all other Fedora-provided functionality.
The Fedora Digital Object Model
Fedora uses a "compound digital object" design which aggregates one or more content items into the same digital object. Content items can be of any format and can either be stored locally in the repository, or stored externally and just referenced by the digital object. The Fedora digital object model is simple and flexible so that many different kinds of digital objects can be created, yet the generic nature of the Fedora digital object allows all objects to be managed in a consistent manner in a Fedora repository.
A good discussion of the Fedora digital object model (for Fedora 2 and prior versions) exists in a recent paper (draft) published in the International Journal of Digital Libraries. While some details of this paper have been made obsolete by the CMA (e.g. Disseminators), the core principles of the model are still part of the CMA. The Fedora digital object model is defined in XML schema language (see The Fedora Object XML - FOXML). For more information, also see the Introduction to FOXML in the Fedora System Documentation.
The basic components of a Fedora digital object are:
- PID: A persistent, unique identifier for the object.
- Object Properties: A set of system-defined descriptive properties that are necessary to manage and track the object in the repository.
- Datastream(s): The element in a Fedora digital object that represents a content item.
A Datastream is the element of a Fedora digital object that represents a content item. A Fedora digital object can have one or more Datastreams. Each Datastream records useful attributes about the content it represents such as the MIME-type (for Web compatibility) and, optionally, the URI identifying the content's format (from a format registry). The content represented by a Datastream is treated as an opaque bit stream; it is up to the user to determine how to interpret the content (i.e. data or metadata). The content can either be stored internally in the Fedora repository, or stored remotely (in which case Fedora holds a pointer to the content in the form of a URL). The Fedora digital object model also supports versioning of Datastream content (see the Fedora Versioning Guide for more information).
Each Datastream is given a Datastream Identifier which is unique within the digital object's scope. Fedora reserves four Datastream Identifiers for its use, "DC", "AUDIT", "RELS-EXT" and "RELS-INT". Every Fedora digital object has one "DC" (Dublin Core) Datastream by default which is used to contain metadata about the object (and will be created automatically if one is not provided). Fedora also maintains a special Datastream, "AUDIT", that records an audit trail of all changes made to the object, and can not be edited since only the system controls it. The "RELS-EXT" Datastream is primarily used to provide a consistent place to describe relationships to other digital objects, and the "RELS-INT" datastream is used to describe internal relationships from digital object datastreams. In addition, a Fedora digital object may contain any number of custom Datastreams to represent user-defined content.
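To make the pieces described above concrete, here is a minimal FOXML 1.1 sketch of a digital object with a PID, a couple of object properties, and an inline-XML Dublin Core Datastream. This is an illustrative hand-written example, not taken from this document: the PID demo:123, the labels, and the title are invented, and an object exported by a real repository will carry additional attributes and an AUDIT trail.

```xml
<foxml:digitalObject VERSION="1.1" PID="demo:123"
    xmlns:foxml="info:fedora/fedora-system:def/foxml#">

  <!-- System-defined properties used to manage and track the object -->
  <foxml:objectProperties>
    <foxml:property NAME="info:fedora/fedora-system:def/model#state" VALUE="Active"/>
    <foxml:property NAME="info:fedora/fedora-system:def/model#label" VALUE="Example image object"/>
  </foxml:objectProperties>

  <!-- The reserved DC Datastream, stored as inline XML (control group X) -->
  <foxml:datastream ID="DC" STATE="A" CONTROL_GROUP="X" VERSIONABLE="true">
    <foxml:datastreamVersion ID="DC1.0" LABEL="Dublin Core Record" MIMETYPE="text/xml">
      <foxml:xmlContent>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Example image</dc:title>
          <dc:identifier>demo:123</dc:identifier>
        </oai_dc:dc>
      </foxml:xmlContent>
    </foxml:datastreamVersion>
  </foxml:datastream>

</foxml:digitalObject>
```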
Decisions about what to include in a Fedora digital object and how to configure its Datastreams are choices as you develop content for your repository. The examples in this tutorial demonstrate some common models that you may find useful as you develop your application. Different patterns of datastream designed around particular "genre" of digital object (e.g., article, book, dataset, museum image, learning object) are known as "content models" in Fedora.
The basic properties that the Fedora object model defines for a Datastream are as follows:
- Datastream Identifier: an identifier for the datastream that is unique within the digital object (but not necessarily globally unique)
- State: the Datastream's state: Active, Inactive, or Deleted
- Created Date: the date/time that the Datastream was created (assigned by the repository service)
- Modified Date: the date/time that the Datastream was modified (assigned by the repository service)
- Versionable: an indicator (true/false) as to whether the repository service should version the Datastream (by default the repository versions all Datastreams)
- Label: a descriptive label for the Datastream
- MIME Type: the MIME type of the Datastream (required)
- Format Identifier: an optional format identifier for the Datastream, such as one from an emerging scheme like PRONOM or the Global Digital Format Registry (GDFR)
- Alternate Identifiers: one or more alternate identifiers for the Datastream (such identifiers could be local identifiers or global identifiers such as Handles or DOI)
- Checksum: an integrity stamp for the Datastream content which can be calculated using one of many standard algorithms (MD5, SHA-1, etc.)
- Bytestream Content: the content (as a stream resource) represented or encapsulated by the Datastream (such as a document, digital image, video, metadata record)
Control Group: the approach used by the Datastream to represent or encapsulate the content as one of four types or control groups:
- Internal XML Content - the content is stored as XML in-line within the digital object XML file
- Managed Content - the content is stored in the repository and the digital object XML maintains an internal identifier that can be used to retrieve the content from storage
- Externally Referenced Content - the content is stored outside the repository and the digital object XML maintains a URL that can be dereferenced by the repository to retrieve the content from a remote location. While the datastream content is stored outside of the Fedora repository, at runtime, when an access request for this type of datastream is made, the Fedora repository will use this URL to get the content from its remote location, and the Fedora repository will mediate access to the content. This means that behind the scenes, Fedora will grab the content and stream it out to the client requesting it as if it were served up directly by Fedora. This is a good way to create digital objects that point to distributed content, but still have the repository in charge of serving it up.
- Redirect Referenced Content - the content is stored outside the repository and the digital object XML maintains a URL that is used to redirect the client when an access request is made. The content is not streamed through the repository. This is beneficial when you want a digital object to have a Datastream that is stored and served by some external service, and you want the repository to get out of the way when it comes time to serve the content up. A good example is when you want a Datastream to be content that is stored and served by a streaming media server. In such a case, you would want to pass control to the media server to actually stream the content to a client (e.g., video streaming), rather than have Fedora in the middle re-streaming the content out.
Digital Object Model - Access Perspective
Below is an alternative view of a Fedora digital object that shows the object from an access perspective. The digital object contains Datastreams and a set of object properties (simplified for depiction) as described above. A set of access points are defined for the object using the methods described below. Each access point is capable of disseminating a "representation" of the digital object. A representation may be considered a defined expression of part or all of the essential characteristics of the content. In many cases, direct dissemination of a bit stream is the only required access method; in most repository products this is the only supported access method. However, Fedora also supports disseminating virtual representations based on the choices of content modelers and presenters using a full range of information and processing resources. The diagram shows all the access points defined for our example object.
For the access perspective, it is best if the internal structure of the digital object is ignored and treated as being encapsulated by its access points. Each access point is identified by a URI that conforms to the Fedora "info" URI scheme. These URIs can be easily converted to the URL syntax for the Fedora REST-based access service (API-A-LITE). It should be noted that Fedora provides several protocol-based APIs to access digital objects. These protocols can be used both to access the representation and to obtain associated metadata at the same access point.
By default, Fedora creates one access point for each Datastream to use for direct dissemination of its content. The diagram shows how these access points map to the Datastreams. The example object aggregates three Datastreams: a Dublin Core metadata record, a thumbnail image, and a high resolution image. As shown, each Datastream is accessed from a separate URI.
Custom access points are created using the Content Model Architecture by defining control objects as described below. Behind the scenes, custom access points connect to services that are called on by the repository to produce representations of the object. Custom access points are capable of producing both virtual and direct representations (though they are likely to provide slower performance). Content in the Datastreams, as well as caller-provided parameters, may be used as input. A "virtual representation" is produced at runtime using any resource the service can access in conjunction with content generated in its code. In this example, there is one service that contains two operations, one for producing zoomable images and one for producing grayscale images. These operations both require a JPEG image as input; therefore the Datastream labeled "HIGH" is used by this service. Fedora will generate one access point for each operation defined by the service. The control objects contain enough information so that a Fedora repository can automatically mediate all interactions with the associated service. The Fedora repository uses this information to make appropriate service calls at run time to produce the virtual representation. From a client perspective this is transparent; the client just requests a dissemination from the desired access point.
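For concreteness, the default access points for the example object's Datastreams would look something like the following. The PID demo:123, the hostname, and the thumbnail Datastream ID "THUMB" are invented for illustration ("DC" and "HIGH" are the IDs used in the text above), and the URL form shown is the API-A-LITE "get" style used by Fedora 3.x:

```
info:fedora/demo:123/DC     ->  http://localhost:8080/fedora/get/demo:123/DC
info:fedora/demo:123/THUMB  ->  http://localhost:8080/fedora/get/demo:123/THUMB
info:fedora/demo:123/HIGH   ->  http://localhost:8080/fedora/get/demo:123/HIGH
```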
Four Types of Fedora Digital Objects
Although every Fedora digital object conforms to the Fedora object model, as described above, there are four distinct types of Fedora digital objects that can be stored in a Fedora repository. The distinction between these four types is fundamental to how the Fedora repository system works. In Fedora, there are objects that store digital content entities, objects that store service descriptions, objects used to deploy services, and objects used to organize other objects.
In Fedora, a Data object is the type of object used to represent a digital content entity. Data objects are what we normally think of when we imagine a repository storing digital collections. Data objects can represent such varied entities as images, books, electronic texts, learning objects, publications, datasets, and many other entities. One or more Datastreams are used to represent the parts of the digital content. A Datastream is an XML element that describes the raw content (a bitstream or external content). In the CMA, Disseminators, a metadata construct used to represent services, are eliminated though their functionality is still provided in other ways.
The Data object, indeed all Fedora digital objects, now consists of the FOXML digital object encapsulation (foxml:digitalObject) and two fundamental XML elements: Object Properties (foxml:objectProperties) and Datastreams (foxml:datastream). The Data object is the simplest, most common of all the specialized object types and is identical to the digital object described in the Fedora Digital Object Model section above.
Data objects can now be freely shared between Fedora repositories. If a federated identifier-resolver system, such as the Handle System™, or any authoritative name registry system is used, the Data object will have the same identifier for each copy of itself in each participating repository. Sharing Data objects while keeping the same identifier in each copy greatly simplifies replication, and enables many business processes and services that are needed for large scale repository installations integrated within the Fedora Framework. Data objects can still be shared between repositories by including both the original identifier and alternate identifiers as part of the object's metadata.
Service Definition Object
In Fedora, a Service Definition object or SDef is a special type of control object used to store a model of a Service. A Service contains an integrated set of Operations that a Data object supports. In object-oriented programming terms, the SDef defines an "interface" which lists the operations that are supported but does not define exactly how each operation is performed. This is also similar to approaches used in Web (REST) programming and in SOAP Web services. In order to execute an operation you need to identify the Data object, the SDef, and the name of the Operation. Some Operations use content from Datastreams (supplied by the Data object) and, possibly, additional parameters supplied by the client program or browser requesting the execution.
Conceptually an Operation is called using the following form (the specifics vary with the actual Fedora interface being used but all will contain some form of this information):
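The form itself appears to have been lost from this copy of the page; the sketch below is an illustrative reconstruction based on the description above, with a made-up PID, SDef PID, and method name. The last line shows the equivalent API-A-LITE style URL used by Fedora 3.x:

```
info:fedora/{data-object-pid}/{sdef-pid}/{operation-name}[?parameters]

e.g.  info:fedora/demo:123/demo:ImageSDef/getThumbnail
      http://localhost:8080/fedora/get/demo:123/demo:ImageSDef/getThumbnail
```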
A SDef is a building block in the CMA that enables adding customized functionality for Data objects. Using a SDef is a way of saying "this Data object supports these operations." Essentially, a SDef defines a "behavior contract" to which one or more Data objects may "subscribe." In repositories, we usually create a large number of similar Data objects and want them all to have the same functionality. To make this approach flexible and easier to use, the CMA uses the Content Model (CModel) object (described below) to contain the model for similar Data objects. Instead of associating the SDef directly with each Data object, the relation hasService is asserted to the CModel object. By following the relation between the Data object to the CModel object, and then from the CModel object to the SDef object, we can determine what Operations the Data object can perform. Also note that a Data object (through its CModel object) may support more than one Service (by having multiple SDef relations).
SDef objects can now be freely shared between Fedora repositories. If a federated identifier-resolver system, such as the Handle System™, or any authoritative name registry system is used the SDef object will have the same identifier for each copy of itself in each participating repository. Sharing SDef objects while keeping the same identifier in each copy greatly simplifies replication, and enables many business processes and services that are needed for large scale repository installations integrated within the Fedora Framework. SDef objects can still be shared between repositories by including both the original identifier and alternate identifiers as part of the object's metadata. The best results will be gained by sharing the Data object, SDef objects, and Content Model object as a group maintaining the same original identifiers. By using the CMA in this fashion, you transfer a significant unit of the data and metadata that documents the expression pattern for your intellectual work. While this is, by itself, not everything needed, it is a big step forward for creating a durable content repository.
It is worth noting that Service Definition objects conform to the basic Fedora object model. Also, they are stored in a Fedora repository just like other Fedora objects. As such, a collection of SDef objects in a repository constitutes a "registry" of Service Definitions.
Service Deployment Object
The Service Deployment object is a special type of control object that describes how a specific repository will deliver the Service Operations described in a SDef for a class of Data objects described in a CModel. The SDep is not executable code; instead it contains information that tells the Fedora repository how and where to execute the function that the SDep represents. In the CMA, the SDep acts as a deployment object only for the specific repository in which it is ingested; each repository is free to provide functionality in a different way. For example, one Fedora repository may choose to use a Servlet and another may use a SOAP Web service to perform the same function. As another example, individual repository implementations may need to provide the functionality at different end points. Or perhaps, a specific installation may use a dynamic end point resolution mechanism to permit failover to different service providers.
Since the SDep operates only within the scope of an individual repository, the operators of that repository are free to make changes to the SDep or the functionality it represents at any time (except for temporarily making the object's services unavailable while the change is being made). This approach permits the system operators to control access to services called by the Fedora repository to institute security or policies as their organization determines. It enables Fedora-called services to be managed using the same principles and tools for the deployment of any distributed system. It also enables the system operators to reconfigure their systems quickly without having to change any part of their content except the SDep object.
The SDep stores concrete service binding metadata. A SDep uses an isDeploymentOf relation to a SDef as its way of saying "I am able to perform the service methods described by that SDef." A SDep object is related to a SDef in the sense that it defines a particular concrete implementation of the abstract operations defined in a SDef object. The SDep also uses an isContractorOf relation to a CModel as a way of saying "Use me to do the service operations for any Data objects conforming to that CModel."
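Expressed as RDF in the SDep's RELS-EXT datastream, those two assertions might look like the sketch below. The PIDs are hypothetical; the relationship properties are taken from the standard Fedora model ontology, but check the ontology shipped with your Fedora version before relying on the exact URIs:

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:fedora-model="info:fedora/fedora-system:def/model#">
  <rdf:Description rdf:about="info:fedora/demo:ImageSDep">
    <!-- "I can perform the service methods described by this SDef..." -->
    <fedora-model:isDeploymentOf rdf:resource="info:fedora/demo:ImageSDef"/>
    <!-- "...for any Data objects conforming to this Content Model." -->
    <fedora-model:isContractorOf rdf:resource="info:fedora/demo:ImageCModel"/>
  </rdf:Description>
</rdf:RDF>
```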
A SDep object stores several forms of metadata that describe the runtime bindings for invoking service methods. The most significant of these metadata formats is service binding information encoded in the Web Services Description Language (WSDL). The Fedora repository system uses the WSDL at runtime to dispatch service method requests in fulfilling client requests for "virtual representations" of a Data object (i.e., via its Operations). This enables Fedora to talk to a variety of different services in a predictable and standard manner. A SDep also contains metadata that defines a "data contract" between the service and a class of Fedora Data objects as defined in the CModel. For the initial deployment of the CMA a simple data contract mechanism was chosen. Since the Datastream IDs are specified in the CModel and the SDep is now a deployment control object only for a specific repository, the SDep is able to uniformly bind directly to these IDs. In the future a more abstract binding mechanism may be used but this approach is simple and clear, though it may require the creation of a small number of additional SDep objects.
A major aspect of the CMA redesign is that there is no requirement that conformance to a Content Model or that referential integrity between objects be checked at ingest time. This may result in a run-time error if the repository cannot find referenced objects, interpret the Content Model or if there are any conformance problems.
It is worth noting that SDep objects conform to the basic Fedora object model. Also, they are stored in a Fedora repository just like other Fedora objects. As such, a collection of SDep objects in a repository constitutes a "registry" of service deployments that can be used with Fedora objects. In the CMA, SDep objects are not freely sharable across repositories. They represent how a specific repository implements a service. However, SDep objects can be shared if the operator of the system modifies them for local deployment. Because of this, SDep objects should not be automatically replicated between repositories without considering the effect.
Content Model Object
The Content Model object or CModel is a new specialized control object introduced as part of the CMA. It acts as a container for the Content Model document which is a formal model that characterizes a class of digital objects. It can also provide a model of the relationships which are permitted, excluded, or required between groups of digital objects. All digital objects in Fedora including Data, SDef, SDep, and CModel objects are organized into classes by the CModel object. In this section, we will primarily discuss the relationship between the Data and CModel objects.
To create a class of Data objects, create a CModel object. Each Data object belonging to the class asserts the relation hasModel using the identifier of the CModel as the object of the assertion. The current CModel object contains a structural model of the Data object. Over time there will be additional elements to the Content Model document but this initial implementation is sufficient to describe the Datastreams which are required to be present in each Data object in the class. The other key relation is to the SDef objects. You can assert zero or more hasService relations in the CModel to SDef objects.
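As an illustration of how these relations are typically asserted (hypothetical PIDs; the properties come from the standard Fedora model ontology), the RELS-EXT of a Data object and of its CModel object might look like this:

```xml
<!-- RELS-EXT of a Data object: "I belong to this class." -->
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:fedora-model="info:fedora/fedora-system:def/model#">
  <rdf:Description rdf:about="info:fedora/demo:123">
    <fedora-model:hasModel rdf:resource="info:fedora/demo:ImageCModel"/>
  </rdf:Description>
</rdf:RDF>

<!-- RELS-EXT of the CModel object: "Objects of this class support this Service." -->
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:fedora-model="info:fedora/fedora-system:def/model#">
  <rdf:Description rdf:about="info:fedora/demo:ImageCModel">
    <fedora-model:hasService rdf:resource="info:fedora/demo:ImageSDef"/>
  </rdf:Description>
</rdf:RDF>
```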
A Data object may assert a hasModel relationship to multiple CModel objects. Such a Data object should conform to all of its Content Models, containing an aggregation of all the Datastreams defined by the Content Models. If two or more Content Models define Datastreams which have the same name but different characteristics, no well-formed Data object can be constructed and likely the repository will be unable to deliver its content or services. Fedora automatically assumes that all objects conform to a system-defined "Basic Content Model." There is no need to assert a relation to this content model explicitly but, if the Data object asserts other relations, it is a good practice to make the assertion to the Basic Content Model explicit. Regardless, the repository will behave the same whether the relation is asserted or not. Along with the Basic Content Model, the repository defines a "Basic Service Definition" which supplies Operations common to all objects. One such service provides direct access to the Datastreams.
Because of the Basic Content Model and the Basic Service Definition, nothing needs to be added to a Data object if the user only wants to store and disseminate Datastreams by name. However, without an explicit Content Model you cannot validate whether the Data object is correctly formed. In the CMA, if the repository cannot find and interpret all the control objects related to a Data object, or cannot interpret the Content Model, it will issue a runtime error when the Data object is accessed. Note that the repository will always be able to perform basic Datastream operations because they are a part of the Basic Content Model and Basic Service Definition. Other than conformance to the rules for a properly formed digital object, there is no warning or error issued on ingest or modification of an object in the CMA.
CModel objects can now be freely shared between Fedora repositories. If a federated identifier-resolver system, such as the Handle System™, or any authoritative name registry system is used the CModel object will have the same identifier for each copy of itself in each participating repository. Sharing CModel objects while keeping the same identifier in each copy greatly simplifies replication, and enables many business processes and services that are needed for large scale repository installations integrated within the Fedora Framework. CModel objects can still be shared between repositories by including both the original identifier and alternate identifiers as part of the object's metadata. The best results will be gained by sharing the Data object, SDef objects, and CModel objects as a group maintaining the same original identifiers. By using the CMA in this fashion, you transfer a significant unit of the data and metadata that documents the expression pattern for your intellectual work. While this is, by itself, not everything needed, it is a big step forward for creating a durable content repository. Over time, Content Model languages can be developed that permit describing an ever larger portion of the essential characteristics of the content and its behaviors.
It is worth noting that Content Model objects conform to the basic Fedora object model. Also, they are stored in a Fedora repository just like other Fedora objects. As such, a collection of Content Model objects in a repository constitutes a "registry" of Content Models.
10 Most Dangerous Animals Lurking in Your Backyard
Most people who live in the United States know that they probably don’t have the most dangerous animals around. Unlike in Africa, Asia, South America, and Australia, there just aren’t that many large beasts here that pose a danger to humans.
Of course, there are some big critters to be concerned with, but most often, the smaller the animal, the deadlier its bite. The following critters may be lurking in your backyard even as you read this, so beware of a bite, sting, or mauling from these deadly beasts.
10. Western Diamondback Rattlesnake (Crotalus atrox)
Most people who live in America never see a snake in the wild in their entire lives, except maybe at a zoo or wildlife park. If you are unfortunate enough to be bitten by a snake, don't panic. Snake bites rarely result in fatalities, particularly if you know how to react. However, if you are bitten by a venomous snake, you must seek professional medical help immediately.
There are many varieties of snakes in America, some venomous and some not. Among the most dangerous to your health are the Western Diamondback Rattlesnake and the Arizona Coral Snake (also known as the Sonoran Coral Snake). The venom of the Mojave Rattlesnake can affect your nervous system. Baby rattlesnakes are dangerous because they tend to release as much venom as they can to protect themselves.
When a Snake Bites:
- Go to a hospital immediately.
- DO NOT use ice to cool the bite.
- DO NOT cut open the wound and try to suck out the venom.
- DO NOT use a tourniquet. This will cut off blood flow and the limb may be lost.
- DO NOT drink alcohol.
- DO NOT try to catch the snake. It just wastes time.
- Look for symptoms. If the area of the bite begins to swell and change color, the snake was probably venomous.
- Keep the bitten area still. Don't tie the limb tightly to anything—you don't want to reduce blood flow.
9. Rabies Virus - Cute And Cuddly Mammals
Not all deadly organisms are deadly all on their own. The rabies virus is one of the deadliest neurotropic viruses on the planet, and a bite from an infected carrier resulting in transmission is nearly 100-percent fatal if left untreated.
In North America, it is most commonly found in animals like raccoons, bats, coyotes, rabbits, opossums, squirrels, skunks, wolves, foxes, cats, and dogs. Symptoms are not immediately apparent in many animals, so a bite from any of the animals listed above should be treated as a possible rabies transmission. For humans, this can be a painful and disturbingly inconvenient problem due to the nature of medical treatment for the virus.
Signs of Rabies in Dogs & Cats
The clinical signs of rabies may vary considerably. Initially animals may show changes in behavior such as nervousness and anxiety. This is referred to as the prodromal stage of the disease, which may last from two to three days.
This quickly progresses to irritability, photophobia (fear of light), biting, snapping, incoordination, and seizures. This is referred to as the furious form of rabies.
Some animals show signs of weakness, incoordination, and paralysis which also affect the muscles used for swallowing. This is known as the dumb form of rabies.
Rabid animals may exhibit one or more of the following signs:
- A change in your pet’s attitude
- Behavioral changes
- Inability to swallow
- A change in the tone of a dog’s bark
- Excessive salivation
- Wild animals may be unusually friendly
Once clinical signs of rabies have developed, death is inevitable as no cure exists.
8. American Alligator (Alligator mississippiensis)
Alligator attacks are very rare, even in Florida where there are plenty of gators and people. Perhaps the best way to survive is to avoid attack in the first place.
Take special care in places that are known to be home to these reptiles; this means definitely don’t swim and also avoid the water’s edge – alligators are ambush hunters and can lurk, unseen, just waiting for something or someone to come along.
Time of day is also a factor in alligator attacks with dawn and dusk being prime times to avoid.
If, in the unlikely event, you do get attacked, you need to make life as difficult as possible for the gator. They’re after an easy meal, so anything that prevents this will go in your favor. Punching the snout and gouging the eyes are likely to make the alligator think twice. It may also make the gator adjust its grip, at which point you may have an opportunity to make your escape.
On the plus side the odds are firmly on your side as the vast majority of alligator attacks do not result in serious injury and very few indeed are fatal.
Alligator attack statistics
Obviously the alligator is in a different league from giant crocodiles such as the Nile crocodile, but the statistics don’t show the whole picture. Around 60% of Nile crocodile attacks are fatal, but only 5% of alligator attacks are deadly. Ten years ago there was an average of around 11 alligator attacks in Florida each year, but this number has been slowly creeping upwards. In fact, over recent years there has been an average of one fatality a year.
7. Black Widow Spider (Latrodectus)
The female of the species carries a venom 15 times more toxic than that of the prairie rattlesnake, making it the most venomous spider in North America. A black widow’s bite delivers a latrotoxin, which has the nasty effect of inducing severe muscle pain and muscle spasms.
Most people who are bitten by a black widow spider do not die but do live in fairly intense pain from the bite for up to a week. When children are bitten, they are much more susceptible to dying from the venom so they should be treated at a hospital immediately.
What Are the Symptoms of a Black Widow Spider Bite?
The black widow spider produces a protein venom that affects the victim's nervous system. This neurotoxic protein is one of the most potent venoms secreted by an animal. Some people are slightly affected by the venom, but others may have a severe response. The first symptom is acute pain at the site of the bite, although there may only be a minimal local reaction. Symptoms usually start within 20 minutes to one hour after the bite.
- Local pain may be followed by localized or generalized severe muscle cramps, abdominal pain, weakness, and tremor.
- Large muscle groups (such as the shoulder or back muscles) are often affected, resulting in considerable pain. In severe cases, nausea, vomiting, fainting, dizziness, chest pain, and respiratory difficulties may follow.
- The severity of the reaction depends on the age and physical condition of the person bitten. Children and the elderly are more seriously affected than young adults.
- In some cases, abdominal pain may mimic such conditions as appendicitis or gallbladder problems. Chest pain may be mistaken for a heart attack.
- Blood pressure and heart rate may be elevated. The elevation of blood pressure can lead to one of the most severe complications.
- People rarely die from a black widow's bite. Life-threatening reactions are generally seen only in small children and the elderly.
6. Coral Snake (Calliophis, Hemibungarus)
Have you ever heard the rhyme, “Red and yellow, kill a fellow; red and black, friend of Jack” and wondered what it was referring to? The fearsome creature the rhyme was written about is none other than the coral snake, which is one of the deadliest snakes in the world.
Their venom is an extremely powerful neurotoxin that paralyzes the breathing muscles. Since you need those muscles to . . . you know, breathe, your chances of surviving without an anti-venom are pretty much nil.
If you are bitten by a venomous snake, call 911 or your local emergency number immediately, especially if the area changes color, begins to swell or is painful. Many emergency rooms stock antivenom drugs, which may help you.
If possible, take these steps while waiting for medical help:
- Remain calm and move beyond the snake's striking distance.
- Remove jewelry and tight clothing before you start to swell.
- Position yourself, if possible, so that the bite is at or below the level of your heart.
- Clean the wound, but don't flush it with water. Cover it with a clean, dry dressing.
- Don't use a tourniquet or apply ice.
- Don't cut the wound or attempt to remove the venom.
- Don't drink caffeine or alcohol, which could speed your body's absorption of venom.
- Don't try to capture the snake. Try to remember its color and shape so that you can describe it, which will help in your treatment.
Symptoms of a venomous snake bite may include:
- Pain at the region bitten by the snake.
- Two puncture wounds visible at the site of the bite.
- Swelling and redness around the puncture wounds.
- Difficulty breathing.
- Nausea and vomiting.
- Blurry vision.
- Increased salivation and sweating.
- Numbness in the face and limbs.
5. Arizona Hairy Scorpion (Hadrurus arizonensis)
Sometimes referred to as your nightmare come to life, these nasty buggers can grow up to 5.5 inches long (14 centimeters), pack two lobster-like claws, and have a ferocious sting. In most people, a scorpion sting is not lethal.
This is due to its venom being comparable to a honeybee sting, which is painful but rarely fatal.
The problem with a scorpion sting is that many people who happen to be allergic to its venom don’t realize it—that is until they are stung by one. If a person has an allergy to these little guys, they will usually begin to have difficulty breathing, which can be fatal if left untreated.
By contrast, the venom of the deathstalker scorpion - a far more dangerous species not found in the United States - produces a number of severe symptoms, with death as a possible end result.
The venom of a scorpion contains a variety of different chemicals, including both neurotoxins and enzymes that penetrate the skin and other tissues.
Most people who are stung by a scorpion will feel a sharp, burning pain not unlike a bee or wasp sting, or will feel like an electric shock. The initial sting can be quite painful, but for most people the discomfort will subside within an hour.
After the sting, there may be burning or numbness at the location of the tail strike. Some people may experience numbness beyond the sting site, seizures, difficulty breathing, blurred vision, or other severe symptoms. If any of these symptoms occur, you should get immediate medical attention as these are symptoms of anaphylactic shock.
- Increased heart rate
- High blood pressure
- Increased fluid secretion into the lungs and bronchioles
IF YOU SEE SCORPIONS IN YOUR HOME
Scorpions have been known to enter homes in order to escape the heat of the desert sun. In order to prevent these occurrences, cracks in walls and foundations should be sealed, ensuring that small scorpions cannot enter. Homes should also be kept clean in order to prevent incidental insects that scorpions may eat, such as ants and cockroaches. Items such as boxes that can serve as hiding places for scorpions should be removed.
If one or two scorpions are seen inside a home in a scorpion-prone area, it may not be a sign of a problem. Individual scorpions can be prevented by ensuring that home entry points are sealed and that items such as clothing are scorpion-free before entering a house. If multiple scorpions are seen in a short period of time it is a likely sign of infestation and a licensed pest-control professional should be called.
4. Brown Recluse Spider (Loxosceles reclusa)
These nightmares share something in common with rattlesnakes, because their venom can likewise destroy tissue. The necrosis brought on by these arachnids can cause severe disfigurement, amputation, and even death, though this is rare. Severe bites can leave horribly disfiguring scars, and damaged tissue can, and sometimes does, become gangrenous.
Brown recluse spiders are capable of biting when disturbed or threatened. This may occur when a person unknowingly wears an infested piece of clothing or rolls over in his or her sleep. Similarly, brown recluses are known to build their webs in boxes and beneath old furniture; reaching into these areas may result in a bite.
Reactions to the brown recluse spider bite are variable. Depending on the bite location and amount of venom injected, reactions run the gamut from mild skin irritation to skin lesions. Most bites heal themselves and do not result in lasting tissue damage.
These bites are not painful at first and often go unnoticed until the first side effects appear. Symptoms do not usually manifest for a few hours after the bite. After reddening and swelling, a blister may appear at the bite site. Victims of brown recluse spider bites can experience fever, convulsions, itching, nausea and muscle pain.
In extreme cases, brown recluse spider bites may result in necrosis, or the death of living cells. In this case, painful open wounds appear and do not heal quickly. Wounds will appear purple and black at this time. If left untreated, necrotic and ulcerous wounds can expand to affect both superficial and deep tissues. Deep scarring can occur in the wake of such brown recluse spider bite symptoms, and skin grafting is sometimes utilized to cosmetically treat scarring.
3. Honey Bee (Apis)
In the United States, the honey bee is classified as the deadliest non-human animal there is, resulting in an average of 100 deaths each year.
Most of these deaths are due to severe allergic reactions. A person with an allergy to bee venom can die within 10 minutes of getting stung and will show signs such as swelling of the face, throat, and mouth; a rapid pulse; and plummeting blood pressure. Fortunately, a shot from an epinephrine pen (an EpiPen) can save the person’s life, so people who know they have the allergy tend to carry one wherever they go.
Honey bee stings are known to be very painful, but the symptoms that result from a sting vary, depending on the amount of poison that has entered the immune system of the victim. The initial pain eventually fades, but only after a period of swelling and itching. Some individuals may also experience visible signs, including redness of the skin around the sting. Although the honey bee sting is not commonly hazardous, some people may be allergic to the bee’s venom and will experience such severe side effects as nausea, fainting and, in extreme cases, death.
The numbers of stings also plays a role in the effects. As the number of stings increases, the severity of reaction also increases and can be lethal to anyone if stung too many times. If a person is stung or has medical concerns related to honey bees, they should seek a medical professional.
2. Rats and Other Rodents
Millions of homes in the United States have unwelcome guests in the form of rats and other rodents. And while the presence of these pests can affect the emotional well-being of an individual or family, the health risks of an unchecked rodent population in a home are far more serious than one might originally think.
Rats and other rodents are known carriers of several types of diseases that can lead to serious illness and, in some cases, even death.
To properly understand the dangers of rodents in the home, one must understand the basics of how disease can be spread.
Transmission of diseases usually occurs through several routes:
- Exposure to (handling of, ingesting of, and airborne particles from) infected rodent waste including: feces, urine, saliva, and nesting material.
- Bites from infected rodent or insect.
- Handling of infected rodents or insects – some viruses can transfer from skin to skin contact with no bite or scratch mark necessary.
The following is a list of diseases spread by rats and other rodents, and by insects that feed or travel on these rodents, such as fleas, ticks, and mites.
- HANTAVIRUS
Most commonly found in the white-footed mouse, cotton rat, and rice rat, hantavirus causes a potentially life-threatening disease that currently has no specific treatment, cure, or vaccine.
Symptoms include: fever, fatigue, muscle aches (generally in hips, backs and thighs) and may include, diarrhea abdominal pain, nausea and vomiting.
- LYMPHOCYTIC CHORIOMENINGITIS VIRUS (LCMV)
Lymphocytic choriomeningitis virus, or LCMV, most popular host is the common house mouse. LCMV usually occurs in two stages. The first stage includes symptoms such as nausea, vomiting, headache, muscle aches and lack of appetite. The second stage is primarily more neurological in nature including the occurrence of: meningitis, encephalitis, or meningoencephalitis.
Yes, you read that right. The same plague that killed millions of people during the Middle Ages could be creeping around your floor boards and behind your walls. The most basic form of plague may be as close as one infected flea bite away. Of the different types of plague (there are several: Bubonic, Septicemic, Pneumonic) all are caused by the same bacterium: Yersinia pestis. The different types are classified by which level of the body the plague has reached: the immune system, the blood system and the lungs. Symptoms are dependent on the type. Prompt medical treatment through antibiotics is necessary to treat illness and possible death.
Some rodents carry the salmonella bacteria in their digestive tract, (who knew?) making any contact with rodent waste, especially the consumption of contaminated food, a potential risk to contract salmonella.
Symptoms include: chills, fever, abdominal cramps, nausea, vomiting, and diarrhea.
- RAT BITE FEVER
No, this is not the latest dance craze. As the name suggests, Rat Bite Fever is spread when a person has been bitten by a rodent who is infected, has handled an infected rodent (even when no bite or scratch occurs) or has consumed the bacteria in some form.
Symptoms include: fever, skin rash, headaches, vomiting, rash and muscle pain.
Caused by the bacterium Francisella tularenis, Tularemia is often found in rodents, rabbits and hares who are especially prone. Tularemia is most commonly transferred to humans by an infected tick or deer fly bite, or by handling of an animal that is infected. Reported in almost every state in America, Tularemia can be life a threatening illness, though most cases can be treated with the use of antibiotics.
Most common symptom:
- This is a normal reaction from the bite since flesh has been damaged by the rat’s teeth. Part of it is removed or penetrated and your body needs to receive a signal that something is wrong. Do not do any complex treatment, just make sure the wound is disinfected and try to find a medical help as fast as possible
- If streptobacillus bacteria is transmitted on a human being after the bite, symptoms like fever and vomiting may occur because of streptobacillus moniliformis infection. Such complaints are very likely to appear after the wound has healed and up to 10 days have passed after the bite. Additional symptoms of a more complicated infection may be:Pain in the joints
- Swollenness of the whole limb
To avoid those symptoms, follow these tips on how to get rid of rats and minimize the chances of getting bitten by a rodent.
You wouldn’t think it to look at them, but the common mosquito is the deadliest animal on the planet. Mosquitos carry diseases like malaria, encephalitis, and the West Nile Virus, which the females of the species like to transmit to humans whenever they can.
In developed nations like the United States, deaths from mosquito bites and transmitted diseases are rare, but throughout the world, they account for around one million deaths each year.
Mosquito bites are caused by female mosquitoes feeding on your blood. Female mosquitoes have a mouthpart made to pierce skin and siphon off blood. Males lack this blood-sucking ability because they don't produce eggs and so have no need for protein in blood.
As a biting mosquito fills itself with blood, it injects saliva into your skin. Proteins in the saliva trigger a mild immune system reaction that results in the characteristic itching and bump.
Mosquitoes select their victims by evaluating scent, exhaled carbon dioxide and the chemicals in a person's sweat.
Symptoms Mosquito bite signs include:
- A puffy, white and reddish bump that appears a few minutes after the bite
- A hard, itchy, reddish-brown bump, or multiple bumps, appearing a day or so after the bite or bites
- Small blisters instead of hard bumps
- Dark spots that look like bruises
More-severe reactions may be experienced by children, adults not previously exposed to the type of mosquito that bit them, and people with immune system disorders. In these people, mosquito bites sometimes trigger:
- A large area of swelling and redness
- Low-grade fever
- Swollen lymph nodes
Children are more likely to develop a severe reaction than are adults, because many adults have had mosquito bites throughout their lives and become desensitized. | <urn:uuid:055a2bd3-13d8-4f3c-b850-5ce82db66728> | CC-MAIN-2019-47 | https://www.newsprepper.com/if-you-see-this-run-fast-and-ask-for-helpthe-worlds-deadliest-animal-lives-in-your-backyard/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668716.69/warc/CC-MAIN-20191116005339-20191116033339-00380.warc.gz | en | 0.941662 | 4,632 | 2.796875 | 3 |
In modern medicine, a surgeon is a physician who performs surgical operations. There are surgeons in podiatry, dentistry maxillofacial surgeon and the veterinary fields; the first person to document a surgery was Sushruta. He specialized in cosmetic plastic surgery and had documented an operation of open rhinoplasty, his magnum opus Suśruta-saṃhitā is one of the most important surviving ancient treatises on medicine and is considered a foundational text of Ayurveda and surgery. The treatise addresses all aspects of general medicine, but the translator G. D. Singhal dubbed Suśruta "the father of surgical intervention" on account of the extraordinarily accurate and detailed accounts of surgery to be found in the work. After the eventual decline of the Sushruta School of Medicine in India, surgery had been ignored until the Islamic Golden Age surgeon Al-Zahrawi, reestablished surgery as an effective medical practice, he is considered the greatest medieval surgeon to have appeared from the Islamic World, has been described as the father of surgery.
His greatest contribution to medicine is the Kitab al-Tasrif, a thirty-volume encyclopedia of medical practices. He was the first physician to describe an ectopic pregnancy, the first physician to identify the hereditary nature of hæmophilia, his pioneering contributions to the field of surgical procedures and instruments had an enormous impact on surgery but it was not until the eighteenth century that surgery as a distinct medical discipline emerged in England. In Europe, surgery was associated with barber-surgeons who used their hair-cutting tools to undertake surgical procedures at the battlefield and for their employers. With advances in medicine and physiology, the professions of barbers and surgeons diverged. Surgeon continued, however, to be used as the title for military medical officers until the end of the 19th century, the title of Surgeon General continues to exist for both senior military medical officers and senior government public health officers. In 1950, the Royal College of Surgeons of England in London began to offer surgeons a formal status via RCS membership.
The title Mister became a badge of honour, today, in many Commonwealth countries, a qualified doctor who, after at least four years' training, obtains a surgical qualification is given the honour of being allowed to revert to calling themselves Mr, Mrs or Ms in the course of their professional practice, but this time the meaning is different. It is sometimes assumed that the change of title implies consultant status, but the length of postgraduate medical training outside North America is such that a qualified surgeon may be years away from obtaining such a post: many doctors obtained these qualifications in the senior house officer grade, remained in that grade when they began sub-specialty training; the distinction of Mr is used by surgeons in the Republic of Ireland, some states of Australia, New Zealand, South Africa and some other Commonwealth countries. In many English-speaking countries the military title of surgeon is applied to any medical practitioner, due to the historical evolution of the term.
The US Army Medical Corps retains various surgeon MOS' in the ranks of officer pay grades for military personnel dedicated to performing surgery on wounded soldiers. Some physicians who are general practitioners or specialists in family medicine or emergency medicine may perform limited ranges of minor, common, or emergency surgery. Anesthesia accompanies surgery, anesthesiologists and nurse anesthetists may oversee this aspect of surgery. Surgeon's assistant, surgical nurses, surgical technologists are trained professionals who support surgeons. In the United States, the Department of Labor description of a surgeon is "a physician who treats diseases and deformities by invasive, minimally-invasive, or non-invasive surgical methods, such as using instruments, appliances, or by manual manipulation". Sushruta al-Zahrawi, regarded as one of the greatest medieval surgeons and a father of surgery. ) Charles Kelman William Stewart Halsted Alfred Blalock C. Walton Lillehei Christiaan Barnard Victor Chang Australian pioneer of heart transplantation John Hunter Sir Victor Horsley Lars Leksell Joseph Lister Harvey Cushing Paul Tessier Gholam A. Peyman Ioannis Pallikaris Nikolay Pirogov Valery Shumakov Svyatoslav Fyodorov Gazi Yasargil Rene Favaloro (first surgeon to perform bypass
Biblioteca Nacional de España
The Biblioteca Nacional de España is a major public library, the largest in Spain, one of the largest in the world. It is located on the Paseo de Recoletos; the library was founded by King Philip V in 1712 as the Palace Public Library. The Royal Letters Patent that he granted, the predecessor of the current legal deposit requirement, made it mandatory for printers to submit a copy of every book printed in Spain to the library. In 1836, the library's status as Crown property was revoked and ownership was transferred to the Ministry of Governance. At the same time, it was renamed the Biblioteca Nacional. During the 19th century, confiscations and donations enabled the Biblioteca Nacional to acquire the majority of the antique and valuable books that it holds. In 1892 the building was used to host the Historical American Exposition. On March 16, 1896, the Biblioteca Nacional opened to the public in the same building in which it is housed and included a vast Reading Room on the main floor designed to hold 320 readers.
In 1931 the Reading Room was reorganised, providing it with a major collection of reference works, the General Reading Room was created to cater for students and general readers. During the Spanish Civil War close to 500,000 volumes were collected by the Confiscation Committee and stored in the Biblioteca Nacional to safeguard works of art and books held until in religious establishments and private houses. During the 20th century numerous modifications were made to the building to adapt its rooms and repositories to its expanding collections, to the growing volume of material received following the modification to the Legal Deposit requirement in 1958, to the numerous works purchased by the library. Among this building work, some of the most noteworthy changes were the alterations made in 1955 to triple the capacity of the library's repositories, those started in 1986 and completed in 2000, which led to the creation of the new building in Alcalá de Henares and complete remodelling of the building on Paseo de Recoletos, Madrid.
In 1986, when Spain's main bibliographic institutions - the National Newspaper Library, the Spanish Bibliographic Institute and the Centre for Documentary and Bibliographic Treasures - were incorporated into the Biblioteca Nacional, the library was established as the State Repository of Spain's Cultural Memory, making all of Spain's bibliographic output on any media available to the Spanish Library System and national and international researchers and cultural and educational institutions. In 1990 it was made an Autonomous Entity attached to the Ministry of Culture; the Madrid premises are shared with the National Archaeological Museum. The Biblioteca Nacional is Spain's highest library institution and is head of the Spanish Library System; as the country's national library, it is the centre responsible for identifying, preserving and disseminating information about Spain's documentary heritage, it aspires to be an essential point of reference for research into Spanish culture. In accordance with its Articles of Association, passed by Royal Decree 1581/1991 of October 31, 1991, its principal functions are to: Compile and conserve bibliographic archives produced in any language of the Spanish state, or any other language, for the purposes of research and information.
Promote research through the study and reproduction of its bibliographic archive. Disseminate information on Spain's bibliographic output based on the entries received through the legal deposit requirement; the library's collection consists of more than 26,000,000 items, including 15,000,000 books and other printed materials, 4,500,000 graphic materials, 600,000 sound recordings, 510,000 music scores, more than 500,000 microforms, 500,000 maps, 143,000 newspapers and serials, 90,000 audiovisuals, 90,000 electronic documents, 30,000 manuscripts. The current director of the Biblioteca Nacional is Ana Santos Aramburo, appointed in 2013. Former directors include her predecessors Glòria Pérez-Salmerón and Milagros del Corral as well as historian Juan Pablo Fusi and author Rosa Regàs. Given its role as the legal deposit for the whole of Spain, since 1991 it has kept most of the overflowing collection at a secondary site in Alcalá de Henares, near Madrid; the Biblioteca Nacional provides access to its collections through the following library services: Guidance and general information on the institution and other libraries.
Bibliographic information about its collection and those held by other libraries or library systems. Access to its automated catalogue, which contains close to 3,000,000 bibliographic records encompassing all of its collections. Archive consultation in the library's reading rooms. Interlibrary loans. Archive reproduction. Biblioteca Digital Hispánica, digital library launched in 2008 by the Biblioteca Nacional de España List of libraries in Spain Media related to Biblioteca Nacional de España at Wikimedia Commons Official site Official web catalog
Bulgaria the Republic of Bulgaria, is a country in Southeast Europe. It is bordered by Romania to the north and North Macedonia to the west and Turkey to the south, the Black Sea to the east; the capital and largest city is Sofia. With a territory of 110,994 square kilometres, Bulgaria is Europe's 16th-largest country. One of the earliest societies in the lands of modern-day Bulgaria was the Neolithic Karanovo culture, which dates back to 6,500 BC. In the 6th to 3rd century BC the region was a battleground for Thracians, Persians and ancient Macedonians; the Eastern Roman, or Byzantine, Empire lost some of these territories to an invading Bulgar horde in the late 7th century. The Bulgars founded the First Bulgarian Empire in AD 681, which dominated most of the Balkans and influenced Slavic cultures by developing the Cyrillic script; this state lasted until the early 11th century, when Byzantine emperor Basil II conquered and dismantled it. A successful Bulgarian revolt in 1185 established a Second Bulgarian Empire, which reached its apex under Ivan Asen II.
After numerous exhausting wars and feudal strife, the Second Bulgarian Empire disintegrated in 1396 and its territories fell under Ottoman rule for nearly five centuries. The Russo-Turkish War of 1877–78 resulted in the formation of the current Third Bulgarian State. Many ethnic Bulgarian populations were left outside its borders, which led to several conflicts with its neighbours and an alliance with Germany in both world wars. In 1946 Bulgaria became part of the Soviet-led Eastern Bloc; the ruling Communist Party gave up its monopoly on power after the revolutions of 1989 and allowed multi-party elections. Bulgaria transitioned into a democracy and a market-based economy. Since adopting a democratic constitution in 1991, the sovereign state has been a unitary parliamentary republic with a high degree of political and economic centralisation; the population of seven million lives in Sofia and the capital cities of the 27 provinces, the country has suffered significant demographic decline since the late 1980s.
Bulgaria is a member of the European Union, NATO, the Council of Europe. Its market economy is part of the European Single Market and relies on services, followed by industry—especially machine building and mining—and agriculture. Widespread corruption is a major socioeconomic issue; the name Bulgaria is derived from a tribe of Turkic origin that founded the country. Their name is not understood and difficult to trace back earlier than the 4th century AD, but it is derived from the Proto-Turkic word bulģha and its derivative bulgak; the meaning may be further extended to "rebel", "incite" or "produce a state of disorder", i.e. the "disturbers". Ethnic groups in Inner Asia with phonologically similar names were described in similar terms: during the 4th century, the Buluoji, a component of the "Five Barbarian" groups in Ancient China, were portrayed as both a "mixed race" and "troublemakers". Neanderthal remains dating to around 150,000 years ago, or the Middle Paleolithic, are some of the earliest traces of human activity in the lands of modern Bulgaria.
The Karanovo culture arose circa 6,500 BC and was one of several Neolithic societies in the region that thrived on agriculture. The Copper Age Varna culture is credited with inventing gold metallurgy; the associated Varna Necropolis treasure contains the oldest golden jewellery in the world with an approximate age of over 6,000 years. The treasure has been valuable for understanding social hierarchy and stratification in the earliest European societies; the Thracians, one of the three primary ancestral groups of modern Bulgarians, appeared on the Balkan Peninsula some time before the 12th century BC. The Thracians excelled in metallurgy and gave the Greeks the Orphean and Dionysian cults, but remained tribal and stateless; the Persian Achaemenid Empire conquered most of present-day Bulgaria in the 6th century BC and retained control over the region until 479 BC. The invasion became a catalyst for Thracian unity, the bulk of their tribes united under king Teres to form the Odrysian kingdom in the 470s BC.
It was weakened and vassalized by Philip II of Macedon in 341 BC, attacked by Celts in the 3rd century, became a province of the Roman Empire in AD 45. By the end of the 1st century AD, Roman governance was established over the entire Balkan Peninsula and Christianity began spreading in the region around the 4th century; the Gothic Bible—the first Germanic language book—was created by Gothic bishop Ulfilas in what is today northern Bulgaria around 381. The region came under Byzantine control after the fall of Rome in 476; the Byzantines were engaged in prolonged warfare against Persia and could not defend their Balkan territories from barbarian incursions. This enabled the Slavs to enter the Balkan Peninsula as marauders through an area between the Danube River and the Balkan Mountains known as Moesia; the interior of the peninsula became a country of the South Slavs, who lived under a democracy. The Slavs assimilated the Hellenized and Gothicized Thracians in the rural areas. Not l
Russia the Russian Federation, is a transcontinental country in Eastern Europe and North Asia. At 17,125,200 square kilometres, Russia is by far or by a considerable margin the largest country in the world by area, covering more than one-eighth of the Earth's inhabited land area, the ninth most populous, with about 146.77 million people as of 2019, including Crimea. About 77 % of the population live in the European part of the country. Russia's capital, Moscow, is one of the largest cities in the world and the second largest city in Europe. Extending across the entirety of Northern Asia and much of Eastern Europe, Russia spans eleven time zones and incorporates a wide range of environments and landforms. From northwest to southeast, Russia shares land borders with Norway, Estonia, Latvia and Poland, Ukraine, Azerbaijan, China and North Korea, it shares maritime borders with Japan by the Sea of Okhotsk and the U. S. state of Alaska across the Bering Strait. However, Russia recognises two more countries that border it, Abkhazia and South Ossetia, both of which are internationally recognized as parts of Georgia.
The East Slavs emerged as a recognizable group in Europe between the 3rd and 8th centuries AD. Founded and ruled by a Varangian warrior elite and their descendants, the medieval state of Rus arose in the 9th century. In 988 it adopted Orthodox Christianity from the Byzantine Empire, beginning the synthesis of Byzantine and Slavic cultures that defined Russian culture for the next millennium. Rus' disintegrated into a number of smaller states; the Grand Duchy of Moscow reunified the surrounding Russian principalities and achieved independence from the Golden Horde. By the 18th century, the nation had expanded through conquest and exploration to become the Russian Empire, the third largest empire in history, stretching from Poland on the west to Alaska on the east. Following the Russian Revolution, the Russian Soviet Federative Socialist Republic became the largest and leading constituent of the Union of Soviet Socialist Republics, the world's first constitutionally socialist state; the Soviet Union played a decisive role in the Allied victory in World War II, emerged as a recognized superpower and rival to the United States during the Cold War.
The Soviet era saw some of the most significant technological achievements of the 20th century, including the world's first human-made satellite and the launching of the first humans in space. By the end of 1990, the Soviet Union had the world's second largest economy, largest standing military in the world and the largest stockpile of weapons of mass destruction. Following the dissolution of the Soviet Union in 1991, twelve independent republics emerged from the USSR: Russia, Belarus, Uzbekistan, Azerbaijan, Kyrgyzstan, Tajikistan and the Baltic states regained independence: Estonia, Lithuania, it is governed as a federal semi-presidential republic. Russia's economy ranks as the twelfth largest by nominal GDP and sixth largest by purchasing power parity in 2018. Russia's extensive mineral and energy resources are the largest such reserves in the world, making it one of the leading producers of oil and natural gas globally; the country is one of the five recognized nuclear weapons states and possesses the largest stockpile of weapons of mass destruction.
Russia is a great power as well as a regional power and has been characterised as a potential superpower. It is a permanent member of the United Nations Security Council and an active global partner of ASEAN, as well as a member of the Shanghai Cooperation Organisation, the G20, the Council of Europe, the Asia-Pacific Economic Cooperation, the Organization for Security and Co-operation in Europe, the World Trade Organization, as well as being the leading member of the Commonwealth of Independent States, the Collective Security Treaty Organization and one of the five members of the Eurasian Economic Union, along with Armenia, Belarus and Kyrgyzstan; the name Russia is derived from Rus', a medieval state populated by the East Slavs. However, this proper name became more prominent in the history, the country was called by its inhabitants "Русская Земля", which can be translated as "Russian Land" or "Land of Rus'". In order to distinguish this state from other states derived from it, it is denoted as Kievan Rus' by modern historiography.
The name Rus itself comes from the early medieval Rus' people, Swedish merchants and warriors who relocated from across the Baltic Sea and founded a state centered on Novgorod that became Kievan Rus. An old Latin version of the name Rus' was Ruthenia applied to the western and southern regions of Rus' that were adjacent to Catholic Europe; the current name of the country, Россия, comes from the Byzantine Greek designation of the Rus', Ρωσσία Rossía—spelled Ρωσία in Modern Greek. The standard way to refer to citizens of Russia is rossiyane in Russian. There are two Russian words which are commonly
Royal Library of the Netherlands
The Royal Library of the Netherlands is based in The Hague and was founded in 1798. The mission of the Royal Library of the Netherlands, as presented on the library's web site, is to provide "access to the knowledge and culture of the past and the present by providing high-quality services for research and cultural experience"; the initiative to found a national library was proposed by representative Albert Jan Verbeek on August 17 1798. The collection would be based on the confiscated book collection of William V; the library was founded as the Nationale Bibliotheek on November 8 of the same year, after a committee of representatives had advised the creation of a national library on the same day. The National Library was only open to members of the Representative Body. King Louis Bonaparte gave the national library its name of the Royal Library in 1806. Napoleon Bonaparte transferred the Royal Library to The Hague as property, while allowing the Imperial Library in Paris to expropriate publications from the Royal Library.
In 1815 King William I of the Netherlands confirmed the name of'Royal Library' by royal resolution. It has been known as the National Library of the Netherlands since 1982, when it opened new quarters; the institution became independent of the state in 1996, although it is financed by the Department of Education and Science. In 2004, the National Library of the Netherlands contained 3,300,000 items, equivalent to 67 kilometers of bookshelves. Most items in the collection are books. There are pieces of "grey literature", where the author, publisher, or date may not be apparent but the document has cultural or intellectual significance; the collection contains the entire literature of the Netherlands, from medieval manuscripts to modern scientific publications. For a publication to be accepted, it must be from a registered Dutch publisher; the collection is accessible for members. Any person aged 16 years or older can become a member. One day passes are available. Requests for material take 30 minutes.
The KB hosts several open access websites, including the "Memory of the Netherlands". List of libraries in the Netherlands European Library Nederlandse Centrale Catalogus Books in the Netherlands Media related to Koninklijke Bibliotheek at Wikimedia Commons Official website
Bulgarians are a South Slavic ethnic group who are native to Bulgaria and its neighboring regions. Bulgarians derive their ethnonym from the Bulgars, their name is not understood and difficult to trace back earlier than the 4th century AD, but it is derived from the Proto-Turkic word bulģha and its derivative bulgak. Alternate etymologies include derivation from a compound of Proto-Turkic bel and gur, a proposed division within the Utigurs or Onogurs. According to the Art.25 of Constitution of Bulgaria, a Bulgarian citizen shall be anyone born to at least one parent holding a Bulgarian citizenship, or born on the territory of the Republic of Bulgaria, should they not be entitled to any other citizenship by virtue of origin. Bulgarian citizenship shall further be acquirable through naturalization. About 77% of Bulgaria's population identified themselves as Bulgarians in 2011 Bulgarian census; the population of Bulgaria descend from peoples with different numbers. They became assimilated by the Slavic settlers in the First Bulgarian Empire.
Two of the non-Slavic nations maintain a legacy among modern-day Bulgarians: the Thracians, from whom cultural and ethnic elements were taken. From the indigenous Thracian people certain cultural and ethnic elements were taken. Other pre-Slavic Indo-European peoples, including Dacians, Goths, Ancient Greeks, Sarmatians and Illyrians settled into the Bulgarian land; the Thracian language has been described as a southern Baltic language. It was still spoken in the 6th century becoming extinct afterwards, but that in a period the Bulgarians replaced long-established Greek/Latin toponyms with Thracian toponyms might suggest that Thracian had not been obliterated then; some pre-Slavic linguistic and cultural traces might have been preserved in modern Bulgarians. Scythia Minor and Moesia Inferior appear to have been Romanized, although the region became a focus of barbarian re-settlements during the 4th and early 5th centuries AD, before a further "Romanization" episode during the early 6th century.
According to archeological evidence from the late periods of Roman rule, the Romans did not decrease the number of Thracians in major cities. By the 4th century the major city of Serdica had predominantly Thracian populace based on epigraphic evidence, which shows prevailing Latino-Thracian given names, but thereafter the names were replaced by Christian ones; the Early Slavs emerged from their original homeland in the early 6th century, spread to most of the eastern Central Europe, Eastern Europe and the Balkans, thus forming three main branches: the West Slavs in eastern Central Europe, the East Slavs in Eastern Europe, the South Slavs in Southeastern Europe. The latter inflicted total linguistic replacement of Thracian, if the Thracians had not been Romanized or Hellenized. Most scholars accept that they began large-scale settling of the Balkans in the 580s based on the statement of the 6th century historian Menander speaking of 100,000 Slavs in Thrace and consecutive attacks of Greece in 582.
They continued coming to the Balkans in many waves, but leaving, most notably Justinian II settled as many as 30,000 Slavs from Thrace in Asia Minor. The Byzantines grouped the numerous Slavic tribes into two groups: the Sklavenoi and Antes; some Bulgarian scholars suggest. The Bulgars are first mentioned in the 4th century in the vicinity of the North Caucasian steppe. Scholars suggest that the ultimate origins of the Bulgar is Turkic and can be traced to the Central Asian nomadic confederations as part of loosely related Oghuric tribes which spanned from the Pontic steppe to central Asia. However, any direct connection between the Bulgars and postulated Asian counterparts rest on little more than speculative and "contorted etymologies"; some Bulgarian historians question the identification of the Bulgars as a Turkic tribe and suggest an Iranian origin. In the 670s, some Bulgar tribes, the Danube Bulgars led by Asparukh and the Macedonian Bulgars, led by Kouber, crossed the Danube river and settled in the Balkans with a single migration wave, the former of which Michael the Syrian described as numbering 10,000.
The Bulgars are not thought to have been numerous, becoming a ruling elite in the areas they controlled. However, according to Steven Runciman a tribe, able to defeat a Byzantine army, must have been of considerable dimensions. Asparukh's Bulgars made a tribal union with the Severians and the "Seven clans", who were re-settled to protect the flanks of the Bulgar settlements in Scythia Minor, as the capital Pliska was built on the site of a former Slavic settlement. During the Early Byzantine Era, the Roman provincials in Scythia Minor and Moesia Secunda were engaged in economic and social exchange with the'barbarians' north of the Danube; this might have facilitated their eventual Slavonization, although the majority of the population appears to have been withdrawn to the hinterland of Constantinople or Asia Minor prior to any permanent Slavic and Bulgar settlement south of the Danube. The major port towns in Pontic Bulgaria remained Byzantine Greek in their outlook; the large scale population transfers and territorial expansions during the 8th and 9th century, additionally increased the number of the Slavs and Byzantine Christians within the state, making the Bulgars quite a
National Library of the Czech Republic
The National Library of the Czech Republic is the central library of the Czech Republic. It is directed by the Ministry of Culture; the library's main building is located in the historical Clementinum building in Prague, where half of its books are kept. The other half of the collection is stored in the district of Hostivař; the National Library is the biggest library in the Czech Republic, in its funds there are around 6 million documents. The library has around 60,000 registered readers; as well as Czech texts, the library stores older material from Turkey and India. The library houses books for Charles University in Prague; the library won international recognition in 2005 as it received the inaugural Jikji Prize from UNESCO via the Memory of the World Programme for its efforts in digitising old texts. The project, which commenced in 1992, involved the digitisation of 1,700 documents in its first 13 years; the most precious medieval manuscripts preserved in the National Library are the Codex Vyssegradensis and the Passional of Abbes Kunigunde.
In 2006 the Czech parliament approved funding for the construction of a new library building on Letna plain, between Hradčanská metro station and Sparta Prague's football ground, Letná stadium. In March 2007, following a request for tender, Czech architect Jan Kaplický was selected by a jury to undertake the project, with a projected completion date of 2011. In 2007 the project was delayed following objections regarding its proposed location from government officials including Prague Mayor Pavel Bém and President Václav Klaus. Plans for the building had still not been decided in February 2008, with the matter being referred to the Office for the Protection of Competition in order to determine if the tender had been won fairly. In 2008, Minister of Culture Václav Jehlička announced the end of the project, following a ruling from the European Commission that the tender process had not been carried out legally; the library was affected by the 2002 European floods, with some documents moved to upper levels to avoid the excess water.
Over 4,000 books were removed from the library in July 2011 following flooding in parts of the main building. There was a fire at the library in December 2012. List of national and state libraries Official website | <urn:uuid:741b9215-a51b-4f5c-bd2a-d4309be4cf3d> | CC-MAIN-2019-47 | https://wikivisually.com/wiki/George_Sava | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667945.28/warc/CC-MAIN-20191114030315-20191114054315-00457.warc.gz | en | 0.958686 | 6,148 | 3.25 | 3 |
In multi-user systems, many users may update the same information at the same time. Locking allows only one user to update a particular data block; another person cannot modify the same data.
Oracle provides two different levels of locking: Row Level Lock and Table Level Lock.
With a row-level locking strategy, each row within a table can be locked individually. Locked rows can be updated only by the locking process. All other rows in the table are still available for updating by other processes. Of course, other processes continue to be able to read any row in the table, including the one that is actually being updated. When other processes do read updated rows, they see only the old version of the row prior to update (via a rollback segment) until the changes are actually committed. This is known as a consistent read.
With table-level locking, the entire table is locked as an entity. Once a process has locked a table, only that process can update (or lock) any row in the table. None of the rows in the table are available for updating by any other process. Of course, other processes continue to be able to read any row in the table, including the one that is actually being updated.
Many users believe that they are the only users on the system - at least the only ones who count. Unfortunately, this type of attitude is what causes locking problems. We've often observed applications that were completely stalled because one user decided to go to lunch without having committed his or her changes. Remember that all locking (row or table) will prevent other users from updating information. Every application has a handful of central, core tables. Inadvertently locking such tables can affect many other people in a system.
Modes of Locking
Oracle uses two modes of locking in a multi-user database:
Description of each Lock Mode
Row Share Table Locks (RS)
A row share table lock (also sometimes called a subshare table lock, SS) indicates that the transaction holding the lock on the table has locked rows in the table and intends to update them. A row share table lock is automatically acquired for a table when one of the following SQL statements is executed:Permitted Operations:
A row share table lock held by a transaction allows other transactions to:Prohibited Operations:
A row share table lock held by a transaction prevents other transactions from exclusive write access to the same table.When to Lock with ROW SHARE Mode:
Your transaction needs to prevent another transaction from acquiring an intervening share, share row, or exclusive table lock for a table before the table can be updated in your transaction. If another transaction acquires an intervening share, share row, or exclusive table lock, no other transactions can update the table until the locking transaction commits or rolls back.
We use the EMP table for the next examples.
Row Exclusive Table Locks (RX)
A row exclusive table lock (also called a subexclusive table lock, SX) generally indicates that the transaction holding the lock has made one or more updates to rows in the table. A row exclusive table lock is acquired automatically for a table modified by the following types of statements:Permitted Operations:
When to Lock with ROW EXCLUSIVE Mode:
This is the Default Locking Behaviour of Oracle.
Share Table Locks (S)
A share table lock held by a transaction allows other transactions only toProhibited Operations:
A share table lock held by a transaction prevents other transactions from modifying the same table and from executing the following statements:
When to Lock with SHARE Mode
Exclusive Table Locks (X)
Be careful to use an EXCLUSIVE lock!
Your transaction requires immediate update access to the locked table. When your transaction holds an exclusive table lock, other transactions cannot lock specific rows in the locked table.
update emp set
sal = sal * 1.1
where empno = 7369;
1 row updated.
update emp set
sal = sal * 1.1
where empno = 7934;
ERROR at line 1:
ORA-00060: deadlock detected while waiting for resource
update emp set
mgr = 1342
where empno = 7934;
1 row updated.
update emp set
mgr = 1342
where empno = 7369;
In the example, no problem exists at time point A, as each transaction has a row lock on the row it attempts to update. Each transaction proceeds without being terminated. However, each tries next to update the row currently held by the other transaction. Therefore, a deadlock results at time point B, because neither transaction can obtain the resource it needs to proceed or terminate. It is a deadlock because no matter how long each transaction waits, the conflicting locks are held.
Automatic Deadlock Detection
Oracle performs automatic deadlock detection for enqueue locking deadlocks. Deadlock detection is initiated whenever an enqueue wait times out, if the resource type required is regarded as deadlock sensitive, and if the lock state for the resource has not changed. If any session that is holding a lock on the required resource in an incompatible mode is waiting directly or indirectly for a resource that is held by the current session in an incompatible mode, then a deadlock exists.If a deadlock is detected, the session that was unlucky enough to find it aborts its lock request and rolls back its current statement in order to break the deadlock. Note that this is a rollback of the current statement only, not necessarily the entire transaction. Oracle places an implicit savepoint at the beginning of each statement, called the default savepoint, and it is to this savepoint that the transaction is rolled back in the first case. This is enough to resolve the technical deadlock. However, the interacting sessions may well remain blocked.
ORA-60 error in ALERT.LOG
An ORA-60 error is returned to the session that found the deadlock, and if this exception is not handled, then depending on the rules of the application development tool, the entire transaction is normally rolled back, and a deadlock state dump written to the user dump destination directory. This, of course, resolves the deadlock entirely. The enqueue deadlocks statistic in V$SYSSTAT records the number of times that an enqueue deadlock has been detected.select name, value
where name = 'enqueue deadlocks';
enqueue deadlocks 1
How to avoid Deadlocks
Application developers can eliminate all risk of enqueue deadlocks by ensuring that transactions requiring multiple resources always lock them in the same order. However, in complex applications, this is easier said than done, particularly if an ad hoc query tool is used. To be safe, you should adopt a strict locking order, but you must also handle the ORA-60 exception appropriately. In some cases it may be sufficient to pause for three seconds, and then retry the statement. However, in general, it is safest to roll back the transaction entirely, before pausing and retrying.
Referential Integrity Locks (RI Locks)
With the introduction of automated referential integrity (RI) came a whole new suite of locking problems. What seems at first to be a DBA's blessing can turn out to be an absolute nightmare when the DBA doesn't fully understand the implications of this feature. Why is this so?
RI constraints are validated by the database via a simple SELECT from the dependent (parent) table in question-very simple, very straightforward. If a row is deleted or a primary key is modified within the parent table, all associated child tables need to be scanned to make sure no orphaned records will result. If a row is inserted or the foreign key is modified, the parent table is scanned to ensure that the new foreign key value(s) is valid. If a DELETE CASCADE clause is included, all associated child table records are deleted. Problems begin to arise when we look at how the referential integrity is enforced.
Oracle assumes the existence of an index over every foreign key within a table. This assumption is valid for a primary key constraint or even a unique key constraint but a little presumptuous for every foreign key.
Index or no Index on Foreign Key's ?
If an index exists on the foreign key column of the child table, no DML locks, other than a lock over the rows being modified, are required.
If the index is not created, a share lock is taken out on the child table for the duration of the transaction.
The referential integrity validation could take several minutes or even hours to resolve. The share lock over the child table will allow other users to simultaneously read from the table, while restricting certain types of modification. The share lock over the table can actually block other normal, everyday modification of other rows in that table.
You can use the script: show_missing_fk_index.sql to check unindexed foreign keys:
SQL> start show_missing_fk_index.sql
Please enter Owner Name and Table Name. Wildcards allowed (DEFAULT: %)
eg.: SCOTT, S% OR %
eg.: EMP, E% OR %
Owner <%>: SCOTT
Unindexed Foreign Keys owned by Owner: SCOTT
Table Name 1. Column Constraint Name
------------------------ ------------------------ ---------------
EMP DEPTNO FK_EMP_DEPT
What is so dangerous about a Cascading Delete ?
Oracle allows to enhance a referential integrity definition to included cascading deletion. If a row is deleted from a parent table, all of the associated children will be automatically purged. This behavior obviously will affect an application's locking strategy, again circumnavigating normal object locking, removing control from the programmer.
What is so dangerous about a cascading delete? A deleted child table might, in turn, have its own child tables. Even worse, the child tables could have table-level triggers that begin to fire. What starts out as a simple, single-record delete from a harmless table could turn into an uncontrollable torrent of cascading deletes and stored database triggers.
DELETE CASCADE constraints can be found with the following script:
SQL> SELECT OWNER,
WHERE DELETE_RULE IS NOT NULL;CONSTRAINT_NAME C TABLE_NAME DELETE_RU
------------------------------ - ----------------- ---------
FK_EMP_DEPT R EMP CASCADE
Oracle resolves true enqueue deadlocks so quickly that overall system activity is scarcely affected. However, blocking locks can bring application processing to a standstill. For example, if a long-running transaction takes a shared mode lock on a key application table, then all updates to that table must wait.There are numerous ways of attempting to diagnose blocking lock situations, normally with the intention of killing the offending session.Blocking locks are almost always TX (transaction) locks or TM (table) locks . When a session waits on a TX lock, it is waiting for that transaction to either commit or roll back. The reason for waiting is that the transaction has modified a data block, and the waiting session needs to modify the same part of that block. In such cases, the row wait columns of V$SESSION can be useful in identifying the database object, file, and block numbers concerned, and even the row number in the case of row locks. V$LOCKED_OBJECT can then be used to obtain session information for the sessions holding DML locks on the crucial database object. This is based on the fact that sessions with blocking TX enqueue locks always hold a DML lock as well, unless DML locks have been disabled.It may not be adequate, however, to identify a single blocking session, because it may, in turn, be blocked by another session. To address this requirement, Oracle's UTLLOCKT.SQL script gives a tree-structured report showing the relationship between blocking and waiting sessions. Some DBAs are loath to use this script because it creates a temporary table, which will block if another space management transaction is caught behind the blocking lock. Although this is extremely unlikely, the same information can be obtained from the DBA_WAITERS view if necessary. The DBA_WAITERS view is created by Oracle's catblock.sql script.Some application developers attempt to evade blocking locks by preceding all updates with a SELECT FOR UPDATE NOWAIT or SELECT FOR UPDATE SKIP LOCKED statement. However, if they allow user interaction between taking a sub-exclusive lock in this way and releasing it, then a more subtle blocking lock situation can still occur. If a user goes out to lunch while holding a sub-exclusive lock on a table, then any shared lock request on the whole table will block at the head of the request queue, and all other lock requests will queue behind it.Diagnosing such situations and working out which session to kill is not easy, because the diagnosis depends on the order of the waiters. Most blocking lock detection utilities do not show the request order, and do not consider that a waiter can block other sessions even when it is not actually holding any locks.
Lock Detection Scripts
The following scripts can be used to track and identify blocking locks. The scripts shows the following lock situation.
Session 1 Session 2 select empno
from emp for update of empno;
update emp set ename = 'Müller'
where empno = 7369;
This script shows actual DML-Locks (incl. Table-Name), WAIT = YES means
that users are waiting for a lock.
WAI OSUSER PROCESS LOCKER T_OWNER OBJECT_NAME PROGRAM
--- ------- -------- ------- -------- ------------- --------------
NO zahn 8935 SCOTT - Record(s) sqlplus@akira
YES zahn 8944 SCOTT - Record(s) sqlplus@akira
NO zahn 8935 SCOTT SCOTT EMP sqlplus@akira
NO zahn 8944 SCOTT SCOTT EMP sqlplus@akira
This script show users waiting for a lock, the locker and the SQL-Command they are waiting for a lock, the osuser, schema and PIDs are shown as well.
OS_LOCKER LOCKER_SCHEMA LOCKER_PID OS_WAITER WAITER_SCHEMA WAITER_PID
---------- -------------- ---------- ----------- --------------- ----------
zahn SCOTT 8935 zahn SCOTT 8944
TX: update emp set ename = 'Müller' where empno = 7369
This is the original Oracle script to print out the lock wait-for graph in a tree structured fashion. This script prints the sessions in the system that are waiting for locks, and the locks that they are waiting for. The printout is tree structured. If a sessionid is printed immediately below and to the right of another session, then it is waiting for that session. The session ids printed at the left hand side of the page are the ones that everyone is waiting for (Session 96 is waiting for session 88 to complete):
WAITING_SESSION LOCK_TYPE MODE_REQUESTED MODE_HELD LOCK_ID1 LOCK_ID2 ----------------- ------------ -------------- ---------- --------- -------- 88 None 96 Transaction Exclusive Exclusive 262144 3206The lock information to the right of the session id describes the lock that the session is waiting for (not the lock it is holding). Note that this is a script and not a set of view definitions because connect-by is used in the implementation and therefore a temporary table is created and dropped since you cannot do a join in a connect-by.
This script has two small disadvantages. One, a table is created when this script is run. To create a table a number of locks must be acquired. This might cause the session running the script to get caught in the lock problem it is trying to diagnose. Two, if a session waits on a lock held by more than one session (share lock) then the wait-for graph is no longer a tree and the conenct-by will show the session (and any sessions waiting on it) several times.
For distributed transactions, Oracle is unable to distinguish blocking locks and deadlocks, because not all of the lock information is available locally. To prevent distributed transaction deadlocks, Oracle times out any call in a distributed transaction if it has not received any response within the number of seconds specified by the _DISTRIBUTED_LOCK_TIMEOUT parameter. This timeout defaults to 60 seconds. If a distributed transaction times out, an ORA-2049 error is returned to the controlling session. Robust applications should handle this exception in the same way as local enqueue deadlocks.select name,value
where name = 'distributed_lock_timeout';
ITL Entry Shortages
There is an interested transaction list (ITL) in the variable header of each Oracle data block. When a new block is formatted for a segment, the initial number of entries in the ITL is set by the INITRANS parameter for the segment. Free space permitting, the ITL can grow dynamically if required, up to the limit imposed by the database block size, or the MAXTRANS parameter for the segment, whichever is less.Every transaction that modifies a data block must record its transaction identifier and the rollback segment address for its changes to that block in an ITL entry. (However, for discrete transactions, there is no rollback segment address for the changes.) Oracle searches the ITL for a reusable or free entry. If all the entries in the ITL are occupied by uncommitted transactions, then a new entry will be dynamically created, if possible.If the block does not have enough internal free space (24 bytes) to dynamically create an additional ITL entry, then the transaction must wait for a transaction using one of the existing ITL entries to either commit or roll back. The blocked transaction waits in shared mode on the TX enqueue for one of the existing transactions, chosen pseudo-randomly. The row wait columns in V$SESSION show the object, file, and block numbers of the target block. However, the ROW_WAIT_ROW# column remains unset, indicating that the transaction is not waiting on a row-level lock, but is probably waiting for a free ITL entry.The most common cause of ITL entry shortages is a zero PCTFREE setting. Think twice before setting PCTFREE to zero on a segment that might be subject to multiple concurrent updates to a single block, even though those updates may not increase the total row length. The degree of concurrency that a block can support is dependent on the size of its ITL, and failing that, the amount of internal free space. Do not, however, let this warning scare you into using unnecessarily large INITRANS or PCTFREE settings. Large PCTFREE settings compromise data density and degrade table scan performance, and non-default INITRANS settings are seldom warranted.One case in which a non-default INITRANS setting is warranted is for segments subject to parallel DML. If a child transaction of a PDML transaction encounters an ITL entry shortage, it will check whether the other ITL entries in the block are all occupied by its sibling transactions and, if so, the transaction will roll back with an ORA-12829 error, in order to avoid self-deadlock. The solution in this case is to be content with a lower degree of parallelism, or to rebuild the segment with a higher INITRANS setting. A higher INITRANS value is also needed if multiple serializable transactions may have concurrent interest in any one block.
Check ITL Waits
The following SQL-Statement shows the number of ITL-Waits per table (Interested Transaction List). INITRANS and/or PCTFREE for those tables is to small (could also be that MAXTRANS is too small). Note that STATISTICS_LEVEL must be set to TYPICAL or ALL, MAXTRANS has been desupported in Oracle 10g and now is always 255 (maximum).select name,value
where name = 'statistics_level';NAME VALUE
statistics_level TYPICALTTITLE "ITL-Waits per table (INITRANS to small)"
set pages 1000
col owner format a15 trunc
col object_name format a30 word_wrap
col value format 999,999,999 heading "NBR. ITL WAITS"
object_name||' '||subobject_name object_name,
where statistic_name = 'ITL waits'
and value > 0
order by 3,1,2;
col owner clear
col object_name clear
col value clear
Exclusive Locks lock a resource exclusively, Share Locks can be acquired by more than one Session as long as the other Session holding a Share Lock have no open Transaction. A Share Lock can be "switched" from one Session to another.HAPPY LEARNING!
Application developers can eliminate the risk of deadlocks by ensuring that transactions requiring multiple resources always lock them in the same order.
A DELETE CASCADE can start out as a simple, single-record delete from a harmless table could turn into an uncontrollable torrent of cascading deletes and stored database triggers.
Blocking locks are almost always TX (transaction) locks or TM (table) locks. Oracle always performs locking automatically to ensure data concurrency, data integrity, and statement-level read consistency. Usually, the default locking mechanisms should not be overriden.
For distributed transactions, Oracle is unable to distinguish blocking locks and deadlocks.
The most common cause of ITL entry shortages is a zero PCTFREE setting. | <urn:uuid:8b279d20-0580-4fb2-b789-ccc6c25fc22e> | CC-MAIN-2019-47 | http://oracledbascriptsfromajith.blogspot.com/2011/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667262.54/warc/CC-MAIN-20191113140725-20191113164725-00257.warc.gz | en | 0.862764 | 4,564 | 3.0625 | 3 |
1968: The general strike and student revolt in France
Part 3—How Alain Krivine’s JCR covered for the betrayals of Stalinism (1)
5 July 2008
This is the third in a series of articles dealing with the events of May/June 1968 in France. Part 1, posted May 28, deals with the development of the student revolt and the general strike up to its high point at the end of May. Part 2, posted May 29, examines how the Communist Party (PCF) and the union it controls, the CGT, enabled President Charles de Gaulle to regain control. Parts 3 and 4 examine the role played by the Pabloites; the final part will examine Pierre Lambert’s Organisation Communiste Internationaliste (OCI).
President de Gaulle and his Fifth Republic owed their political survival in May 1968 to the Stalinist French Communist Party (Parti Communiste Français—PCF) and its trade union arm—the General Confederation of Labour (Confédération Générale du Travail—CGT). The influence of the PCF had clearly decreased, however, between 1945 and 1968. In order to strangle the general strike the Stalinists relied on the support of other political forces that struck a more radical stance but ensured that the PCF maintained its political dominance over the mass movement.
In this respect a key role was played by the Pabloite United Secretariat led by Ernest Mandel and its French supporters, the Revolutionary Communist Youth (Jeunesse Communiste Révolutionnaire—JCR) led by Alain Krivine and the International Communist Party (Parti Communiste Internationaliste—PCI) headed by Pierre Frank. They prevented the radicalisation of youth from developing into a serious revolutionary alternative and so helped the Stalinists bring the general strike under control.
At the end of the Second World War the PCF had acquired considerable political authority due to the victory of the Soviet Red Army over Nazi Germany and the French party’s own role in the anti-fascist Résistance movement. The French bourgeoisie in the form of the Vichy regime had discredited itself through its collaboration with the Nazis and there was a powerful yearning within the working class for a socialist society, which extended into the membership of the PCF. However, the leader of the PCF at that time, Maurice Thorez, used his entire political authority to re-establish bourgeois rule. Thorez personally participated in the first post-war government established by de Gaulle and was instrumental in ensuring the disarming of the Résistance.
Support gradually ebbed for the PCF due to its role in restabilising bourgeois society in the post-war period. The party had lent its support to the colonial wars against Vietnam and Algeria and was further discredited following the revelation of Stalin’s crimes in the speech made by Nikita Khrushchev in 1956. This was followed by the bloody suppression of popular uprisings by Stalinist troops in Hungary and Poland. While in 1968 the PCF was still the party with the biggest working class membership it had largely lost its authority among students and youth.
In particular, the Communist Student Federation (Union des Étudiants Communistes—UEC) was in profound crisis. From 1963 onwards various fractions emerged in the UEC—“Italian” (supporters of Gramsci and the Italian Communist Party), “Marxist-Leninist” (supporters of Mao Zedong) and “Trotskyist”—which were then expelled and went on to establish their own organizations. This period marked the origin of the so-called “extreme left,” whose appearance on the political scene marked “the emerging break by an active part of the militant youth with the PCF,” according to the historian Michelle Zancarini-Fournel in her book about the 1968 movement.
The authority of the CGT was also under increasing pressure in 1968. Rival trade unions—such as Force Ouvrière and the CFDT (Confédération Française Démocratique du Travail)— at that time under the influence of the left-reformist Parti Socialiste Unifié (PSU)—struck militant postures and challenged the CGT. The CFDT in particular was able to garner support in the service sector and public services.
Under these circumstances the Pabloites organised in the United Secretariat played a very important role in defending the authority of the Stalinists and making the sell-out of the general strike possible.
The origins of Pabloism
The Pabloite United Secretariat emerged in the early 1950s as the result of a political attack against the program of the Fourth International. The secretary of the FI, Michel Pablo, rejected the entire analysis of Stalinism that had formed the basis for the founding of the Fourth International by Leon Trotsky in 1938.
Following the defeat of the German proletariat in 1933, Trotsky concluded that the extent of the Stalinist degeneration of the Communist International made any policy based on the reform of the International untenable. Proceeding from the political betrayal of the German Communist Party, which had made possible Hitler’s assumption of power, and the subsequent refusal of the Communist International to draw any lessons from the German disaster, Trotsky concluded that the Communist parties had definitively gone over to the side of the bourgeoisie. He insisted that the future of revolutionary struggle depended on the building of a new proletarian leadership, and wrote in the founding program of the Fourth International: “The crisis of the proletarian leadership, having become the crisis in mankind’s culture, can be resolved only by the Fourth International.”
Pablo rejected this view. He concluded from the emergence of new deformed workers’ states in Eastern Europe that Stalinism could play a historically progressive role in the future. Such a perspective amounted to the liquidation of the Fourth International. According to Pablo there was no reason to construct sections of the Fourth International independently of the Stalinist mass organizations. Instead the task of Trotskyists was reduced to entering existing Stalinist parties and supporting the presumed leftist elements in their leaderships.
Pablo ended up rejecting the entire Marxist conception of a proletarian party that insists on the necessity of a politically and theoretically conscious avant-garde. For Pablo the role of leadership could be allocated to non-Marxist and non-proletarian forces such as trade unionists, left reformists, petty bourgeois nationalists and national liberation movements in the colonial and former colonial countries, which would be driven to the left under the pressure of objective forces. Pablo personally put himself at the service of the Algerian National Liberation Front, the FLN (Front de Libération Nationale), and following its victory even joined the Algerian government for a period of three years.
Pablo’s onslaught split the Fourth International. The majority of the French section rejected his revisions and was bureaucratically expelled by a minority led by Pierre Frank. In 1953 the American Socialist Workers Party responded to the Pabloite revisions with a devastating critique and issued an Open Letter calling for the international unification of all orthodox Trotskyists. This became the basis for the International Committee of the Fourth International (ICFI), which included the French majority.
However, the SWP did not maintain its opposition to Pabloism for long. During the next 10 years the SWP increasingly dropped its differences with the Pabloites and eventually joined them to form the United Secretariat (US) in 1963. In the meantime the leadership of the US had been taken over by Ernest Mandel. Pablo played an increasingly secondary role and left the United Secretariat soon afterwards. The basis for the reunification in 1963 was uncritical support for Fidel Castro and his petty bourgeois nationalist “26th of July Movement.” According to the United Secretariat the seizure of power by Castro in Cuba amounted to the setting up of a workers’ state, with Castro, Ernesto “Che” Guevara and other Cuban leaders playing the role of “natural Marxists.”
This perspective served not only to disarm the working class in Cuba, which never had its own organs of power; it also disarmed the international working class by lending uncritical support to Stalinist and petty bourgeois nationalist organizations and strengthening their grip on the masses. In so doing, Pabloism emerged as a secondary agency of imperialism, whose role became even more important under conditions where the older bureaucratic apparatuses were increasingly discredited in the eyes of the working class and the youth.
This was confirmed in Sri Lanka just one year after the unification of the SWP and the Pabloites. In 1964 a Trotskyist party with mass influence, the Lanka Sama Samaja Party (LSSP)—joined a bourgeois coalition government with the nationalist Sri Lankan Freedom Party. The price paid by the LSSP for its entry into government was to abandon the country’s Tamil minority in favour of Sinhala chauvinism. The country is still suffering the consequences of this betrayal, which reinforced the discrimination of the Tamil minority and led to the bloody civil war that has plagued Sri Lanka for three decades.
The Pabloites also played a crucial role in France in helping maintain bourgeois rule in 1968. When one examines their role during the key events, two things are striking: their apologetic stance with regard to Stalinism and their uncritical adaptation to the anti-Marxist theories of the “New Left,” which predominated in the student environment.
Alain Krivine and the JCR
The Fourth International had considerable influence in France at the end of the Second World War. In 1944 the French Trotskyist movement, which had split during the war, reunited to form the Parti Communiste Internationaliste (PCI). Two years later PCI had around 1,000 members and put up 11 candidates in parliamentary elections, who received between 2 and 5 percent of the vote. The organisation’s newspaper La Vérité was sold at kiosks and enjoyed a broad readership. Its influence extended into other organizations; the entire leadership of the socialist youth organization, with a total membership of 20,000, supported the Trotskyists. Members of the PCI played a prominent role in the strike movement which rocked the country and forced the PCF to withdraw from the government in 1947.
In subsequent years, however, the revolutionary orientation of the PCI came under repeated attack from elements inside its own ranks. In 1947 the social-democratic SFIO (Section Française de l’Internationale Ouvrière) moved sharply to the right, dissolved its youth organization and expelled its Trotskyist leader. The right wing of the PCI, led by its secretary at the time, Yvan Craipeau, reacted by junking any revolutionary perspective. One year later this wing was expelled after it had argued in favour of dissolving the PCI into the broad left movement led by the French philosopher Jean-Paul Sartre (Rassemblement Démocratique Révolutionnaire—RDR). Many of the leading figures in the expelled wing, including Craipeau himself, re-emerged later in the PSU.
In the same year, 1948, another group—Socialisme ou barbarie (Socialism or Barbarism), headed by Cornelius Castoriadis and Claude Lefort—quit the PCI. This group reacted to the start of the Cold War by rejecting Trotsky’s analysis of the Soviet Union as a degenerated workers’ state, arguing that the Stalinist regime represented a new class within a system of “bureaucratic capitalism.” Based on this standpoint the group developed a number of positions hostile to Marxism. The writings of Socialisme ou barbarie were to have considerable influence on the student movement and one its members, Jean François Lyotard, later played a leading role in developing the ideology associated with postmodernism.
The biggest blow to the Trotskyist movement in France, however, was delivered by Pabloism. The PCI was both politically and organizationally weakened by the liquidationist policy of Michel Pablo and the subsequent expulsion of a majority of the section by the Pabloite minority. The PCI majority led by Pierre Lambert will be dealt with in the final part of this series. The Pabloite minority led by Pierre Frank concentrated after the split on providing practical and logistical support for the national liberation movement, the FLN, in the Algerian war. During the 1960s it had largely lost any influence inside the factories. It did have support in student circles, however, and played an important role amongst such layers in 1968. Its leading member, Alain Krivine, was one of the best known faces of the student revolt alongside figures such as the anarchist Daniel Cohn-Bendit and the Maoist Alain Geismar.
Krivine had joined the Stalinist youth movement in 1955 at the age of 14 and in 1957 was part of an official delegation attending a youth festival in Moscow. According to his autobiography, it was there that he met members of the Algerian FLN and developed a critical attitude towards the policies of the Communist Party with regard to Algeria. One year later he began to collaborate with the Pabloite PCI on the Algerian question. Krivine claims he was initially unaware of the background of the PCI, but this is highly unlikely since two of his brothers belonged to the leadership of the organisation. In any event, he joined the PCI at the latest in 1961, while continuing to officially work inside the Stalinist student organization, the UEC (Union des étudiants communistes).
Krivine quickly rose inside the leadership of the PCI and the United Secretariat. From 1965 the 24-year-old Krivine belonged to the top leadership of the party, the Political Bureau, alongside Pierre Frank and Michel Lequenne. In the same year he was appointed to the executive committee of the United Secretariat as a substitute for Lequenne.
In 1966 Krivine’s section of the UEC at the University of Paris (La Sorbonne) was expelled by the Stalinist leadership for refusing to support the joint presidential candidate of the left, François Mitterrand. Together with other rebel UEC sections he went on to establish the JCR (Jeunesse Communiste Révolutionnaire), which consisted almost exclusively of students and, unlike the PCI, did not expressly commit itself to Trotskyism. In April 1969 the JCR and PCI then officially merged to form the Ligue Communiste (from 1974, Ligue Communiste Révolutionnaire—LCR) after the French interior minister had banned both organisations a year previously.
In retrospect, Krivine has sought to present the JCR in 1968 as a young and largely naïve organization characterised by heady enthusiasm but little political experience: “We were an organization of some hundred members, whose average age barely corresponded to the legal age of adulthood at that time: twenty-one years. It is hardly necessary to note that driven by the next most important task from one meeting and demonstration to another we had no time to think things through. In view of our modest forces we felt at home in the universities, on strike, and on the streets. The solution of the problem of government took place at another level over which we had barely any influence.”
In fact, such claims simply do not stand up. Aged 27 in 1968, Alain Krivine was still relatively young but had already acquired considerable political experience. He had inside knowledge of Stalinist organizations and as a member of the United Secretariat was entirely familiar with the international conflicts within the Trotskyist movement. At this time he had already left university, but then returned in order to lead the activities of the JCR.
The political activity of the JCR in May-June 1968 cannot be put down to juvenile inexperience but was instead guided by the political line developed by Pabloism in the struggle against orthodox Trotskyism. Fifteen years after its break with the Fourth International the United Secretariat had changed not only its political but also its social orientation. It was no longer a proletarian current, but instead a petty bourgeois movement. For one-and-a-half decades the Pabloites had sought the favours of careerists in the Stalinist and reformist apparatuses and wooed national movements. The social orientation of such movements had become second nature for the Pabloites themselves. What had begun as a theoretical revision of Marxism had become an organic part of their political physiognomy—insofar as it is permissible to transfer terms from the realm of physiology to politics.
In drawing the lessons from the defeat of the European revolutions of 1848 Marx distinguished the perspective of the petty bourgeois from that of the working class as follows: “The democratic petty bourgeois, far from wanting to transform the whole society in the interests of the revolutionary proletarians, only aspire to a change in social conditions which will make the existing society as tolerable and comfortable for themselves as possible.” This characterisation applied equally in 1968 to the Pabloites. This was clear from their uncritical attitude towards anarchist and other petty bourgeois movements, movements which had been uncompromisingly fought at an earlier date by Marx and Engels. It was also evident in the significance they attached at that time and continue to attach today to such issues as race, gender and sexual orientation; and in their enthusiasm for the leaders of nationalist movements, which despise the working class and—as was the case with the Russian Populists fought by Lenin—orient themselves towards layers of the rural middle class.
“More Guevarist than Trotskyist”
Above all, Krivine’s JCR was characterised by its completely uncritical support for the Cuban leadership—the issue which lay at the heart of the unification which took place in 1963. The author of a history of the LCR, Jean-Paul Salles, refers to “the identity of an organization, which prior to May 68 appeared in many respects more Guevarist than Trotskyist.”
On October 19, 1967, 10 days after his murder in Bolivia, the JCR organised a commemoration meeting for Che Guevara in the Paris Mutualité. Guevara’s portrait was pervasive at JCR meetings. In his autobiography of 2006 Alain Krivine writes: “Our most important point of reference with regard to the liberation struggles in the countries of the third world was undoubtedly the Cuban revolution, which led us to being called ‘Trotsko-Guevarists’ ... In particular Che Guevara embodied the ideal of the revolutionary fighter in our eyes.”
With its glorification of Che Guevara the LCR evaded the urgent problems bound up with the building of a leadership in the working class. If there is a single common denominator to be found in the eventful life of the Argentine-Cuban revolutionary, it is his unwavering hostility to the political independence of the working class. Instead, he represented the standpoint that a small armed minority—a guerrilla troop operating in rural areas—could lead the path to socialist revolution, independently of the working class. This required neither a theory nor a political perspective. The action and the will of a small group were crucial. The ability of the working class and the oppressed masses to attain political consciousness and lead their own liberation struggle was denied.
In January 1968 the JCR newspaper Avant-Garde Jeunesse propagated Guevara’s conceptions as follows: “Irrespective of the current circumstances the guerrillas are called upon to develop themselves until, after a shorter or longer period, they are able to draw in the whole mass of the exploited into a frontal struggle against the regime.”
However, the guerrilla strategy pursued by Guevara in Latin America could not so easily be transferred to France. Instead Mandel, Frank and Krivine ascribed the role of the avant-garde to the students. They glorified the spontaneous activities of students and their street battles with the police. Guevara’s conceptions served to justify blind activism at the expense of any serious political orientation. In doing so, the Pabloites completely adapted to the anti-Marxist theories of the New Left, which played a leading role amongst students, thereby blocking the path to a genuine Marxist orientation.
There were hardly any recognizable political differences between the “Trotskyist” Alain Krivine, the anarchist Daniel Cohn-Bendit, the Maoist Alain Geismar and other student leaders who were prominent in the events of 1968. They showed up side by side in the street battles that took place in the Latin Quarter. Jean-Paul Salles writes: “During the week of May 6-11 members of the JCR stood at the forefront and took part in all the demonstrations alongside Cohn-Bendit and the anarchists—including the night of the barricades.” On May 9, the JCR held a meeting prepared long before in the Mutualité, in the Latin Quarter, scene of the fiercest street battles at that time. Over 3,000 attended the meeting and one of the main speakers was Daniel Cohn-Bendit.
During the same period in Latin America the United Secretariat unconditionally supported Che Guevara’s guerrilla perspective. At its 9th World Congress held in May 1969 in Italy, the US instructed its South American sections to follow Che Guevara’s example and unite with his supporters. This meant turning their back on the urban-based working class in favour of an armed guerrilla struggle aimed at carrying the fight from the countryside to the cities. The majority of delegates at the congress supporting this strategy included Ernest Mandel and the French delegates, Pierre Frank and Alain Krivine. They held firmly to this strategy for no less than 10 years, although the perspective of guerrilla-type struggle was a source of dispute inside the United Secretariat as its catastrophic consequences became increasingly visible. Thousands of young people who had followed this path and taken up the path of guerrilla struggle senselessly sacrificed their lives, while the actions of the guerrillas—kidnappings, hostage taking and violent clashes with the army—only served to politically disorientate the working class.
The students as “revolutionary avant-garde”
The utterly uncritically stance taken by the Pabloites to the role played by students is evident from a long article over the May events written by Pierre Frank at the beginning of June 1968, shortly before the prohibition of the JCR.
“The revolutionary vanguard in May is generally conceded to have been the youth,” Frank wrote, and added: “The vanguard, which was politically heterogeneous and within which only minorities were organized, had overall a high political level. It recognized that the movement’s object was the overthrow of capitalism and the establishment of a society building socialism. It recognized that the policy of ‘peaceful and parliamentary roads to socialism’ and of ‘peaceful coexistence’ was a betrayal of socialism. It rejected all petty bourgeois nationalism and expressed its internationalism in the most striking fashion. It had a strongly anti-bureaucratic consciousness and a ferocious determination to assure democracy in its ranks.”
Frank even went so far as to describe the Sorbonne as the “most developed form of ‘dual power’” and “the first free territory of the Socialist Republic of France.” He continued: “The ideology inspiring the students of opposition to the neo-capitalist consumer society, the methods they used in their struggle, the place they occupy and will occupy in society (which will make the majority of them white-collar employees of the state or the capitalists) gave this struggle an eminently socialist, revolutionary, and internationalist character.” The struggle by students demonstrated “a very high political level in a revolutionary Marxist sense.”
In reality there was no trace of revolutionary consciousness in the Marxist sense on the part of the students. The political conceptions that prevailed amongst students had their origin in the theoretical arsenal of the so-called “New Left” and had been developed over many years in opposition to Marxism.
The historian Ingrid Gilcher-Holtey writes on the ’68 movement in France: “The student groups driving the process forward are groups, which explicitly base themselves on the intellectual mentors of the New Left or were influenced by their themes and critique, in particular the writings of the ‘Situationist International,’ the group around ‘Socialisme ou barbarie’ and ‘Arguments.’ Both their strategy of action (direct and provocative), and their own self conception (anti-dogmatic, anti-bureaucratic, anti-organizational, anti-authoritarian) fit into the system of coordinates of the New Left.”
Rather than regarding the working class as a revolutionary class, the New Left saw workers as a backward mass fully integrated into bourgeois society via consumption and the media. In place of capitalist exploitation the New Left emphasised the role of alienation in its social analysis—interpreting alienation in a strictly psychological or existentialist sense. The “revolution” was to be led not by the working class, but rather by the intelligentsia and groups on the fringe of society. For the New Left, the driving forces were not the class contradictions of capitalist society, but “critical thinking” and the activities of an enlightened elite. The aim of the revolution was no longer the transformation of the relations of power and ownership but social and cultural changes such as alterations to sexual relations. According to the representatives of the New Left such cultural changes were a prerequisite for a social revolution.
Two of the best-known student leaders in France and Germany, Daniel Cohn-Bendit and Rudi Dutschke, were both influenced by the “Situationist International,” which propagated a change of consciousness by means of provocative actions. Originally formed as a group of artists with roots in the traditions of Dada and Surrealism, the Situationists stressed the significance of practical activities. As a recent article on the Situationists puts it: “Activist disruption, radicalisation, the misuse, revaluation and playful reproduction of concrete everyday situations are the means to elevate and permanently revolutionize the consciousness of those in the omnipotent grip of the deep sleep arising from all-pervasive boredom.”
Such standpoints are light-years removed from Marxism. They deny the revolutionary role of the working class, which is rooted in its position in a society characterised by insurmountable class conflicts. The driving force of the revolution is the class struggle, which is objectively based. Consequently the task of Marxist revolutionaries is not to electrify the working class with provocative activities but rather to elevate its political consciousness and provide a revolutionary leadership capable of enabling it to take up responsibility for its own fate.
Not only did the Pabloites declare that the anarchist, Maoist and other petty bourgeois groups that played the leading role in the Latin Quarter demonstrated “a very high political level in a revolutionary Marxist sense” (Pierre Frank), they put forward similar political points of view and took part in their adventurous activities with enthusiasm.
The anarchist-inspired street battles in the Latin Quarter contributed nothing to the political education of workers and students and never posed a serious threat to the French state. In 1968 the state had a modern police apparatus and an army that had been forged in the course of two colonial wars, and could rely on the support of NATO. It could not be toppled by the sort of revolutionary tactics used in the 19th century—i.e., the building of barricades in the streets of the capital city. Even though the security forces were in the main responsible for the huge levels of violence that characterised the street battles in the Latin Quarter, there was an undoubted element of infantile revolutionary romanticism in the way in which the students eagerly assembled barricades and played their game of cat and mouse with the police.
To be continued
1. Michelle Zancarini-Fournel, “1962-1968: Le champ des possibles” in 68: Une histoire collective, Paris: 2008
2. Daniel Bensaid, Alain Krivine, Mai si! 1968-1988: Rebelles et repentis, Montreuil: 1988, p. 39
3. Karl Marx and Friedrich Engels, “Speech to the Central Authority of the Communist League”
4. Jean-Paul Salles, La Ligue communiste révolutionnaire , Rennes: 2005, p. 49
5. Alain Krivine, Ça te passera avec l’âge, Flammarion: 2006, pp. 93-94
6. Jean-Paul Salles, ibid., p. 52
7. Pierre Frank, “Mai 68: première phase de la révolution socialiste française”
8. Pierre Frank, ibid.
9. Ingrid Gilcher-Holtey, “Mai 68 in Frankreich” in 1968: Vom Ereignis zum Mythos, Frankfurt am Main: 2008, p. 25
10. archplus 183, Zeitschrift für Architektur und Städtebau, May 2007 | <urn:uuid:ea7ee674-e67b-4e73-be38-70063bd08f55> | CC-MAIN-2019-47 | https://www8.wsws.org/en/articles/2008/07/fra3-j05.html | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670635.48/warc/CC-MAIN-20191120213017-20191121001017-00538.warc.gz | en | 0.958047 | 6,089 | 2.703125 | 3 |
- Research article
- Open Access
- Open Peer Review
Hospital-acquired fever in oriental medical hospitals
BMC Health Services Research volume 18, Article number: 88 (2018)
Traditional Oriental medicine is used in many Asian countries and involves herbal medicines, acupuncture, moxibustion, and cupping. We investigated the incidence and causes of hospital-acquired fever (HAF) and the characteristics of febrile inpatients in Oriental medical hospitals (OMHs).
Patients hospitalized in two OMHs of a university medical institute in Seoul, Korea, were retrospectively reviewed from 2006 to 2013. Adult patients with HAF were enrolled.
There were 560 cases of HAF (5.0%). Infection, non-infection, and unknown cause were noted in 331 cases (59.1%), 109 cases (19.5%), and 120 cases (21.4%) of HAF, respectively. Respiratory tract infection was the most common cause (51.2%) of infectious fever, followed by urinary tract infection. Drug fever due to herbal medicine was the most common cause of non-infectious fever (53.1%), followed by procedure-related fever caused by oriental medical procedures. The infection group had higher white blood cell count (WBC) (10,400/mm3 vs. 7000/mm3, p < 0.001) and more frequent history of antibiotic therapy (29.6% vs. 15.1%, p < 0.001). Multivariate analysis showed that older age (odds ratio (OR) 1.67, 95% confidence interval (C.I.) 1.08–2.56, p = 0.020), history of antibiotic therapy (OR 3.17, C.I. 1.85–5.41, p < 0.001), and WBC > 10,000/mm3 (OR 2.22, C.I. 1.85–3.32, p < 0.001) were associated with infection.
Compared to previous studies on HAF in Western medicine, the incidence of HAF in OMHs was not high. However, Oriental medical treatment does play some role in HAF. Fever in patients with history of antibiotic therapy, or high WBC was more likely of infectious origin.
Fever is a common clinical event in hospitalized patients. Although fever is frequently suspected and proven to be related to infections, diverse etiologies may account for fever in hospitalized patients.
Hospital-acquired febrile illness is defined as a fever occurring at least 48 h after hospital admission . The prevalence of hospital-acquired febrile illness has been estimated at 2% to 31% for medical inpatients . There have been studies on the etiologies of fever in elderly patients, solid organ transplant recipients, cancer patients, and neutropenic hosts [3,4,5,6,7]. Fever was attributed to infection in 37% to 74% of patients, whereas a non-infectious etiology was identified in 3% to 52% of patients . The most common infectious causes included urinary tract infection, pneumonia, sinusitis, and bloodstream infection . The most common non-infectious causes were procedure-related, malignancies, and ischemic conditions .
Oriental traditional medicine has been practiced for a very long time in Korea, China, Japan and throughout Southeast Asia, including Vietnam, Thailand, and Tibet. It is also currently practiced as a part of alternative medicine in Western countries. Traditional Oriental medicine is part of medical practice in Korea, and there are many hospitals in which traditional Oriental medical doctors practice traditional medicine for hospitalized patients. In Korea, patients with cerebrovascular accident and elderly patients tend to seek oriental medical care more often than other patients .
Our medical centers are composed of a medical hospital, oriental medical hospital (OMH), and dental hospital. Many patients are hospitalized for traditional medical treatment, and some patients develop fever. When patients in the OMH develop fever, many are referred to the medical hospital for evaluation and treatment of fever. There has been no data on the incidence or etiology of fever in these patients. As far as we know, this is the first study on hospital-acquired febrile illness in OMHs.
This study was designed to identify the characteristics of febrile inpatients, causes of fever, and clinical outcomes of fever in OMHs and to identify the risk factors of febrile illness of infectious cause in these patients.
This retrospective study was performed at two OMHs of a university medical institute in Seoul, Korea. The medical institute consists of two medical hospitals, two OMHs, and two dental hospitals. The study protocol was approved by institutional review board (IRB) of Kyung Hee University Hospital at Gangdong (IRB No. 2016–03-008). Informed consent from the patients was waived by IRB.
Patients hospitalized in the OMHs from June 2006 to June 2013 were retrospectively reviewed via electronic medical records by two infectious diseases specialists. Patients age 18 years or older were screened. Adult patients with axillary body temperature ≥ 37.8 °C after 48 h of hospitalization were enrolled. If a patient was transferred from another OMH where he or she was admitted for more than 48 h and where fever started within 48 h of hospitalization, these fevers were also considered hospital-acquired. If a patient was transferred from a medical hospital or long-term care facility and fever had started within 48 h of hospitalization to the OMH, he or she was excluded.
Demographic characteristics, clinical features, laboratory data, and treatment history were collected by review of medical records. Infection was defined using criteria proposed by the Centers for Disease Control and Prevention .
Fever was considered procedure-related if there was a transient temperature elevation during the 48 h period following an invasive procedure and no evidence of infection elsewhere . Drug fever was defined by diagnostic criteria adapted from those described by Young et al. Patients with fever accompanied by drug rash were included if the fever resolved after discontinuing the offending drug. If patients with acute intra-cranial hemorrhage (ICH) or acute cerebral infarction developed fever with no other certain fever focus, the fever was considered related to the ICH or cerebral infarction. If patients had massive gastro-intestinal bleeding and no other obvious fever focus, the fever was considered related to gastrointestinal bleeding. If a fever was observed after transfusion of blood products such as packed RBC or platelet concentrates and no other cause was found, the fever was considered related to the transfusion. If a fever was observed in a severely dehydrated patient and it disappeared after adequate hydration, the fever was considered to be caused by dehydration. When there was unexplained fever in advanced cancer patients or patients with hematologic malignancy, the fever was considered a cancer fever.
If the etiology was uncertain or data was insufficient to determine a cause, the fever was defined as unknown. Defervescence was defined as peak body temperature below 37.3 °C for more than 2 days.
McCabe classification was used to evaluate the severity of underlying illnesses .
SPSS for Windows version 11.0 (SPSS Inc., Chicago, IL, USA) was used for statistical analysis. Student’s t-test and Chi-square test were used for univariate analysis, and a logistic regression test was used for multivariate analysis. Factors with statistical significance in univariate analysis were chosen for multivariate analysis. P values < 0.05 were considered significant.
A total of 11,207 adult patients were hospitalized in the OMHs during the study period. Among those 11,207 patients, 560 cases with hospital-acquired febrile events were identified (5.0%).
Infection was identified as the cause of fever in 331 cases (59.1%). In 179 cases (32.0%), non-infectious causes of fever were identified (Table 1). There were 50 cases (8.9%) of fever with unknown etiology. Among patients with fever caused by infections, respiratory tract infection was the most common cause (166 cases, 51.2%), followed by urinary tract infection (99 cases, 30.6%) and intra-abdominal infection (39 cases, 12.1%) (Table 1). Among patients with respiratory infection, aspiration pneumonia was the most common cause. Among patients with non-infectious fever, drug fever was the most common cause (101 cases, 56.4%), followed by procedure-related fever (24 cases, 13.4%) and cancer fever (23 cases, 12.8%). Among 101 patients with drug fever, herbal medicine was the most common cause (95 cases, 94.1%), while antibiotics or bisphosphonate were the cause of drug fever in 6 patients.
Among 11,207 adult patients hospitalized in OMHs, 10,880 were treated with herbal medicine, and 8125 underwent oriental procedures. There were 8040 patients who received acupuncture, 6585 patients who received moxibustion, and 4267 patients who received cupping. Ninety-five patients developed fever due to herbal medicine, 8 patients developed fever due to acupuncture, 7 developed fever patients due to moxibustion, and 1 patient developed fever due to cupping (Table 2). Overall incidence of procedure-related fever caused by invasive oriental procedures is 2.9% (16 episodes), while incidence of procedure-related fever caused by western medical procedures is 1.4% (8 episodes).
Table 3 shows the patient characteristics of febrile patients in the OMHs. The mean age of these patients was 61.4 ± 15.2 years, and 49.6% of the patients were male. Patients with fever of infectious origin (infection group) were older than patients with fever of non-infectious origin (non-infection group) (63.7 ± 15.3 years vs. 58.0 ± 14.7 years, p < 0.001). More patients in the infection group had history of CVA (54.1% vs. 35.2%, p < 0.001). There were more patients with malignancies in the non-infection group (55.9% vs. 32.9%, p < 0.001). There was no statistical difference in the frequency of previous admission history between the two groups. Among 560 patients with fever, 506 (90.4%) were treated with herbal medicine. Invasive oriental procedures, such as acupuncture, moxibustion, and cupping, were performed in 532 patients (95.0%). Patients in the infection group had more frequent history of antibiotic treatment before fever onset (29.6% vs. 15.1%). There was no significant difference in peak body temperature or fever duration between the infection group and non-infection group. WBC count was higher in the infection group (median 10,400/mm3 vs. median 7000/mm3, p < 0.001); however, CRP level was not different between the two groups (Table 3).
More patients with infectious fever received consultations for evaluation and management of fever (72.8% vs. 26.3%, p < 0.001). Antibiotics were prescribed in 302 cases (53.9%). More patients in the infection group were treated with antibiotics (73.7% vs. 25.1%, p < 0.001), and the duration of antibiotic therapy was also longer in the infection group (median 9 days vs. median 5.5 days, p = 0.002). Overall 30-day mortality rate was 9.6% (Table 4).
Table 5 shows multivariate analysis for risk factors associated with infectious origin in febrile illnesses. Patients age over 65 years developed fever of infectious origin 1.666 times more often (95% confidence interval (C.I.); 1.082–2.564, p = 0.020) than younger patients. When patients had a history of antibiotic use, fever of infectious origin was 3.166 times more likely (95% C.I.; 1.852–5.413, p < 0.001). Fever in the patients with WBC higher than 10,000/mm3 were 2.223 times more likely to be caused by infection (95% C.I.; 1.486–3.324, p < 0.001). In patients with cancer or history of chemotherapy, the chance of infectious fever decreased 0.263 times (95% C.I.; 0.116–0.594, p = 0.001) and 0.507 times (95% C.I.; 0.289–0.0887, p = 0.017), respectively.
Oriental traditional medicine in East Asia, including Korea, China, and Japan, uses acupuncture, moxibustion, cupping, herbal medicines, and manual therapies . In Korea, traditional Oriental medicine is called Hanbang and is an inseparable component of Korean culture and Korean medical services . Traditional Oriental medicine is included in the national health care system in Korea. National policies have been developed based on its historical and cultural background in a way that differs from its use as a complementary and alternative medicine in Western society . Korea has the highest percentage (15.3%) of traditional Oriental medical doctors in hospitals and clinics in East Asia, followed by mainland China (12.6%) and the Taiwan region (9.7%) Korean patients with cerebrovascular accidents (ICH or cerebral infarction), advanced cancer, facial palsy, and old age tend to seek traditional Oriental medicine more often than patients with other acute illnesses . Because traditional Oriental medicine uses different treatment modalities than Western medicine, the etiology of fever might be different. As far as we know, this is the first study on the incidence and etiologies of hospital-acquired fever in patients hospitalized at OMHs.
In this study, incidence of HAF was 5.2%, lower than that of other studies [2, 15]. Infection accounted for 59.1% of HAF among patients hospitalized in OMHs. In Arbo’s study of HAF, infection accounted for 56.0% of fever, which is similar to our results . Chang’s study on liver transplant recipients showed a higher incidence of infection (78.0%) . In cancer patients, infection, non-infectious causes, and fever of unknown origin represented 67.0%, 23.0%, and 10.0% of cases, respectively . The respiratory tract was the most frequently involved site in cancer patients, similar to our results.
In OMHs, more patients with solid cancer (55.9% vs. 32.9%, p < 0.001) and history of anti-cancer chemotherapy (31.3% vs. 12.1%, p < 0.001) have non-infectious fever. In these patients, moxibustion was more commonly used, and herbal medication was less commonly prescribed without statistical significance. Moxibustion was more commonly used in the non-infection group, while herbal medicine was more frequently prescribed to patients in the infection group. There were more patients with cancer in the non-infection group than in the infection group. Moxibustion was more frequently used in the non-infection group for pain relief in cancer patients. Some studies have shown that moxibustion is effective in reducing pain associated with osteoarthritis and herpes zoster . Herbal medicine is a main treatment modality in traditional Oriental medicine and is prescribed to most patients. Unlike Western medicine, Oriental herbal medicine can only be administered by oral route. Some patients with advanced cancer were unable to orally take medication, so herbal medicine could not be prescribed.
Drug fever accounted for 56.4% of non-infectious fever in our study. In Cunha’s studies on fever in a neurosurgical intensive care unit and intensive care unit, drug fever occurred in approximately 10.0% of patients [17, 18]. In Toussaint’s study on fever in cancer patients, drug fever accounted for 18.0% of fever not attributed to infection (non-infectious fever and fever of unknown origin together), which is lower than our study result . In Oisumi’s study on drug fever caused by antibiotics, drug fever was recognized in 13.1% of 390 patients receiving parenteral antibiotic therapy for pulmonary infections. Drugs have been estimated to cause 10.0–15.0% of adverse events and 3.0–5.0% of drug fever in hospitalized patients in the US .
Procedure-related fever in Western medicine was about 1.5% to 5.9%, [2, 7] while incidence of procedure-related fever in our OMH was 4.2%, which was not different from other studies. Among the procedure-related fever in our OMH, fever related to invasive oriental procedure was higher than that related to western medical procedures (2.9% vs. 1.4%).
Old age, history of antibiotic use, and high WBC were associated with infection. Peak body temperature was not associated with infection in this study. In Trivalle’s study on hospital-acquired febrile illness in the elderly, the number of invasive procedures preceding a febrile episode was a significant predictor of infection . Other studies have shown that higher maximum temperature or higher peak WBC was associated with an infectious etiology for fever .
This study has some limitations. First, it was a retrospective study performed in OHMs. Culture and imaging studies are not performed in every patient with HAF. Therefore, some patients with infection may not have been identified and classified with unknown fever. Second, this study was performed at two OMHs of a university teaching hospital. In our hospitals, patients with advanced cancer or cerebrovascular attack are common. Patient characteristics and severity of underlying illness may be different from those of other OMHs.
In this study, incidence of HAF was not higher in OMHs, [2, 15] and infection was the most common cause of HAF. Fever in patients with history of antibiotic treatment and high was more likely of infectious origin.
Herbal medicine and invasive oriental medical procedures do play some role in HAF as herbal medicine was the most common cause of drug fever and invasive oriental medical procedures caused procedure related fever more frequently than Western medical procedures in OMHs.
To identify the etiology of HAF in OMHs and the risk factors of fever of infectious origin, further multicenter study is suggested.
Institutional review board
Oriental medical hospital
Red blood cell
White blood cell count
Arbo MJ, Fine MJ, Hanusa BH, Sefcik T, Kapoor WN. Fever of nosocomial origin: etiology, risk factors, and outcomes. Am J Med. 1993;95(5):505–12.
Trivalle C, Chassagne P, Bouaniche M, Landrin I, Marie I, Kadri N, Menard JF, Lemeland JF, Doucet J, Bercoff E. Nosocomial febrile illness in the elderly: frequency, causes, and risk factors. Arch Intern Med. 1998;158(14):1560–5.
Chang FY, Singh N, Gayowski T, Wagener MM, Marino IR. Fever in liver transplant recipients: changing spectrum of etiologic agents. Clin Infect Dis. 1998;26(1):59–65.
Chanock SJ, Pizzo PA. Fever in the neutropenic host. Infect Dis Clin N Am. 1996;10(4):777–96.
Donowitz GR. Fever in the compromised host. Infect Dis Clin N Am. 1996;10(1):129–48.
Norman DC. Fever in the elderly. Clin Infect Dis. 2000;31(1):148–51.
Toussaint E, Bahel-Ball E, Vekemans M, Georgala A, Al-Hakak L, Paesmans M, Aoun M. Causes of fever in cancer patients (prospective study over 477 episodes). Support Care Cancer. 2006;14(7):763–9.
Kaul DR, Flanders SA, Beck JM, Saint S. Brief report: incidence, etiology, risk factors, and outcome of hospital-acquired fever: a systematic, evidence-based review. J Gen Intern Med. 2006;21(11):1184–7.
Kumar H, Song SY, More SV, Kang SM, Kim BW, Kim IS, Choi DK. Traditional Korean east Asian medicines and herbal formulations for cognitive impairment. Molecules (Basel, Switzerland). 2013;18(12):14670–93.
Garner JS, Jarvis WR, Emori TG, Horan TC, Hughes JM. CDC definitions for nosocomial infections, 1988. Am J Infect Control. 1988;16(3):128–40.
He XR, Wang Q, Li PP. Acupuncture and moxibustion for cancer-related fatigue: a systematic review and meta-analysis. Asian Pac J Cancer Prev. 2013;14(5):3067–74.
McCabe WR, Jackson G. Gram-negative bacteremia: I. Etiology and ecology. Arch Intern Med. 1962;110(6):847–55.
Cheung F. TCM: made in China. Nature. 2011;480(7378):S82–3.
Park HL, Lee HS, Shin BC, Liu JP, Shang Q, Yamashita H, Lim B. Traditional medicine in china, Korea, and Japan: a brief introduction and comparison. Evidence-based complementary and alternative medicine : eCAM. 2012;2012:429103.
Rostami M, Mirmohammadsadeghi M, Zohrenia H. Evaluating the frequency of postoperative fever in patients with coronary artery bypass surgey. ARYA atherosclerosis. 2011;7(3):119–23.
Lee MS, Choi TY, Kang JW, Lee BJ, Ernst E. Moxibustion for treating pain: a systematic review. Am J Chin Med. 2010;38(5):829–38.
Cunha BA. Clinical approach to fever in the neurosurgical intensive care unit: focus on drug fever. Surg Neurol Int. 2013;4(Suppl 5):S318–22.
Cunha BA, Shea KW. Fever in the intensive care unit. Infect Dis Clin N Am. 1996;10(1):185–209.
Patel RA, Gallagher JC. Drug fever. Pharmacotherapy. 2010;30(1):57–69.
Availability of data and materials
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Ethics approval and consent to participate
The study protocol was approved by institutional review board of Kyung Hee University Hospital at Gangdong (IRB No. 2016–03-008). Consent from the patients was waived by IRB, as the study was retrospective.
Consent for publication
The authors declare that they have no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
About this article
Cite this article
Moon, S., Park, K., Lee, M.S. et al. Hospital-acquired fever in oriental medical hospitals. BMC Health Serv Res 18, 88 (2018) doi:10.1186/s12913-018-2896-1
- Traditional oriental medicine | <urn:uuid:6e29d1bc-4bec-4374-b111-2cc140c5121c> | CC-MAIN-2019-47 | https://0-bmchealthservres-biomedcentral-com.brum.beds.ac.uk/articles/10.1186/s12913-018-2896-1 | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670535.9/warc/CC-MAIN-20191120083921-20191120111921-00341.warc.gz | en | 0.938047 | 4,956 | 2.765625 | 3 |
History of Kollam
Quilon or Coulão (Malayalam: ക്വയ്ലോണ്), officially Kollam (Malayalam: കൊല്ലം), is one of the ancient civilizations in India. It is one of the oldest port cities on the Malabar Coast and was the capital city of the historic Venad and Travancore kingdoms. Quilon was once an important trading port in India. It was also known as Desinganadu. It is now known as the "Cashew Capital of the World".
Since ancient times, the city of Kollam (Quilon) has played a major role in the commercial, economic, cultural, religious and political history of Asia and the Indian subcontinent. The Malayalam calendar (Kollavarsham) is named after the city of Kollam. The city is mentioned in historical citations dating back to Biblical times and the reign of King Solomon, connecting it with the Red Sea ports of the Arabian Sea (supported by a find of ancient Roman coins). The teak wood used in building King Solomon's throne was taken from Quilon. Merchants from Phoenicia, China and the Arab countries, as well as Dutch and Roman traders, used to visit and trade from Quilon in ancient times.
The history of the district of Kollam as an administrative unit can be traced back to 1835, when the Travancore state consisted of two revenue divisions with headquarters at Kollam and Kottayam. At the time of the integration of the Travancore and Cochin states in 1949, Kollam was one of the three revenue divisions in the state. Later, those revenue divisions were converted into the first districts of the state.
- 1 Etymology
- 2 Pre-history of Kollam
- 3 Voyages to Quilon/Kollam
- 4 Inception of the Port of Kollam
- 5 Beginning of Kollam Era
- 6 Venad - Kingdom of Quilon
- 7 Copper Plate Inscriptions from Quilon
- 8 The Portuguese Invasion (1502–1661)
- 9 Dutch Quilon: The Dutch Invasion (1661–1795)
- 10 References
- 11 Bibliography
The city name "Kollam" is believed to have been derived from the Sanskrit word Kollam (Sanskrit: कोल्लं), which means Pepper. During the ancient times, Kollam was world-famous for its trade culture, especially for the availability and export of fine quality Pepper. The sole motive of all the Portuguese, Dutch and British who have arrived the Port of Kollam that time was Pepper and other spices available at Kollam
Pre-history of Kollam
Kollam is the most historic and ancient settlement in Kerala, and probably in South India. Ingots excavated from Kollam city, the Port, Umayanallur, Mayyanad, Sasthamcotta, Kulathupuzha and Kadakkal prove that the whole district and city have been sites of human settlement since the Stone Age. Teams of archaeologists and anthropologists have visited Kollam city and the Port many times for treasure hunts and research.
The period between 1000 BC and 500 AD is referred to as the Megalithic Culture in South India and is similar to the culture in Africa and Europe. In January 2009, the Department of Archaeology discovered a Megalithic-age cist burial ground at Thazhuthala in the Kollam Metropolitan Area, which threw light on the past glory and ancient human settlements of the Kollam area. Similar cists had earlier been discovered to the south east of Kollam. The team discovered three burial chambers, iron weapons, earthen vessels in black and red, and remains of molten iron in this excavation; a cairn circle had been found in 1990 during the first major excavation in the area. In August 2009, a team of archaeologists led by Dr P. Rajendran, UGC research scientist and archaeologist at the Department of History of Kerala University, discovered Lower Paleolithic tools along with Chinese coins and pottery from the seabed of Tangasseri in the city. This was the first time that prehistoric cupules and Lower Paleolithic tools were discovered from below the seabed in India. These tools show that the Stone Age people who lived in the present Kollam area and its surroundings moved to the coastal areas during the glacial period in the Pleistocene, when the sea level was almost 300 feet below the present level. The tools are made of chert and quartzite rocks.
Voyages to Quilon/Kollam
After AD 23, many merchant travellers, explorers, missionaries, apostles and army commanders visited Quilon, as Quilon was the most important trading port in India. Pliny, Saint Thomas, Mar Sabor and Mar Proth, Marco Polo, Ibn Battuta and Zheng He are a few of them.
|Time||Traveller||Country from which he/she came to Quilon||Name they used for Quilon|
|A.D. 23–79||Pliny||Roman Empire||--|
|A.D. 522||Cosmas Indicopleustes||Alexandria, Greece||Male|
|A.D. 825||Mar Sabor and Mar Proth||Syria||Coulão|
|A.D. 845–855||Sulaiman al-Tajir||Siraf, Iran||Male|
|A.D. 1166||Benjamin of Tudela||Tudela, Kingdom of Navarre(Now in Spain)||Chulam|
|A.D. 1292||Marco Polo||Republic of Venice||Kiulan|
|A.D. 1273||Abu'l-Fida||Hamāh, Syria||Coilon / Coilun|
|A.D. 1311–1321||Jordanus Catalani||Sévérac-le-Château, France||Columbum|
|A.D. 1321||Odoric of Pordenone||Holy Roman Empire||Polumbum|
|A.D. 1346–1349||Ibn Battuta||Morocco||--|
|A.D. 1503||Afonso de Albuquerque||Kingdom of Portugal||--|
|A.D. 1502–1503||Vasco da Gama||Kingdom of Portugal||--|
|A.D. 1503||Giovanni da Empoli||Kingdom of Portugal||--|
|A.D. 1510–1517||Duarte Barbosa||Kingdom of Portugal||Coulam|
|A.D. 1578||Henrique Henriques||Kingdom of Portugal||--|
Arrival of Saint Thomas
The arrival of Christianity in Kerala began with the expedition of Saint Thomas, one of the 12 disciples of Jesus, in AD 52. He founded Seven and a half Churches in Kerala, starting with Maliankara and then Kodungallur, Kollam, Niranam, Nilackal, Kokkamangalam, Kottakkavu, Palayoor (Chattukulangara) and Thiruvithamcode Arappally (a "half church").
Inception of the Port of Kollam
In 822 AD, two East Syriac bishops, Mar Sabor and Mar Proth, settled in Quilon with their followers. After the beginning of the Kollam Era (824 AD), Quilon became the premier city of the Malabar region, ahead of Travancore and Cochin. Kollam Port was founded by Mar Sabor at Thangasseri in 825 as an alternative to reopening the inland seaport of Kore-ke-ni Kollam near Backare (Thevalakara), which was also known as Nelcynda and Tyndis to the Romans and Greeks and as Thondi to the Tamils.
The migration of East Syriac Christians to Kerala started in the 4th century. Their second migration is dated to the year AD 823, and it was to the city of Quilon. Tradition claims that the Christian immigrants rebuilt the city of Quilon in AD 825, from which date the Malayalam era is reckoned.
Beginning of Kollam Era
The Kollam Era (also known as the Malayalam Era, Kollavarsham, Malayalam Calendar or Malabar Era) is a solar and sidereal Hindu calendar used in Kerala, India. The origin of the calendar has been dated to 825 CE (Pothu Varsham) at Kollam (Quilon). It replaced the traditional Hindu calendar used widely elsewhere in India and is now prominently used in Kerala. All temple events, festivals and agricultural events in the state are still decided according to the dates in the Malayalam calendar.
There are many theories regarding the origin of the Malayalam calendar, the Kolla Varsham. A major theory is as follows:
According to Herman Gundert, Kolla Varsham started as part of the erection of a new Shiva temple in Kollam and, because of its strictly local and religious background, the other regions did not follow this system at first. Once the Kollam port emerged as an important trade center, the other countries also started to follow the new calendar system. This theory also backs the remarks of Ibn Battuta.
Venad - Kingdom of Quilon
The Kingdom of Quilon, or Venad, was one of the three prominent late medieval Hindu feudal kingdoms on the Malabar Coast in South India. The rulers of Quilon, the Venattadi Kulasekharas, trace their lineage back to the Ay kingdom and the Later Cheras. The last Chera ruler, Rama Varma Kulashekhara, was the first ruler of an independent state of Quilon. In the early 14th century, King Ravi Varma established a short-lived supremacy over South India. After his death, Quilon comprised only most of the modern-day Kollam and Thiruvananthapuram districts of Kerala and the Kanyakumari district of Tamil Nadu. Marco Polo claimed to have visited its capital at Quilon, a centre of commerce and trade with China and the Levant. Europeans were attracted to the region during the late fifteenth century, primarily in pursuit of the then rare commodity black pepper. Quilon was the forerunner of Travancore.
In the Sangam age most of present-day Kerala was ruled by the Chera dynasty, the Ezhimala rulers and the Ay rulers. Venad, ruled by the dynasty of the same name, lay within the Ay kingdom; the Ays were in turn vassals of the Pandyas. By the 9th century, as Pandya power diminished, Venad became part of the Later Chera Kingdom and traded with distant parts of the world. It became a semi-autonomous state within the Later Chera Kingdom. In the 11th century the region fell under the Chola empire.
During the 12th century, the Venad dynasty absorbed the remnants of the old Ay dynasty, giving rise to the titles of Chirava Mooppan (the ruling king) and Thrippappur Mooppan (the crown prince). The provincial capital of the local patriarchal dynasty was at the port of Kollam, which was visited by Nestorian Christians, Chinese and Arabs. In the same century, the capital of the war-torn Later Chera Kingdom was relocated to Kollam and the Kulasekhara dynasty merged with the Venad rulers. The last king of the Kulasekhara dynasty based at Mahodayapuram, Rama Varma Kulashekhara, was the first ruler of an independent Venad. The Hindu kings of the Vijayanagar empire ruled Venad briefly in the 16th century.
Copper Plate Inscriptions from Quilon
The Tharisappalli Copper Plates (849 AD) are a copper-plate grant issued by the King of Venad (Quilon), Ayyanadikal Thiruvadikal, to the Saint Thomas Christians on the Malabar Coast in the 5th regnal year of the Chera ruler Sthanu Ravi Varma. The inscription describes the gift of a plot of land to the Syrian Church at Tangasseri near Quilon (now known as Kollam), along with several rights and privileges to the Syrian Christians led by Mar Sapir Iso.
The Tharisappalli copper plates are one of the important historical inscriptions of Kerala, the date of which has been accurately determined. The grant was made in the presence of important officers of the state and the representatives of trade corporations or merchant guilds. It also throws light on the system of taxation that prevailed in early Venad, as several taxes such as a profession tax, sales tax and vehicle tax are mentioned. It also testifies to the enlightened policy of religious toleration followed by the rulers of ancient Kerala.
There are two sets of plates as part of this document, and both are incomplete. The first set documents the land grant, while the second details the attached conditions. The witnesses signed the document in the Hebrew, Pahlavi and Kufic scripts.
The Portuguese Invasion (1502–1661)
The Portuguese were the first Europeans to come to the city of Quilon. They arrived as traders and established a trading centre at Tangasseri in Quilon in 1502. The then Queen of Quilon had first invited the Portuguese to the city in 1501 to discuss the spice trade, but they declined because of Vasco da Gama's close relations with the Raja of Cochin. The Queen later negotiated with the Raja, who permitted two Portuguese ships to be sent to Quilon to buy fine-quality pepper. In 1503, the Portuguese general Afonso de Albuquerque went to Quilon at the Queen's request and collected the required spices there.
Factory in Quilon
The resumption of the pepper blockade seems to have put a crimp in Albuquerque's preparation of the return fleet, much of which still lacked spice cargoes. Albuquerque dispatched two ships down to Quilon (Coulão, Kollam) with the factor António de Sá to see if more could be procured there; Quilon, being better connected to Ceylon and points east, had a spice supply that was less dependent on the Zamorin's sway, and it had often invited the Portuguese before. Soon after their departure, however, Albuquerque heard that the Zamorin of Calicut was preparing a fleet of some 30 ships to sail for Quilon. Afonso de Albuquerque left Cochin and hurried down to Quilon himself.
Albuquerque arrived in Quilon, instructed the Portuguese factor to hurry his business along, and sought an audience with the regent queen of Quilon. The Portuguese were still docked when the Calicut fleet arrived, carrying an embassy from the Zamorin with the mission of persuading (or intimidating) Quilon into abandoning the Portuguese. The regent queen of Quilon rejected the Zamorin's request, but also forbade Albuquerque from engaging in hostilities in the harbour. Albuquerque, realizing Quilon was the only spice supply he had access to, acceded to her request. He resigned himself to negotiating a commercial treaty and establishing a permanent Portuguese factory in Quilon (the third in India), placing it under the factor António de Sá, with two assistants and twenty armed men to protect it. That settled, Albuquerque returned to Cochin on January 12, 1504 to make final preparations for his departure. Albuquerque had signed a treaty of friendship with the royal family of Quilon and established the factory there in 1503. That voyage marked the beginning of trade relations between Portugal and the city of Quilon, which became the centre of their trade in pepper. Soon, Quilon emerged as the richest town on the entire Malabar coast.
End of the Portuguese era
Trade relations between Quilon and the Portuguese suffered a setback after an uprising at the Port of Quilon between the Arabs and the Portuguese. The captain of one of the Portuguese fleets saw an Arab ship loading pepper at the port, and fighting broke out between the two sides. In the battle that followed, thirteen Portuguese men were killed, including António de Sá, and the St. Thomas church was burned down. To prevent further devastation, the Queen of Quilon signed a treaty with the Portuguese, under which they received a customs tax exemption and a monopoly over the spice and pepper trade with Quilon. The royal family of Quilon agreed to rebuild the destroyed church.
The Portuguese held Quilon until 1661. They fought with the Arab traders and captured a huge amount of gold after killing more than 2000 Arab traders. With the arrival of the Dutch and their signing of a peace treaty with Quilon, the Portuguese began losing their authority over Quilon, and the city later officially became a Dutch protectorate.
Dutch Quilon: The Dutch Invasion (1661–1795)
The Dutch began expanding their empire to India in 1602 with the founding of the Dutch East India Company. They arrived at Quilon in 1658 and signed a peace treaty in 1659. Quilon thus became an official protectorate of the Dutch, and their officer in charge, Rijcklof van Goens, stationed a military detachment in the city to protect it from possible invasions by the Portuguese and the British. The western part of Quilon, including Tangasseri, was then named 'Dutch Quilon'. The English East India Company, however, had begun to follow the Dutch method of 'triangular trade' in the early 1600s; the English signed a trade treaty with the Portuguese and gained the right to trade across the Portuguese holdings in Asia. During the mid-eighteenth century, Travancore's raja, Marthanda Varma, decided to consolidate various independent kingdoms, including Quilon, Kayamkulam and Kottarakkara, into the Kingdom of Travancore. The plan was resisted because of the presence of the Dutch, who fortified their base city of Quilon against such invasions. The Dutch and their allies conquered Paravur, marched to Attingal and despatched another expedition to Kottarakkara. Despite this stiff resistance, the army of Marthanda Varma attacked and defeated the Dutch at Colachel in 1741, then marched on Quilon and attacked the city. After the signing of the Treaty of Mannar, several territories under the Quilon royal family were added to the Travancore Kingdom.
- "History of Swathi Thirunal's Lineage". Swathithirunal.in. Retrieved 24 October 2015.
- Ring, Trudy; Watson, Noelle; Schellinger, Paul (12 November 2012). Page No.710, International Dictionary of Historic Places: Asia and Oceania. ISBN 9781136639791. Retrieved 25 October 2015.
- "Kollam, Ashtamudi Lake - great alternatives to Kochi, Vembanad Lake". Economic Times. Retrieved 24 October 2015.
- "Kollam on the itinerary". The Hindu. 14 September 2018. Retrieved 14 September 2018.
- "The legendary beauty of Kollam".
- "History of Kollam". Archived from the original on 2 June 2015.
- "Kollam District". District Authority of Quilon. Retrieved 24 October 2015.
- "The Asiatic Journal and Monthly Register for British India". 1834. Retrieved 24 October 2015.
- "Ingots found in Kollam throws light on life of Megalithic people". Malayala Manorama - On Manorama. Retrieved 24 October 2015.
- "Emergence of antiques triggers treasure hunt in Kollam". The Hindu. Retrieved 24 October 2015.
- "History under Seabed". The Hindu. 9 August 2014. Retrieved 8 December 2015.
- "History under Seabed". TNIE. 9 August 2014. Retrieved 8 December 2015.
- "Megalithic age idols unearthed in Kerala". The Hindu. 12 September 2009. Retrieved 8 December 2015.
- Maritime India: Trade, Religion and Polity in the Indian Ocean. Pius Malekandathil. 2010. ISBN 978-9380607016.
- René Grousset (2010). The Empire of the Steppes: A History of Central Asia. Rutgers. p. 312. ISBN 9780813506272.
- "Ibn Battuta's Trip: Chapter 10". Archived from the original on 12 October 2015. Retrieved 30 October 2015.
- Chan 1998, 233–236.
- Zheng He's Voyages Down the Western Seas. China Intercontinental Press. 2005. pp. 22, 24, 33.
- Chisholm, Hugh, ed. (1911). Encyclopædia Britannica. 1 (11th ed.). Cambridge University Press. p. 516. .
- "Vasco da Gama (c.1469–1524)". Retrieved 4 November 2016.
- Boda, Sharon La (1994). Page No.710, International Dictionary of Historic Places: Asia and Oceania. ISBN 9781884964046. Retrieved 25 October 2015.
- "Tamil saw its first book in 1578 - The Hindu". Retrieved 4 November 2016.
- James Arampulickal (1994). The pastoral care of the Syro-Malabar Catholic migrants. Oriental Institute of Religious Studies, India Publications. p. 40.
- Orientalia christiana periodica: Commentaril de re orientali ...: Volumes 17–18. Pontificium Institutum Orientalium Studiorum. 1951. p. 233.
- Adrian Hastings (15 August 2000). A World History of Christianity. Wm. B. Eerdmans. p. 149. ISBN 978-0-8028-4875-8.
- Aiyya, V.V Nagom, State Manual p. 244
- "Before the Portuguese arrival". Malankara Orthodox Church. Retrieved 5 November 2015.
- "Kollam Era" (PDF). Indian Journal History of Science. Archived from the original (PDF) on 27 May 2015. Retrieved 30 December 2014.
- Broughton Richmond (1956), Time measurement and calendar construction, p. 218
- R. Leela Devi (1986). History of Kerala. Vidyarthi Mithram Press & Book Depot. p. 408.
- "Kollam - Short History". Statistical Data. kerala.gov.in. Archived from the original (Short History) on 21 November 2007. Retrieved 8 October 2014.
- A. Sreedhara Menon (2007) . "CHAPTER VIII - THE KOLLAM ERA". A Survey Of Kerala History. DC Books, Kottayam. pp. 104–110. ISBN 978-81-264-1578-6. Retrieved 7 August 2013.
- Karashima, Noboru, ed. (2014). A Concise History of South India: Issues and Interpretations. Oxford University Press. p. 132. ISBN 9780198099772. Retrieved 25 May 2017.
- Keralam anjum arum noottandukalili, Prof. Ilamkulam Kunjan Pillai
- "Travancore." Encyclopædia Britannica. Encyclopædia Britannica Online. Encyclopædia Britannica Inc., 2011. Web. 11 Nov. 2011.
- A Survey of Kerala History - A. Sreedhara Menon. ISBN 81-264-1578-9.
- S.G. Pothan (1963) The Syrian Christians of Kerala, Bombay: Asia Publishing House.
- Cheriyan, Dr. C.V. Orthodox Christianity in India. pp. 85, 126, 127, 444–447.
- M. K. Kuriakose, History of Christianity in India: Source Materials, (Bangalore: United Theological College, 1982), pp. 10–12. Kuriakose gives a translation of the related but later copper plate grant to Iravi Kortan on pp. 14–15. For earlier translations, see S. G. Pothan, The Syrian Christians of Kerala, (Bombay: Asia Publishing House, 1963), pp. 102–105.
- "Kollam - Kerala Tourism". Kerala Tourism. Retrieved 5 November 2015.
- Barros (p.99), Correia (p.405). Curiously, Empoli (p.225), who was with the Quilon ships and witnessed the encounter, identifies the ruler as a king, not a queen. Empoli refers to the king "Nambiadora", almost certainly a variation of 'Naubeadarim' or 'Naubea Daring', names that can be frequently found in Portuguese chronicles to refer to the royal heirs in other Malabari kingdoms (e.g. Calicut, Cochin) and thus almost certainly a title. i.e. it is possible the 'king' Empoli met was just a royal prince, the queen of Quilon's heir, conducting business on her behalf.
- Barros, p.99
- Shipbuilding, Navigation and the Portuguese in Pre-modern India. United Kingdom: Routledge. 2018. ISBN 978-1138094765.
- International Dictionary of Historic Places: Asia and Oceania, Volume 5. United Kingdom: Taylor & Francis. 1994. ISBN 9781884964053.
- "From a Portuguese atlas, 1630". columbia.edu. Retrieved 4 November 2016.
- "How the Portuguese used Hindu-Muslim wars - and Christianity - for the bloody conquest of Goa". Dailyo.in. Retrieved 20 December 2016.
- M. O. Koshy (2007) . "CHAPTER 4 - The Confrontation at Colachel". The Dutch Power in Kerala, 1729-1758. Mittal Publications, New Delhi. pp. 61–65. ISBN 978-81-709-9136-6. Retrieved 9 January 2019.
- Boda, Sharon La (1994). Page No.712, International Dictionary of Historic Places: Asia and Oceania. ISBN 9781884964046. Retrieved 9 January 2019.
- Ring, Trudy (1994). International Dictionary of Historic Places: Asia and Oceania, Volume 5. United Kingdom: Taylor & Francis. ISBN 9781884964053.
- Chan, Hok-lam (1998). "The Chien-wen, Yung-lo, Hung-hsi, and Hsüan-te reigns, 1399–1435". The Cambridge History of China, Volume 7: The Ming Dynasty, 1368–1644, Part 1. Cambridge: Cambridge University Press. ISBN 9780521243322.
- Lin (2007). Zheng He's Voyages Down the Western Seas. Fujian Province: China Intercontinental Press. ISBN 9787508507071.
- Yule, Henry, ed. and trans. (1863). Mirabilia descripta: the wonders of the East. London: Hakuyt Society.
- Yule, Henry (1913). "Additional notes and corrections to the translation of the Mirabilia of Friar Jordanus". Cathay and the way thither: being a collection of medieval notices of China (Volume 3). London: Hakluyt Society. pp. 39–44.
- Henry Yule's Cathay, giving a version of the Epistles, with a commentary, &c. (Hakluyt Society, 1866) pp. 184–185, 192-196, 225-230
- Kurdian, H. (1937). "A correction to 'Mirabilia Descripta' (The Wonders of the East). By Friar Jordanus circa 1330". Journal of the Royal Asiatic Society. 69 (3): 480–481. doi:10.1017/S0035869X00086032.
- F. Kunstmann, Die Mission in Meliapor und Tana und die Mission in Columbo in the Historisch-politische Blätter of Phillips and Görres, xxxvii. 2538, 135-152 (Munich, 1856), &c.
- Beazley, C.R. (1906). The Dawn of Modern Geography (Volume 3). Oxford: Clarendon Press. pp. 215–235.
- Ayyar, Ramanatha (1924). Travancore Archaeological Series: Vol 1 to 7. Trivandrum: Government Press. ISBN 8186365737.
- Menon, A. Sreedhara (1967). A Survey Of Kerala History. Kottayam: DC Books. ISBN 8126415789.
- Pothan, S G (1963). The Syrian Christians of Kerala. Mumbai: Asia Publishing House.
- Cherian, P J. Perspectives on Kerala History.
- Thundy, Zacharias. The Kerala Story: Chera times of the Kulasekharas. Northern Michigan University.
- Nair, Sivasankaran K (2005). Venadinte Parinamam. Kottayam: D C Books. | <urn:uuid:949d71d7-9ffa-409a-b02d-757778d734ae> | CC-MAIN-2019-47 | http://www.let.rug.nl/~gosse/termpedia2/termpedia.php?language=dutch_general&density=7&link_color=000000&termpedia_system=perl_db&url=http%3A%2F%2Fen.wikipedia.org%2Fwiki%2FHistory_of_Kollam | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665767.51/warc/CC-MAIN-20191112202920-20191112230920-00179.warc.gz | en | 0.887278 | 6,234 | 3.28125 | 3 |
Open Shortest Path First (OSPF) is the first link-state protocol that you will learn about. Apart from being a link-state protocol, it is also an open standard protocol. What this means is that you can run OSPF in a network consisting of multivendor devices. You may have realized that you cannot run EIGRP in a network that consists of non-Cisco devices. This makes OSPF a very important protocol to learn.
Compared to EIGRP, OSPF is a more complex protocol and supports features such as VLSM/CIDR. A brief summary of OSPF features is given below:
- Works on the concept of Areas and Autonomous systems
- Highly Scalable
- Supports VLSM/CIDR and discontiguous networks
- Does not have a hop count limit
- Works in multivendor environment
- Minimizes updates between neighbors.
While the above list is a very basic overview of the features of OSPF and will be expanded on in coming sections, it is a good time to take a step back and compare the four protocols detailed in this chapter. Table 5-2 shows a comparison of the four protocols.
Table 5-2 Comparison of routing protocols.
| | OSPF | EIGRP | RIPv2 | RIPv1 |
|---|---|---|---|---|
| Protocol Type | Link state | Hybrid | Distance Vector | Distance Vector |
| Discontiguous Network Support | Yes | Yes | Yes | No |
| Hop count limit | None | 255 | 15 | 15 |
| Routing Updates | Event Triggered | Event Triggered | Periodic | Periodic |
| Complete Routing Table Shared | During new adjacencies | During new adjacencies | Periodic | Periodic |
| Mechanism for sharing updates | Multicast | Multicast and unicast | Multicast | Broadcast |
| Best Path Computation | Dijkstra | DUAL | Bellman-Ford | Bellman-Ford |
| Metric used | Bandwidth | Bandwidth and Delay (default) | Hop Count | Hop Count |
It should be noted here that OSPF has many more features than the ones listed in Table 5-2 and than those covered in this book. One feature that really separates OSPF from other protocols is its support of a hierarchical design. What this means is that you can divide a large internetwork into smaller internetworks called areas. It should be noted that these areas, though separate, still lie within a single OSPF autonomous system. This is distinctly different from the way EIGRP can be divided into multiple autonomous systems. While in EIGRP each autonomous system functions independently of the others and redistribution is required to share routes, in OSPF areas are dependent on each other and routes are shared between them without redistribution.
You should also know that, like EIGRP, OSPF can be divided into multiple autonomous systems. Each autonomous system is independent of the rest and requires redistribution of routes.
The hierarchical design of OSPF provides the following benefits:
- Decrease routing overhead and flow of updates
- Limit network problems such as instability to an area
- Speed up convergence.
One disadvantage of this is that planning and configuring OSPF is more difficult than other protocols. Figure 5-5 shows a simple OSPF hierarchical setup. In the figure notice that Area 0 is the central area and the other two areas connect to it.
Figure 5-5 OSPF hierarchical design
This is always true in an OSPF design. All areas need to connect to area 0. Areas that cannot connect to area 0 physically need a logical connection to it using something known as virtual links. Virtual links are outside the scope of the CCNA exam.
Another important thing to notice in the figure is that for each area there is a router that connects to area 0 as well. These routers are called Area Border Routers (ABRs). In Figure 5-5, RouterC and RouterD are ABRs because they connect to area 0 as well as another area. Just as ABRs connect different areas, routers that connect different autonomous systems are called Autonomous System Boundary Routers (ASBRs). In Figure 5-5, if RouterE connected to another OSPF AS or to an AS of another protocol such as EIGRP, it would be called an ASBR.
From Figure 5-5, you learned about three OSPF terms – Area, ABR and ASBR. Similarly there are many other terms associated with OSPF that you need to be aware of before getting into how OSPF actually works. The next section looks at some of these terms.
Building Blocks of OSPF
Each routing protocol has its own language and terminologies. In OSPF there are various terms that you should be aware of. This section looks at the some of the important terminologies associated with OSPF. In an attempt to make it easier to understand and remember, the terminologies are broken into three parts here – Router level, Area level and Internetwork level.
At the Router level, when OSPF is enabled, it becomes aware of the following first:
- Router ID – Router ID is the IP address that will represent the router throughout the OSPF AS. Since a router may have multiple IP addresses (for its multiple interfaces), Cisco routers choose the highest loopback interface IP address. (Do not worry if you do not know what loopback interfaces are. They are covered later in the chapter). If loopback interfaces are not present, OSPF chooses the highest physical IP address configured within the active interfaces. Here highest literally means higher in number (Class C will be higher than Class A because 192 is greater than 10).
- Links – Simply speaking a Link is a network to which a router interface belongs. When you define the networks that OSPF will advertise, it will match interface addresses that belong to those networks. Each interface that matches is called a link. Each link has a status (up or down) and an IP address associated with it.
Let’s take a simple test here. Look at Figure 5-6 and try to find the Router ID and links on each of the routers.
Figure 5-6 RouterID and links
For RouterA, the RouterID will be 192.168.1.1 because it is the highest physical IP address present. The three links present on RouterA are the networks 192.168.1.0/24, 10.0.0.0/8 and 172.16.0.0/16. Similarly, the Router ID of RouterB is 172.30.1.1 since that is the highest physical IP address on the router. The three links present on RouterB are 10.0.0.0/8, 172.20.0.0/16 and 172.30.0.0/16.
Once a router is aware of the above two things, it will try to find more about its network by seeking out other OSPF speaking routers. At that stage the following terms come into use:
- Hello Packets – Similar to EIGRP hello packets, OSPF uses hello packets to discover neighbors and maintain relationships. A hello packet contains information, such as the area number, that must match for a neighbor relationship to be established. Hello packets are sent to the multicast address 224.0.0.5.
- Neighbors – Neighbors is the term used to define two or more OSPF speaking routers connected to the same network and configured to be in the same OSPF area. Routers use hello packets to discover neighbors.
- Neighbor Table – OSPF will maintain a list of all neighbors from which hello packets have been received. For each neighbor various details such as RouterID and adjacency state are stored.
- Area – An OSPF area is a grouping of networks and routers. Every router in the area shares the same area id. Routers can belong to multiple areas; therefore, area id is linked to every interface. Routers will not exchange routing updates with routers belonging to different areas. Area 0 is called the backbone area and all other area must connect to it by having at least one router that belongs to both areas.
Once OSPF has discovered neighbors it will look at the network type on which it is working. OSPF classifies networks into the following types:
- Broadcast (multi-access) – Broadcast (multi-access) networks are those that allow multiple devices to access (or connect to) the same network and also provide the ability to broadcast. You will remember that when a packet is destined to all devices in a network, it is termed a broadcast. Ethernet is an example of a broadcast multi-access network.
- Non-Broadcast multi-access (NBMA) – Networks that allow multi-access but do not have broadcast ability are called NBMA networks. Frame Relay networks are usually NBMA.
- Point-to-Point – Point-to-Point networks consist of direct connection between two routers and provide a single path of communication. When routers are connected back-to-back using serial interfaces, a point-to-point network is created. Point-to-point networks can also exist logically across geographical locations using various WAN technologies such as Frame Relay and PPP.
- Point-to-Multipoint – Point-to-Multipoint networks consist of multiple connections between a single interface of a router and multiple remote routers. All routers belong to the same network but have to communicate via the central router, whose interface connects the remote routers.
Depending on the network type that OSPF discovers on the router interfaces, it will need to form adjacencies. An adjacency is the relation between neighbors that allows direct exchange of routes. Unlike EIGRP, OSPF does not always form adjacencies with all neighbors. A router will form adjacencies with a few or all neighbors depending on the network type that is discovered. Adjacencies in each network type are discussed below:
- Broadcast (multi-access) – Since multiple routers can connect to such networks, OSPF elects a Designated Router (DR) and a Backup Designated Router (BDR). All routers in these networks, form adjacencies only with the DR and BDR. This also means that route updates are only shared between the routers and the DR and BDR. It is the duty of the DR to share routing updates with the rest of the routers in the network. If a DR loses connectivity to the network, the BDR will take its place. The election process is discussed later in the chapter.
- NBMA – Since NBMA is also a multi-access network, a DR and a BDR is elected and routers form adjacencies only with them. The problem with NBMA networks is that since broadcast capability and in turn multicast capability is not present, routers cannot discover neighbors. So NBMA networks require you to manually tell OSPF about the neighbors present in the network. Apart from this, OSPF functions as it does in a broadcast multi-access network.
- Point-to-Point – Since there are only two routers present in a point-to-point network, there is no need to elect a DR and BDR. Both routers form adjacency with each other and exchange routing updates. Neighbors are discovered automatically in these networks.
- Point-to-multipoint – Point-to-multipoint interfaces are treated as special point-to-point interfaces by OSPF, which does a little extra work here that is outside the scope of CCNA. There is no DR/BDR election in such networks and neighbors are automatically discovered.
Once OSPF has formed adjacencies, it will start exchanging routing updates. The following two terms come to use here:
- Link State Advertisements – Link State Advertisements (LSAs) are OSPF packets containing link-state and routing information. These are exchanged between routers that have formed adjacencies. The packets essentially tell routers in the networks about different networks (links) that are present and how to reach them. Different types of LSAs are discussed later in the chapter.
- Topology Table – The topology table contains information on every link the router learns about (via LSAs). The information in the topology table is used to compute the best path to remote networks.
At the area level, the only term that gets introduced is:
- Area Border Routers (ABRs) – Routers that connect an area to area 0 are called ABRs. They have one interface belonging to area 0 and other interfaces belonging to one or more areas. They are responsible for propagating routing updates between area 0 and other areas.
At the internetwork level another term that gets introduced is:
- Autonomous System Boundary Router (ASBR) – A router that connects an OSPF AS to another OSPF AS or AS belonging to other routing protocols is called an Autonomous System Boundary Router or ASBR. Route redistribution is setup between the two AS on these routers and hence they become the gateway between the two AS.
Now that you are familiar with OSPF terminology, the rest of the sections will discuss the working of OSPF in detail and help you better understand the terms discussed here.
Loopback interfaces are virtual, logical interfaces that exist in the software only. They are used for administrative purposes such as providing a stable OSPF interface or diagnostics. Using loopback interfaces with OSPF has the following benefits:
- Provides an interface that is always active.
- Provides an OSPF Router ID that is predictable and always the same, making it easier to troubleshoot OSPF.
- Router ID is a differentiator in DR/BDR election. Having a loopback interface with higher order IP address can influence the election.
Configuring a loopback interface is easy – You need to select an interface number and enter the interface configuration mode using the interface command in global configuration mode as shown below:
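A representative sequence (the interface number 0 here is an arbitrary choice; any unused loopback number works) might look like this:

```
Router# configure terminal
Router(config)# interface loopback 0
Router(config-if)#
```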
The interface number can be any number starting from 0. Once in the interface configuration mode, use the ip address command to configure an IP address as you would on a physical interface. An example is shown below:
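Continuing at the same prompt, an illustrative assignment (the address below is a placeholder; use an address from your own addressing plan) would be:

```
Router(config-if)# ip address 172.16.10.1 255.255.255.0
! a /32 host mask (255.255.255.255) is also commonly used on loopback interfaces
```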
That’s it! The loopback interface is configured and will be listed as an active interface in the show ip interface command.
The loopback interface can be important for OSPF because it will take the highest loopback IP address as the Router ID. If a loopback interface is not present, the highest physical IP address will be taken.
A loopback interface is logically equivalent to a physical interface. The router is going to add an entry into its routing table for the network that the loopback interface address belongs to. So you can even configure a routing protocol to advertise the loopback network. Whether you choose to do that or not depends on whether you want the loopback address to be reachable from the network or not. Remember that you will be using up a subnet if you decide to advertise the loopback network.
DR/BDR Election and influencing it
As discussed earlier, in multi-access network types, DRs and BDRs are elected and routers in the area only form adjacencies with them. So DRs and BDRs are an important part of OSPF and usually determine how well OSPF will function. In this section you will learn about the process by which DRs/BDRs are elected. Before learning about the process, it is important that you understand the terms neighbors and adjacencies fully since they are central to functioning of OSPF and the election process.
A router running OSPF will periodically send out Hello packets to the multicast address 224.0.0.5. These hello packets serve as a way to discover neighbors. When a router receives these packets, it checks the following to ascertain that a neighborship can be established:
- Area ID – The Area ID received in a hello packet should match the area ID associated with the interface the packet was received on. As mentioned earlier, OSPF associates an area ID with each interface it is enabled on. The rationale behind comparing the area ID is that only routers with interfaces in the same area should form a neighborship.
- Hello and Dead intervals – Hello packets exchanged by routers running OSPF contain information such as area ID, hello interval and dead interval. Hello interval specifies the time duration between hello packets and dead interval specifies the time duration after which a router will be declared dead if hello packets have not been received from it.
For a neighborship to form, the hello and dead intervals should match between the routers.
- Authentication – OSPF allows you to set a password for an area. For neighborship to form, the password must be same on the routers. Setting a password is optional.
If all three of the above conditions match, the router will add the neighbor to the neighbor table and form a neighborship. Even though a neighborship is formed, OSPF, unlike EIGRP, will not share routing updates, or link state advertisements in this case, with every neighbor.
For OSPF to share link state advertisements, an adjacency must be formed between the routers. As discussed earlier, how adjacencies are formed depends on the network type. In a multi-access network, a DR and BDR will be elected and all routers in the network will form adjacency with them only. Each router will exchange LSAs with DR and BDR. DR in turn will relay the information to the rest of the routers.
When routers realize that they are connected to a multi-access network, they will look at each Hello packet received to find the priority and Router ID of each router. The priorities are then compared, and the router with the highest priority is selected as the DR, while the router with the second highest priority becomes the BDR. By default the priority of each router is 1 and can be changed on a per-interface basis.
If all routers have the default priority, then the router with the highest Router ID is elected the DR while the router with the second highest Router ID is elected the BDR. If the priority of a router is set to zero, it will not participate in the election process and will never be a DR or BDR.
As you know, the Router ID is the highest physical IP address present on a Router. This can be overridden by using a loopback interface because a router will use the highest loopback address, if one is present.
If you need to influence the DR/BDR election in a network segment, you can do one of the following:
- Manually increase the priority of a router interface (see the example after this list) to ensure that the router becomes the DR/BDR.
- Configure a loopback interface so that the Router ID becomes higher than that of the other routers in the network segment.
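Either approach can be sketched roughly as follows; the interface names, priority value and address are illustrative only:

```
! Option 1: raise the interface priority (0-255; higher wins, 0 opts out of the election)
Router(config)# interface fastethernet 0/0
Router(config-if)# ip ospf priority 100
Router(config-if)# exit
! Option 2: create a loopback whose address becomes the Router ID
Router(config)# interface loopback 0
Router(config-if)# ip address 192.168.255.1 255.255.255.255
! the highest loopback address is taken as the Router ID when the OSPF process starts
```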
SPF Tree Calculation
Once OSPF exchanges link state advertisements and populates the topology table, each router runs a calculation on the information collected. These calculations use something known as the Shortest Path First (SPF) algorithm. To do so, each router creates a tree, putting itself at the root, with the other routers and networks forming the branches and leaves. In effect the router puts itself at the start and the area branches out from it. Figures 5-7 and 5-8 show an example of how the SPF tree is created by a router. Figure 5-7 shows the SPF tree with RouterA as the origin, while Figure 5-8 shows the SPF tree with RouterG as the origin. Notice how different the network looks from the perspective of each router. The benefit of each router creating this tree is that the shortest path can be found from each router to each destination, and there is no routing by rumor as seen with distance vector protocols.
Figure 5-7 SPF tree Example 1
Figure 5-8 SPF tree Example 2
It is important to understand that each router creates this tree only for the area it belongs to. If a router belongs to multiple areas, it will create a separate tree for each area.
A big part of the tree is also the cost associated with each path. Cost, the metric used by OSPF, is the sum of the costs of the entire path from the router to the remote network. The OSPF RFC defines cost as an arbitrary value, so Cisco calculates the cost of an interface as 10^8/bandwidth, where bandwidth is the bandwidth (in bits per second) configured on the interface. Using this equation, an Ethernet interface with a bandwidth of 10 Mbps has a cost of 10 and a 100 Mbps interface has a cost of 1. You may have noticed that interfaces with a bandwidth of more than 100 Mbps would have a fractional cost, but Cisco does not use fractions and rounds the value to 1 for such interfaces.
In Figure 5-8, if all interfaces are FastEthernet interfaces with a bandwidth of 100Mbps, each link has a cost of 1. So for the path from RouterG to the 192.168.7.0/24, the total cost will be 5 and to the network 192.168.3.0/24, the total cost will be 2.
The cost of each interface can be changed using the ip ospf cost command in interface configuration mode, as shown in the example below. It should be noted that since the OSPF RFC does not exactly define the metric that makes up the cost, each vendor uses a different metric. When using OSPF in a multivendor environment, you will need to adjust costs to ensure parity.
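For instance, the following sketch (interface and value chosen arbitrarily) overrides the calculated cost on a serial link so that it matches what a neighboring vendor's router uses:

```
Router(config)# interface serial 0/0
Router(config-if)# ip ospf cost 64
```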
Link State Advertisements
The fundamental building blocks of OSPF are the link state advertisements that are sent from every router to advertise links and their states. Given the complexity and scalability of OSPF, different LSA types are used to keep the OSPF database updated. Out of the various LSAs, the first five are most relevant to the limited OSPF discussion covered in this chapter and are discussed below:
- Type 1 – Router LSA – Each router in the area sends this LSA to announce its presence and list the links to other routers and networks along with metrics to them. These LSAs do not cross the boundary of an area.
- Type 2 – Network LSA – The DR in a multi-access network sends out this LSA. It contains a list of routers that are present in the network segment. These LSAs also do not cross the boundary of an area.
- Type 3 – Summary LSA – The ABR takes the information learned in one area (and optionally summarizes this information) and sends it out to another area it is attached to. This information is contained in LSA type 3 and is responsible for propagation of Inter-area routes.
- Type 4 – ASBR Summary LSA – ASBRs originate external routes (redistributed routes) and send them throughout the network. While the external routes are listed in type 5 LSAs, the details of the ASBR itself are listed in type 4 LSAs. This LSA is originated by the ABR of the area where the ASBR resides.
- Type 5 – External LSA – This LSA lists routes redistributed into OSPF from another OSPF process or another routing protocol. This LSA is originated by the ASBR and propagates across the OSPF AS.
Intake of macro- and micronutrients in Danish vegans
Nutrition Journal volume 14, Article number: 115 (2015)
The Erratum to this article has been published in Nutrition Journal 2016 15:16
Since information about macro- and micronutrient intake among vegans is limited we aimed to determine and evaluate their dietary and supplementary intake.
Seventy Danish vegans aged 18–61 years completed a four-day weighed food record, from which their daily intake of macro- and micronutrients was assessed and subsequently compared to that of an age-range-matched group of 1 257 omnivorous individuals from the general Danish population. Moreover, the vegan dietary and supplementary intake was compared to the 2012 Nordic Nutrition Recommendations (NNR).
Dietary intake differed significantly between vegans and the general Danish population in all measured macro- and micronutrients (p < 0.05), except for energy intake among women and intake of carbohydrates among men. For vegans the intake of macro- and micronutrients (including supplements) did not reach the NNR for protein, vitamin D, iodine and selenium. Among vegan women vitamin A intake also failed to reach the recommendations. With reference to the NNR, the dietary content of added sugar, sodium and fatty acids, including the ratio of PUFA to SFA, was more favorable among vegans.
At the macronutrient level, the diet of Danish vegans is in better accordance with the NNR than the diet of the general Danish population. At the micronutrient level, considering both diet and supplements, the vegan diet falls short in certain nutrients, suggesting a need for greater attention toward ensuring recommended daily intake of specific vitamins and minerals.
Health, ethical and spiritual concerns have apparently motivated abstention from meat since ancient Greece . In Denmark it is estimated that approximately 1 % of the population is either vegetarian or vegan . The definition of a strict vegan diet is a diet that excludes all products of animal origin including meat, poultry, fish and seafood, dairy products, eggs and honey . Previous studies have suggested that a vegan diet provides relatively large amounts of cereals, legumes, nuts, fruits and vegetables and is usually high in carbohydrates, n-6 fatty acids, dietary fibers, beta-carotene, folic acid, vitamin C, vitamin E, iron and magnesium. In contrast, the vegan diet is suggested to be relatively low in protein, saturated fat, long-chain n-3 fatty acids, retinol, vitamin B12, vitamin D, calcium and zinc [3, 4]. Furthermore, vegans are more likely to use single-nutrient supplements compared to non-vegetarians . Epidemiological studies have shown that vegans have lower BMI and lower total plasma cholesterol compared to omnivores . Vegans have also been reported to have reduced risk of cardiovascular diseases, type 2 diabetes and certain forms of cancer [6, 7]. Whether these observations solely relate to dietary habits or are partly explained by a healthier lifestyle in general, including less smoking and higher level of physical activity, is unsettled.
Information about intake of macro- and micronutrients in vegans is scarce [4, 5, 8–11] and only two studies have previously reported the use of supplements among vegans [5, 11]. Furthermore, most previous studies are limited by methodological shortcomings with respect to ascertainment of nutrient content. In previous studies, participants were categorized as vegans by different approaches, either by self-report and/or based on the dietary records. One could speculate that the longer you adhere to a vegan diet the more focused you are on achieving an optimal level of macro- and micronutrient intake; however, only three studies comparing vegan to omnivorous diets stated the duration of adherence to the former [5, 9, 10]. Furthermore, studies rely on databases with various coverage with regard to food items and nutrients [4, 5, 8–11]. One paper did not specify the database used and another database covered 130 food items only , potentially compromising the validity of the results. Thus, the dietary information has so far primarily been based on food frequency questionnaires (FFQ) [4, 8], non-weighed food records and 24-h recalls [5, 10]. Use of FFQ minimizes the error of day-to-day variability but it has a lower level of detail and may be influenced by recall bias. The 24-h recall method has a greater specificity than the food frequency method, yet relies on memory and is prone to optimistic bias . Food records have a high specificity and minimize the reliance on memory . However, only one British study including 38 vegans using a three-day weighed food record has been reported .
In the present study, we aimed to (i) determine the dietary and supplementary intake of macro- and micronutrients in a sample of Danish vegans based on a four-day weighed food-record and a food database including 1 049 food items (ii) compare dietary intake to the intake among age-range-matched individuals from the general Danish population and (iii) compare dietary and supplementary intake of macro- and micronutrients in the sample of Danish vegans to the 2012 Nordic Nutrition Recommendations (NNR) .
Subjects and methods
The data presented was collected as part of an observational study investigating the effect of a strict vegan diet on the gut microbiota (unpublished). Seventy-five healthy volunteers adhering to a vegan diet for a minimum of 1 year were recruited by advertising in local newspapers and through online resources, including social media. Subjects were aged 18-61 years and weight-stable (±1 kg, assessed by interview) for a minimum of 2 months prior to study entry. Pregnant and lactating women were ineligible for inclusion in the study.
The vegan sample was compared with an age-range-matched group of individuals (n = 1 627) from the Danish National Survey of Dietary Habits and Physical Activity (DANSDA) 2005–2008. DANSDA is a nation-wide and representative cross-sectional survey among 4 to 75-year-old children and adults . Vegetarians and vegans were excluded from the DANSDA survey prior to age-matching.
Both the vegan and the DANSDA study was approved by the Danish Data Protection Agency and conducted in accordance with the Helsinki Declaration (vegan: j.no 2013-54-0501, DANSDA: j.no. 2008-54-0430). The vegan study was approved by the Ethical Committee for the Capital Region of Denmark (j.no. H-3-2012-145), while DANSDA did not require ethical approval according to Danish legislation.
Vegan participants were weighed on an electronic scale (TANITA WB-110MA, Tanita Corporation of America, Arlington Heights, Illinois, USA) without shoes, dressed in light clothing or underwear after having emptied their bladder. The height of the participants was measured to the nearest 0.5 cm without shoes, using a wall-mounted stadiometer (ADE MZ10023, ADE, Hamburg, Germany). Anthropometric measures of the DANSDA population were self-reported as previously described .
Dietary intake and supplements
Vegan dietary intake was estimated based on a four-day weighed food diary, including two working days and two weekend days within one week. Dietary recordings were obtained as vegan participants were included in the study from early December 2013 to mid-July 2014. Foods were quantified to the nearest 0.1 g using a calibrated precision scale (ProScale XC-2000, HBI Europe, Erkelenz, Germany). Instruction in filling in the diary was given by qualified medical staff. The nutrient intake was calculated using the Dankost Pro software (version 18.104.22.168), which is based on the food database at the Danish Food Composition Databank containing 1 049 food items (www.foodcomp.dk) and the NNR. Vegan recipes not included in the database were constructed by qualified personnel holding a Master's degree in Human Nutrition and based on foods with complete validity in the database. Average daily intake (ADI) of macro- and micronutrients was calculated as:
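that is, as the mean over the recorded days,

$$\mathrm{ADI} = \frac{1}{n}\sum_{d=1}^{n}\text{intake}_{d},$$

where $\text{intake}_{d}$ denotes the intake recorded on day $d$ and $n$ is the number of recording days (four for all but three participants, who recorded three days).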
In DANSDA, dietary intake was recorded every day for seven consecutive days in food records with pre-coded response categories, which included open-answer options. Details about the method and calculation of intake of food and nutrients have been described elsewhere . Both methods of determining nutrient content were based on the same food database (www.foodcomp.dk) and in both cases values are presented as median (interquartile range (IQR)) due to skewness. BMR was calculated from equations published in the NNR . The diet records were validated by calculating the ratio of mean energy intake (EI) and basal metabolic rate (BMR) and the accepted value was set to ≥1.06 . Data from subjects with EI:BMR ratios below 1.06 were excluded from the analyses (DANSDA: n = 343). Of the 75 vegans included in the original study two were excluded from the analyses due to an incomplete diet record, and three were excluded from the analyses due to an EI:BMR below 1.06. Of the 1 627 subject in the DANSDA study 27 were excluded due to missing anthropometric data (BMI) and 343 were excluded from the analyses due to an EI:BMR below 1.06. Three vegan subjects recorded dietary intake for three days only. However, exclusion of these individuals revealed no difference in overall results (data not shown) and consequently they were not excluded from the final analyses. Accordingly, diet data from 70 vegans and 1 257 DANSDA study individuals from DANSDA was included in the final analyses.
The vegan subjects were asked to bring their dietary supplements on the day of examination at which point every nutrient was noted with exact daily dose.
Statistical analyses were performed using the statistical software 'R' version 0.98.501 (The R Foundation for Statistical Computing 2013, http://www.r-project.org/) and a significance level of P < 0.05 was used. Analyses were performed separately for men (n = 33) and women (n = 37). Linear regression adjusted for either 1) age and energy intake or 2) age and BMI was used to examine differences in dietary intake between vegans and omnivores. Model assumptions were assessed graphically. In case of non-normality of residuals, natural logarithmic transformation was applied. In three cases (intake of trans fatty acids, intake of retinol and intake of vitamin D) model assumptions were not met through transformation, in which cases Welch's t-test was applied. Additional analyses were performed using an age- and gender-specific, individually matched control group.
Characteristics of the vegan and DANSDA subjects are presented in Table 1. Gender distribution was equal in the two studies, whereas subjects in the DANSDA study were older, had higher BMI, their educational attainment was lower and they were more likely to be current smokers.
Vegan diet compared to the diet in the general Danish population
Table 2 shows EI and macronutrient intake by sex and population group. A 10.5 % (difference = 1 028 kJ/d, 95 % CI: 132, 1 924; P = 0.03) higher EI was present in vegan men compared to men in the general population whereas for women there was no difference in EI (P = 0.2). With regard to intake of dietary lipids vegans had a lower intake of SFA, MUFA, trans fatty acids and cholesterol and a higher intake of PUFA as well as a higher PUFA:SFA ratio compared to the general population. For both men and women intake of added sugar and protein was lower, while intake of total dietary fibre was higher in vegans compared with the general population. Intake of carbohydrates was lower among vegan women compared to women from DANSDA but no difference was observed among men (Table 2).
Table 3 shows the vitamin and mineral intake by sex and population group. Intake of all examined vitamins and minerals differed between vegans and the general population.
The vegan dietary intake of vitamin A, vitamin D, riboflavin, niacin and vitamin B12 were lower compared with the general population (P < 0.001). Vegans had a higher intake of beta carotene, vitamin E, thiamine, B6, folic acid and vitamin C compared with the general population (P < 0.001). The dietary intake of calcium, phosphorus, zinc, iodine and selenium were lower in vegans compared with the general population (P < 0.001) while intake of magnesium, potassium and iron was higher in vegans compared with the general population (P < 0.001) (Table 3).
Three vegans had extremely high intake of iodine (971, 1 709 and 1 982 μg/d) due to consumption of seaweed.
Vegans in reference to the Nordic Nutrition Recommendations (NNR)
Vegans reached the recommended daily intake of energy and fats but did not reach the recommended daily intake of protein (Table 2). Table 4 shows the intake of micronutrients from both diet and supplements in vegans. The intake of micronutrients solely from supplements is presented in Additional file 1: Table S3. Forty-six vegan individuals (65.7 %) reported supplementing their diet. As a group, the vegans reached the recommended daily intake of every vitamin and mineral except intake of vitamin D, iodine and selenium for both sexes and vitamin A in women (Table 4). However, at the individual level, only intake of thiamine, folic acid, magnesium and iron were reached by every vegan man and none of the recommendations for micronutrient intake were reached by every vegan woman (Table 4).
Data on the multiple linear regressions including BMI is not shown since the results were similar in every variable except intake of phosphorus in men.
The additional analyses using an individually matched (age and gender) control group revealed results similar to those found in the range-matching (age) analyses, except for EI, MUFA, beta carotene, thiamine and potassium in women, for which the levels of significance changed (Additional file 2: Table S1 and Additional file 3: Table S2). The analyses using range-matching (adjusted for age) were prioritized due to the larger sample size (n = 1 327 vs. n = 140) and thereby narrower 95 % confidence intervals (CI) in the tests.
At the macronutrient level the vegan diet can be considered healthy, since the distribution of macronutrients corresponds well to that proposed by the NNR. Specifically, a high PUFA:SFA ratio, such as that observed in vegans, is suggested to be favorable with regard to the risk of coronary heart disease. Furthermore, a low intake of added sugar, as reported in vegans, is considered beneficial for human health, and similarly a high dietary fibre intake may provide specific benefits in relation to gastrointestinal and metabolic functions [18, 19]. Intake of added sugar has previously only been examined in one American study comprising 15 vegans and 6 omnivores, in which no difference was reported. The low intake of added sugar in the present study might be due to a lower intake of processed food and sugar-sweetened beverages. This is only speculative, since we do not have data at the food-item level; it is, however, supported by a recent study in which vegans reported consuming fewer servings of sweets per day than omnivores. An evaluation of the diet in the general Danish population has been done previously.
The vegan population had, however, an insufficient intake of several vitamins, which could have a negative health effect. In contrast to our findings, previous studies, reporting intake of total vitamin A (retinol equivalents; RE) found higher intake of vitamin A in vegans compared to omnivores [5, 9, 10]. However, it is difficult to compare results across studies since the amount of retinol and beta-carotene, from which the vitamin A intake is calculated, is not presented. These opposing results could be due to different methods of calculating vitamin A RE from beta-carotene and other carotenoids. The vegan intake of retinol was very low in the present study compared to previous studies reporting this [4, 11]. In the present study the 2001 Institute of Medicine Interconversion of Vitamin A and Carotenoid Units was applied . Retinol is primarily found in animal products , why the finding in the present study seems plausible. Approximately half of the vegans did not reach the recommendations for vitamin A including both intake from diet and supplements. Symptoms of vitamin A deficiency are night blindness, dry and scaly skin, increased number of infections in the respiratory tract, the gut and the urinary tract and severe vitamin A deficiency has furthermore been associated with cancer at these sites .
The vegans participating in the present study had a low intake of riboflavin and vitamin B12, which corresponds to previous findings [4, 5, 9–11]. The major food sources of riboflavin in Nordic diets are milk and meat products, which explains the low intake of this vitamin among vegans. The recommended intake of riboflavin and vitamin B12, including intake from supplements, was not reached by 29 and 31 vegans (of 70), respectively. Little attention has been paid to the effects of riboflavin deficiency on human health. An in vitro study using duodenal biopsies demonstrated that riboflavin depletion in adult humans impairs proliferation of intestinal cells and thereby may have implications for gastrointestinal function and nutrient absorption. Without supplementation, the low dietary intake of vitamin B12 among vegans could increase the risk of pernicious anemia and polyneuropathy.
In 41 (of 70) vegans the total intake of vitamin D did not meet the recommendations. The consequence of insufficient intake of vitamin D is lower absorption of calcium and phosphorus, which may impact bone metabolism. Furthermore, it has been suggested that high vitamin D plasma concentration or vitamin D supplementation is associated with decreased risk of colorectal cancer, cardiovascular disease and type 2 diabetes .
The intake of vitamin D and vitamin B12 among vegans was very low in the present study and it is lower than what has been observed in previous studies [4, 5, 9–11]. This could be due to low availability of fortified foods, which are common in some countries but was prohibited in Denmark by law until 2003 and has still not been introduced on a wider scale.
In the present study, intake of dietary sodium among vegans was low compared with the general population. Three studies have previously examined the sodium content of a vegan diet, two of which also showed lower sodium intake among vegans [8, 9], whereas the third study found no difference between a vegan and an omnivore diet . The low intake of sodium in the vegan population in this study might be due to a lower intake of processed foods, which usually contain high amounts of salt . Most of the vegans (55 of 70) did not meet the recommendation for iodine intake when including intake from supplements. Only one study has previously investigated iodine intake in vegans reporting an iodine intake from diet and supplements among vegans of only 50–70 % of the dietary reference value . In Denmark, fortified table salt is a major source of iodine and it is therefore also subject to potential under- reporting. Other sources of iodine include fish and sea plants . In general, sodium and iodine intakes are difficult to quantify and the results should be interpreted with caution . Less than half of the vegans (24 of 70) reached the recommended intake of selenium. No studies have previously examined selenium intake in vegans. An insufficient intake of iodine and selenium might potentially have negative health impact such as development of goiter. Adequate iodine intake is important throughout life, but especially in childhood and during pregnancy and breastfeeding . Low serum selenium levels (≤1 μmol/l) have been associated with increased risk of myocardial infarction in men ; however, the lower critical threshold for intake of this mineral is unknown.
Even though the intake of iron and calcium in male vegans met the Nordic recommendations, the absorption of these minerals might not be correspondingly high. Iron and calcium are known to have low bioavailability in the context of plant-based foods. This is due to the chemical form in which the minerals are present, the nature of the food matrix and the presence of bioavailability-lowering compounds in plant-based foods (e.g. phytate, dietary fiber and oxalic acid) [28, 29]. Considering the low intake of these minerals among vegan women, a reduced bioavailability may further increase the risk of outright deficiency and related disorders. However, it is well established that the bioavailability of iron increases with the presence of ascorbic acid (vitamin C) and other organic acids . Intake of vitamin C was very high among vegans, thus potentially compensating for the low bioavailability of iron in plant–based foods . It has previously been proposed that the recommended intake of iron for vegans should be 1.8 times higher than that of omnivores because of the lower bioavailability ; in the present study the iron intake among vegan men met this recommendation.
Vitamins and minerals interact and are dependent on sufficient availability of one another in order to function and contribute to ensure human health. Examples are vitamin A and zinc, every B-vitamin as well as vitamin D and calcium. This emphasizes the importance of adequate intake of every individual mineral since, for instance, an inadequate intake of vitamin A affects the function of zinc even though this mineral is consumed in sufficient amounts as is the case for the vegans in this study.
To examine whether the low mineral intake among vegans in this study has a negative health effect, pertinent biochemistry of study participants should be evaluated. Furthermore, a healthy diet is not only a diet balanced for macro- and micronutrients. Other components such as amount of processed food, heated food, food preservatives and additives as well as the sources of the nutrients (e.g. protein from meat versus plants) could be important in evaluating a diet. However, in the present study, these data were not available.
The present study covers the macro- and micronutrient composition of vegans in general. At present very little is known about the health effect of a vegan diet under specific circumstances such as pregnancy, lactation, childhood and advanced age; circumstances which the present study does not address.
The main strength of the present study is the method applied to assess dietary intake. A weighed four-day food record is considered the gold standard for specifying dietary intake . However, a potential error may be introduced by the diet record, as it may be a burden to weigh and register all food items, thus a person may eat more simple foods than usual. Another important strength is the availability of detailed data on dietary supplements, the most comprehensive to date in a European sample, allowing us to get a more complete picture of the micronutrient intake.
An almost unavoidable limitation with diet recording is under-reporting . In the vegan population the EI: BMR ratio was acceptable in 96.0 % of cases, indicating that under-reporting EI might not be an issue in the examined vegan population. Nevertheless, it is a potential limitation that a vegan diet is non-conventional and some of the recorded meals or food items were unavailable in the database. However, this shortcoming was evaded in the present study by constructing recipes using only completely validated food items.
Whereas the number of vegans included in some other studies exceeds the number in our study by far, most have used less accurate methods [4, 5, 8–11] and the present study is the largest to date using the gold standard method to assess dietary intake. However, while sufficiently powered to detect differences between vegans and the general population it is important to exhibit caution when drawing conclusions based on a small sample, which may not be representative of the Danish vegan population in general. Another potential source of bias arises from the fact that the included vegans represent a self-selected group of participants. Particularly, healthy lifestyle bias may be an issue. Vegans may be more concerned about healthy lifestyle, consumption of healthy foods and optimal amounts of macro- and micronutrients. Furthermore, the vegan subjects had higher educational attainment compared to the general Danish population, a factor that has been associated with healthier lifestyle in general . This might partially explain the better compliance to the recommendations at macronutrient level compared to the general Danish population.
Overall in this sample of vegans, at the macronutrient level, the diet appears well-balanced with a healthy distribution of fatty acids and a low content of added sugar and high in dietary fiber. However, at the micronutrient level the vegan diet including supplements is inadequate compared to authorized recommendations. This suggests a need for greater attention toward ensuring recommended daily intake of specific vitamins and minerals to avoid micronutrient deficiency and risk of associated disorders. This is a study on a relatively small sample of vegans and more studies are needed to make general conclusions regarding dietary and supplementary intake of macro- and micronutrients in vegans.
Average daily intake
Body mass index
Basal metabolic rate
Danish National Survey of Dietary Habits and Physical Activity
Food frequency questionnaire
Monounsaturated fatty acids
Nordic Nutrition Recommendations
Polyunsaturated fatty acids
Saturated fatty acids
Spencer C. The Heretic’s Feast: A History of Vegetarianism. London: Fourth Estate; 1996.
Pedersen AN, Fagt S, Groth MV, Christensen T, Biltoft-Jensen AP, Matthiessen J, et al. Danskernes kostvaner 2003-2008: hovedresultater. Denmark: DTU Fødevareinstituttet; 2010.
Key TJ, Appleby PN, Rosell MS. Health effects of vegetarian and vegan diets. Proc Nutr Soc. 2006;65:35–41.
Davey GK, Spencer EA, Appleby PN, Allen NE, Knox KH, Key TJ. EPIC–Oxford: lifestyle characteristics and nutrient intakes in a cohort of 33 883 meat-eaters and 31 546 non meat-eaters in the UK. Public Health Nutr. 2003;6(03):259–68.
Haddad EH, Berk LS, Kettering JD, Hubbard RW, Peters WR. Dietary intake and biochemical, hematologic, and immune status of vegans compared with nonvegetarians. Am J Clin Nutr. 1999;70(3 Suppl):586S–93S.
Wong JM. Gut microbiota and cardiometabolic outcomes: influence of dietary patterns and their associated components. Am J Clin Nutr. 2014;100 Suppl 1:369S–77S.
Tantamango-Bartley Y, Jaceldo-Siegl K, Fan J, Fraser G. Vegetarian diets and the incidence of cancer in a low-risk population. Cancer Epidemiol Biomarkers Prev. 2013;22(2):286–94.
Clarys P, Deliens T, Huybrechts I, Deriemaeker P, Vanaelst B, De Keyzer W, et al. Comparison of nutritional quality of the vegan, vegetarian, semi-vegetarian, pesco-vegetarian and omnivorous diet. Nutrients. 2014;6(3):1318–32.
Janelle KC, Barr SI. Nutrient intakes and eating behavior see of vegetarian and nonvegetarian women. J Am Diet Assoc. 1995;95(2):180–9.
Wu GD, Compher C, Chen EZ, Smith SA, Shah RD, Bittinger K, et al. Comparative metabolomics in vegans and omnivores reveal constraints on diet-dependent gut microbiota metabolite production. Gut. 2014. doi:10.1136/gutjnl-2014-308209.
Draper A, Lewis J, Malhotra N, Wheeler LE. The energy and nutrient intakes of different types of vegetarian: a case for supplements? Br J Nutr. 1993;69(01):3–19.
Willett W. Nutritional Epidemiology. Oxford: Oxford University Press, US; 2013.
Nordic Council of Ministers. Nordic Nutrition Recommendations 2012: Integrating nutrition and physical activity. 5th ed. Copenhagen: Norden. 2014.
Nordic Council of Ministers. Nordic Nutrition Recommendations 2004: Integrating nutrition and physical activity. 4th ed. Copenhagen: Norden. 2004.
Goldberg GR, Black AE, Jebb SA, Cole TJ, Murgatroyd PR, Coward WA, et al. Critical evaluation of energy intake data using fundamental principles of energy physiology: 1. Derivation of cut-off limits to identify under-recording. Eur J Clin Nutr. 1991;45(12):569–81.
Virtanen JK, Mursu J, Tuomainen TP, Voutilainen S. Dietary fatty acids and risk of coronary heart disease in men: the Kuopio Ischemic Heart Disease Risk Factor Study. Arterioscler Thromb Vasc Biol. 2014;34(12):2679–87.
Lustig RH, Schmidt LA, Brindis CD. Public health: the toxic truth about sugar. Nature. 2012;482(7383):27–9.
Bingham SA, Day NE, Luben R, Ferrari P, Slimani N, Norat T, et al. Dietary fibre in food and protection against colorectal cancer in the European Prospective Investigation into Cancer and Nutrition (EPIC): an observational study. Lancet. 2003;361(9368):1496–501.
Gemen R, de Vries JF, Slavin JL. Relationship between molecular structure of cereal dietary fiber and health effects: focus on glucose/insulin response and gut health. Nutr Rev. 2011;69(1):22–33.
Beezhold B, Radnitz C, Rinne A, DiMatteo J. Vegans report less stress and anxiety than omnivores. Nutr Neurosci. 2015;18(7):289–96.
Institute of Medicine (US) Panel on Micronutrients. Dietary reference intakes for vitamin A, vitamin K, arsenic, boron, chromium, copper, iodine, iron, manganese, molybdenum, nickel, silicon, vanadium, and zinc. 1st ed. Washington (DC): National Academies Press (US); 2001.
Nakano E, Mushtaq S, Heath PR, Lee E, Bury JP, Riley SA, et al. Riboflavin depletion impairs cell proliferation in adult human duodenum: identification of potential effectors. Dig Dis Sci. 2011;56(4):1007–19.
Theodoratou E, Tzoulaki I, Zgaga L, Ioannidis JP. Vitamin D and multiple health outcomes: umbrella review of systematic reviews and meta-analyses of observational studies and randomised trials. BMJ. 2014;348:g2035.
World Health Organization. Creating an enabling environment for population-based salt reduction strategies: report of a joint technical meeting held by WHO and the Food Standards Agency, United Kingdom. Geneva: World Health Organization; 2010.
Bentley B. A review of methods to measure dietary sodium intake. J Cardiovasc Nurs. 2006;21(1):63–7.
Zimmermann MB. Iodine Deficiency Disorders and Their Correction Using Iodized Salt and/or Iodine Supplements. In: Iodine Chemistry and Applications. Hoboken, NJ: Wiley; 2014. p. 421–31. Zürich, Switzerland.
Suadicani P, Hein H, Gyntelberg F. Serum selenium concentration and risk of ischaemic heart disease in a prospective cohort study of 3000 males. Arteriosclerosis. 1992;96(1):33–42.
Gibson RS, Perlas L, Hotz C. Improving the bioavailability of nutrients in plant foods at the household level. Proc Nutr Soc. 2006;65(2):160–8.
Hallberg L, Hulthen L. Prediction of dietary iron absorption: an algorithm for calculating absorption and bioavailability of dietary iron. Am J Clin Nutr. 2000;71(5):1147–60.
Teucher B, Olivares M, Cori H. Enhancers of iron absorption: ascorbic acid and other organic acids. Int J Vitam Nutr Res. 2004;74(6):403–19.
Trumbo P, Yates AA, Schlicker S, Poos M. Dietary reference intakes: vitamin A, vitamin K, arsenic, boron, chromium, copper, iodine, iron, manganese, molybdenum, nickel, silicon, vanadium, and zinc. J Am Diet Assoc. 2001;101(3):294–301.
Biltoft-Jensen A, Matthiessen J, Rasmussen LB, Fagt S, Groth MV, Hels O. Validation of the Danish 7-day pre-coded food diary among adults: energy intake v. energy expenditure and recording length. Br J Nutr. 2009;102(12):1838–46.
Wardle J, Steptoe A. Socioeconomic differences in attitudes and beliefs about healthy lifestyles. J Epidemiol Community Health. 2003;57(6):440–3.
The authors would like to thank A. Forman, T. Lorentzen and M. Nielsen for technical assistance and G. Lademann, T. Toldsted, P. Sandbeck and K. Kaadtmann for managerial assistance. Furthermore, we would like to thank J. Gæde for entering dietary data.
The study was supported by research grants from The Novo Nordisk Foundation Center for Basic Metabolic Research (www.metabol.ku.dk), an independent research center at the University of Copenhagen partially funded by an unrestricted donation from the Novo Nordisk Foundation and by The Danish Council for Independent Research. The Danish National Survey of Dietary Habits and Physical Activity (DANSDA) was financed by the Ministry of Food, Agriculture and Fisheries. The funding organizations had no role in the design, analysis or writing of this article.
The authors declare that they have no competing interests.
NBK and MLM performed the statistical analyses and draft the manuscript. THH, RJG and OP designed the study and helped to draft the manuscript. THH collected the data of the vegan study. SF and CH collected the data of the general Danish population. All authors read and approved the final manuscript.
Nadja B. Kristensen and Mia L. Madsen contributed equally to this work.
An erratum to this article is available at http://dx.doi.org/10.1186/s12937-016-0136-2.
Overview of supplement intake among the vegans. (DOCX 18 kb)
Sex-stratified macronutrient intake in the vegan and the Danish National Survey of Dietary Habits and Physical Activity (DANSDA) study samples (using an age and gender specific, individually matched control group) with the 2012 Nordic Nutrition Recommendations (NNR). (DOCX 26 kb)
Sex-stratified micronutrient intake in the vegan and the Danish National Survey of Dietary Habits and Physical Activity (DANSDA) study samples (using an age and gender specific, individually matched control group) with the 2012 Nordic Nutrition Recommendations (NNR). (DOCX 27 kb)
About this article
Cite this article
Kristensen, N.B., Madsen, M.L., Hansen, T.H. et al. Intake of macro- and micronutrients in Danish vegans. Nutr J 14, 115 (2015) doi:10.1186/s12937-015-0103-3
- Habitual diet
- Nutrition recommendation | <urn:uuid:a35cec53-f3c3-487f-9e78-97b009749036> | CC-MAIN-2019-47 | https://nutritionj.biomedcentral.com/articles/10.1186/s12937-015-0103-3 | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669809.82/warc/CC-MAIN-20191118154801-20191118182801-00298.warc.gz | en | 0.924634 | 7,599 | 2.71875 | 3 |
The weather glossary is a list of every meteorological term that I use often organized alphabetically. If a term you do not understand or know is not on this glossary, please comment below and I will be sure to add it.
LAST UPDATED: September 2019
This glossary is currently being updated.
AAO- Antarctic Oscillation. This is the least important of the four teleconnections, with little research being done on how it affects the Northern Hemisphere. I only use it to see when there are big swings, which can signal global shifts in the pattern. It can be found here.
Advisory- This is a product issued by the National Weather Service. It is a signal that conditions will be dangerous either due to flooding, fog, wind, snow, or wintry weather. However, an Advisory does not mean conditions will be life threatening, just that dangerous conditions are imminent.
Analog- In weather, when making a longer term forecast or looking at storms, it is often good to compare to previous patterns or storms, called analogs.
AO- Arctic Oscillation. This is one of the three most important teleconnections. When the AO is negative, there is stronger blocking in the atmosphere that sends sets colder air up better to eject into the eastern United States. When the AO is positive, it is harder to get colder air into the eastern United States. A negative AO is more favorable for snow, and the AO forecasts can be found here.
ARW- Advanced Research WRF model. This is one of two 48 hour short range, high resolution models. It is used to help forecast precipitation totals and fine tune forecasts right before storms hit. Similar to the NMM and 4km NAM. It can be found here: http://www.meteo.psu.edu/~fxg1/ewallhires.html
Blocking- In simple terms, blocking in the atmosphere is formed by ridges or areas of stagnant high pressure that dictate storm movement and can slow storms down. In a progressive pattern, there is less blocking so storms generally move from west to east, though with blocking the pattern is more amplified so storms can move in many different directions, making them harder to forecast but also increasing the chance that a storm moves up the east coast and becomes a Nor’ Easter, hitting Southwestern Connecticut with snow. Details from the National Weather Service can be found here.
BUFKIT- This is a program that takes a vertical cross-section of the atmosphere and shows temperatures and other data throughout all of it. At any certain time, it enables the viewer to see the temperature at any point in the troposphere, helping forecast precipitation types and snowfall ratios. This also takes model output for a specific location and shows what predicted accumulations from rain/snow are. More details on how to get BUFKIT for free can be found here.
CAPE: stands for Convective Available Potential Energy, is a frequently used severe weather index that basically is a measure of how much energy the atmosphere has available to fuel convective storms (thunderstorms). A higher CAPE value would suggest a greater chance for severe storms.
CMC- Canadian Meteorological Centre, which hosts the CMC weather model. This weather model is on average slightly less accurate than the GFS and ECMWF weather models, and does well in patterns with a lot of blocking or amplification, and tends to do much worse with amplified patterns. It is run twice a day, and comes out around midnight and noon. 0z run can be viewed here and the 12z run can be viewed here.
CPC- Climate Prediction Center, which forecasts droughts and longer term weather, such as 3-month forecasts, 2-3 week forecasts, etc. Their products I find are rarely very informative across microclimates like Southwestern Connecticut, which is why I rarely reference them, unless they highlight an area as having potential for heavy rain/snow. The CPC’s homepage is here.
DGEX- This is one of the least accurate weather models, which occasionally can show a decent solution but is rarely used seriously. It is run twice a day at 6z and 18z, and essentially comes out at 5 AM and 5 PM. 6z run can be found here and 18z run can be found here.
ECMWF- European Center for Medium-Range Weather Forecasting. They produce one of the most accurate weather models, called either the ECMWF or the Euro, along with one of the most accurate model suites. The model comes out twice a day with a 0z run and a 12z run, but the preliminary model suite is typically available around 2 AM and 2 PM, with full model information available around 3:30 AM and 3:30 PM. It is operated privately and only crude images of forecasts every 24 hours are available to the public, which can be found here for 0z and here for 12z. I will reference it a lot but rarely, if ever, link it, as I have to pay for access. The model typically does better in periods with more blocking, and was the one model to nail Hurricane Sandy, giving it more credence in the meteorological community.
Ensembles- These are runs of a model run with slightly different different conditions multiple times. Most models have around 12 ensemble members, which are then averaged together to get an “ensemble mean”. Because of the number of the options in these, these means are generally more accurate. The GFS, ECMWF, NOGAPS, and CMC models are the ones that have ensembles that are used the most often, with the GFS and ECMWF means usually being the most accurate. I will always post on the blog when I am referencing a specific ensemble so you can view it, unless they are the ECMWF ensembles which I cannot post.
Front: (Cold Front/Warm Front) a division between two distinct air masses of different temperatures. Precipitation frequently forms on frontal boundaries.
GFS- This is the Global Forecasting System weather model, which is the main American weather model used in the medium to long range. It typically has a bias to be slightly too progressive, which is why it often places storms too far to the east, and often suffers from feedback at times as well. I will outline whenever I think there are outlines with the model, but I still do use it often in forecasts. It is for times a day with data from 6z, 12z, 18z, and 0z, and comes out around 5 AM, 11 AM, 5 PM, and 11 PM respectively. It can be found on the Penn State Ewall, which is here.
GEFS- Global Ensemble Forecast System. These are the ensemble members of the GFS, and sometimes instead of saying GFS ensembles it is easier to abbreviate as just GEFS. These come out about an hour later than the GFS, and I will post either the individual members or the GEFS mean. They can also be found here.
GOA- Gulf of Alaska. Low/high pressure systems in this region can have significant impacts on the overall pattern, which can sometimes be discussed in blog posts where I am analyzing the overall pattern.
GOM- Gulf of Mexico. This typically provides moisture for storms coming up into the Northeast.
HPC- Hydrometeorological Prediction Center. This issues updates inside 3 days on winter weather, and their products are sometimes posted/tweeted as they focus specifically on precipitation, flooding potential, heavy snow/ice/sleet potential, etc. I find they are typically fairly accurate, and since their forecasts are short term they are in depth and their products useful.
HRRR- High Resolution Rapid Refresh Model. This is a short range model that only goes out 15 hours from the point. It measured many different levels of the atmosphere for temperature and other indices, and can be used for development of snowfall banding and can forecast short term snowfall accumulations. While sometimes hard to find, it can be found here. I reference this model a lot as storms are occurring, but it is so short term it isn’t useful except for when storms are hitting.
HWO- Hazardous Weather Outlook. This is issued by the National Weather Service typically days in advance of storms to highlight potentially dangerous weather in the future. These typically do not make headlines, and just highlight potentially dangerous weather sometime in the future. I will reference them on Twitter when they issue them, as they affirm that I am seeing similar things to the National Weather Service.
mb- While mb (millibars) is a measure for pressure, this is also used to refer to different levels in the atmosphere. The most common level I refer to is 850mb, which is at 5,000 feet, 500mb (close to 20,000 feet) and 925 mb (around 2,500 feet). Refer to the bottom for details on each.
K-Index: a severe weather index that sometimes can help to judge how widespread storms will be. It is based on temperature between different heights in the atmosphere.
MJO- Madden/Julian Oscillation. It is divided up into eight different octets, and each octet results in different conditions around the world. It is mainly based off of where convection is most prevalent, and depending on where it is it can often signal stormier patterns, colder patterns, or dry/warmer patterns. Little other research has been done except to see the impacts that the different octets can bring. Octets 1 and 8 in winter months are most favorable for snow and cold, while octets 4 and 5 are warmest and usually lack snow. More details can be found here.
MOS- Model Output Statistics. MOS readings from weather models are helpful in determining temperatures at the surface given on models, and are useful to help predict high and low temperatures. I do not use MOS often in forecasts, but I do reference them occasionally. Details can be found here.
MREF- Medium Range Ensemble Forecasts. This is another way to say GEFS, essentially, as it is the only ensembles readily available with individual members in the medium term. Technically, the CMC and NOGAPs ensembles apply too, but generally this is used in place of GEFS. They can be seen here.
NAM 4km- The North American Model 4km resolution is a model that goes out 60 hours and is a high definition model. It comes out in both a 4km resolution and a 12km resolution. A 4km resolution means every 4 kilometers there is a data point that the model projects a forecast at, which means it is extremely high resolution. It forecasts specific snowfall amounts, winds, temperatures, etc. and can be important in forecasts, though often overdoes rainfall and snowfall. It can be found here.
NAM 12km- The North American Model 12km resolution is a model that goes out 84 hours and is also high resolution, but not to the level of the shorter range higher resolution NAM 4km. This model is only really accurate inside of 48-60 hours, which is when it is used most, and also can overdo snowfall or rainfall with storm systems. However, its higher resolution makes it quite accurate when inside 48 hours, when it gets weighted heavily in forecasts. It is run four times a day, 0z, 6z, 12z, 18z, though 0z and 12z are the two times it has fresh data. It can be found on the Penn State Ewall here.
NAO- North Atlantic Oscillation. This is arguably the most important teleconnection for forecasting winter snow storms along the east coast. When in a negative phase, there is a ridge near Greenland creating blocking and a storm path typically favors storms moving up along the east coast, throwing back snow into the area. When positive, more options are on the table, with more storms likely resulting in rain. While one of many different factors in a storm, a pattern with a negative NAO typically tries to surprise the area with snow. NAO forecasts and additional discussions can be found here.
NHC- National Hurricane Center. I don’t really ever mention them in winter, but in summer when tracking tropical systems they produce numerous useful products and also produce data showing the current strength of tropical systems. To view the NHC homepage, just click here.
NMM- Nonhydrostatic Mesoscale Model. This is similar to the ARW in that it is a very high definition 4km model that comes out twice a day, at 0z and 12z, and runs out only 48 hours. I have questioned its accuracy in the past as, similar to the NAM, it tends to slightly overdo precipitation for the area, but it is good for temperatures and has a useful simulated radar as well. I only reference it when storms are imminent since it only goes out 48 hours maximum. All runs can be found here.
NOAA: National Oceanic and Atmospheric Administration
NOGAPS/NAVY- This is the Navy’s operational weather model, which is in a similar camp as the DGEX in that it preforms very poorly in winter months. It was created to predict tropical systems, and thus handles storm systems in the winter awfully, typically being far too progressive and showing nothing on the east coast when large storms are likely. It very rarely gets a storm correct, but sometimes I reference it to show which ways models are trending, and if it is less progressive than another model that can also be a red flag.
NWS- National Weather Service. This is the government agency that issues Watches, Warnings, Advisories, HWOs, SWSs, etc. I reference them a lot and use their forecasts to compare mine to when making a forecast for SW CT to see who is accurate more often, and where the other goes wrong.
PNA- Pacific/North American Pattern. This is the final teleconnection that I use in forecasts, and it also plays an important role in temperatures, storm tracks, and the overall pattern. A positive PNA means there is ridging near the Pacific Northwest, which sets up a pattern with troughs in the east helping storms move up the east coast and throw back more snow. A positive PNA, especially when paired with a negative AO, is what can set up some of the coldest patterns for SW CT. A negative PNA results in a more progressive pattern typically with less cold air, resulting in storms along the east coast that typically are not as strong and/or do not have as much cold air, so SW CT sees more rain/sleet in them, especially near the coast. Of course, these are generalities, but that is what the PNA is generally used for. PNA forecasts can be found here.
PV- Polar Vortex. This is a large vortex of the colder air, and the placement of this typically dictates where the pattern is going to bringing colder air as well. A Polar Vortex up in the GOA (Gulf of Alaska) or anywhere around that area typically keeps the cold air bottled up there, while a PV further southeast in central Canada or even further south results in more sustained cold weather for SW CT.
Radar- This is the main tool to see current precipitation following across any region. When storms are occurring, I will reference the radar often to see what precipitation is moving this way and what is already falling. The radar I use the most often and find the easiest to use with the most options is here.
RAP– RAPid Refresh Model. This is similar to the HRRR in that it is very high resolution but only forecasts out for a short amount of time. The RAP only forecasts out 18 hours from when it begins, so it is only used right before storms hit or as storms hit. It is weighted similarly to the HRRR in forecasts, and as it is so short range it is typically fairly accurate. I continue to use it and the HRRR even when nowcasting. It can be found here.
Ridge- A ridge is essentially the opposite of a trough. Think of them similar to this: ^ where they make an upside-down V or an upside-down U in the atmosphere. The opening to the south allows warmer air from the south to flow up, while areas east of the ridge typically get cold air as it slides down the east side. A ridge over the Pacific Northwest with a positive PNA is thus beneficial as cold air flows down the east side of that ridge into the United States, but a ridge over the eastern United States allows warmer air to flow up from the south and pushes storm systems up over it and to our west, meaning we are in their warm sector and get rain and fog.
RUC- Rapid Update Cycle Model. Just this past year this model was upgraded to the RAP, so it is no longer seriously used, but I sometimes consult other versions of it and will reference them if I can find them.
Severe Weather Indices: various numeric calculations that can be made based on observations to judge the potential for severe storms.
Sounding: A sounding (a.k.a. a skew-t log-p or psychrometric diagram) is a diagram that depicts a cross section of the atmosphere. It is a way of depicting the data collected from a weather balloon as it travels up through the atmosphere. Seeing a vertical cross section of the atmosphere is very useful for forecasting.
SPC: Storm Prediction Center
SREFs- Short Range Ensemble Forecasts. These are ensemble members of the NMM, ARW, and other short range models, and they are then averaged together to find a mean. They go out the same amount of time as the NAM does, so they are often used together in forecasts. They are not very accurate until they get inside 60 hours, which is when they begin to get more so. They forecast percentage chances of different snowfall accumulation amounts, precip types, winds, temperatures, etc. so they provide me with a lot of different data. They can be found here.
SWS– Special Weather Statement. This is typically issued by the National Weather Service when they expect dangerous conditions in the short term, but conditions likely are not bad enough to warrant an Advisory. For example, if only an inch or two of snow is falling a Special Weather Statement would be issued instead of a Winter Weather Advisory.
Trough- This is a feature that I mention most at the 500mb level in the atmosphere. When a trough is over the east coast, it creates almost a U with the east coast in the middle, resulting in cold air being able to flow down from Canada into the east and storms typically ride on the east side, which is why troughs most often result in east coast snow storms. This is a very simplified explanation of a very complex meteorological process at 500mb, but troughs over the east coast typically are beneficial to snow and snow storms. If I am talking about troughs in other areas, just remember they are U shaped and I should explain the rest in the blog post.
UKMET– This is the United Kingdom’s Meteorological Model, which is in many ways similar to the ECMWF, and the UKMET is not very high resolution nor does it present many details like other models, so it is never used very often. I with reference it typically just to add in another possible solution, and if it stands out from the other models it can be noticeable, but it does not preform exceptionally well. It is run twice a day and only goes out 144 hours, so it is short to medium range with no long range component. It is run twice a day, and the 0z run can be found here with the 12z run found here.
Vorticity- Energy at the 500mb level. See 500mb for more details.
Warning– This is issued by the National Weather Service when dangerous to life-threatening conditions are expected within the next 24 hours. A Warning is issued for a snow storm when 6 inches of snow are expected within 12 hours, or 8 inches within 24 hours. There are warnings whenever life-threatening weather of any sort threatens, and these should always be taken with the utmost seriousness. I will post the details of any warning issued.
Watch- This is issued by the National Weather Service to indicate that a Warning conditions are possible within 48 hours, but we are not within the time range yet to issue or a Warning. Warnings are whenever life-threatening conditions are “imminent” or, for winter storms, within 24 hours, while Watches are when they are “possible”, or, for winter storms, within 48 hours. While not as serious as Warnings, these typically can be followed by Warnings.
Weather Model: weather models are various mathematical computer models/algorithms produced by different organizations and countries that use observational data and complex computing to provide weather forecasts. Forecasters use these models as a tool to develop their own forecast. All models have their own biases and must be taken with a grain of salt and only used as one tool in developing a forecast along with the forecaster’s knowledge and other tools.
12z, 0z, etc.- These are timing signals for models, essentially done in military time with 24 hours. The main model times are 0z, 6z, 12z, and 18z, though most models come out at 0z and 12z. New data is taken officially every 12 hours, as 0z and 12z, which is why those are the two model runs with fresh data. The timing is actually that in London, where the first weather balloons were launched and timing needed to be standard across the globe for weather. Thus, 0z is midnight in England and 7 PM during winter in SW CT. 12z is noon in England, 7 AM here in SW CT. Models come out anywhere between 2 and 6 hours later than the timing in z shows they will, but the time stamp just shows what data was input.
500mb- This level in the atmosphere is just below 20,000 feet, and this is where models measure energy, sometimes referred to as vorticity. This energy is what drives overall patterns, and this is also where ridges and troughs in the upper atmosphere are found. When delving deeper into meteorology, one finds that it is the 500mb setup that drives the overall pattern and dictates storm direction, which is why I refer to this level so much. It does take years to get accustomed to the different complexities at the 500mb level, and this is where blocks can be observed as well. I tend to try and explain exactly what is going on at this level in the atmosphere whenever I use 500mb charts, but troughs in the east allow colder air down and support snow storms off the east coast, while ridges on the east coast allow warm air to flow up along the east and push storms to our west, resulting in rain.
850mb- This level is one where all models output temperatures, and it is around 5,000 feet up in the atmosphere. As a VERY general rule, whenever temperatures at 850mb are below freezing, snow or sleet can be expected at the surface. If surface temperatures are significantly warmer than 850mb/5,000 feet, then precipitation can fall as rain, but the 850mb freezing line typically is very important when predicting precipitation types, which is why I reference it so much. | <urn:uuid:1b072d7c-a004-4578-911a-54cf08528e41> | CC-MAIN-2019-47 | https://swctweather.com/weather-glossary/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667945.28/warc/CC-MAIN-20191114030315-20191114054315-00459.warc.gz | en | 0.95761 | 4,877 | 2.9375 | 3 |
The Ford Motor Company was incorporated in 1903 but can trace its history back to 1896 with Henry Ford’s first automobile. In those 123 years, the company has revolutionized automobile manufacturing, supported the American efforts in two world wars, created iconic vehicles like the Model T, Thunderbird, Mustang and Bronco and even helped NASA get to the moon.
In addition to making cars, the company has been involved in almost every form of motorsport, from Formula 1 to off-road Baja racing. Here are the key milestones from the company's long and illustrious history, from 1896 to its 100th anniversary in 2003.
1896 – The Quadricycle
Henry Ford, founder of the Ford Motor Company, built his first automobile in June of 1896. He called it the "Quadricycle" because it rode on four bicycle wheels. Powered by a two-cylinder engine producing four horsepower and driving the rear wheels, the Quadricycle was good for a heady 20 mph thanks to its two-speed gearbox.
The very first Quadricycle was sold for $200, and Ford sold two more before the Ford Motor Company was founded. Henry Ford later bought back the original Quadricycle for $60, and it presently lives in the Henry Ford Museum in Dearborn, Michigan.
1899 – Detroit Automobile Company
The Detroit Automobile Company (DAC) was founded on August 5, 1899, in Detroit, Michigan by Henry Ford. The first vehicle, completed in 1900, was a gasoline-powered delivery truck. Despite positive press, the truck was slow, heavy and unreliable.
DAC closed in 1900 and was reorganized into the Henry Ford Company in November of 1901. In 1902, Henry Ford was bought out of the company by his partners, which included Henry Leland, who would quickly reorganize the company again and turn it into the Cadillac Automobile Company.
1901 – The Duel
After the Detroit Automobile Company closed, Henry Ford needed investors to continue his automotive ambitions. In order to raise his profile, attract financing, and prove that his cars could be a commercial success, he decided to enter a race promoted by the Detroit Automobile Club.
The race took place on a one-mile dirt oval horse racing track. After mechanical issues thinned the field, only Henry Ford and Alexander Winton took the start. Ford won the race, the only one he would ever enter, and collected a $1,000 prize.
1902 – The Beast
The 999 was one of two identical race cars created by Henry Ford and Tom Cooper. The cars had no suspension, no differential, and a crude pivoting metal bar for steering, and they were powered by a 100-horsepower inline four-cylinder engine that displaced 18.9 liters.
Driven by Barney Oldfield, the car won the Manufacturer's Challenge Cup while setting the course record at the very same track where Henry Ford had won the previous year. The car would go on to win many times over its career and, with Henry Ford behind the wheel, set a new land speed record of 91.37 mph on an ice-covered lake in January of 1904.
1903 – Ford Motor Company Incorporated
In 1903, after successfully raising enough investment, the Ford Motor Company was founded. Included in the initial stockholders and investors were John and Horace Dodge, who would go on to start Dodge Brothers Motor Company in 1913.
During the formative years of Ford Motor Company, the Dodge brothers supplied the complete chassis for the 1903 Ford Model A. The Ford Motor Company sold the first Model A on July 15, 1903. Before the debut of the iconic Model T in 1908, Ford produced the Model A, B, C, F, K, N, R, and S.
1904 – Ford Canada Opens
Ford's first international plant was built in 1904 in Windsor, Ontario, Canada. The facility sat directly across the Detroit River from the original Ford assembly plant. Ford Canada was established as a completely separate organization, not a subsidiary of Ford Motor Company, to sell cars in Canada and throughout the British Empire.
The company used patent rights to produce Ford vehicles. In September of 1904, a Ford Model C became the first car to roll out of the factory and the first Ford built in Canada.
1907 – Ford’s Famous Logo
The Ford logo, with its distinctive script, was first created by Childe Harold Wills, the company’s first chief engineer/designer. Wills used his grandfather’s stencil set for the font, which is patterned after the type of writing taught in schools during the late 1800s.
Wills helped build the 999 race car but was most influential on the Model T: he designed its transmission and the detachable cylinder head of its engine. He would leave Ford in 1919 to start his own automobile company, Wills Sainte Claire.
1908 – The Popular Model T
The Ford Model T, produced from 1908 to 1927, revolutionized transportation. During the early 1900s, cars were still rare, expensive, and terribly unreliable; the Model T changed all of that with a simple, durable design that was easy to maintain and affordable to average Americans. Ford sold 15,000 Model T's in the first year.
The Model T was powered by a 20-horsepower four-cylinder engine with a two-speed-plus-reverse transmission driving the rear wheels. Top speed was somewhere between 40 and 45 mph, which is fast for a car whose service brake acted on the transmission rather than at the wheels.
1909 – Ford of Britain Founded
Unlike Ford of Canada, Ford of Britain is a subsidiary of Ford Motor Company. Ford had been selling cars in the U.K. since 1903 but needed a proper production facility to expand in Great Britain. Ford Motor Company Limited was established in 1909, and the first Ford dealership opened in 1910.
In 1911, Ford opened the Trafford Park assembly plant to build Model T's for the foreign market. Six thousand cars were built in 1913, and the Model T became Britain's top-selling car. The following year, the moving assembly line was integrated into the factory, and Ford of Britain could produce 21 cars per hour.
1913 – The Moving Assembly Line
The assembly line in automotive manufacturing has been around since 1901 when Ransom Olds used it to build the first mass-produced automobile, the Oldsmobile Curved-Dash. Ford’s great innovation was to create the moving assembly line, which allowed a worker to do the same job over and over again without having to move from his position.
Before the moving assembly line, a Model T took 12.5 hours to build; after the moving assembly line was integrated into the factory, the build time per car dropped to 1.5 hours. The speed at which Ford could build cars allowed the company to continually drop the price, allowing more people to afford a car.
1914 – The $5 Workday
When Ford introduced the "$5 Per Day" pay rate, it was double what a typical factory worker earned. At the same time, Ford changed from a nine-hour workday to an eight-hour day. This meant that the Ford factory could run three work shifts per day instead of two.
The increase in pay and the change in the work day meant that employees were more likely to stay at the company, had more free time, and could afford to purchase the cars they produced. The day after Ford announced the “$5 Day,” 10,000 people lined up at the company’s offices hoping for a job.
1917 – River Rouge Complex
In 1917, Ford Motor Company began construction of the Ford River Rouge Complex. When it was finally completed in 1928, it was the largest factory in the world. The complex itself is 1.5 miles wide and one mile long, with 93 buildings and 16 million square feet of factory floor space.
The factory had its own ship docks, and more than 100 miles of railroad track ran inside the buildings. It also had its own power plant and steel mill, meaning that it could take in raw materials and turn them into vehicles within a single factory. Before the Great Depression, the River Rouge Complex employed 100,000 people.
1917 – The First Ford Truck
The Ford Model TT was the first truck made by the Ford Motor Company. Based on the Model T car, it shared the same engine but was equipped with a heavier frame and rear axle to cope with the work it was expected to perform.
The Model TT proved to be very durable, but was slow, even by the standards of 1917. With the standard gearing, the truck was capable of 15 mph, and with the optional special gearing, 22 mph was the recommended top speed.
1918 – World War 1
In 1918, the United States, along with its allies, was engaged in the horrific war raging in Europe. At the time it was called the "Great War," but we know it now as WWI. To support the war effort, the Ford River Rouge Complex began to manufacture the Eagle-class patrol boat, a 110-foot-long ship designed to chase down submarines.
In total, 42 of these ships were built at the Ford plant, along with 38,000 Model T military cars, ambulances, and trucks; 7,000 Fordson tractors; two types of armored tanks; and 4,000 Liberty airplane engines.
1922 – Ford Purchases Lincoln
In 1917, Henry Leland and his son Wilfred founded the Lincoln Motor Company. Leland is also famous for founding Cadillac and establishing the personal luxury automobile segment. It's somewhat ironic that two of the most famous luxury automobile brands in the United States, founded by the same person with the same goal of building luxury automobiles, ended up as direct competitors for roughly 100 years.
Ford Motor Company bought the Lincoln Motor Company in February of 1922 for $8 million. The purchase allowed Ford to compete directly against Cadillac, Duesenberg, Packard and Pierce-Arrow for a share of the luxury automobile market.
1925 – Ford Produces Airplanes
The Ford Trimotor, so named because of its three engines, was a transportation aircraft designed for the civil aviation market. Very similar in design to the airplanes of the Dutch Fokker F.VII and the work of German airplane designer Hugo Junkers, the Ford Trimotor was found to have infringed upon the patents of Junkers and was barred from sale in Europe.
In the U.S., Ford built 199 Trimotor planes, of which about 18 are believed to survive to this day. The first models featured 200-horsepower Wright J-4 engines, and the final variant came equipped with 300-horsepower engines.
1925 – The 15 Millionth Model T
In 1927, Ford Motor Company celebrated an incredible milestone, the construction of the fifteen-millionth Model T. The actual car was built as a touring model; four-door with a retractable top and seating for five people. Its design and construction is very similar to the very first Model T of 1908 and is powered by the same four-cylinder engine with two forward and one reverse gear.
On May 26, 1927, the car rolled off the assembly line driven by Edsel Ford, Henry Ford’s son, with Henry riding shotgun. The car currently lives at the Henry Ford Museum.
1927 – The Ford Model A
After the fifteen-millionth Model T was built, Ford Motor Company shut down for six months to completely re-tool the factory for a brand-new car, the Model A. Production ran from 1927 to 1932, with almost 5 million built.
The car, amazingly, was available in 36 different variants and trims, from two-door coupes and convertibles to mail trucks and wood-paneled delivery vans. Power came from a 3.3-liter inline four-cylinder engine with 40 horsepower. Mated to a three-speed transmission, the Model A was capable of 65 mph.
1928 – Ford Founds “Fordlandia”
In the 1920s, the Ford Motor Company was searching for a strategy to avoid the British monopoly over the supply of rubber, which is used for everything from tires to door seals to suspension bushings and numerous other components. Ford negotiated with the Brazilian government for 2.5 million acres of land in the state of Pará in northern Brazil on which to grow, harvest, and export rubber.
Ford would be exempt from Brazilian taxes in exchange for 9% of the profits. The original site was abandoned and the operation relocated in 1934 after a number of problems and revolts. In 1945, synthetic rubber reduced the demand for natural rubber, and the land was sold back to the Brazilian government.
1932 – The Flathead V8
Even though the Ford Flathead V8 was not the first production V8 available in a car, it is perhaps the most famous; it helped start the "hot-rod" community and jumpstarted America's love affair with the engine.
First developed in 1932, the Type 221 V8 displaced 3.6 liters, was good for 65 horsepower, and was first fitted to the 1932 Model 18. Production ran from 1932 to 1953 in the U.S. The final version, the Type 337 V8, produced 154 horsepower when fitted to Lincoln's cars. Even today, the flathead V8 remains popular with hot-rodders for its durability and ability to produce big horsepower.
1938 – Ford Creates The Mercury Brand
In 1938, Edsel Ford founded the Mercury Motor Company as an entry-level premium brand that slotted between the luxury cars of Lincoln and the basic cars of Ford. The Mercury brand is named after the Roman god Mercury.
The first car Mercury produced was the 1939 Mercury 8 sedan. Powered by the Type 239 flathead V8 with 95 horsepower, the 8 cost $916 new. The new brand and line of cars proved popular, and Mercury sold over 65,000 vehicles in its first year. The Mercury brand was discontinued in 2011 after poor sales and a brand identity crisis.
1941 – Ford Builds Jeeps
The original Jeep, named after "GP" or "general purpose," was initially designed by the Bantam company for the U.S. Army. At the start of WWII, it was believed that Bantam was too small to build enough Jeeps for the military, which had requested 350 per day, so the design was supplied to Willys and Ford.
Bantam designed the original, Willys-Overland modified and improved the design, and Ford was chosen as an additional producer. Ford is actually credited with designing the familiar "Jeep face." By the end of WWII, Ford had produced just over 282,000 Jeeps for military use.
1942 – Retooling For War
During World War II, most American manufacturing was allocated to producing equipment, munitions, and supplies for the war effort. In February of 1942, Ford stopped all civilian car manufacturing and began producing a staggering amount of military equipment.
Ford Motor Company, at all facilities, produced over 86,000 complete airplanes, 57,000 airplane engines, and 4,000 military gliders. Its factories made Jeeps, bombs, grenades, four-wheel-drive trucks, airplane engine superchargers, and generators. The giant Willow Run factory in Michigan produced B-24 Liberator bombers on an assembly line that was one mile long. At full production, the factory could produce one airplane per hour.
1942 – Lindbergh and Rosie
In 1940, the U.S. government asked Ford to build B-24 bombers for the war effort. In response, Ford built a massive factory with over 2.5 million square feet of floor space. During that time, famed aviator Charles Lindbergh served as a consultant at the plant, calling it "the Grand Canyon of the mechanized world."
Also at the Willow Run facility was a young female riveter named Rose Will Monroe. After actor Walter Pidgeon discovered Monroe at the plant, she was chosen to appear in promotional films for war bond sales as "Rosie the Riveter." The role made her a household name during WWII.
1948 – The Ford F-Series Pickup Truck
The Ford F-Series pickup truck was the first truck that Ford designed specifically for truck use, rather than sharing a chassis with the company's cars. The first generation, built from 1948 to 1952, was available in eight different chassis variants, from F-1 to F-8. The F-1 was a light-duty half-ton pickup, and the F-8 was a three-ton "Big Job" commercial truck.
Engines and power depended on the chassis, and the popular F-1 pickup was available with either a straight-six engine or the Type 239 flathead V8. All of the trucks, regardless of chassis, were equipped with three-, four-, or five-speed manual transmissions.
1954 – The Ford Thunderbird
First unveiled at the Detroit Auto Show in February of 1954, the Ford Thunderbird was initially conceived to compete directly against the Chevrolet Corvette, which debuted in 1953. However, the marketing at Ford touted the car’s comfort and convenience features over the sportiness of the chassis.
Despite the focus on comfort, the Thunderbird outsold the Corvette in its first year, with just over 16,000 sales compared to the Corvette's 700. With 198 horsepower from its V8 and a top speed of just over 100 mph, the Thunderbird was a capable performer and more luxurious than the Corvette of the time.
1954 – Ford Begins Crash Testing
In 1954, Ford started to prioritize the safety of its cars. Concerned about how the cars and their occupants fared in a crash, Ford began to conduct safety tests with its vehicles, crashing cars into each other to analyze their safety and learn how they could be made safer.
These tests, along with countless others from other vehicle manufacturers, would lead to dramatic improvements in vehicle safety and the survivability of car crashes. Three-point safety belts, crumple zones, airbags, and side-impact protection are all innovations that came about through crash testing automobiles.
1956 – Ford Motor Company Goes Public
On January 17, 1956, the Ford Motor Company went public. It was the largest initial public offering (IPO) in American history up to that time. In 1956, the Ford Motor Company was the third-largest company in the U.S., behind GM and the Standard Oil Company.
The IPO of 22% of the Ford Motor Company was so big that over 200 banks and firms were involved. Ford offered 10.2 million Class A shares at an IPO price of $63. By the end of the first day of trading, the price per share had risen to $69.50, which valued the company at roughly $3.2 billion.
1957 – Ford Introduces The Edsel Brand
In 1957, the Ford Motor Company introduced a new brand, Edsel. Named after Edsel B. Ford, the son of founder Henry Ford, the brand was expected to increase Ford's market share in order to compete with General Motors and Chrysler.
Unfortunately, the cars never sold particularly well, and the public perception was that they were over-hyped and overpriced. Controversial design, reliability issues, and the start of an economic recession in 1957 all contributed to the downfall of the brand. Production ceased in 1960 and the brand was shut down. In total, 116,000 vehicles were produced, less than half of what the company needed to break even.
1963 – Ford Attempts to Buy Ferrari
In January of 1963, Henry Ford II and Lee Iacocca planned to buy the Ferrari Company. They had wanted to get involved with international GT racing and figured that the best way to do that was to purchase a well-established, experienced company.
After much negotiating, a deal was struck between Ford and Ferrari for the sale of the company. However, at the last minute, Ferrari pulled out. Much has been written and speculated about the deal, the negotiations, and the reasons, but the net result was that Ford left empty-handed and formed Ford Advanced Vehicles in England to build a GT car, the GT40, that could beat Ferrari at Le Mans.
1964 – The Iconic Ford Mustang
Introduced on April 17, 1964, the Mustang is perhaps Ford’s most famous car, next to the Model T. Initially built on the same platform as the compact Ford Falcon, the Mustang was an instant hit and created the “pony car” class of American muscle cars.
Known for being affordable, sporty, and infinitely customizable, the Mustang changed the game when it came to American muscle cars. Ford sold 559,500 Mustangs in 1965 and, in total, over ten million as of 2019. One of the biggest draws of the Mustang has always been its customizability and the upgrades available from the factory.
1964 – Ford GT40 Debuts At Le Mans
A year after the failed attempt to buy Ferrari, Ford Motor Company brought its "Ferrari fighter," the GT40, to Le Mans. The car's name comes from Grand Touring (GT), and the 40 derives from the height of the car: 40 inches tall.
Powered by a 289-cubic-inch V8, the same as used in the Mustang, the GT40 could surpass 200 mph at Le Mans. Teething issues with the new car, instability, and reliability problems took their toll during the 1964 Le Mans race; none of the three cars that entered finished, giving Ferrari another overall Le Mans win.
1965 – Ford And The Race To The Moon
In 1961, Ford Motor Company purchased electronics manufacturer PHILCO, creating PHILCO-Ford. The company provided Ford with car and truck radio receivers and produced computer systems, televisions, washing machines and a large array of other consumer electronics. In the 1960s, NASA contracted with PHILCO-Ford to build the tracking systems for the Project Mercury space missions.
PHILCO-Ford was also responsible for design, manufacture, and installation of “Mission Control” at NASA’s space center in Houston, Texas. The control consoles were used for the Gemini, Apollo moon missions, Skylab and the Space Shuttle missions until 1998. They are preserved for their historical significance at NASA today.
1966 – Ford Wins At Le Mans
After two heart-breaking years of a motorsports program designed to beat Ferrari at the 24 Hours of Le Mans, Ford finally delivered in 1966 with the MKII GT40. Ford stacked the field in the race by entering eight cars. Three from Shelby American, three from Holman Moody and two from UK based Alan Mann Racing, a development partner in the program. Additionally, five privateer teams entered MKI GT40s, giving Ford thirteen cars in the race.
The MKII GT40 was powered by the larger 427 cubic inch V8 producing 485-horsepower. Ford won the race finishing 1-2-3 with the number 2 car winning overall. This was to be the first of four consecutive Le Mans victories.
1978 – The Incredible Exploding Pinto
The Ford Pinto, a name that will live in infamy for all of eternity, was a compact car designed to counter the gaining popularity of imported compact cars from Volkswagen, Toyota and Datsun. It debuted in 1971 and was produced until 1980.
The poor design of the fuel system resulted in several incidents in which the the fuel tank could rupture in a rear-end crash and catch fire or explode. Several high-profile incidents resulted in lawsuits, criminal prosecutions and one of the largest automotive recalls in history. The publicity and costs nearly ruined Ford’s reputation as a car manufacturer.
1985 – The Ford Taurus Changes The Industry
Introduced in 1985 as a 1986 model, the Ford Taurus changed the game for American made sedans. Its rounded shape differed significantly from the competition, earning it the “jelly bean” design nickname, and started an era of increased attention to quality at Ford.
The aerodynamic design made the Taurus more fuel efficient and ultimately led to a design revolution in American car making. Both General Motors and Chrysler quickly developed aerodynamic cars to capitalize on the Taurus’ success. In the first year of production, Ford sold over 200,000 Taurus’ and the car was named the 1986 Motor Trend Car of the Year.
1987 – Ford Buys Aston-Martin Lagonda
In September of 1987, Ford Motor Company announced the purchase of famed British automaker Aston-Martin. The purchase of the company likely saved Aston-Martin from bankruptcy and added a high-end luxury sports car company to Ford’s portfolio. Ford set about modernizing the way that Aston-Martins were produced, opening a new factory in 1994.
Previous to Ford’s ownership, Aston-Martins were largely built by hand, including the bodywork. This added expense and reduced the number of cars that could be produced. Ford owned Aston-Martin until 2007 when it sold the company to a group led by British motorsports and advanced engineering company, Prodrive.
1989 – Ford Buys Jaguar
At the end of 1989, Ford Motors began buying up shares of Jaguar and was fully integrated into Ford’s business by 1999. Ford’s purchase of Jaguar, along with Aston Martin was lumped into the Premier Automotive Group, which was intended to provide Ford with upscale luxury vehicles while the brands received modernization and production help from Ford.
Under Ford’s ownership, Jaguar never made a profit, as the models that were introduced, such as the S-Type and X-Type, were lackluster and thinly disguised Ford sedans with a Jaguar badge. Ford ultimately sold Jaguar to Tata Motors in 2008.
1990 – Ford Explorer
The Ford Explorer was the SUV that was built to battle the Chevrolet Blazer and the Jeep Cherokee. Introduced in 1990 for the 1991 model year, the Explorer was available as a two-door or four-door and equipped with the German-made Cologne V6. Amazingly, the Explorer holds the distinction of being the very first four-door SUV produced by Ford.
The Explorer is perhaps best known for the Firestone Tire controversy of the late 1990s. Under-inflated tires, as recommended by Ford, likely led to tire tread separation and a large number of accidents. Firestone was forced to recall 23 million tires after 823 injuries and 271 deaths.
2003 – Ford Celebrates 100th Anniversary
The Ford Motor Company celebrated its 100th Anniversary in 2003. Despite Ford producing vehicles all the way back to 1896, the Ford Motor Company, as we know it today, was founded in 1903.
In its long history, the company has contributed to revolutionizing car ownership, modernizing the assembly line, advancing factory worker quality of life, helped in two American war efforts and created some of the most influential and iconic vehicles in the history of the automobile. Today, Ford stands as one of the great automobile manufacturers the world has ever seen. | <urn:uuid:4b5e5505-3e94-467e-9efe-1d07bd5a321d> | CC-MAIN-2019-47 | http://www.brakeforit.com/classic/history-of-ford-motors/?view-all | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670821.55/warc/CC-MAIN-20191121125509-20191121153509-00180.warc.gz | en | 0.968945 | 5,701 | 3.0625 | 3 |
Synopsis and Illustrations in Folio Edition
Paradise Lost by John Milton
Commentary by Robert J. Wickenheiser, Ph. D.
Without a doubt, Terrance Lindall is the foremost illustrator of Paradise Lost in our age, comparable to other great illustrators through the ages, and someone who has achieved a place of high stature for all time.
Throughout almost four centuries of illustrating Milton’s Paradise Lost, no one has devoted his or her life, artistic talents and skills and the keenness of the illustrator’s eye more fully and few as completely as Terrance Lindall has done in bringing to life Milton’s great epic. He has also devoted his brilliant mind to studying Milton, his philosophy, and his theology in order to know as fully as possible the great poet to whom he has devoted his adult life and to whose great epic he has devoted the keenness of his artistic eye in order to bring that great epic alive in new ways in a new age and for newer ages still to come.
From virtually the outset Milton has been appreciated as the poet of poets. It was John Dryden who said it first and best about Milton shortly after Milton died in 1674:
Three Poets in three distant Ages born ––
Greece, Italy and England did adorn.
The First in loftiness of thought Surpass'd;
The Next in Majesty: in both the Last.
The force of Nature could no further goe;
To make a Third she joyn'd the Former two.
Milton's use of unrhymed iambic pentameter verse in a manner never used before raises the lofty goals of his epic to a level never before achieved in the English language. Moreover, the poet who said at age 10 that he intended to write an epic which will do for England what Homer had done for Greece and Virgil for Rome, accomplished masterfully the goal he set himself and more than has ever been achieved before or since.
This is by no means to say that there are no great poets who have achieved high goals after Milton, and in doing so have joined Milton and even rivaled him. But Milton is the giant who stands at the door to English poetry urging all who would enter to master their art, to write with the highest respect for language and a passionate recognition of what language is capable of achieving.
In Milton's Paradise Lost we see, too, that in great poetry there is always great passion, clarity of voice in support of the purpose at hand, and at its best, with the prophetic and the visionary joined to compel the reader to rise to new heights in what is read and seen through the poet-prophet.
Milton’s Paradise Lost challenges everyone to achieve goals beyond any they might have dreamed possible before, and to take from his own great epic, goals which help define all that is worthy of sustaining while providing English poetry with what it did not yet have. To declare at age 10 that he would become the greatest English poet is one thing, and a quite spectacular thing at that, but to go on then and fulfill this goal shows not only the great vision Milton had as a poet, but also his tremendous confidence in becoming that great poet.
Milton sings with the voice of the visionary poet and so he becomes the poet for those who see in him clarity of voice and of vision; poets like William Blake who, in the early 19th century thought he was Milton (stretching the point a bit as Blake was wont to do) and who therefore relied very much on Milton and even wrote a poem entitled “Milton” designed and hand-colored as with other of Blake’s great works. While Blake openly admired Milton, William Wordsworth, a few decades later, was calling out for Milton in an age that had need of him, proclaiming: “Milton! Thou should’st be living at this hour.”
As the visionary poet Milton was, he had acute interest in such monumental issues as the relationship between God and man, free will and its vital importance to all of mankind along with the responsibility that goes with it, the relationship between man and woman, divorce and the need for acceptance of it, definition of “monarchy” along with important issues related thereto, and a great deal more. Milton defined many issues at a time when England was engaged in a Civil War precisely because of those very significant issues, issues which Milton helped not only to define but also to defend.
His life spared after the Civil War and his reputation as a poet and writer of important treaties reasserted, Milton retired to the country, to Chalfont St. Chiles, where he dedicated himself to completing Paradise Lost, and ultimately, Paradise Regain'd and Samson Agonistes. What a profound loss it would have been had Milton not been allowed to write his greatest poetical works!
Yet how did the poet write his monumental works, especially given the loss of his eyesight while writing significant treatises both before and during the Civil War? Here we have the blind poet dictating to an amanuensis (his daughters, as many preferred to believe for a long time, but in reality his nephew), whole passages defining important relationships and memorable scenes which are themselves of epic proportion: the creation of man in Adam and of woman in Eve; Eve seeing herself in the pond for the first time and likewise our seeing Eve at the same time she sees herself; Adam seeing Eve for the first time; the moving depiction of the “bower of bliss” and then of the creation; the war in heaven; the depiction of Satan and hell, with Satan rallying his troops in passages that take poetry to new heights; the temptation of Eve and then Adam, in equally powerful scenes, and the departure of Adam and Eve from Eden.
Surely Milton deserves not only our gratitude for the prose treatises he wrote, but also for the poetry, much of it written under the most dire of circumstances (some thought he might be put to death for his part in the Civil War and his service to Cromwell, and also more specifically because of his treatise in defense of “beheading a King”).
Here is a poet to be reckoned with: for standing up in defense of eternal values, something Milton not only did himself, but something he expected his readers to do as well; and then to appreciate his poems, his epic verse and organ voice, his epic vision, and his bringing to life, despite (or perhaps because of) his blindness, something so unique that Dryden and others long after him have recognized in Milton the genius that “Surpass’d” Homer and Virgil before him.
As Milton left his supreme poetic gifts for mankind to appreciate in reading his great works during the centuries following him, so, too, he used his blindness to bring to life visions befitting the dynamic scope and epic dimensions of his great epic; visions undertaken in the first, and still one of the greatest illustrated editions of Paradise Lost published not long after Milton died, in a folio format in 1688. Medina's illustrations, primarily, are those which appear in the 1688 folio edition of Paradise Lost, but aside from the significance of what his stature brought to this publishing venture, the 1688 folio remains a highly sought after book today because it is England's first grand publication and therefore holds its own place for the first time with books printed on the Continent where books had long been praised for their publishing distinction and artistic design and success.
Through the centuries John Milton’s Paradise Lost has continued to inspire artists, which tells us much about Milton and about his great epic, a poem which readily lends itself to the eye of the artist, and in this, affords all of us a visual perspective, a visual capturing of the poet’s vision, which words alone can seldom achieve. Commentary and criticism certainly have their place, but seldom does the written word adequately capture the poet’s vision or replace the illustration or illustrations of the artist’s view of a poem and his capturing that view on a canvas. The aspirations of each, however, critic and artist/illustrator, need not be pitted against one another; indeed should not. Rather, they should be welcomed for the manner in which each complements a view or views of a poem thereby bringing together two significant disciplines: that of the writer/poet together with that of the artist/illustrator.
Poets who aspire to lofty goals lend themselves most readily to being illustrated, providing us with the opportunity of looking at how a poem or group of poems is seen by the eye of an artist. Instead of learning about the themes and poetry of a given age or period as seen only through the eyes of writers and critics, we are privileged to have the views of the artist to help us see and appreciate the poetic vision of the poet, sometimes in great variation from one period to the next or as viewed by one generation to the next.
Obviously, given the monumental issues in Paradise Lost as well as Milton's portrayal of them, it should be no surprise to say that Paradise Lost may well be the most illustrated of poems and epics. I intend no controversy by saying this, but wish simply to call attention to how epic scenes have been brought to life for viewers by master artists capable of depicting grand visions within grand poems; by artists capable of capturing with visionary view what words alone can never do. The painter/illustrator, in capturing moments which might otherwise have been given less recognition than they deserve, provides a vital service in bringing to life scenes or moments, images or views depicted in poetic form by the poet, thereby enabling the viewer to appreciate all the more what the poet has achieved and how he has achieved it.
Lindall has himself said about Milton’s epic: “With Paradise Lost, the written word in its greatest form, Milton was able to evoke. . .immense space and project spectacular landscapes of both heaven and hell, and create also the monumentally tragic character of Satan, courageous yet debased, blinded by jealousy and ambition, heroic nonetheless. The blind poet brings powerful visionary life to one of the world’s greatest stories, id est, the Western legend of man’s creation and fall, a story encompassing philosophical concepts of free will, good and evil, justice and mercy, all presented with the greatest artistry to which the written word can aspire.”
Lindall also believes “that insight into Milton and the aesthetic and intellectual pleasures of Paradise Lost can elevate every individual’s experience in education, thought, and human endeavor. . .through the inspiration of the written word.”
It is this cherished belief, which has compelled Lindall to want to bring Paradise Lost alive to others, to urge all to see in Milton, as he does, the power of the word and image, and to want to illustrate Milton’s epic for others to see in relation to the eternal truths and values captured by Milton and conveyed in his great epic poem.
Lindall has synopsized the story of Paradise Lost with genuine care in order to bring Milton’s great epic alive to young and old. His synopsis is poetic in its own beauty, with each word carefully chosen to be true to Milton while maintaining integrity with his great epic and the rendering of it into a readily understandable format. Lindall’s synopsis maintains the spirit of Milton’s epic while revealing the genius of the poet in telling “Of Man’s first disobedience and the fruit / Of that forbidden tree, whose mortal taste / Brought death into the world and all our woe, / With loss of Eden, till one grater Man / Restore us and regain the blissful seat, / Sing heav/nly Muse. . .”
Terrance Lindall has spent decades perfecting his painting skill and illustrating technique in order to capture all that is best and visionary about Milton, providing illustrations of Milton’s great epic, early on, e.g., along with his synopsis in a fold-out brochure in order to bring Milton’s epic alive to students in schools. Lindall’s first edition of his synopsized version of Paradise Lost along with his illustrations (1983) were designed to encourage young readers to look into the brilliance and eloquence of Milton’s visionary poetic landscape and his great organ voice.
More recently he has gone beyond illustrating Paradise Lost by capturing the essence of Milton’s epic and its meaning down through the centuries and beyond in a “Gold Illuminated Paradise Lost Scroll” (size with border 17” x 50”), with nine panels to be read from right to left, as with Hebrew; the Scroll is Lindall’s “tribute to his love [of] and sincere gratitude for Milton’s great contribution to humanity.” He finished the “Gold Illuminated Paradise Lost Scroll” in 2010.
He has also brought Milton’s epic alive in a very large “Altar Piece,” called “The Paradise Lost Altar Piece” (oil on wood), consisting of two large panels, each 24” x 40”. When opened, the panels might be seen as pages from an illuminated manuscript of the Renaissance. One panel shows the gates to the “Garden of Eden.” The second panel shows the “Gates to Hell.” In both panels, pages from the epic poem Paradise Lost lie revealed in the foreground at the center of the illustration. “The Paradise Lost Altar Piece” was completed in 2009.
Lindall’s passion for Milton and his desire to bring the poet and his great epic alive to modern readers reveal themselves over nearly four decades. During this same period, from the late 1970s to 2012, Lindall’s “love of Paradise Lost” and his “sincere gratitude for Milton’s great contribution to humanity” grew enormously.
To get a sense of this as well as of Lindall’s broader artistic background and its influence on his illustrations of Paradise Lost, there is his large cover illustration of the comic book Creepy (now considered a classic – both the comic book and Lindall’s “creepy” cover illustration of “Visions Of Hell (6/79).” Likewise his cover to Creepy (#116, May 1980), entitled “The End of Man” (again, the comic book and Lindall’s cover illustration now considered classic).
About this same time some of Lindall’s earliest illustrations for Paradise Lost in the late 1970s appeared in comic book form, Heavy Metal Magazine (1980). Appearance in Heavy Metal enabled Lindall’s illustrations to reach a very large audience. That issue in 1980 of Heavy Metal Magazine became an acquisition proudly reported by the Bodleian Library in 2010 (with one of Lindall’s paintings, Visianry Foal, appearing at the top of the acquisitions page), alongside such other acquisition listings at the same time as Philip Neve’s A Narrative of the Disinterment of Milton’s Coffin. . .Wednesday, 4th of August, 1790 (1790) and Philip Pullman’s His Dark Materials trilogy (1995-2000), a rewriting of PL by “a modern master,” among others. The oil painting by Lindall from the Nii Foundation collection was used by the Oxford University major exhibit "Citizen Milton" at the Bodleian Library in its celebration of the 400th anniversary of Milton’s birth in 2008, thereby recognizing Lindall's contribution to the continuing Miltonian artistic legacy.
Joseph Wittreich, esteemed Milton scholar and friend of both Lindall and me, has kindly given a copy of the 1980 issue of Heavy Metal Magazine to the Huntington Library. My own collection has several copies along with the other acquisitions listed above by the Bodleian Library in 2010.
Shortly after the appearance of a portion of Terrance Lindall’s illustrations of Paradise Lost in Heavy Metal Magazine (1980), there appeared in 1983 his synopsis of Paradise Lost along with his illustrations of Milton’s epic, privately published together in a small book (5 ½” x 8 ¼”) in a limited number of copies, entitled: John Milton’s Paradise Lost synopsized and with illustrations by Terrance Lindall. The color print illustrations, inspiration now taking real form and mature character, were tipped in across from the printed synopsis of the illustrated lines of Milton being illustrated.
The whole was a wonderful success and Lindall’s reputation as an artist and as someone committed to illustrating Milton’s great epic were growing in stature, while his illustrations were gaining recognition for the artistic achievement they represented. The surrealist provocateur was moving in a direction that suited his own goals as an artist and a scholar, an illustrator of Paradise Lost and someone even more strongly committed to continuing his illustrating of the poet’s great epic. The World Wide Web has long since given access to Lindall’s paintings by millions, making Lindall’s illustrations among the best known of Paradise Lost.
Lindall’s attention to Milton’s epic and to details in the epic, ever from the eye of the dedicated and committed artist/illustrator, grew beyond his early attention to detail. From a small-size private publication with tipped-in cards measuring 3 ½” x 4 ¼” or sometimes 4 ½”, Lindall moved to a quarto-sized publication in 2009, again done in a very limited number of copies (this time 20) and with each illustration measuring 5 ¾” x 7 5/8” and signed and dated by the artist.
The quarto edition has been followed by his massive and richly triumphant elephant folio illustrating Paradise Lost (No. 1 completed in 2011 and No. 2 in 2012), the remarkable edition we celebrate here. All concepts that were growing in meaning and stature during the nearly forty years before now were drawing themselves into place for this ultimate expression of Lindall's interpretation of Paradise Lost in this one final work, his Elephant Folio. Like his other works before him, this large edition is also being done in a very limited number of copies (10), all by hand, a vast expansion in size and scope over his quarto edition, with 64 pages, each page measuring 13” x 19”, illustrations mostly measuring 9” x 12”, title page measuring 11” x 11”. The binding of each folio is intended to be leather bound by the renowned binder Herb Weitz, hand tooled & gilt-decorated, unique, and each personally dedicated to the owner. The covers will be identified by different motifs, such as the "The Archangel Michael Folio" or "The Lucifer Folio," etc. Each copy will have one original conceptual drawing at the front.
I use “being done” in describing both instances, the quarto and the folio editions, because both editions have been (and will continue to be) “done” by hand, with loving care, and with each illustration printed on the highest quality paper stock available anywhere and signed and dated by the artist. Both the quarto and the folio editions have been, and will be, done as “originals, as signed prints,” and in the case of the Elephant Folio, as prints with original paintings surrounding them.
In itself, the quarto edition is superb, truly one of a kind, and distinctive now and for years to come. “The Paradise Lost Elephant Folio,” however, is amazing and goes far beyond the quarto edition in untold ways; it is the culmination of Terrance Lindall’s life’s devotion to Milton, to Paradise Lost, and to all that Milton represents and his great epic means. Because of Lindall’s supreme dedication and artistic achievements, Milton will live in yet another new age, brought to life in refreshingly new ways, made “relevant” in remarkably profound ways. Because of Terrance Lindall, great new numbers of readers will be attracted to Milton and his profound epic than would otherwise, most assuredly, have been the case.
“The Paradise Lost Elephant Folio," in particular, is a hand-embellished and gold illuminated 13 x 19 inch book containing 14 full-page color 1000 dpi prints with 23.75 carat gold leaf edging on Crane archival paper. Each illustration is signed by Terrance Lindall, some pages with hand-painted illustrated or decorated borders and large, carefully embellished head- or tail-piece illustrations, others with historiated initials with 23.75 carat gold leaf embellishments. All add to the depth and meaning of a given illustration of Lindall’s synopsized Paradise Lost (1983) appearing across from an illustration. For the Elephant Folio, Terrance Lindall is also providing a final painting, The Celestial Orbit, as a frontispiece. It is Lindall's "ultimate statement" as an artist's interpretation of Milton's great epic. This painting will only be produced as a print for the Elephant Folio and will not be reproduced for collectors as a signed print in any other format.
And while Lindall may now think that he has finished his work with Milton, he hasn’t, because Milton lives within Lindall in a special way, as surely as Lindall remains dedicated to bringing Milton alive to new generations in fresh and vibrant new ways, doing the same for countless generations in centuries to come.In his folio edition and the illustrations in it, Terrance Lindall shows the influence by certain great master illustrators of Paradise Lost through the centuries before him, especially with the inclusion of richly illustrated margins for each color illustration, the margins colored in 23.75 carat gilt and consisting of brightly colored details drawn from the epic in order to advance the meaning of the given illustration. Moreover, again in the tradition of certain great master illustrators of Milton‘s Paradise Lost through the centuries, historiated initials, in imitation of the initial letter in an illuminated manuscript, each in rich gilt and bright colors, are used as the first initial of a section and decorated with designs representing scenes from the text, in order to heighten the intensity of the cumulatively related details in each component part: illustration, border, and historiated initial.
The illustrated borders in the elephant folio are complete paintings in themselves. Although the border art focuses principally on elements of design, they also sometimes tell stories or make commentary about what is illustrated in the featured central painting. The borders likewise pay tribute to both humanity’s great achievements, such as music, dance and architecture, as well as tribute to those individuals and institutions and friends who have had important influences on Lindall’s ideas, or who have shown substantial support or affinity. For example, the Filipino surrealist artist Bienvenido “Bones” Banez, Jr., discovered Lindall’s repertoire during the world renowned “Brave Destiny” exhibit in 2003, an exhibit to which Bienvenido had been invited to display one of his works. Thereafter, a friendship and mutual admiration between the two great artists grew, to the benefit of each.
Bienvenido communicated to Lindall the idea of how “Satan brings color to the world.” Lindall thought the idea to be an insightful and original "affinity," and so in the elephant folio plate, “Pandemonium,” which is a tribute to art, architecture, construction, sculpture, painting, and the like, he especially honors the Filipino surrealist artist by placing Bienvenido’s name on the artist's palette at the very top of the border, the palette in flaming colors.Like the great illustrators of Milton‘s Paradise Lost before him, Lindall uses many and various techniques and styles to bring Milton’s great epic alive. As with Medina, e.g., in the first illustrated edition of Paradise Lost in 1688, Lindall has mastered how to use the synopsized scenic effect to focus our attention on an important moment in the epic while capturing all around it other significant moments or scenes in the epic related to that important central one.
TO READ THE REST, CLICK HERE: | <urn:uuid:2e07e241-ccc1-4fe1-868c-8ce3ef24c18b> | CC-MAIN-2019-47 | https://www.grandparadiselost.com/about-terrance-lindall | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670559.66/warc/CC-MAIN-20191120134617-20191120162617-00221.warc.gz | en | 0.967238 | 5,072 | 3.140625 | 3 |
Hydrogen peroxide is a chemical compound with the formula . In its pure form, it is a very pale blue, clear liquid, slightly more viscous than water. Hydrogen peroxide is the simplest peroxide (a compound with an oxygen–oxygen single bond). It is used as an oxidizer, bleaching agent and antiseptic. Concentrated hydrogen peroxide, or "high-test peroxide", is a reactive oxygen species and has been used as a propellant in rocketry. Its chemistry is dominated by the nature of its unstable peroxide bond.
Hydrogen peroxide is unstable and slowly decomposes in the presence of light. Because of its instability, hydrogen peroxide is typically stored with a stabilizer in a weakly acidic solution. Hydrogen peroxide is found in biological systems including the human body. Enzymes that use or decompose hydrogen peroxide are classified as peroxidases.
The boiling point of has been extrapolated as being 150.2 °C, approximately 50 °C higher than water. In practice, hydrogen peroxide will undergo potentially explosive thermal decomposition if heated to this temperature. It may be safely distilled at lower temperatures under reduced pressure.
In aqueous solutions hydrogen peroxide differs from the pure substance due to the effects of hydrogen bonding between water and hydrogen peroxide molecules. Hydrogen peroxide and water form a eutectic mixture, exhibiting freezing-point depression; pure water has a freezing point of 0 °C and pure hydrogen peroxide of −0.43 °C. The boiling point of the same mixtures is also depressed in relation with the mean of both boiling points (125.1 °C). It occurs at 114 °C. This boiling point is 14 °C greater than that of pure water and 36.2 °C less than that of pure hydrogen peroxide.
Hydrogen peroxide is a nonplanar molecule as shown by Paul-Antoine Giguère in 1950 using infrared spectroscopy, with (twisted) C2 symmetry. Although the O−O bond is a single bond, the molecule has a relatively high rotational barrier of 2460 cm−1 (29.45 kJ/mol); for comparison, the rotational barrier for ethane is 12.5 kJ/mol. The increased barrier is ascribed to repulsion between the lone pairs of the adjacent oxygen atoms and results in hydrogen peroxide displaying atropisomerism.
The molecular structures of gaseous and crystalline are significantly different. This difference is attributed to the effects of hydrogen bonding, which is absent in the gaseous state. Crystals of are tetragonal with the space group DP4121.
Hydrogen peroxide has several structural analogues with Hm−X−X−Hn bonding arrangements (water also shown for comparison). It has the highest (theoretical) boiling point of this series (X = O, N, S). Its melting point is also fairly high, being comparable to that of hydrazine and water, with only hydroxylamine crystallising significantly more readily, indicative of particularly strong hydrogen bonding. Diphosphane and hydrogen disulfide exhibit only weak hydrogen bonding and have little chemical similarity to hydrogen peroxide. All of these analogues are thermodynamically unstable. Structurally, the analogues all adopt similar skewed structures, due to repulsion between adjacent lone pairs.
Alexander von Humboldt synthesized one of the first synthetic peroxides, barium peroxide, in 1799 as a by-product of his attempts to decompose air.
Nineteen years later Louis Jacques Thénard recognized that this compound could be used for the preparation of a previously unknown compound, which he described as eau oxygénée (French: oxygenated water) – subsequently known as hydrogen peroxide. (Today this term refers instead to water containing dissolved oxygen (O2).)
An improved version of Thénard's process used hydrochloric acid, followed by addition of sulfuric acid to precipitate the barium sulfate byproduct. This process was used from the end of the 19th century until the middle of the 20th century.
Thénard and Joseph Louis Gay-Lussac synthesized sodium peroxide in 1811. The bleaching effect of peroxides and their salts on natural dyes became known around that time, but early attempts of industrial production of peroxides failed, and the first plant producing hydrogen peroxide was built in 1873 in Berlin. The discovery of the synthesis of hydrogen peroxide by electrolysis with sulfuric acid introduced the more efficient electrochemical method. It was first implemented into industry in 1908 in Weißenstein, Carinthia, Austria. The anthraquinone process, which is still used, was developed during the 1930s by the German chemical manufacturer IG Farben in Ludwigshafen. The increased demand and improvements in the synthesis methods resulted in the rise of the annual production of hydrogen peroxide from 35,000 tonnes in 1950, to over 100,000 tonnes in 1960, to 300,000 tonnes by 1970; by 1998 it reached 2.7 million tonnes.
Pure hydrogen peroxide was long believed to be unstable, as early attempts to separate it from the water, which is present during synthesis, all failed. This instability was due to traces of impurities (transition-metal salts), which catalyze the decomposition of the hydrogen peroxide. Pure hydrogen peroxide was first obtained in 1894—almost 80 years after its discovery—by Richard Wolffenstein, who produced it by vacuum distillation.
Determination of the molecular structure of hydrogen peroxide proved to be very difficult. In 1892 the Italian physical chemist Giacomo Carrara (1864–1925) determined its molecular mass by freezing-point depression, which confirmed that its molecular formula is H2O2. At least half a dozen hypothetical molecular structures seemed to be consistent with the available evidence. In 1934, the English mathematical physicist William Penney and the Scottish physicist Gordon Sutherland proposed a molecular structure for hydrogen peroxide that was very similar to the presently accepted one.
Today, hydrogen peroxide is manufactured almost exclusively by the anthraquinone process, which was formalized in 1936 and patented in 1939. It begins with the reduction of an anthraquinone (such as 2-ethylanthraquinone or the 2-amyl derivative) to the corresponding anthrahydroquinone, typically by hydrogenation on a palladium catalyst. In the presence of oxygen, the anthrahydroquinone then undergoes autoxidation: the labile hydrogen atoms of the hydroxy groups transfer to the oxygen molecule, to give hydrogen peroxide and regenerating the anthraquinone. Most commercial processes achieve oxidation by bubbling compressed air through a solution of the anthrahydroquinone, with the hydrogen peroxide then extracted from the solution and the anthraquinone recycled back for successive cycles of hydrogenation and oxidation.
The simplified overall equation for the process is simple:
A process to produce hydrogen peroxide directly from the elements has been of interest for many years. Direct synthesis is difficult to achieve, as the reaction of hydrogen with oxygen thermodynamically favours production of water. Systems for direct synthesis have been developed, most of which employ finely dispersed metal catalysts similar to those used for hydrogenation of organic substrates. None of these has yet reached a point where they can be used for industrial-scale synthesis.
Hydrogen peroxide is most commonly available as a solution in water. For consumers, it is usually available from pharmacies at 3 and 6 wt% concentrations. The concentrations are sometimes described in terms of the volume of oxygen gas generated; one milliliter of a 20-volume solution generates twenty milliliters of oxygen gas when completely decomposed. For laboratory use, 30 wt% solutions are most common. Commercial grades from 70% to 98% are also available, but due to the potential of solutions of more than 68% hydrogen peroxide to be converted entirely to steam and oxygen (with the temperature of the steam increasing as the concentration increases above 68%) these grades are potentially far more hazardous and require special care in dedicated storage areas. Buyers must typically allow inspection by commercial manufacturers.
In 1994, world production of was around 1.9 million tonnes and grew to 2.2 million in 2006, most of which was at a concentration of 70% or less. In that year bulk 30% sold for around 0.54 USD/kg, equivalent to US$1.50/kg (US$0.68/lb) on a "100% basis".
Hydrogen peroxide occurs in surface water, groundwater and in the atmosphere. It forms upon illumination or natural catalytic action by substances contained in water. Sea water contains 0.5 to 14 μg/L of hydrogen peroxide, freshwater 1 to 30 μg/L and air 0.1 to 1 parts per billion.
The rate of decomposition increases with rising temperature, concentration and pH, with cool, dilute, acidic solutions showing the best stability. Decomposition is catalysed by various compounds, including most transition metals and their compounds (e.g. manganese dioxide, silver, and platinum). Certain metal ions, such as or, can cause the decomposition to take a different path, with free radicals such as (HO·) and (HOO·) being formed. Non-metallic catalysts include potassium iodide, which reacts particularly rapidly and forms the basis of the elephant toothpaste experiment. Hydrogen peroxide can also be decomposed biologically by the enzyme catalase. The decomposition of hydrogen peroxide liberates oxygen and heat; this can be dangerous, as spilling high-concentration hydrogen peroxide on a flammable substance can cause an immediate fire.
Hydrogen peroxide exhibits oxidizing and reducing properties, depending on pH.
In acidic solutions, is one of the most powerful oxidizers known and is stronger than chlorine, chlorine dioxide, and potassium permanganate. When used for removing organic stains from laboratory glassware it is referred to as Piranha solution. Also, through catalysis, can be converted into hydroxyl radicals (·OH), which are highly reactive.
In acidic solutions is oxidized to (hydrogen peroxide acting as an oxidizing agent):
2 (aq) + + 2 (aq) → 2 (aq) + 2 (l)
In basic solution, hydrogen peroxide can reduce a variety of inorganic ions. When it acts as a reducing agent, oxygen gas is also produced. For example, hydrogen peroxide will reduce sodium hypochlorite and potassium permanganate, which is a convenient method for preparing oxygen in the laboratory:
NaOCl + → + NaCl +
2 + 3 → 2 + 2 KOH + 2 + 3
Ph + → Ph +
Alkaline hydrogen peroxide is used for epoxidation of electron-deficient alkenes such as acrylic acid derivatives, and for the oxidation of alkylboranes to alcohols, the second step of hydroboration-oxidation. It is also the principal reagent in the Dakin oxidation process.
It also converts metal oxides into the corresponding peroxides. For example, upon treatment with hydrogen peroxide, chromic acid(+) forms an unstable blue peroxide CrO(.
+ 4 + 2 NaOH → 2 +
converts carboxylic acids (RCO2H) into peroxy acids (RC(O)O2H), which are themselves used as oxidizing agents. Hydrogen peroxide reacts with acetone to form acetone peroxide and with ozone to form trioxidane. Hydrogen peroxide forms stable adducts with urea (Hydrogen peroxide - urea), sodium carbonate (sodium percarbonate) and other compounds. An acid-base adduct with triphenylphosphine oxide is a useful "carrier" for in some reactions.
Hydrogen peroxide is both an oxidizing agent and reducing agent. The oxidation of hydrogen peroxide by sodium hypochlorite yields singlet oxygen. The net reaction of a ferric ion with hydrogen peroxide is a ferrous ion and oxygen. This proceeds via single electron oxidation and hydroxyl radicals. This is used in some organic chemistry oxidations, e.g. in the Fenton's reagent. Only catalytic quantities of iron ion is needed since peroxide also oxidizes ferrous to ferric ion. The net reaction of hydrogen peroxide and permanganate or manganese dioxide is manganous ion; however, until the peroxide is spent some manganous ions are reoxidized to make the reaction catalytic. This forms the basis for common monopropellant rockets.
Hydrogen peroxide is formed in human and animals as a short-lived product in biochemical processes and is toxic to cells. The toxicity is due to oxidation of proteins, membrane lipids and DNA by the peroxide ions. The class of biological enzymes called SOD (superoxide dismutase) is developed in nearly all living cells as an important antioxidant agent. They promote the disproportionation of superoxide into oxygen and hydrogen peroxide, which is then rapidly decomposed by the enzyme catalase to oxygen and water.
Formation of hydrogen peroxide by superoxide dismutase (SOD)
Peroxisomes are organelles found in virtually all eukaryotic cells. They are involved in the catabolism of very long chain fatty acids, branched chain fatty acids, D-amino acids, polyamines, and biosynthesis of plasmalogens, etherphospholipids critical for the normal function of mammalian brains and lungs. Upon oxidation, they produce hydrogen peroxide in the following process:
This reaction is important in liver and kidney cells, where the peroxisomes neutralize various toxic substances that enter the blood. Some of the ethanol humans drink is oxidized to acetaldehyde in this way. In addition, when excess H2O2 accumulates in the cell, catalase converts it to H2O through this reaction:
Another origin of hydrogen peroxide is the degradation of adenosine monophosphate which yields hypoxanthine. Hypoxanthine is then oxidatively catabolized first to xanthine and then to uric acid, and the reaction is catalyzed by the enzyme xanthine oxidase:
The degradation of guanosine monophosphate yields xanthine as an intermediate product which is then converted in the same way to uric acid with the formation of hydrogen peroxide.
Eggs of sea urchin, shortly after fertilization by a sperm, produce hydrogen peroxide. It is then quickly dissociated to OH· radicals. The radicals serve as initiator of radical polymerization, which surrounds the eggs with a protective layer of polymer.
The bombardier beetle has a device which allows it to shoot corrosive and foul-smelling bubbles at its enemies. The beetle produces and stores hydroquinone and hydrogen peroxide, in two separate reservoirs in the rear tip of its abdomen. When threatened, the beetle contracts muscles that force the two reactants through valved tubes into a mixing chamber containing water and a mixture of catalytic enzymes. When combined, the reactants undergo a violent exothermic chemical reaction, raising the temperature to near the boiling point of water. The boiling, foul-smelling liquid partially becomes a gas (flash evaporation) and is expelled through an outlet valve with a loud popping sound.
Hydrogen peroxide has roles as a signalling molecule in the regulation of a wide variety of biological processes. The compound is a major factor implicated in the free-radical theory of aging, based on how readily hydrogen peroxide can decompose into a hydroxyl radical and how superoxide radical byproducts of cellular metabolism can react with ambient water to form hydrogen peroxide. These hydroxyl radicals in turn readily react with and damage vital cellular components, especially those of the mitochondria. At least one study has also tried to link hydrogen peroxide production to cancer. These studies have frequently been quoted in fraudulent treatment claims.
The second major industrial application is the manufacture of sodium percarbonate and sodium perborate, which are used as mild bleaches in laundry detergents. Sodium percarbonate, which is an adduct of sodium carbonate and hydrogen peroxide, is the active ingredient in such products as OxiClean and Tide laundry detergent. When dissolved in water, it releases hydrogen peroxide and sodium carbonate:
By themselves these bleaching agents are only effective at wash temperatures of or above and so are often used in conjunction with bleach activators, which facilitate cleaning at lower temperatures.
It is used in the production of various organic peroxides with dibenzoyl peroxide being a high volume example. It is used in polymerisations, as a flour bleaching agent and as a treatment for acne. Peroxy acids, such as peracetic acid and meta-chloroperoxybenzoic acid are also produced using hydrogen peroxide. Hydrogen peroxide has been used for creating organic peroxide-based explosives, such as acetone peroxide.
Hydrogen peroxide is used in certain waste-water treatment processes to remove organic impurities. In advanced oxidation processing, the Fenton reaction gives the highly reactive hydroxyl radical (·OH). This degrades organic compounds, including those that are ordinarily robust, such as aromatic or halogenated compounds. It can also oxidize sulfur based compounds present in the waste; which is beneficial as it generally reduces their odour.
Hydrogen peroxide can be used for the sterilization of various surfaces, including surgical tools and may be deployed as a vapour (VHP) for room sterilization. H2O2 demonstrates broad-spectrum efficacy against viruses, bacteria, yeasts, and bacterial spores. In general, greater activity is seen against Gram-positive than Gram-negative bacteria; however, the presence of catalase or other peroxidases in these organisms can increase tolerance in the presence of lower concentrations. Higher concentrations of H2O2 (10 to 30%) and longer contact times are required for sporicidal activity.
Hydrogen peroxide is seen as an environmentally safe alternative to chlorine-based bleaches, as it degrades to form oxygen and water and it is generally recognized as safe as an antimicrobial agent by the U.S. Food and Drug Administration (FDA).
Historically hydrogen peroxide was used for disinfecting wounds, partly because of its low cost and prompt availability compared to other antiseptics. It is now thought to inhibit healing and to induce scarring because it destroys newly formed skin cells. Only a very low concentration of H2O2 can induce healing, and only if not repeatedly applied. Surgical use can lead to gas embolism formation. Despite this, it is still used for wound treatment in many countries but is also prevalent as a major first aid antiseptic in the United States.
Dermal exposure to dilute solutions of hydrogen peroxide cause whitening or bleaching of the skin due to microembolism caused by oxygen bubbles in the capillaries.
Diluted (between 1.9% and 12%) mixed with aqueous ammonia is used to bleach human hair. The chemical's bleaching property lends its name to the phrase "peroxide blonde". Hydrogen peroxide is also used for tooth whitening. It can be found in most whitening toothpastes. Hydrogen peroxide has shown positive results involving teeth lightness and chroma shade parameters. It works by oxidizing colored pigments onto the enamel where the shade of the tooth can indeed become lighter. Hydrogen peroxide can be mixed with baking soda and salt to make a home-made toothpaste.
Practitioners of alternative medicine have advocated the use of hydrogen peroxide for various conditions, including emphysema, influenza, AIDS and cancer, although there is no evidence of effectiveness and in some cases it may even be fatal.
The practice calls for the daily consumption of hydrogen peroxide, either orally or by injection, and is based on two assumptions. First, that hydrogen peroxide is naturally produced by the body to combat infection; and second, that human pathogens (including cancer: See Warburg hypothesis) are anaerobic and cannot survive in oxygen-rich environments. The ingestion or injection of hydrogen peroxide is therefore believed to kill disease by mimicking the immune response in addition to increasing levels of oxygen within the body. This makes it similar to other oxygen-based therapies, such as ozone therapy and hyperbaric oxygen therapy.
Both the effectiveness and safety of hydrogen peroxide therapy is scientifically questionable. Hydrogen peroxide is produced by the immune system but in a carefully controlled manner. Cells called phagocytes engulf pathogens and then use hydrogen peroxide to destroy them. The peroxide is toxic to both the cell and the pathogen and so is kept within a special compartment, called a phagosome. Free hydrogen peroxide will damage any tissue it encounters via oxidative stress; a process which also has been proposed as a cause of cancer. Claims that hydrogen peroxide therapy increases cellular levels of oxygen have not been supported. The quantities administered would be expected to provide very little additional oxygen compared to that available from normal respiration. It should also be noted that it is difficult to raise the level of oxygen around cancer cells within a tumour, as the blood supply tends to be poor, a situation known as tumor hypoxia.
Large oral doses of hydrogen peroxide at a 3% concentration may cause irritation and blistering to the mouth, throat, and abdomen as well as abdominal pain, vomiting, and diarrhea.Intravenous injection of hydrogen peroxide has been linked to several deaths.
The American Cancer Society states that "there is no scientific evidence that hydrogen peroxide is a safe, effective or useful cancer treatment." Furthermore, the therapy is not approved by the U.S. FDA.
High-concentration is referred to as "high-test peroxide" (HTP). It can be used either as a monopropellant (not mixed with fuel) or as the oxidizer component of a bipropellant rocket. Use as a monopropellant takes advantage of the decomposition of 70–98% concentration hydrogen peroxide into steam and oxygen. The propellant is pumped into a reaction chamber, where a catalyst, usually a silver or platinum screen, triggers decomposition, producing steam at over, which is expelled through a nozzle, generating thrust. monopropellant produces a maximal specific impulse (Isp) of 161 s (1.6 kN·s/kg). Peroxide was the first major monopropellant adopted for use in rocket applications. Hydrazine eventually replaced hydrogen-peroxide monopropellant thruster applications primarily because of a 25% increase in the vacuum specific impulse. Hydrazine (toxic) and hydrogen peroxide (less-toxic [ACGIH TLV 0.01 and 1 ppm respectively]) are the only two monopropellants (other than cold gases) to have been widely adopted and utilized for propulsion and power applications. The Bell Rocket Belt, reaction control systems for X-1, X-15, Centaur, Mercury, Little Joe, as well as the turbo-pump gas generators for X-1, X-15, Jupiter, Redstone and Viking used hydrogen peroxide as a monopropellant.
As a bipropellant, is decomposed to burn a fuel as an oxidizer. Specific impulses as high as 350 s (3.5 kN·s/kg) can be achieved, depending on the fuel. Peroxide used as an oxidizer gives a somewhat lower Isp than liquid oxygen, but is dense, storable, noncryogenic and can be more easily used to drive gas turbines to give high pressures using an efficient closed cycle. It can also be used for regenerative cooling of rocket engines. Peroxide was used very successfully as an oxidizer in World War II German rocket motors (e.g. T-Stoff, containing oxyquinoline stabilizer, for both the Walter HWK 109-500 Starthilfe RATO externally podded monopropellant booster system, and for the Walter HWK 109-509 rocket motor series used for the Me 163B), most often used with C-Stoff in a self-igniting hypergolic combination, and for the low-cost British Black Knight and Black Arrow launchers.
In the 1940s and 1950s, the Hellmuth Walter KG-conceived turbine used hydrogen peroxide for use in submarines while submerged; it was found to be too noisy and require too much maintenance compared to diesel-electric power systems. Some torpedoes used hydrogen peroxide as oxidizer or propellant. Operator error in the use of hydrogen-peroxide torpedoes was named as possible causes for the sinkings of HMS Sidon and the Russian submarine Kursk. SAAB Underwater Systems is manufacturing the Torpedo 2000. This torpedo, used by the Swedish Navy, is powered by a piston engine propelled by HTP as an oxidizer and kerosene as a fuel in a bipropellant system.
Hydrogen peroxide has various domestic uses, primarily as a cleaning and disinfecting agent.
Regulations vary, but low concentrations, such as 6%, are widely available and legal to buy for medical use. Most over-the-counter peroxide solutions are not suitable for ingestion. Higher concentrations may be considered hazardous and are typically accompanied by a Safety data sheet (SDS). In high concentrations, hydrogen peroxide is an aggressive oxidizer and will corrode many materials, including human skin. In the presence of a reducing agent, high concentrations of will react violently.
High-concentration hydrogen peroxide streams, typically above 40%, should be considered hazardous due to concentrated hydrogen peroxide's meeting the definition of a DOT oxidizer according to U.S. regulations, if released into the environment. The EPA Reportable Quantity (RQ) for D001 hazardous wastes is 100lb, or approximately 10USgal, of concentrated hydrogen peroxide.
Hydrogen peroxide should be stored in a cool, dry, well-ventilated area and away from any flammable or combustible substances. It should be stored in a container composed of non-reactive materials such as stainless steel or glass (other materials including some plastics and aluminium alloys may also be suitable). Because it breaks down quickly when exposed to light, it should be stored in an opaque container, and pharmaceutical formulations typically come in brown bottles that block light.
Hydrogen peroxide, either in pure or diluted form, can pose several risks, the main one being that it forms explosive mixtures upon contact with organic compounds. Highly concentrated hydrogen peroxide itself is unstable and can cause a boiling liquid expanding vapour explosion (BLEVE) of the remaining liquid. Distillation of hydrogen peroxide at normal pressures is thus highly dangerous. It is also corrosive, especially when concentrated, but even domestic-strength solutions can cause irritation to the eyes, mucous membranes and skin. Swallowing hydrogen peroxide solutions is particularly dangerous, as decomposition in the stomach releases large quantities of gas (10 times the volume of a 3% solution), leading to internal bloating. Inhaling over 10% can cause severe pulmonary irritation.
With a significant vapour pressure (1.2 kPa at 50 °C), hydrogen-peroxide vapour is potentially hazardous. According to U.S. NIOSH, the immediately dangerous to life and health (IDLH) limit is only 75 ppm. The U.S. Occupational Safety and Health Administration (OSHA) has established a permissible exposure limit of 1.0 ppm calculated as an 8-hour time-weighted average (29 CFR 1910.1000, Table Z-1). Hydrogen peroxide has also been classified by the American Conference of Governmental Industrial Hygienists (ACGIH) as a "known animal carcinogen, with unknown relevance on humans". For workplaces where there is a risk of exposure to the hazardous concentrations of the vapours, continuous monitors for hydrogen peroxide should be used. Information on the hazards of hydrogen peroxide is available from OSHA and from the ATSDR. | <urn:uuid:dd98444e-19d7-469f-a741-e0e9f3528485> | CC-MAIN-2019-47 | http://everything.explained.today/Hydrogen_peroxide/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496664439.7/warc/CC-MAIN-20191111214811-20191112002811-00220.warc.gz | en | 0.935942 | 5,896 | 4 | 4 |
Illustration: Oda Aurora Norlund
Herman Cappelen is a professor of philosophy at the University of Oslo with a part-time position at the University of St. Andrews. He has published influential books and papers on many topics, especially in the philosophy of language but also philosophical methodology, epistemology, metaphysics, and philosophy of mind. In particular, he has worked on the topic of relativism about truth, and, together with John Hawthorne, he wrote the book Relativism and Monadic Truth (2009). The book presents and argues against relativism about truth, while maintaining that truth is a monadic property: if something is true, it is true full stop. In this interview, Cappelen discusses his understanding of relativism about truth and the arguments for and against the view, together with how the debate relates to other questions in philosophy.
In general, what is relativism? And what are some examples of specific kinds of relativism we can find in contemporary philosophy?
I think it is important to distinguish two ways the term relativism is used in philosophy, including contemporary philosophy. The historical use of it had to do primarily with a very vague thought that is not as common today. The vague thought is that in some domain, or maybe very generally, truth is relativized to some kind of parameter. The easiest way to think about that is in the moral or normative domain: the claim something ought to be done is true or false only relative to something, e.g., your background, or your community, or your choices, or something like that.
This vague thought then gets cashed out or explained in two different ways in more contemporary work, and the weird thing that happens is that now one of the things that was called relativism in the past turns out to be the opponent of contemporary relativism. One way to spell out what I just said about relativization is that you say “well, when I say that it’s good to φ what I’m really saying is that it’s good for me to φ relative to…” and then you put in the parameter that you relativize to, into the claim made. And so when I say, just to make things very simple, that it’s right to φ, what I’m really saying is that it is right for Herman to φ, and then what you are saying is that it’s right for you to φ, and so on. So people make these claims that have the relativization built into it. That’s still a form of relativism in the old-fashioned traditional sense, since it is still going to end up with there being, in some sense, no objective truths about what you ought to do. There is the truth for Herman and there is a truth for other people, and those things can differ.
This move, where you build the relativization into the content of the claim, is what at least many of us today call contextualism. The relativization doesn’t have to do with truth itself it just has to do with the content of what you said. So, when I tried to explain this right now I said: what you said is that it is the right thing for you to do, what I said is that it is the right thing for me to do. So, the relativization becomes a part of the content. What you have done isn’t to fiddle with truth you just fiddled with the content of what is said. Now, in the past people used the term “relativism” to cover those kinds of views. Gilbert Harman, for example, in defending versions of relativism, would talk as if that was relativism. And there are people today who still talk in that old-fashioned way, so David Velleman published a book on relativism and he uses “relativism” in the way that I just described, where you are fiddling with the content of the claim, not anything having to do with truth. That is one way to use “relativism”, and it might be the way that it was used throughout much of its history.
But, then, today I think we’ve made some very significant progress, in that we’ve distinguished the view that I just described from a very different view, where it is not the content of what you say that has the relativization built into it, but the truth-evaluation of what is said. On this view, when you say that it is right to φ and I say that it is right to φ, we’re saying the same thing. You didn’t say that it was right for you and I didn’t say it was right for me. What was said was just the same thing, so in a sense we agree. If you say it is right to φ and I say it is not right to φ, then we disagree, because you’ve affirmed something that I denied. Recall that on the previous view you wouldn’t have said something that I denied, right, because you would’ve have just said that for you it is right to φ, and I would’ve said that for me it is not right to φ, and those are perfectly compatible. But in this new way, that I think of as the contemporary way, of using the term “relativism”, you and I have disagreed, because you affirmed what I denied.
So, where does the relativism come in? Well, that comes in at the level of how we evaluate claims as true or false, not in how we individuate claims. On this view, when you evaluate the one thought, or content, or proposition (people use different terms) “it is right to φ” or “one should φ”, you can say it’s true, I can say it’s false, and we can both be right. When I have been writing about relativism I’ve taken that to be the relevant sense of the term. It is a form of relativization that doesn’t build the parameter-relativity into the content but builds into the truth-assessment, whatever you think that is. I have now talked about normative claims, but, of course, you can be relativist in any number of domains, you could think that, to be super extreme, mathematical claims are only true or false relative to a certain type of parameter, or you could think the same for claims about knowledge, that it is relative whether someone knows something or not, and you could go through different domains and see where relativism applies and where it doesn’t. And, of course, the limit of that is you could be a global relativist where everything is relativistic.
Just one more thing about how different those two initial ways of using the term relativism are, the one that builds it into the content and the one that makes it to be about the truth-evaluation: You could be a relativist in the old-fashioned sense and build it into the content, so that when you utter the sentence “it is good to φ” or “you ought to φ”, then what you really said is that you ought to φ, you just talked about yourself. The person who denies the second kind of relativism could agree with that, and just say “yes, you built that into the content but the truth-assessment is now objective and universal”. So, the two views are very different, and the paradoxical and extremely unfortunate way about mixing these two ways of using “relativism” is that old-fashioned relativism is now sort of understood as the alternative, the opposing view, to contemporary relativism. So, the terminology gets confusing quickly. If you want to get into contemporary debates, the way to do it is to think in the second way where you are not fiddling with the content expressed, but just with how you assess truth and falsity.
You defended, together with John Hawthorne, in your book Relativism and Monadic Truth a non-relativistic understanding of truth. You call it the simple view where truth is a monadic property. Could you describe that view?
The simple description of that simple view is just that it is the denial of the second kind of relativism, let’s just call that relativism from now on. It is the view that when you assess something as true or false it is simply true or false. Another way to express the view is: it is true or false simpliciter, or it is true or false full stop. All those little extras at the end are just supposed to remind you that there isn’t anything more. It’s just true or false and there is no relativization.
In the book I wrote with John Hawthorne, we talked about this little package of views that we thought went well together: when you speak you express something we call propositions, or contents, and they are also the contents of beliefs. Propositions, then, serve two roles initially, they are the content of sentences and the sentences express what you believe. And those sentences are monadically true or false. This we thought was a package of views that go well together. They are, anyway, the traditional picture, we think.
How should we think about assessing for truth and falsity on this simple view?
The truth-predicate just applies to something in the following simple way: if you have a content or a proposition, it’s either true full stop or false full stop. Then there is this activity of trying to figure out which one it is, and, of course, in that activity we’ll be engaged in all kinds of complicated things and we’ll disagree and so on. Whether you will end up agreeing with me about whether it’s true or false will depend upon all sorts of things about you and all sorts of things about me. But the point is that that doesn’t affect whether it’s true or false. These activities of assessing are not constitutive of the property of being true or false, there is a super-important disconnect.
You have described the opposing view, relativism in the contemporary sense, where you have some content and the truth-value can vary due to some parameter. In the book you describe in more detail what you think the best version of that view is. Could you say something more about what you take relativism about truth to be? And what arguments people give in support of that kind of relativism?
It might help to give a little bit of history. The thought that relativism, in this contemporary sense, is true, had not been very popular among those thinking about truth and content and those kinds of things. The view just hadn’t been worked out very much. What had been worked out reasonably well was the thing that is now called contextualism, where you build it into the content, which had been worked out in all kinds of ways. The idea that you just have one content but the actual truth-value was relative to some kind of parameter, that view hadn’t been very well worked out. And then there was some, I think, groundbreaking work done: by Max Kölbel, who wrote a book, Peter Lasersohn, who wrote some papers, and then somewhat strange historically, a paper by my co-author John Hawthorne, Andy Egan and Brian Weatherson – maybe one of the first papers that tried to articulate this relativistic position in more detail. Then after that the person who ended up, I think, getting a lot of the credit for relativism and developing it throughout many papers and in a book was John MacFarlane.
The simple version of the view that they actually articulated was that, with respect to some particular terms – the examples they often went back to had to do with a certain kind of might-claims, “it might be the case that…”, they called it epistemic modals – they tried to find areas where they thought this kind of relativism is plausible and argue that there is evidence for it, even. Another case is what they call predicates of personal taste. An easy way to think about it might be something like “it is funny”, like in, “that movie was funny”. And the achievement, if you can call it that, of this tradition was to first develop a formal framework that included a truth-predicate that wasn’t monadic but relativized in a relevant sense. Then the hard work, so to speak, was to articulate and describe the way that framework explained a whole bunch of phenomena. The view itself, if you just put it without the thing that it’s supposed to do, is just: you have a formal system where all attributions of truth or falsity are indexed to what MacFarlane calls a context of assessment. So that is the view, and then the next question is why would you want to do that?
The driving idea, the core simple idea, that anyone can understand, is that if you say “that movie was fun” and I say “no, it wasn’t fun”, there is a very strong sense that we have disagreed with each other, and you have said something that I have denied. That’s data point number one. It says we disagreed, so you want to explain the sense that we disagreed, that there is a genuine disagreement, and, of course, if you had just said it was fun for you and I had said it is fun for me, then it looks like we haven’t had a disagreement. But if it’s just this proposition, “it was fun”, and you assert it, I deny it, that looks like a genuine disagreement. But on the other hand there is a sense that you haven’t done anything wrong, and I haven’t done anything wrong. The second desideratum, then, is to respect the intuition that there is some kind of subjectivity in this domain, there isn’t an objective truth about what’s fun, that all depends on your sense of humor and so on. What they did to respect that was to say “well, we capture the disagreement by letting the content be these non-relativized things”, so that’s the disagreement bit, but then from your context of assessment there will be one standard of humor and from mine there will be another, and so since truth is always relativized in that way you get to be right from your point of view and I get to be right from my point of view, even though you say it’s true and I say it’s false.
So, they provided a structure for saying that we disagree but we can both be right. Some people, like MacFarlane, Max Kölbel, and Peter Lasersohn, likes to describe it as a form of faultless disagreement. There is disagreement but the disagreement involves no fault on behalf of one or the other participants. If you are looking for arguments, that’s argument number one, that’s like the data-driven argument.
Then there is another argument that Hawthorne and I talk quite a bit about, there is a whole chapter devoted to it, and it’s a bit more technical and a bit harder to get people to see. It’s an argument that somehow comes from David Lewis, it is found in some of the work of Jeff King, and you can find it in parts of MacFarlane’s writings, though he downplayed it a little bit when he published his book. Well, the way we describe it, it has twelve different premises and a conclusion, so I don’t think it would be very suitable for this interview, and the way Jeff King does it, it’s also super complicated and long. But to give just the spirit of it: In almost all formal systems for languages in formal semantics theorists tend to relativize truth to some parameter or other, so it actually looks like some form of relativism in these formal systems is the standard view. David Kaplan, for example, and this is a very important precedence for it, says, well, truth and falsity is relative to a world, it is true in this world but it could have been false, so it is false relative to another possible world. So people seem comfortable thinking that truth or falsity is relative to a possible world. And many others are comfortable with the idea that you relativize to times, a proposition could be true at one time and false at another, and Kaplan even included places as parameters. So you don’t have simply truth or falsity. But this was sort of independent of the original motivations of relativism, they were just formal moves that were made. And then MacFarlane, in particular, used to say “hey, so what’s weird about including standards of taste, or a sense of humor, or some body of evidence”, so you just add a parameter to something we’re completely comfortable with having parameters with respect to anyway.
So, that is two arguments that support relativism: the case of faultless disagreement and the fact that a lot formal theorizing in linguistics and logic seems to have added this relativity anyway by having parameters when you assess for truth-value. How do you respond to those arguments?
So, Hawthorne and I, in that book, we say, well, first it was a mistake to accept all of those other relativizations. Truth is monadic, across the board. When we talk about truth relative to a world that is a derivative notion, the basic notion is the notion of truth simpliciter. It was a mistake to include relativization to times and it was a mistake to include relativization to places. Now, that’s hard work, because now you have to show that you don’t need it in the case of modality, talking about what is possible and necessary, you don’t need it with respect to time, and you don’t need it with respect to place. And so we do a bit of that work throughout the book, showing how in each of those cases this relativized notion really is derivative and that the basic notion is a monadic one. So, it was hard work writing that book because you had to say something about modal logic and modality, say something about tense, you had to talk about all these different areas in which people have made relativizations and say, you know, that was a useful theoretical tool but it doesn’t cut at what is fundamental, it doesn’t cut at the basic structure of language and thought. That was one strategy of replies, go after that “look, we’re doing it many places already, so why not add them” and reply “no, you shouldn’t have gone down that road”. Basically, you misinterpret people if you go down that road.
As the second response to what I described as the first argument, the one from faultless disagreement, we tried to show that you can generate that sense of disagreement and the sense of faultlessness, without going relativistic. You could do that in two kinds of ways: you can explain those intuitions in better ways and you can show that the relativist predicts things that aren’t real, that relativism overgenerates phenomena, predicting that there should be phenomena that don’t really exist. In particular, we say there isn’t always that sense of disagreement, and we give a bunch of cases where one person says “this is fun” and another person says “that is not fun” and there is no sense of disagreement. If you think about very weird cases, like talking animals, it is very weird, when you realize how totally different from us they are, to think that there is a deep sense of disagreement. But the relativist would get us to think “no, there are these genuine deep disagreements in all these cases” and we show that, typically, that isn’t the case. And in the cases in which there is a sense of disagreement, there are many ways for the non-relativist to explain that. A natural case to think about is standards. You build the standards into the content, not into the truth-assessment, now, when you are talking about what’s funny and I talk about what’s funny, we try to generate a kind of common standard, and part of what we’re disagreeing about is what is funny or not relative to that communal standard. That’s a sense of disagreement but it doesn’t require that there are two separate truths – it’s in fact an effort to coordinate.
Since we wrote that book, which was quite a few years ago now, this literature has continued and it is a hard literature to get into. There are now literally hundreds of dissertations and papers written on little sub-parts of each of these issues. That’s great. This way it becomes more sophisticated. Through collective effort we now know massively more about of how to defend relativism and how to argue against it than we ever did in the history of philosophy. Which I think is a sign that we’re making incredible progress very, very fast. But it also means that if you were to try to get into to this now, it would take years and years of work just to look at all the explanatory models.
Have you seen any work defending relativism within a domain that you think is more persuasive than other work?
So, still a global anti-relativist?
Well, I like the arguments we have in the book, they are pretty good arguments. I mean, the way I work I think about something for many years and then I write a book about it. Then we wrote, I think, ten replies to different leading relativists who were replying to us. And as John and I were writing up those replies, none of that made us change our minds. And then, I felt like I’ve made enough of a contribution to that field and I started working on something else, I think after that I started worrying about intuitions, and I kind of left studying relativism-topics, not behind, really, because I have students working on it and so on, but… yeah, maybe I’ll go back to it at some point and see what people have done.
You mentioned earlier contextualism about meaning.
Yeah, the parameter you want to relativize to gets built into the content of what you say. So, when you say “one ought to φ” what you’re saying is relative to your standards, and that’s actually part of the content. You didn’t say it out loud but it’s sort of hidden in the content there, the thing you asserted, the proposition expressed, has a reference to your standards in it. Or, if you say “it’s fun”, you said that by your standards it’s fun. So, let’s just try to speak in that way: I say “by Herman’s standards this is fun”. Now, that could be true for absolutely everyone, everywhere. It’s perfectly compatible for that sentence to be monadically true. You could say Herman expressed some proposition and it was the proposition that relativized the funness to his standards but that relativized claim itself is non-relatively true, That’s going to be true for you even though you disagree, you would say “it’s not fun”, because then you’re saying that by your standards.
Kind of the way that some traditional logicians wanted to deal with tense and place, for example, you would look to a fully specified proposition. Then when we utter something like “it’s raining” we’re really saying it is raining at that place at that date at that time, and that gives you a proposition that is invariantly true or false.
So, this view is compatible with the monadic understanding of truth. You have earlier argued against contextualism about meaning and some of those arguments that you used against contextualism can be used in support of relativism about truth. Could you say something more about the relationship between these two strands in you thinking?
So, the background is: I was right out of graduate school, it was a long time ago, and in the early 2000s, so almost twenty years ago, I wrote a book with Ernie Lepore called Insensitive Semantics. That book is an effort to argue in favor of something we call minimalism about semantic content. So, there we were in favor of a semantics that didn’t include much context sensitivity. However, in that book we also argued for the view that we need a notion of what was said that is very rich. We argued that semantics doesn’t exhaust what is said. There are many things said and one little part of that is the semantic content and that part doesn’t have all these relativizations built into it. So, what we argued against was a kind of contextualism about semantic content, not against contextualism about what was said, we’re in favor of contextualism about what was said.
Now, it should be said that there are some very interesting connections here, the way I see it. For example, a lot of MacFarlane’s early work just took the arguments from Insensitive Semantics – I mean, he didn’t steal them but he used the same kinds of arguments – and used them as a theory in favor of relativism about truth. So what he did with truth, to let truth vary with assessors, we did all that work with having what was said be much richer. That’s the history of it. Just after Insensitive Semantics came out MacFarlane published a reply where he sort of said “no, you guys really should have been relativists about truth”, and that’s an early MacFarlane paper where he says all these arguments are great arguments for relativism about truth. I still think that much of what MacFarlane wants to do with the relativization of the truth-predicate, can be done by being more pluralistic and rich about the notion of what was said.
I guess the distinctive thing about your view is that you accept a pluralism where, when you utter something, a lot of propositions are put into play at the same time, not just one.
Right. Another, even more radical part of my view is that one sentence can express different propositions for different people. So the view I have is that I utter a sentence, the sentence will express many propositions, one of them will be the semantic content. Relative to you the cluster of propositions could be C and relative to, say, Bjørn Ramberg it could be C2, and C and C2 need not be the same cluster. So, what I think is that I say each of the things in your cluster, so relative to you I will have said something that I didn’t say relative to Bjørn. This gets very tricky, I know it sounds very relativistic but it isn’t. You could actually correctly, truly, say that Herman said something true, Bjørn could say that Herman said something false, but that’s all compatible with monadic truth because one of the things I said relative to Bjørn is false but one of the things I said relative to you is true. If you have this picture, where there’s a cluster of content, you get something that, again, sounds a little bit relativistic. But I’m not worried about that because it doesn’t make the truth-predicate relative, it’s just a consequence of how what I said will depend in part on the interpreter. So, most of what is contextual I like to build this kind of relativization into what was said, say, what Herman said relative to an interpreter. Again, that is compatible with a monadic truth-predicate, because it only relativizes the saying-relation. This is what I call content relativism – and that’s a form of relativism that I endorse. It’s not about truth, but about content, i.e. about what is said.
Recently, you’ve been working on the topic of conceptual engineering, how concepts change and maybe improve. In some discussions of conceptual engineering, people talk of some kind of “relativism”, where depending on the concept we’re using, or the version of a concept we’re using, the truth-value of a claim might differ. So, one example is “fish”. Say that 400-500 years ago people just called anything living in the sea “fish”, so a whale would be a fish, but then on the modern understanding of the word there are much more stringent criteria, and the whale would not be counted standardly as a fish but as a mammal. The question is whether or not it was true that whale were fish when we had this concept and now it’s false once we’ve changed the concept? What do you think of this kind of seeming relativism?
It is important to keep track of what we mean by relativism here. This sort of phenomenon is not in any way related to the relativism that I talked about earlier. Here’s something that could happen quite easily, and, I think, happens a lot: You mean one thing by “fish” and then you utter the sentence “whales are fish”. By that you express a certain proposition, say, that the whale is an animal that swims in the ocean. That’s monadically true or false. Then I have a different meaning for “fish”, where it excludes mammals, for example, and then I say “whales are fish”. I will be expressing a different proposition from you, and mine might be false while yours is true. But given the way we set up things earlier that just means that we expressed different contents, it is the contents that have changed. Now, that’s the answer to the initial question.
Then there is a whole cluster of complications that look kind of relativistic, but if I have thought my way through it properly, they are really just versions of this content relativism that I’ve just described to you. Let use the example of “fish” again, but let’s make it a little bit different: At some point in the past people used “fish” in such a way that that little thing, one little thing, call that thing A, was a fish. Then I want to say that, well, concepts can change over time, so that things that once was in, is now out. Now we go a little bit further into the future, and A is no longer correctly described as a fish. Now, I just described why, so far, there is no form of relativism here. But there is a problem because I also want the following to be true: so I’m the person speaking now, I want to be able to say what the person in the past said and I want to do it, what we call, homophonically, I want to use the same word, that is, I want a kind of continuity of topic. So, I want the following to be true: that when I utter “you said that A is a fish” I’ve said something true. But, what I say when I say “A is a fish” is false, and when you said “A is a fish” in the past it was true, but at the same time I said what you said when I say “you said that A is a fish”. So, now it looks like we’ve both said something true and said something false relative to different times, but I don’t want that kind of relativism. What I really think is that what has happened is that what you said has changed over time. So, it’s a form of content relativism. These are complicated issues, they’re very fuzzy.
So, you want the content of the assertion of the original speaker to have changed at the subsequent time?
Yeah, but I also want it to be true that I can say what you said using the same sentence, the sentence that is now changed in meaning.
OK, because there is no relativism about truth if the concept change, but still we might want to say, at least many wants, to say that it is false that, for example, A is a fish.
It is false given what I mean by it. At the same time, it is also true for me to say that you said that A is a fish. But I know that you meant something different by it, when you said it, it was true. So, there is a clash. What you said was true because of what you meant but at the same time you said that A is a fish and that’s false. A lot of work needs to be done to resolve that tension. But it’s not a view that is the kind of relativism we talked about at the beginning. | <urn:uuid:9662da50-dd43-4946-94dd-79faa1151028> | CC-MAIN-2019-47 | https://filosofisksupplement.no/true-or-false-full-stop-an-interview-with-herman-cappelen/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670559.66/warc/CC-MAIN-20191120134617-20191120162617-00220.warc.gz | en | 0.973266 | 7,072 | 2.671875 | 3 |
International Year of Astronomy
The International Year of Astronomy (IYA2009) was a year-long celebration of astronomy that took place in 2009 to coincide with the 400th anniversary of the first recorded astronomical observations with a telescope by Galileo Galilei and the publication of Johannes Kepler's Astronomia nova in the 17th century. The Year was declared by the 62nd General Assembly of the United Nations. A global scheme, laid out by the International Astronomical Union (IAU), was also endorsed by UNESCO, the UN body responsible for educational, scientific, and cultural matters.
The IAU coordinated the International Year of Astronomy in 2009. This initiative was an opportunity for the citizens of Earth to gain a deeper insight into astronomy's role in enriching all human cultures. Moreover, served as a platform for informing the public about the latest astronomical discoveries while emphasizing the essential role of astronomy in science education. IYA2009 was sponsored by Celestron and Thales Alenia Space.
- 1 Significance of 1609
- 2 Intended purpose
- 3 The Secretariat
- 4 Cornerstone projects
- 5 See also
- 6 References
- 7 External links
Significance of 1609
On 25 September 1608, Hans Lippershey, a spectacle-maker from Middelburg, traveled to the Hague, the then capital of the Netherlands, to demonstrate to the Dutch government a new device he was trying to patent: a telescope. Although Hans was not awarded the patent, Galileo heard of this story and decided to use the "Dutch perspective glass" and point it towards the heavens.
In 1609, Galileo Galilei first turned one of his telescopes to the night sky and made astounding discoveries that changed mankind's conception of the world: mountains and craters on the Moon, a plethora of stars invisible to the naked eye, and moons around Jupiter. Astronomical observatories around the world promised to reveal how planets and stars are formed, how galaxies assemble and evolve, and what the structure and shape of our Universe actually are. In the same year, Johannes Kepler published his work Astronomia nova, in which he described the fundamental laws of planetary motions.
However Galileo was not the first to observe the Moon through a telescope and make a drawing of it. Thomas Harriot observed and detailed the Moon some months before Galileo. "It's all about publicity. Galileo was extremely good at irritating people and also using creative writing to communicate what he was learning in a way that made people think," says Pamela Gay in an interview with Skepticality in 2009.
The vision of IYA2009 was to help people rediscover their place in the Universe through the sky, and thereby engage a personal sense of wonder and discovery. IYA2009 activities took place locally, nationally, regionally and internationally. National Nodes were formed in each country to prepare activities for 2009. These nodes established collaborations between professional and amateur astronomers, science centres and science communicators. More than 100 countries were involved, and well over 140 participated eventually. To help coordinate this huge global programme and to provide an important resource for the participating countries, the IAU established a central Secretariat and the IYA2009 website as the principal IYA2009 resource for public, professionals and media alike.
Astronomy, perhaps the oldest science in history, has played an important role in most, if not all, cultures over the ages. The International Year of Astronomy 2009 (IYA2009) was intended to be a global celebration of astronomy and its contributions to society and culture, stimulating worldwide interest not only in astronomy, but in science in general, with a particular slant towards young people.
The IYA2009 marked the monumental leap forward that followed Galileo's first use of the telescope for astronomical observations, and portrays astronomy as a peaceful global scientific endeavour that unites amateur and professional astronomers in an international and multicultural family that works together to find answers to some of the most fundamental questions that humankind has ever asked. The aim of the Year was to stimulate worldwide interest in astronomy and science under the central theme "The Universe, Yours to Discover."
Several committees were formed to oversee the vast majority of IYA2009 activities ("sidewalk astronomy" events in planetariums and public observatories), which spun local, regional and national levels. These committees were collaborations between professional and amateur astronomers, science centres and science communicators. Individual countries were undertaking their own initiatives as well as assessing their own national needs, while the IAU acted as the event's coordinator and catalyst on a global scale. The IAU plan was to liaise with, and involve, as many as possible of the ongoing outreach and education efforts throughout the world, including those organized by amateur astronomers.
The major goals of IYA2009 were to:
- Increase scientific awareness;
- Promote widespread access to new knowledge and observing experiences;
- Empower astronomical communities in developing countries;
- Support and improve formal and informal science education;
- Provide a modern image of science and scientists;
- Facilitate new networks and strengthen existing ones;
- Improve the gender-balanced representation of scientists at all levels and promote greater involvement by underrepresented minorities in scientific and engineering careers;
- Facilitate the preservation and protection of the world's cultural and natural heritage of dark skies in places such as urban oases, national parks and astronomical sites.
As part of the scheme, IYA2009 helped less-well-established organizations from the developing world to become involved with larger organizations and deliver their contributions, linked via a huge global network. This initiative also aimed at reaching economically disadvantaged children across the globe and enhancing their understanding of the world.
The central hub of the IAU activities for the IYA2009 was the IYA2009 Secretariat. This was established to coordinate activities during the planning, execution and evaluation of the Year. The Secretariat was based in the European Southern Observatory headquarters in the town of Garching near Munich, Germany. The Secretariat was to liaise continuously with the National Nodes, Task Groups, Partners and Organizational Associates, the media and the general public to ensure the progress of the IYA2009 at all levels. The Secretariat and the website were the major coordination and resource centers for all the participating countries, but particularly for those developing countries that lack the national resources to mount major events alone.
The International Year of Astronomy 2009 was supported by eleven Cornerstone projects. These are global programs of activities centered on specific themes and are some of the projects that helped to achieve IYA2009's main goals; whether it is the support and promotion of women in astronomy, the preservation of dark-sky sites around the world or educating and explaining the workings of the Universe to millions, the eleven Cornerstones were the key elements in the success of IYA2009.
100 Hours of Astronomy
100 Hours of Astronomy (100HA) is a worldwide astronomy event that ran 2–5 April 2009 and was part of the scheduled global activities of the International Year of Astronomy 2009. The main goal of 100HA was to have as many people throughout the world as possible looking through a telescope just as Galileo did for the first time 400 years ago. The event included special webcasts, students and teachers activities, a schedule of events at science centers, planetariums and science museums as well as 24 hours of sidewalk astronomy, which allowed the opportunity for public observing sessions to as many people as possible.
The Galileoscope was a worldwide astronomy event that ran 2–5 April 2009, where the program was to share a personal experience of practical astronomical observations with as many people as possible across the world. It was collaborating with the US IYA2009 National Node to develop a simple, accessible, easy-to-assemble and easy-to-use telescope that can be distributed by the millions. In theory, every participant in an IYA2009 event should be able to take home one of these little telescopes, enabling them to observe with an instrument similar to Galileo's one.
The Cosmic Diary, a worldwide astronomy event that ran 2–5 April, was not about the science of astronomy, but about what it is like to be an astronomer. Professionals were to blog in texts and images about their life, families, friends, hobbies and interests, as well as their work, latest research findings and the challenges they face. The bloggers represented a vibrant cross-section of working astronomers from all around the world. They wrote in many different languages, from five continents. They have also written feature article "explanations" about their specialist fields, which were highlighted in the website. NASA, ESA and ESO all had sub-blogs as part of the Cosmic Diary Cornerstone.
The Portal to the Universe
The Portal to the Universe (PTTU) was a worldwide astronomy event that ran 2–5 April 2009, to provide a global, one-stop portal for online astronomy contents, serving as an index, aggregator and a social-networking site for astronomy content providers, laypeople, press, educators, decision-makers and scientists. PTTU was to feature news, image, event and video aggregation; a comprehensive directory of observatories, facilities, astronomical societies, amateur astronomy societies, space artists, science communication universities; and Web 2.0 collaborative tools, such as the ranking of different services according to popularity, to promote interaction within the astronomy multimedia community. In addition, a range of "widgets" (small applications) were to be developed to tap into existing "live data". Modern technology and the standardisation of metadata made it possible to tie all the suppliers of such information together with a single, semi-automatically updating portal.
She Is an Astronomer
Promoting gender equality and empowering women is one of the United Nations Millennium Development Goals. She Is an Astronomer was a worldwide astronomy event that ran 2–5 April 2009, to promote gender equality in astronomy (and science in general), tackling bias issues by providing a web platform where information and links about gender balance and related resources are collected. The aim of the project was to provide neutral, informative and accessible information to female professional and amateur astronomers, students, and those who are interested in the gender equality problem in science. Providing this information was intended to help increase the interest of young girls in studying and pursuing a career in astronomy. Another objective of the project was to build and maintain an Internet-based, easy-to-handle forum and database, where people regardless of geographical location could read about the subject, ask questions and find answers. There was also to be the option to discuss astronomy-sector-specific problems, such as observing times and family duties.
Dark Skies Awareness
Dark Skies Awareness was a worldwide astronomy event that ran from 2 to 5 April 2009. The IAU collaborated with the U.S. National Optical Astronomy Observatory (NOAO), representatives of the International Dark-Sky Association (IDA), the Starlight Initiative, and other national and international partners in dark-sky and environmental education on several related themes. The focus was on three main citizen-scientist programs to measure local levels of light pollution. These programs were to take the form of "star hunts" or "star counts", providing people with a fun and direct way to acquire heightened awareness about light pollution through firsthand observations of the night sky. Together, the three programs were to cover the entire International Year of Astronomy 2009, namely GLOBE at Night (in March), the Great World Wide Star Count (in October) and How Many Stars (January, February, April through September, November and December).
UNESCO and the IAU were working together to implement a research and education collaboration as part of UNESCO's thematic initiative, Astronomy and World Heritage as a worldwide astronomy event that also ran 2–5 April 2009. The main objective was to establish a link between science and culture on the basis of research aimed at acknowledging the cultural and scientific values of properties connected with astronomy. This programme provides an opportunity to identify properties related to astronomy located around the world, to preserve their memory and save them from progressive deterioration. Support from the international community is needed to implement this activity and to promote the recognition of astronomical knowledge through the nomination of sites that celebrate important achievements in science.
Galileo Teacher Training Program
The Galileo Teacher Training Program (GTTP): the International Year of Astronomy 2009 provided an opportunity to engage the formal education community in the excitement of astronomical discovery as a vehicle for improving the teaching of science in classrooms around the world. To help training teachers in effective astronomy communication and to sustain the legacy of IYA2009, the IAU – in collaboration with the National Nodes and leaders in the field such as the Global Hands-On Universe project, the US National Optical Astronomy Observatory and the Astronomical Society of the Pacific – embarked on a unique global effort to empower teachers by developing the Galileo Teacher Training Program (GTTP).
The GTTP goal was to create a worldwide network of certified "Galileo Ambassadors" by 2012. These Ambassadors were to train "Galileo Master Teachers" in the effective use and transfer of astronomy education tools and resources into classroom science curricula. The Galileo Teachers were to be equipped to train other teachers in these methodologies, leveraging the work begun during IYA2009 in classrooms everywhere. Through workshops, online training tools and basic education kits, the products and techniques developed by this program could be adapted to reach locations with few resources of their own, as well as computer-connected areas that could take advantage of access to robotic optical and radio telescopes, webcams, astronomy exercises, cross-disciplinary resources, image processing and digital universes (web and desktop planetariums). Among GTTP partners, the Global Hands-On Universe project was a leader.
Universe Awareness (UNAWE) was a worldwide astronomy event that also ran during 2–5 April 2009, as an international program to introduce very young children in under-privileged environments to the scale and beauty of the Universe. Universe Awareness noted the multicultural origins of modern astronomy in an effort to broaden children's minds, awaken their curiosity in science and stimulate global citizenship and tolerance. Using the sky and children's natural fascination with it as common ground, UNAWE was to create an international awareness of their place in the Universe and their place on Earth.
From Earth to the Universe
The Cornerstone project From Earth to the Universe (FETTU) is a worldwide public science event that began in June 2008, and still ongoing through 2011. This project has endeavored to bring astronomy images and their science to a wider audience in non-traditional informal learning venues. In placing these astronomy exhibitions in public parks, metro stations, art centers, hospitals, shopping malls and other accessible locations, it has been hoped that individuals who might normally ignore or even dislike astronomy, or science in general, will be engaged.
Developing Astronomy Globally
The Developing Astronomy Globally was a worldwide astronomy event that ran during 2–5 April 2009, as a Cornerstone project to acknowledge that astronomy needs to be developed in three key areas: professionally (universities and research); publicly (communication, media, and amateur groups) and educationally (schools and informal education structures). The focus was to be on regions that do not already have strong astronomical communities. The implementation was to be centred on training, development and networking in each of these three key areas.
This Cornerstone was using the momentum of IYA2009 to help establish and enhance regional structures and networks that work on the development of astronomy around the world. These networks were to support the current and future development work of the IAU and other programmes, plus ensure that developing regions could benefit from IYA2009 and the work of the other Cornerstone projects. It was to also address the question of the contribution of astronomy to development.
The Galilean Nights was a worldwide astronomy event that also ran 2–5 April 2009, as a project to involve both amateur and professional astronomers around the globe, taking to the streets their telescopes and pointing them as Galileo did 400 years ago. The sources of interest were Jupiter and its moons, the Sun, our Moon and many others celestial marvels. The event was scheduled to take place on 22–24 October 2009. Astronomers were to share their knowledge and enthusiasm for space by encouraging as many people as possible to look through a telescope at planetary neighbours.
- International Year of Astronomy commemorative coin
- International Astronomical Union (IAU)
- History of the telescope
- 365 Days of Astronomy
- 400 Years of the Telescope (documentary)
- Global Hands-On Universe
- National Astronomy Week (NAW)
- StarPeace Project
- The World At Night (TWAN)
- World Year of Physics 2005
- White House Astronomy Night
- "Johannes Kepler: His Life, His Laws and Times". NASA: Kepler Mission. Retrieved 22 February 2015.
- "2009 to be International Year of Astronomy, UN declares". CBC News. 21 December 2007. Archived from the original on 9 July 2010. Retrieved 9 January 2009.
- United Nations General Assembly Session 62 Verbatim Report 78. A/62/PV.78 page 18. 19 December 2007. Retrieved 2009-03-18.
- "International Year of Astronomy 2009". Sky & Telescope. 1 January 2009. Archived from the original on 2 February 2013. Retrieved 9 January 2009.
- "Looking Through Galileo's Eyes". ScienceDaily. 8 January 2009. Retrieved 9 January 2009.
- "Harriot Moon". Skepticality. Retrieved 22 February 2015.
- "Celebrating Thomas Harriot, the world's first telescopic astronomer". Royal Astronomical Society. Retrieved 22 February 2015.
- "International Year of Astronomy". Skepticality. Retrieved 22 February 2015.
- "About IYA2009". IAU/IYA2009. 1 January 2009. Retrieved 20 May 2009.
- "IYA2009 Goals & Objectives". IAU/IYA2009. 1 January 2009. Retrieved 20 May 2009.
- "Global Cornerstone projects – 100 Hours of Astronomy". IAU/IYA2009. 1 January 2009. Retrieved 20 May 2009.
- "Global Cornerstone projects – Galileoscope". IAU/IYA2009. 1 January 2009. Retrieved 20 May 2009.
- "Global Cornerstone projects – Cosmic Diary". IAU/IYA2009. 1 January 2009. Retrieved 20 May 2009.
- "Cosmic Diary". Welcome to Cosmic Diary. Retrieved 22 February 2015.
- "Global Cornerstone projects – Portal to the Universe". IAU/IYA2009. 1 January 2009. Retrieved 20 May 2009.
- "Global Cornerstone projects – She is an astronomer". IAU/IYA2009. 1 January 2009. Retrieved 20 May 2009.
- "Global Cornerstone projects – Dark Skies Awareness". IAU/IYA2009. 1 January 2009. Retrieved 20 May 2009.
- "Global Cornerstone projects – Astronomy and World Heritage". IAU/IYA2009. 1 January 2009. Retrieved 20 May 2009.
- "Global Cornerstone projects – Galileo Teacher Training Program".
- "Global Cornerstone projects – Universe Awareness". IAU/IYA2009. 1 January 2009. Retrieved 20 May 2009.
- "First Free Downloadable Planetarium Show". Retrieved 11 June 2015.
- "Global Cornerstone projects – From Earth to the Universe". IAU/IYA2009. 1 January 2009. Retrieved 20 May 2009.
- "Global Cornerstone projects – Developing Astronomy Globally". IAU/IYA2009. 1 January 2009. Retrieved 20 May 2009.
- "Global Cornerstone projects – Galilean Nights". IAU/IYA2009. 1 January 2009. Archived from the original on 2 May 2009. Retrieved 20 May 2009.
- Official website of IYA2009 (includes all events and projects)
- Official website of the International Astronomical Union (IAU)
- "Proclamation of 2009 as International Year of Astronomy" (PDF) (Press release). UNESCO Executive Board. 11 August 2005. | <urn:uuid:254d1551-1bea-449d-8c5b-979a17ebbccc> | CC-MAIN-2019-47 | https://en.wikipedia.org/wiki/International_Year_of_Astronomy | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669809.82/warc/CC-MAIN-20191118154801-20191118182801-00301.warc.gz | en | 0.941599 | 4,133 | 3.1875 | 3 |
OCEAN PULSE & OCEAN PULSE @ SEA
Ocean Pulse is a comprehensive study of coral reef ecology and the marine environment. It is an inspiring, hands-on program for students in grades K-college. These courses unite students and the local community as they conduct a field survey of our most precious natural resource – the ocean.
“Ocean Pulse Education Programs will help coral reef communities better understand and ultimately conserve their island reefs.”
Dolphin Log / Cousteau Society
Our ocean and beaches become a “Living Classroom.” Students learn to identify reef species, gather data, and compare it to previous studies. This provides vital information about the health of the reefs and promotes a positive direction for other ocean communities to follow.
Students learn the importance of protecting a fragile ecosystem, how coral reefs interconnect with all species in the ocean, and the crucial steps required to preserve our marine environment. Our next generation needs this knowledge in order to assure the life of our oceans. By the time today's students graduate from high school, they will understand that conservation is a positive, if not essential, way of life.
Ocean Pulse is aligned with grade-specific content and performance standards. These courses provide teachers with the valuable tools and community connections needed to carry out ongoing, "meaningful" experiences in their classrooms. The Ocean Pulse Outreach Department provides teachers with opportunities for professional development in environmental education. Teachers who weave critical environmental issues together with classroom and field activities can stimulate interest and lead young people toward thoughtful stewardship of natural resources.
The Ocean Pulse manual provides a standard for student marine education, coral reef study, and the gathering of baseline information. These standardized, easy-to-follow courses may be utilized by school districts in diverse locations worldwide. Our vision is to establish a global network (via the Internet) of informed and empowered "Ocean Guardians" to assure the life of our oceans.
Ocean Pulse – All Ages and Grade Levels
Students focus on the complex relationships within ecosystems resulting from changes in climate, human activity, and the introduction of non-native species. Activities include investigating the formation of the islands, determining fish and turtle population assemblages, monitoring coral reefs, classifying and mapping native and introduced species, measuring biotic and abiotic factors of different ecosystems, studying marine mammals, and applying international oceanographic sampling and research protocols to define the marine environment. Students also "Adopt-A-Reef" and become "Ocean Guardians" of their local beaches. This information is made available to other students around the globe via the Internet.
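For classrooms that want a feel for how such year-to-year comparisons can be made, the short Python sketch below compares one season's transect counts against a previous baseline and flags the change for each species. This is only an illustrative example: the species names and counts are invented, not actual Ocean Pulse survey data.

```python
# Hypothetical sketch: compare a new reef transect survey with a baseline year
# and report the change in abundance for each species observed.
# All names and numbers below are illustrative placeholders.

baseline_2003 = {"yellow tang": 42, "parrotfish": 17, "green sea turtle": 3}
survey_2004 = {"yellow tang": 35, "parrotfish": 22, "green sea turtle": 2}

def percent_change(old: int, new: int) -> float:
    """Percent change from the baseline count to the new count."""
    return (new - old) / old * 100.0

# Walk through every species seen in either year and report the trend.
for species in sorted(set(baseline_2003) | set(survey_2004)):
    old = baseline_2003.get(species, 0)
    new = survey_2004.get(species, 0)
    if old == 0:
        print(f"{species}: newly recorded ({new} individuals)")
    else:
        print(f"{species}: {old} -> {new} ({percent_change(old, new):+.1f}%)")
```

A simple comparison like this lets students see at a glance which populations appear stable and which may warrant closer monitoring on their adopted reef.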
Ocean Pulse Certification – All Ages and Levels!
Advanced Ocean Pulse! This Certification Program offers an exciting opportunity for students to engage in research and receive hands-on training in the marine sciences. The course covers species identification and classification, coral calcification and structure, the origins and distribution of species, composition, zonation, reproduction, productivity, growth, and species interaction, and it identifies the role of algae and zooxanthellae. Emphasis is on experiential labs in which students survey and monitor reef organisms using internationally recognized research protocols.
Students learn ecological principles, conservation issues related to the Hawaiian Ahupua'a system, and resource management methods. Most importantly, they realize the impact and consequences of human activities on reef ecosystems. Student "Peer Ecologists" may work directly with instructors to educate other students on the techniques used in Ocean Pulse. This instills pride and furthers interest in marine biology.
Ocean Pulse @ Sea is a drug-free marine education program that allows students to learn sailing, safe boating techniques, marine ecology, coral reef monitoring, environmental education, and scientific research, and to receive career training from local educators and community members aboard sailing research vessels.
Research vessels have ports in Hanalei Bay, Nawiliwili Harbor, and Port Allen on Kaua'i for year-round educational programs. These boats have served as research vessels for the following community groups: The Waipa Foundation, The Kaua'i Children's Discovery Museum, Hawaii International School, Kula High, Intermediate and Elementary, Myron B. Thompson Academy, the University of Hawai'i at Hilo, Na Pali Coast Ohana, and the Hawai'i Youth Conservation Corps.
Student Involvement: Students and community members are offered different levels of involvement in the program. The first level consists of activities that center on the boat – field trips, whale watching, snorkeling and SCUBA, marine education, scientific studies, and sailing technique training – offered simply as an alternative to being somewhere else doing drugs. The next level is a crew trainee position, in which student volunteers learn from qualified professionals how to crew the boat, eventually leading up to the next level: becoming a qualified or certified crewmember or a USCG captain. Crewmembers at this level are subject to the USCG random drug-testing program, which is the ultimate deterrent to drugs. A person at this level is also qualified for employment on a charter boat. The program is an alternative and deterrent to drugs, and a positive stepping-stone into the visitor industry on Kaua'i.
Students are also trained as "ecologist tour guides" by members of Save Our Seas in ecology, geology, biology, and the Hawaiian cultural connections to these subjects.
These quality sail-training vessels are a unique and productive platform for the young people of Hawaii. As students participate on the research vessels, they are exposed to ideas in resource conservation: the boats use environmentally friendly products and cleaning supplies and alternative energy sources such as biodiesel, and they promote recycling. These vessels are the SOS model for the yearly "SOS Coconut Award," given to local businesses that follow environmentally sound practices. Award-winning businesses are chosen by the students.
Ocean Pulse was a success in 1996 and 1997, and was again a success from 2003 to 2008 at Kula High, Intermediate and Elementary (K-12) and, in 2003, at the Myron B. Thompson Academy on Kaua'i. High school interns serve as peer ecologists and lifeguards!
Ocean Pulse could easily be introduced to island communities throughout the world to teach children how to identify creatures, compile marine data, and measure water quality.
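As a rough illustration of the kind of data compilation such reef surveys can involve (and of the diversity measures cited in the references below, e.g., Shannon and Weaver, and Pielou), the sketch below computes a Shannon diversity index and Pielou's evenness from transect counts. The species names and counts are invented for illustration and are not Ocean Pulse data.

    import math

    def shannon_diversity(counts):
        """Shannon diversity index H' = -sum(p_i * ln(p_i)) over observed species."""
        total = sum(counts.values())
        return -sum((n / total) * math.log(n / total) for n in counts.values() if n > 0)

    def pielou_evenness(counts):
        """Pielou's evenness J' = H' / ln(S), where S is the number of species observed."""
        s = sum(1 for n in counts.values() if n > 0)
        return shannon_diversity(counts) / math.log(s) if s > 1 else 0.0

    # Hypothetical counts from one belt transect (not real survey data).
    transect = {"yellow tang": 14, "parrotfish": 6, "butterflyfish": 9, "moorish idol": 2}

    print(f"H' = {shannon_diversity(transect):.3f}")
    print(f"J' = {pielou_evenness(transect):.3f}")

Students comparing transects over time could track how both the index and the evenness change as a simple, teachable indicator of reef condition.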
REFERENCES for “Ocean Pulse” 1995-2009
Berman, D.S. and Davis-Berman, J. (Aug. 1995). Outdoor education and troubled youth. ERIC Digest (ED385425, RC020276)
Bonwell, C.C. and Eison, J.A. (Jun 1996). Creating excitement in the classroom. ERIC Digest (ED340272)
Caine R.N. and Caine, G. (1994). Making connections: Teaching and the human brain. Menlo Park, CA: Addison-Wesley Publishing Co.
Crandall, J., (Jan.1994), Content-centered language learning. University of Maryland Baltimore County, ERIC Clearinghouse on Language and Linguistics (ED367142 / FL021841)
Cuevas, M.M., Lamb, W.G. and Evans, J.E. Jr. (1994). Holt physical science. Austin: Holt, Rinehart and Winston, Inc.
Dewey, J. (1938). Experience and education. New York: The Macmillan Company.
Dewey, J. (1966). Democracy and education: An introduction to the philosophy of education. New York: The Free Press.
Gardner, H. (1991). The unschooled mind. New York: Basic Books
Gay, L. R. (1996). Educational research: competencies for analysis and application. Englewood Cliffs, NJ: Prentice-Hall, Inc.
Hendrikson, L., (1984). Active learning. ERIC Digest No. 17 (ED253468, SO016166)
Horton, R. L. and Haines, S. (1996). Philosophical considerations for curriculum development in environmental education, ERIC, Office of Educational Research and Improvement, US Department of Education (RI-88062006)
Hungerford, H.R., Litherland, R.A., Peyton, R. B., Ramsey, J.M. and Volk, T.L. (1996). Investigating and evaluating environmental issues and actions: skill development program. Champaign, IL: Stipes Publishing L.L.C.
Kalinowski, W. (1991). A curriculum outline and rationale for outdoor/Environmental education. ERIC Digest (EJ 431789)
Knapp, C.E. (Aug 1992). Thinking in outdoor inquiry. ERIC Digest (ED348198).
Lorson, M. V., Heimlich, J.E. and Wagner, S. (1996). Integrating science, mathematics, and environmental education: Resources and guidelines. ERIC, Office Of Educational Research and Improvement, US Department of Education (RI-88062006)
Rosebery, A. S., Warren, B. and Conant, F.R. (1992). Appropriating scientific discourse: Findings from language minority classrooms, The National Center for Research on Cultural Diversity and Second Language Learning (Research Report: 3)
Stepath, C.M. and Chandler, K. (1997). Ocean pulse, coral reef monitoring project. Hanalei, HI: Save Our Seas.
Stevens, P.W. and Richards, A. (Mar 1992). Changing schools through experiential education. ERIC Digest (ED345929)
The Future of Social Studies: A report and summary of project SPAN. Boulder, CO: Social Science Education Consortium, Inc., 1982. ERIC Digest (ED 218 200)
Willis, S. (1997, Winter). Field studies-Learning thrives beyond the classroom. Curriculum Update, p. 1-2, 6-8.
*denotes articles that are directly referenced
Alevizon, W. S. and M. B. Brooks. 1975. The comparative structure of two western Atlantic reef-fish assemblages. Bull. Mar. Sci. Vol. 25, No. 4, p. 482-90.
Bak, R. P. M. 1973. Coral weight increment in situ. A new method to determine coral growth. Mar. Biol., vol. 20, p. 45-90
Bak, R. P. M. 1978. Lethal and sub-lethal effects of dredging on reef corals. Mar. Pollut. Bull. vol. 9, p. 14-1.
Booth, C. R.; Morrow, J. H. 1990. Measuring ocean productivity via natural fluorescence. Sea Technology, Feb. p. 33-38.
Bortone, S. A.; Hastings, R. W.; Oglesby, J. L. 1986. Quantification of reef fish assemblages: a comparison of several in situ methods. Northeast Gulf Sci. Vol. 8, No. 1, p. 1-22.
*Brock, R. E. 1982. A critique of the visual census method for assessing coral reef fish populations. Bull. Mar. Sci. Vol. 32, No. 1, p. 269-76.
*Brock, V. E. 1954. A preliminary report on a method of estimating reef fish populations. J. Wildl. Mgt., Vol. 18, No. 3, p. 297-317.
*Brown, B. E.; Howard, L.S. 1985. Assessing the effect of “stress” on reef corals. Adv. Mar. Biol. 22:1-63.
Carpenter, R. A.; Maragos, J. M. 1989. How to assess environmental impacts on tropical islands and coastal areas. Honolulu, HI: Environment and Policy Institute, East-West Center.
Chamberlin, W. S.; Booth, C. R.; Kiefer, D. A.; Morrow, J. H.; Murphy, R. C. Evidence for a simple relationship between natural fluorescence, photosynthesis, and chlorophyll in the sea. Deep Sea Research. In press. 1990.
*Christie, H. 1983. Use of video in remote studies of rocky subtidal interactions. Sarsia, Vol. 68, p. 191-94.
D’Elia, C. F.; Taylor, P. R. Disturbances in coral reefs: lessons from Diadema mass mortality and coral bleaching. in: Proceeding: Oceans’ 88, Vol. 3, p. 803-07.
Demartini, E. E.; Roberts, D. 1982. An empirical test of biases in the rapid visual technique for census of reef fish assemblages. Mar. Biol. Vol. 70, p. 129-34.
Department of Health, State of Hawaii. 1989. Amendment and compilation of chapter 11-54, Hawaii Administrative Rules.
*Dodge, R. E.; Logan, A.; Antonius, A. 1982. Quantitative reef assessment studies in Bermuda: a comparison of methods and preliminary results. Bull. Mar. Sci. Vol. 32, No. 3, p. 745-60.
Dudley, W., Hallacher, L. 1989. Hilo Sewage Study. Hilo Hawaii, University of Hawaii Marine Option Program.
Fricke, H. W. 1973b. Behaviour as part of ecological adaptation–In situ studies in the coral reef. Helgoländer wissenschaftliche Meeresuntersuchungen, Vol. 24, p. 120-44.
*Gamble, J. C. Diving. In: N. A. Holme and A. D. McIntyre (eds). Methods for the study of marine benthos. Oxford, Blackwell Science Publications. (IBP Handbook no. 16) p. 99-139.
*Hiscock, K. 1979. Systematic surveys and monitoring in nearshore sublittoral areas using diving. In: D. Nichols (ed). Monitoring the marine environment, Symposia of the Institute of Biology, 24. Institute of Biology, London, p. 55-74.
Hourigan, T. F.; Tricas, T. C.; Reese, E. S. Coral reef fishes as indicators of environmental stress in coral reefs. In: D. F. Soule and G. S. Kleppel (eds). Marine organisms as indicators. 1988. New York, Springer-Verlag.
*Hulburt, A. W.; Pecci, K. J.; Witman, J. D.; Harris, L. G.; Sears, J. R.; Cooper, R. A. Ecosystem definition and community structure of the macrobenthos of the NEMP monitoring station at Pigeon Hill in the Gulf of Maine. NOAA Tech. Memorandum NMFS-FINEC-14.
Jokiel, P. L.; Maragos, J. W.; and Franzisket, L. 1978. Coral growth buoyant weight technique. In: D. R. Stoddart and R. E Johannes (eds). Coral reefs: research methods. Monographs on oceanographic methodology. UNESCO, Paris. p. 529-42.
Kelleher, G.; Dutton, 1. M. 1985. Environmental effects of offshore tourist developments of the Great Barrier Reef. Proc. 5th. Int. Coral Reef Symp. Tahiti. Vol. 6, p. 525-30.
*Kenchington, R. A. 1978. Visual surveys of large areas of coral reefs. In: D. R. Stoddart and R. E. Johannes (eds). Coral reefs: research methods. Monographs on oceanographic methodology. UNESCO, Paris. p. 149-62.
Kiefer, D. A.; Chamberlin, W. S. 1978. Natural fluorescence of chlorophyll a: Relationship to photosynthesis and chlorophyll concentration in the western South Pacific gyre. Limnol. Oceanogr. Vol. 34, No. 5, p. 868-81.
Kuno, W. 1969. A new method of sequential sampling to obtain the population estimates with a fixed level of precision. Res. Popul. Ecol., vol. 11, p.127-360
Loya, Y., 1972. Community structure and species diversity of hermatypic corals at Eilat, Red Sea. Mar. Biol., Vol. 29, p. 177-85.
*Loya, Y., 1978. Plotless and transect methods. Monographs on oceanographic methodology, UNESCO, Paris, 5, 197-217.
Maragos, J. E. 1978. Measurement of water volume transport for flow studies. In: D. R. Stoddart and R. E. Johannes (eds). Coral reefs: research methods. Monographs on oceanographic methodology, UNESCO, Paris. p. 353-360.
Marsh, J. A., Jr., and S. V. Smith. Productivity measurements of coral reefs in flowing water. In: D. R. Stoddart and R. E. Johannes (eds). Coral reefs: research methods. Monographs on oceanographic methodology, UNESCO, Paris. p. 361-378.
*McIntyre, A. D.; Elliott, J. M.; Ellis, D. V. 1971. Introduction: design of sampling programmes. In: N. A. Holme and A. D. McIntyre (eds). Methods for the study of marine benthos. Oxford, Blackwell Science Publications (IBP Handbook no. 16) p. 1-27.
Nash, S. V. 1989. Reef diversity index survey method for nonspecialists. Tropical Coastal Area Management, Vol. 4, No. 3, p. 14-17.
*National Marine Fisheries Service. 1977. Ocean Pulse program development plan. Woods Hole, Mass.: Northeast Fisheries Center.
Pichon, M. 1978. Quantitative benthic ecology of Tulear reefs. In: D. R. Stoddart and R. E. Johannes (eds). Coral reefs: research methods. Monographs on oceanographic methodology, UNESCO, Paris. p. 163-74.
Pielou, E. C. 1966. The measurement of diversity in different types of biological Collections. J. Theoretical Biol. Vol. 13, p. 131-44.
Reese, E. S. 1981. Predation of coral by fishes of the family Chaetodontidae: Implications for conservation and management of coral reef ecosystems. Bull. Mar. Sci. Vol.31, No. 3, p. 594-604.
Salm, R. V. and Clark, J. R. 1984. Marine and coastal protected areas: A guide for Planners and Managers. Gland, Switzerland, International Union for conservation of nature and natural resources.
Shannon, C. E.; Weaver, W. 1948. The mathematical theory of communication. Urbana, IL: Univ. of Ill. Press, p. 1-117.
Stoddart, D. R.; Johannes, R. E. (eds). Coral reefs: research methods. Monographs on oceanographic methodology, UNESCO, Paris.
Weinberg, S. 1978. The minimal area problem in invertebrate communities of Mediterranean rocky substrata. Mar. Biol., Vol. 49, p. 33-40.
Weinberg, S. 1981. A comparison of coral reef survey methods. Bijdragen tot de Dierkunde, Vol. 51, p. 199-218.
Witman, J. D. 1985. Refuges, biological disturbance, and rocky subtidal community structure in New England. Ecol. Monogr. Vol. 55, p. 421-45.
*Witman, J.; Coyer, J. 1990. The underwater catalogue: A guide to methods in underwater research. Ithaca, N.Y.: Shoals Marine Laboratory, Cornell University.
*Ziemann, D. A. 1990. Water quality and marine life monitoring and mitigation plan Kohanakik Resort. Prepared for Nansay Hawaii, Inc. Kamuela, HI. | <urn:uuid:e751f1d1-3e9f-4a24-b6e8-1cd5cdbeb783> | CC-MAIN-2019-47 | https://saveourseas.wordpress.com/2013/01/02/ocean-pulse/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670006.89/warc/CC-MAIN-20191119042928-20191119070928-00501.warc.gz | en | 0.803804 | 4,291 | 3.390625 | 3 |
Metal-assisted chemical etching of Ge(100) surfaces in water toward nanoscale patterning
Nanoscale Research Letters volume 8, Article number: 151 (2013)
We propose the metal-assisted chemical etching of Ge surfaces in water mediated by dissolved oxygen molecules (O2). First, we demonstrate that Ge surfaces around deposited metallic particles (Ag and Pt) are preferentially etched in water. When a Ge(100) surface is used, most etch pits are in the shape of inverted pyramids. The mechanism of this anisotropic etching is proposed to be the enhanced formation of soluble oxide (GeO2) around metals by the catalytic activity of metallic particles, reducing dissolved O2 in water to H2O molecules. Secondly, we apply this metal-assisted chemical etching to the nanoscale patterning of Ge in water using a cantilever probe in an atomic force microscopy setup. We investigate the dependences of probe material, dissolved oxygen concentration, and pressing force in water on the etched depth of Ge(100) surfaces. We find that the enhanced etching of Ge surfaces occurs only when both a metal-coated probe and saturated-dissolved-oxygen water are used. In this study, we present the possibility of a novel lithography method for Ge in which neither chemical solutions nor resist resins are needed.
Germanium (Ge) is considered to be a substitute for Si for future complementary metal-insulator-semiconductor devices because of its higher carrier mobility than silicon (Si) . Although wet-chemical treatments are essential for the fabrication of Ge-based devices, they have not been well established yet. The primary reason for this is the chemical reactivity of Ge and its oxide (GeO2) with various solutions. For example, Ge oxide (GeO2) is permeable and soluble in water, unlike the more familiar silicon oxide (SiO2). Ge surfaces are also not resistant to various chemical solutions. For example, a piranha solution (a mixture of H2SO4 and H2O2) is commonly used in removing metallic and organic contaminants on the Si surface. However, we cannot use it for Ge because it damages Ge surfaces very easily. Although in several earlier works, the etching property of Ge surfaces has been investigated [2, 3], the unique chemical nature of Ge prevents researchers from developing surface treatment procedures for Ge using solutions.
One of the surface preparation steps needed is wet cleaning. For Si, sophisticated cleaning procedures have been developed since the 1970s [4, 5]. For Ge, however, researchers have just started developing wet cleaning processes together with some pioneering works [6–9]. Furthermore, a variety of solutions have been used in lithography processes (e.g., development, etching, and stripping) to fabricate Si-based devices. However, patterning techniques are not well optimized in the case of Ge. To realize these surface preparation methods, the impact of various aqueous solutions on the morphology of Ge surfaces should be understood on the atomic scale.
In this study, we pay attention to the interaction of water with Ge surfaces in the presence of metals on the Ge surface. In the case of Si, the metal/Si interface in HF solution with added oxidants has been extensively studied [10–18]. Metallic particles on Si serve as a catalyst for the formation of porous surfaces, which can be applied in solar cells. A similar metal/Si interaction is also used to form either oxide patterns or trenches. Recently, we have found that similar reactions occur on Ge surfaces even in water [20, 21]. On the basis of these preceding works, we show here the formation of inverted pyramids in water on Ge(100) loaded with metallic particles. We also discuss the mechanism of this formation on the basis of the relevant redox potentials as well as the catalytic role of metals. Then, we apply this metal-assisted chemical etching to the nanoscale patterning of Ge in water.
We used both p-type and n-type Ge(100) wafers with resistivities of 0.1 to 12 Ω cm and 0.1 to 0.5 Ω cm, respectively. The wafers were first rinsed with water for 1 min followed by treatment with an ultraviolet ozone generator for 15 min to remove organic contaminants. They were then immersed in a dilute HF solution (approximately 0.5%) for 1 min.
We conducted two experiments. One is the etch-pit formation by metallic particles in water. Here, we used both Ag and Pt nanoparticles. Ag nanoparticles with a diameter (φ) of approximately 20 nm were mainly used. To deposit these nanoparticles, Ge surfaces were dipped in HCl solution (10-3 M, 100 ml) with AgClO4 (10-4 M, 100 ml) for 5 min. After dipping, they were dried under N2 flow. We also used Pt nanoparticles of approximately 7 nm φ, which were synthesized in accordance with the literature . They were coated with a ligand (tetradecyltrimethylammonium) to avoid aggregation and were dispersed in water. This enabled us to obtain near monodispersed particles. The Ge samples were immersed in the resulting solution and dried under N2 flow. Then, the Ge surfaces loaded with the Pt particles were treated with the ultraviolet ozone generator for 6 h to remove the ligand bound to the Pt surfaces. In this experiment, we used two types of water with different dissolved oxygen concentrations, both of which were prepared from semiconductor-grade ultrapure water. The first type was water poured and stored in a perfluoroalkoxy (PFA) beaker. This water has a saturated dissolved-oxygen concentration of approximately 9 ppm. The second type contained a very low oxygen concentration of approximately 3 ppb. We, hereafter, call these two types of water ‘saturated dissolved-oxygen water’ (SOW) and ‘low dissolved-oxygen water’ (LOW), respectively. By putting a Ge sample in a PFA container connected directly to an ultrapure water line faucet, we were able to treat samples in LOW. The change in the structure of Ge surfaces loaded with metallic particles by immersion in water in the dark was analyzed by scanning electron microscopy (SEM, HITACHI S-4800, Hitachi Ltd., Tokyo, Japan).
The other experiment is the nanoscale machining of Ge surfaces by means of the catalytic activity of the metallic probes, using a commercial atomic force microscopy (AFM) system (SPA-400, Hitachi High-Tech Science Corporation, Tokyo, Japan) equipped with a liquid cell. It was carried out in the contact mode using two types of silicon cantilever probe from NANOWORLD (Neuchâtel, Switzerland): a bare Si cantilever and a cantilever coated with a 25-nm thick Pt/Ir layer (Pt 95%, Ir 5%). The resonant frequency and spring constant of both probes were 13 kHz and 0.2 N/m, respectively. An AFM head was covered with a box capable of shutting out external light. A conventional optical lever technique was used to detect the position of the cantilever. Ultrapure water exposed to air ambient and poured in the liquid cell contained approximately 9 ppm dissolved oxygen (SOW). We added ammonium sulfite monohydrate (JIS First Grade, NACALAI TESQUE Inc., Kyoto, Japan) to the water in the liquid cell. Performed according to the literature [23–25], this method enabled us to obtain ultralow dissolved-oxygen water with approximately 1 ppb oxygen (LOW).
Results and discussion
Figure 1a shows a typical p-type Ge(100) surface after the deposition of Ag particles. From the figure, it is clear that the particles are well dispersed (not segregated) and almost spherical, even with the simple deposition method used. They are approximately 20 nm in diameter. After the sample was immersed and stored in SOW in the dark for 24 h, its surface structure changed markedly, as shown in Figure 1b. Namely, most of the Ag particles disappeared and pits emerged. Most of the pits formed square edges. When the sample was dipped in SOW for more 48 h (72 h in total), each pit grew as shown in Figure 1c. It is clear that the shape of the pit is an inverted pyramid with edges aligned along the <110> direction. We confirmed in another experiment that (1) a metallic particle usually resided at the bottom of the pit , and (2) inverted pyramidal pits were formed on the n-type Ge sample as well. Figure 1d shows an SEM image of a p-type Ge(100) surface loaded with Pt particles. As indicated by white arrows, particles of about 7 nm φ are well dispersed. After the surface shown in Figure 1d was subsequently immersed in SOW and stored in the dark for 24 h, etch pits were formed as shown in Figure 1e.
Many works have shown pore formation on Si with metallic particles as catalysts in HF solution containing oxidants such as H2O2 [10–18]. In analogy with these preceding works, it is likely that an enhanced electron transfer from Ge to O2 around metallic particles is the reason for the etch-pit formation shown in Figure 1b,c,e. The reaction by which O2 in water is reduced to water can be expressed by the redox reaction equation
O2 + 4H+ + 4e- → 2H2O, E0 = +1.23 V (vs. NHE),     (1)
where E0 is the standard reduction potential, and NHE is the normal hydrogen electrode. The reaction in which Ge in an aqueous solution releases electrons can be expressed as
Ge + 2H2O → GeO2 + 4H+ + 4e-, E0 = -0.15 V (vs. NHE).     (2)
Because the redox potentials depend on the pH of the solution, these potentials at 25°C are respectively given by the Nernst relationship as
E(O2/H2O) = 1.23 - 0.059 pH and E(GeO2/Ge) = -0.15 - 0.059 pH,
where the O2 pressure is assumed to be 1 atm. In water of pH 7, E(O2/H2O) and E(GeO2/Ge) are +0.82 and -0.56 (V vs. NHE), respectively. These simple approximations imply that a Ge surface is oxidized by the reduction of dissolved oxygen in water. We speculate that such oxygen reduction is catalyzed by metallic particles such as Ag and Pt. Electrons transferred from Ag particles to O2 in water are supplied from Ge, which enhances the oxidation around particles on Ge surfaces, as schematically depicted in Figure 2a. Because GeO2 is soluble in water, etch pits are formed around metallic particles, as shown in Figure 1. We showed in another experiment that the immersion of a Ge(100) sample loaded with metallic particles (Ag particles) in LOW creates no such pits [20, 21], which supports the validity of the model mentioned above. Furthermore, we have confirmed that the metal-assisted etching of Ge surfaces in water mediated by dissolved oxygen occurs not only with metallic particles but also with metallic thin films such as Pt-Pd and Pt.
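The pH dependence above can be checked with a few lines of arithmetic. The following sketch evaluates the two Nernst expressions as reconstructed here; the standard potentials of +1.23 V (O2/H2O) and -0.15 V (GeO2/Ge) are assumed values inferred from the quoted pH 7 potentials and the 0.059 V/pH slope, and the script simply reproduces the +0.82 and -0.56 V figures.

    # Nernst-type pH dependence at 25 degrees C, assuming a 0.059 V/pH slope and
    # standard potentials inferred from the values quoted in the text.
    E0_O2_H2O = 1.23    # V vs. NHE, O2 + 4H+ + 4e- -> 2H2O (assumed)
    E0_GeO2_Ge = -0.15  # V vs. NHE, Ge + 2H2O -> GeO2 + 4H+ + 4e- (assumed)
    SLOPE = 0.059       # V per pH unit

    def potential(e0, ph):
        """Electrode potential of a proton-coupled redox couple at the given pH."""
        return e0 - SLOPE * ph

    for ph in (0, 7, 14):
        e_o2 = potential(E0_O2_H2O, ph)
        e_ge = potential(E0_GeO2_Ge, ph)
        # A positive difference means O2 reduction can drive Ge oxidation to GeO2.
        print(f"pH {ph:2d}: E(O2/H2O) = {e_o2:+.2f} V, E(GeO2/Ge) = {e_ge:+.2f} V, "
              f"driving force = {e_o2 - e_ge:.2f} V")

At pH 7 this gives +0.82 V and -0.56 V, matching the values above; under these assumptions the driving force of about 1.38 V is independent of pH because both couples shift with the same slope.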
One may wonder why p-type Ge releases electrons to be oxidized as shown in Equation (2), because electrons are minority carriers for p-type samples. In the pore formation on Si by metal-assisted chemical etching in the dark, researchers mentioned that the conductivity type of the Si substrate (p-type or n-type) does not directly influence the morphology of pits formed [11, 12]. This is in agreement with our result in which a Ge surface with either conductivity type was preferentially etched around metallic particles in saturated dissolved-oxygen water in the dark. As described previously, we confirmed that similar etch pits to those on p-type wafers were formed on n-type ones. We presume that n-type Ge samples emit electrons in the conduction band (majority carriers), whereas p-type samples release them in the valence band.
In our experiments, most etch pits were pyramidal, one of which is shown in Figure 1c. The outermost Ge atoms on the (111) and (100) faces have three and two backbonds, respectively. This probably induces a (100) facet to dissolve faster in water than a (111) facet, forming a pyramidal etch pit on the Ge(100) surface, as schematically shown in Figure 2b. This anisotropic etching is very unique, because it has not been observed on Si(100) surfaces with metallic particles immersed in HF solution with oxidants. It should be noted that Figure 1e exhibits some ‘rhomboid’ and ‘rectangular’ pits together with ‘square’ pits. We believe that the square pits in Figure 1e represent pyramidal etch pits similar to those with Ag particles in Figure 1c. On the other hand, the reason for the formation of the rhomboid or rectangular pits in Figure 1e is not very clear at present. It is possible that the shape of a pit depends on that of a metallic particle. Although Ag particles (φ is approximately 20 nm) appear spherical in Figure 1a, the shape of the Pt particles (φ about 7 nm) is hard to determine from the SEM image in Figure 1d. To answer this question, etch pits should be formed with Ag and Pt particles of similar diameters and shapes, which remain to be tested.
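A simple geometric check is consistent with this picture: an inverted pyramid on a (100) surface bounded by {111} sidewalls has facets inclined at arccos(1/sqrt(3)), about 54.7°, to the surface, so pit depth scales directly with the width of the pit opening. The sketch below computes that angle and the implied depth for a hypothetical pit width; the 100 nm value is an arbitrary example, not a measured quantity from this work.

    import math

    # Angle between the (100) surface and a {111} sidewall facet:
    # cos(theta) = n(100) . n(111) / (|n(100)| |n(111)|) = 1 / sqrt(3)
    theta = math.degrees(math.acos(1 / math.sqrt(3)))

    def pit_depth(width_nm, angle_deg=theta):
        """Depth of an inverted pyramidal pit bounded by {111} facets on (100)."""
        return (width_nm / 2) * math.tan(math.radians(angle_deg))

    width = 100.0  # nm, hypothetical pit opening measured along <110>
    print(f"(100)/{{111}} facet angle: {theta:.1f} deg")
    print(f"A {width:.0f} nm wide pit would be about {pit_depth(width):.0f} nm deep")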
On the basis of the experimental results shown above, we aimed at the nanoscale patterning of Ge surfaces in water by scanning a metal-coated probe. An example is shown in Figure 3 in which experimental conditions are schematically depicted on the left column. First, a p-type Ge(100) surface was imaged using a conventional Si cantilever in air in the contact mode with a scan area of 3.0 × 3.0 μm2, as shown in Figure 3a. Then, the 1.0 × 1.0 μm2 central area was scanned ten times with a pressing force of 3 nN, and the 3.0 × 3.0 μm2 initial area was imaged again. The ten scans took about 45 min. Significant changes in Figure 3a,b are hardly visible, indicating that the mechanical removal of the Ge surfaces by the cantilever is negligible. Experiments similar to those shown in Figure 3a,b were performed in SOW, and their results are shown in Figure 3c,d, respectively. In Figure 3d, the scanned area at the center of the image is observed as a shallow hollow, the cross-sectional profile of which revealed its depth to be approximately 1.0 nm. In contrast, the multiple scans (ten scans) using a Pt-coated cantilever in SOW created a clear square hollow, as shown in Figure 3e,f. The etched depth of the 1.0 × 1.0 μm2 central area in Figure 3f was about 4.0 nm from a cross-sectional profile. The mechanism of inducing the difference between image (d) and image (f) in Figure 3 is as follows. As mentioned previously, we scanned a cantilever in the contact mode. Taking into account the catalytic activity of metals (e.g., Pt) enhancing the reactions in Equations (1) and (2), we suppose that, at each moment during the scan, a Ge surface in contact with a Pt probe is preferentially oxidized in water in the presence of dissolved oxygen. This is schematically drawn in Figure 4a. Owing to the soluble nature of GeO2, the scanned area exhibits a square hollow, as shown in Figure 3f. In Figure 3b,d,f taken after the ten scans, no pyramidal pits such as those shown in Figure 1 are observed. This is because we did not fix the cantilever at only one surface site, but rather scanned it over a micrometer area, which is much larger than the etched depth, as schematically depicted in Figure 4b. Figure 5a,b shows a summary of etched depth as a function of pressing force on the n-type and p-type Ge(100) surfaces, respectively. Because the plots in Figure 5 slightly fluctuate, it is hard to fit them using a simple straight line or a curve. This is probably due to the difference in probe apex among the sets of experiments performed. However, Figure 5 clearly indicates that (1) the catalytic activity of metals (e.g., Pt) has a much greater effect on Ge etching than that of the mechanical machining caused by a pressurized cantilever, and (2) dissolved oxygen in water is the key molecule in metal-assisted etching. Namely, it is easy to imagine that the Ge surface was machined mechanically to some extent by the pressed cantilever on Ge. In Figure 5, the etched depth increases slightly at a larger pressing force even with a Si cantilever in SOW (light gray filled circles) or a Pt-coated cantilever in LOW (gray filled circles). This indicates that the mechanical etching of Ge occurs, but its effect is very small. On the other hand, a drastic increase in etched depth is observed with a Pt-coated cantilever in SOW (blue filled circles) at each pressing force, which is probably induced by the catalytic effect of Pt mediated by dissolved oxygen in water. 
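The etched depths quoted above were read from cross-sectional profiles of the AFM images. As a rough sketch of how such a depth can be extracted, the code below takes a one-dimensional height profile crossing the scanned square and reports the difference between the median height outside and inside the etched region; the profile values and window positions are invented for illustration and are not data from this work.

    import statistics

    def etched_depth(profile_nm, etched_window, reference_window):
        """Etch depth as the median reference height minus the median etched height."""
        reference = statistics.median(profile_nm[reference_window])
        etched = statistics.median(profile_nm[etched_window])
        return reference - etched

    # Hypothetical height profile (nm) across the 1.0 x 1.0 um2 scanned square.
    profile = [0.1, 0.0, 0.2, 0.1, -3.9, -4.1, -4.0, -3.8, -4.1, 0.0, 0.1, 0.2]

    depth = etched_depth(profile, etched_window=slice(4, 9), reference_window=slice(0, 4))
    print(f"Etched depth ~ {depth:.1f} nm")

Using the median rather than the mean makes the estimate less sensitive to debris spikes or particles left in the etched area.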
One may think that the difference in etched depth between the blue and gray (or light gray) filled circles increases with increasing pressing force in Figure 5. This is as if the catalytic effect is enhanced at greater pressing forces. As for the reason for this enhancement, we imagine that the probe apex became blunter at larger forces. This results in an increase in the area of contact between the metallic probe and the Ge surface, which enhances the etching rate of Ge by the catalytic effect. Note in Figure 5 that an n-type Ge surface is etched deeper than a p-type one in the entire pressing force range when a Pt-coated cantilever was scanned in SOW. One explanation for this is that more electrons in the n-type Ge samples are transferred to oxygen molecules via Equations (1) and (2) because the work function, or the energy necessary for an electron to escape into vacuum from an initial energy at the Fermi level, is smaller for n-type samples than for p-type ones. This increases the oxidation rate of Ge, resulting in an accelerated etching of n-type Ge. Another explanation is that the resistivity of the samples, not the conductivity type, determines the etched depth shown by a blue filled circle in Figure 5. Because our p-type samples had a wider range of resistivities (0.1 to 12 Ω cm) than the n-type ones (0.1 to 0.5 Ω cm), we should not exclude the possibility of carrier density affecting the removal rate of Ge in metal-assisted chemical etching.
As mentioned in the ‘Background’ section, Ge is not resistant to a variety of chemical solutions. Hence, wet-chemical treatments such as wet cleaning and lithography for Ge have not been as well optimized as those for Si. The results in this study carry several important messages for future semiconductor processes for Ge. First, residual metallic particles on Ge can increase surface microroughness even in water. For Ge surfaces, LOW should be used for rinsing to prevent unwanted pit formation. However, the metal-assisted chemical etching presented here can be a novel patterning technique for Ge surfaces in water, one example of which is demonstrated in Figures 3 and 5. This method is unique and promising because it requires none of the chemical solutions that degrade Ge surfaces but are used in conventional wet-chemical treatments in Si processing.
We studied the metal-induced chemical etching of Ge(100) surfaces in water. We showed that noble metal particles such as Ag and Pt induce anisotropic etching. The mechanism of this formation is the catalytic activity of noble metals to reduce O2 molecules in water, which promotes preferential oxidation around metallic particles. Etch pits are formed to roughen the surface due to the soluble nature of GeO2. A key parameter for controlling the reaction is the dissolved oxygen concentration of water. We proposed that enhanced etching can be used positively toward the nanoscale patterning of Ge surfaces in water. This idea was confirmed by a set of AFM experiments in which a cantilever probe on Ge(100) was scanned in either water or air. We investigated the dependences of probe material, pressing force, and dissolved oxygen concentration on etched depth. We demonstrated the metal-assisted patterning of Ge surfaces in water, the mechanism of which is similar to that of the metal-induced pit formation mentioned above.
AFM: Atomic force microscopy
LOW: Low dissolved-oxygen water
NHE: Normal hydrogen electrode
SEM: Scanning electron microscopy
SOW: Saturated dissolved-oxygen water
Matsubara H, Sasada T, Takenaka M, Takagi S: Evidence of low interface trap density in GeO2/Ge metal-oxide-semiconductor structures fabricated by thermal oxidation. Appl Phys Lett 2008, 93: 032104. 10.1063/1.2959731
Leancu R, Moldovan N, Csepregi L, Lang W: Anisotropic etching of germanium. Sens Actuators A-Phys 1995, 46: 35–37. 10.1016/0924-4247(94)00856-D
Fang C, Foll H, Carstensen J: Electrochemical pore etching in germanium. J Electroanal Chem 2006, 589: 259–288. 10.1016/j.jelechem.2006.02.021
Kern W, Puotinen DA: Cleaning solutions based on hydrogen peroxide for use in silicon semiconductor technology. RCA Review 1970, 31: 187–206.
Ohmi T: Total room temperature wet cleaning for Si substrate surface. J Electrochem Soc 1996, 143: 2957–2964. 10.1149/1.1837133
Onsia B, Conard T, De Gendt S, Heyns M, Hoflijk I, Mertens P, Meuris M, Raskin G, Sioncke S, Teerlinck I, Theuwis A, Van Steenbergen J, Vinckier C: A study of the influence of typical wet chemical treatments on the germanium wafer surface. In Ultra Clean Processing of Silicon Surfaces VII. Volume 103–104. Edited by: Mertens P, Meuris M, Heyns M. Switzerland: Solid State Phenomena; 2005:27–30.
Blumenstein C, Meyer S, Ruff A, Schmid B, Schafer J, Claessen R: High purity chemical etching and thermal passivation process for Ge(001) as nanostructure template. J Chem Phys 2011, 135: 064201. 10.1063/1.3624902
Fleischmann C, Houssa M, Sioncke S, Beckhoff B, Muller M, Honicke P, Meuris M, Temst K, Vantomme A: Self-affine surface roughness of chemically and thermally cleaned Ge(100) surfaces. J Electrochem Soc 2011, 158: H1090-H1096. 10.1149/1.3624762
Dei K, Kawase T, Yoneda K, Uchikoshi J, Morita M, Arima K: Characterization of terraces and steps on Cl-terminated Ge(111) surfaces after HCl treatment in N2 ambient. J Nanosci Nanotech 2011, 11: 2968–2972. 10.1166/jnn.2011.3895
Li X, Bohn PW: Metal-assisted chemical etching in HF/H2O2 produces porous silicon. Appl Phys Lett 2000, 77: 2572–2574. 10.1063/1.1319191
Mitsugi N, Nagai K: Pit formation induced by copper contamination on silicon surface immersed in dilute hydrofluoric acid solution. J Electrochem Soc 2004, 151: G302-G306. 10.1149/1.1669026
Tsujino K, Matsumura M: Boring deep cylindrical nanoholes in silicon using silver nanoparticles as a catalyst. Adv Mater 2005, 17: 1045–1047. 10.1002/adma.200401681
Tsujino K, Matsumura M: Helical nanoholes bored in silicon by wet chemical etching using platinum nanoparticles as catalyst. Electrochem Solid State Lett 2005, 8: C193-C195. 10.1149/1.2109347
Tsujino K, Matsumura M: Morphology of nanoholes formed in silicon by wet etching in solutions containing HF and H2O2 at different concentrations using silver nanoparticles as catalysts. Electrochim Acta 2007, 53: 28–34. 10.1016/j.electacta.2007.01.035
Chartier C, Bastide S, Levy-Clement C: Metal-assisted chemical etching of silicon in HF-H2O2. Electrochim Acta 2008, 53: 5509–5516. 10.1016/j.electacta.2008.03.009
Lee CL, Tsujino K, Kanda Y, Ikeda S, Matsumura M: Pore formation in silicon by wet etching using micrometre-sized metal particles as catalysts. J Mat Chem 2008, 18: 1015–1020. 10.1039/b715639a
Chourou ML, Fukami K, Sakka T, Virtanen S, Ogata YH: Metal-assisted etching of p-type silicon under anodic polarization in HF solution with and without H2O2. Electrochim Acta 2010, 55: 903–912. 10.1016/j.electacta.2009.09.048
Yae S, Tashiro M, Abe M, Fukumuro N, Matsuda H: High catalytic activity of palladium for metal-enhanced HF etching of silicon. J Electrochem Soc 2010, 157: D90-D93. 10.1149/1.3264643
Vijaykumar T, Raina G, Heun S, Kulkarni GU: Catalytic behavior of individual Au nanocrystals in the local anodic oxidation of Si surfaces. J Phys Chem C 2008, 112: 13311–13316. 10.1021/jp804545s
Arima K, Kawase T, Nishitani K, Mura A, Kawai K, Uchikoshi J, Morita M: Formation of pyramidal etch pits induced by metallic particles on Ge(100) surfaces in water. ECS Trans 2011, 41: 171–178.
Kawase T, Mura A, Nishitani K, Kawai Y, Kawai K, Uchikoshi J, Morita M, Arima K: Catalytic behavior of metallic particles in anisotropic etching of Ge(100) surfaces in water mediated by dissolved oxygen. J Appl Phys 2012, 111: 126102. 10.1063/1.4730768
Lee H, Habas SE, Kweskin S, Butcher D, Somorjai GA, Yang PD: Morphological control of catalytically active platinum nanocrystals. Angew Chem Int Ed 2006, 45: 7824–7828. 10.1002/anie.200603068
Fukidome H, Matsumura M: A very simple method of flattening Si(111) surface at an atomic level using oxygen-free water. Jpn J Appl Phys Part 2-Letters 1999, 38: L1085-L1086. 10.1143/JJAP.38.L1085
Fukidome H, Matsumura M, Komeda T, Namba K, Nishioka Y: In situ atomic force microscopy observation of dissolution process of Si(111) in oxygen-free water at room temperature. Electrochem Solid State Lett 1999, 2: 393–394. 10.1149/1.1390848
Tokuda N, Nishizawa M, Miki K, Yamasaki S, Hasunuma R, Yamabe K: Selective growth of monoatomic Cu rows at step edges on Si(111) substrates in ultralow-dissolved-oxygen water. Jpn J Appl Phys Part 2-Letters & Express Letters 2005, 44: L613-L615. 10.1143/JJAP.44.L613
The authors would like to thank Dr. Yusuke Yamada for the preparation of the Pt particles. The work was supported in part by a Grant-in-Aid for Young Scientists (A) (grant no.: 24686020) from Japan Society for the Promotion of Science. It was also supported in part by grants from Amano Institute of Technology and Ichijyu Industrial Science and Technology Promotion Foundation.
The authors declare that they have no competing interests.
TK carried out the nanoscale patterning experiments using the AFM setup. AM investigated the etching property of the Ge surface by metallic particles by SEM. KD and KN participated in the sample preparations. KK and JU analyzed the data, and MM revealed the nanoscale mechanism of metal-assisted chemical etching. KA gave the final approval of the version of the manuscript to be published. All authors read and approved the final manuscript.
Cite this article
Kawase, T., Mura, A., Dei, K. et al. Metal-assisted chemical etching of Ge(100) surfaces in water toward nanoscale patterning. Nanoscale Res Lett 8, 151 (2013) doi:10.1186/1556-276X-8-151
- Dissolved oxygen
- Oxygen reduction
- Atomic force microscopy | <urn:uuid:6301e30a-2050-4dd3-a886-9097c097f7df> | CC-MAIN-2019-47 | https://nanoscalereslett.springeropen.com/articles/10.1186/1556-276X-8-151 | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496664567.4/warc/CC-MAIN-20191112024224-20191112052224-00099.warc.gz | en | 0.906799 | 6,331 | 2.953125 | 3 |
The Secrets, Benefits and Virtues of Friday – (Jumm’ah)
(Also, Jumm’ah is called a day of ‘Eid’)
“O Muslims! Allah Ta’ala has made this day-(Friday) a day of ‘Eid’.
So have a bath on this day, whoever has perfume should apply it, and use the miswaak.”
Friday has many distinguishing features and virtues that Allaah has bestowed upon this day and not others.
It was narrated that Abu Hurayrah رضي الله عنه and Hudhayfah رضي الله عنه said: The Messenger of Allaah (صلى الله عليه وسلم) said: “Allaah led those who came before us away from Friday (Jumu’ah). The Jews had Saturday, and the Christians had Sunday. Then Allaah brought us and Allaah guided us to Friday. So there is Friday, Saturday and Sunday, and thus they will follow us on the Day of Resurrection. We are the last of the people of this world but we will be the first on the Day of Resurrection, and we will be dealt with before all others.”
[Narrated by Muslim, 856]
It was narrated from Aws ibn Aws رضي الله عنه that the Prophet (صلى الله عليه وسلم) said: “The best of your days is Friday (Jumu’ah). On that day Adam (alayhis-sallam) was created; on that day he died; on that day the Trumpet will be blown and on that day all of creation will swoon. So send a great deal of blessings upon me, for your blessings will be shown to me.” They said, “O Messenger of Allaah, how will our blessings upon you be shown to you when you have turned to dust?” He said, “Allaah has forbidden the earth to consume the bodies of the Prophets, عليهم الصلاة والسلام – peace be upon them.” [Narrated by Abu Dawood, 1047; classed as saheeh by Ibn al-Qayyim رحمه الله in his comments on Sunan Abi Dawood]
It was narrated that Abu Hurayrah (رضي الله عنه) said: The Messenger of Allaah (صلى الله عليه وسلم) said: “The best day on which the sun rises is Friday (Jumu’ah). On it Adam was created, on it he was admitted to Paradise and on it he was expelled therefrom.” [Narrated by Muslim, 1410]
This hadeeth includes some of the reasons why Friday is regarded as special.
Al-Nawawi رحمة اللہ علیه said: Al-Qaadi ‘Iyaad رحمة اللہ علیه said: The apparent meaning is that these virtues do not mean that Friday is regarded as special because Adam (alayhis-sallam – peace be upon him) was expelled on a Friday and the Hour will begin on a Friday. Rather it is meant to explain what momentous events took place and will take place on this day, so that people will make the most of this day to do righteous deeds, so as to attain the mercy of Allaah and ward off His punishment.
It was narrated that Abu Lubaabah ibn ‘Abd al-Mundhir رضي الله عنه said: The Prophet (صلى الله عليه وسلم) said: “Friday is the master of days, and the greatest of them before Allaah. It is greater before Allaah than the day of al-Adha and the day of al-Fitr. It has five characteristics: on this day Allaah created Adam, on it He sent Adam down to the earth, on it Allaah caused Adam to die, on it there is a time when a person does not ask Allaah for anything but He gives it to him, so long as he does not ask for anything haraam (forbidden), and on it the Hour will begin. There is no angel who is close to Allaah, no heaven, no earth, no wind, no mountain and no sea that does not fear Friday.” [Narrated by Ibn Maajah, 1084]
Al-Sanadi رحمة اللہ علیه said: “They fear Friday” means they fear the onset of the Hour. This indicates that all created beings are aware of the days and they know that the Day of Resurrection will come on a Friday (Jumu’ah).
The virtues of this day include the following:
1 – On it is Salaat al-Jumu’ah (Friday prayer), which is the best of prayer.
Allaah سبحانه و تعالى says:
“O you who believe (Muslims)! When the call is proclaimed for the Salaah (prayer) on Friday (Jumu‘ah prayer), come to the remembrance of Allaah [Jumu‘ah religious talk (Khutbah) and Salaah (prayer)] and leave off business (and every other thing). That is better for you if you did but know!” [al-Jumu’ah 62:9]
Muslim (233) narrated from Abu Hurayrah (رضي الله عنه) that the Messenger of Allaah (صلى الله عليه وسلم) said: “The five daily prayers and from one Jumu’ah to the next is an expiation for whatever sins come in between them, so long as one does not commit a major sin.”
2 – Praying Fajr in congregation on Fridays is the best prayer that the Muslim can pray during the week.
It was narrated that Ibn ‘Umar (رضي الله عنهما) said: The Messenger (صلى الله عليه وسلم) said: “The best prayer before Allaah is Fajr prayer on Friday in congregation.” [Narrated by al-Bayhaqi in Shu’ab al-Eemaan]
One of the special features of Fajr prayer on Friday is that it is Sunnah to recite Soorat al-Sajdah in the first rak’ah and Soorat al-Insaan in the second.
It was narrated from Abu Hurayrah (رضي الله عنه) that the Prophet (صلى الله عليه وسلم) used to recite in Fajr prayer in Fridays Soorat al-Sajdah (32) in the first rak’ah and Soorat al-Insaan (76) in the second. [Narrated by al-Bukhaari, 851; Muslim, 880]
Al-Haafiz Ibn Hajar رحمة اللہ علیه said: It was said that the reason why these two soorahs are recited is because they mention the creation of Adam and what will happen on the Day of Resurrection, because that will come to pass on a Friday.
3 – Whoever dies during the day or night of Friday, Allaah will protect him from the trial of the grave.
It was narrated that ‘Abd-Allaah ibn ‘Amr رضي الله عنه said: The Messenger of Allaah (صلى الله عليه وسلم) said: “There is no Muslim who dies during the day of Friday or the night of Friday but Allaah will protect him from the trial of the grave.”
[Narrated by al-Tirmidhi, 1074]
It is recommended to recite Surat al-Kahf completely the night before Friday, and it is also recommended to do so on Friday itself, before Maghrib time. Ibn Abidin said, "And it is best to do so early on Friday, in order to rush to the good and to avoid forgetting." [Ibn Abidin, Radd al-Muhtar, Bab al-Jumu`ah]
The evidence for it being recommended includes the hadith related by Hakim and Bayhaqi, from Abu Sa`id (Allah be pleased with him), "Whoever recites Surat al-Kahf on Friday, light shall shine forth for him between the two Fridays." [Ibn Hajar, Talkhis al-Habir]
It is mentioned in Heavenly Ornaments, by Imam al-Tahanawi:
The Virtues of Jumu’ah
1 . Rasulullah (Allah bless him & give him peace) said: “Friday is the best of days. It was on this day that Hadrat Aadam alayhis salaam was created, it was on this day that he was granted entry into jannah, it was on this day that he was removed from jannah (which became the cause for man’s existence in this universe, and which is a great blessing), and the day of resurrection will also take place on this day. ” (Sahih Muslim)
2. It is related from Imam Ahmad rahmatullahialayh that he said that in certain aspects the rank of the night of jumu’ah is even higher than that of Laylatul Qadr. One of the reasons for this is that it was on this night that Rasulullah (Allah bless him & give him peace) appeared in the womb of his mother. Rasulullah’s (Allah bless him & give him peace) appearance in this world was a cause of so much good and blessings, both in this world and in the hereafter, that they cannot be enumerated. (Ash’atul Lama’aat)
3 . Rasulullah (Allah bless him & give him peace) said: “There is such an hour on Friday that if any Muslim makes dua in it, his dua will definitely be accepted. ” (Bukhari, Muslim) The ulama have differed in specifying that hour which has been mentioned in the Hadith . Shaykh Abdul Haq Muhaddith Dehlawi rahmatullahialayh has mentioned 40 different opinions in his book Sharh Sifrus Sa’aadah . However, from among all these opinions he has given preference to two opinions: (1) That time is from the commencement of the khutbah till the end of the salaat, (2) That time is towards the end of the day . A big group of ulama have given preference to this second opinion and there are many Ahadith which support this opinion . Shaykh Dehlawi rahmatullahialayh says that this narration is correct that Hadrat Fatimah radiallahuanha used to order her maid on Fridays to inform her when the day is about to end so that she could occupy herself in making zikr and duas. (Ash’atulLama’aat)
4 . Rasulullah (Allah bless him & give him peace) said: “Of all the days, Friday is the most virtuous. It is on this day that the trumpet will be blown. Send abundant durood upon me on Fridays because they are presented to me on that day. ” The Sahabah radiallahu anhum asked: “O Rasulullah! How will they be presented to you when even your bones will not be present after your death?” Rasulullah (Allah bless him & give him peace) replied: “Allah Ta’ala has made the earth haraam upon the prophets forever . ” (Abu Daud)
5 . Rasulullah (Allah bless him & give him peace) said: “The word “shaahid” refers to Friday . There is no day more virtuous than Friday. There is such an hour in this day that no Muslim will make dua in it except that his dua will be accepted. And he does not seek protection from anything except that Allah Ta’ala will grant him protection . ” (Tirmidhi) The word “shaahid” appears in Surah Burooj . Allah Ta’ala has taken an oath of that day. He says in the Quran:
“By the sky in which there are constellations .By the promised day (of judgement). By the day that witnesses (Friday), and the day that is witnessed (day of Arafah). ”
6 . Rasulullah (Allah bless him & give him peace) said: “Friday is the “mother” of all days and the most virtuous in the sight of Allah Ta’ala. In the sight of Allah Ta’ala it has more greatness than Eid ul-Fitr and Eid ul-Ad’haa. ” (Ibn Majah)
7 . Rasulullah (Allah bless him & give him peace) said: “The Muslim who passes away on the night or during the day of Friday, Allah Ta’ala saves him from the punishment of the grave . ” (Tirmidhi)
8. Once Hadrat Ibne Abbas radiallahu anhu recited the following verse: "This day, I have completed your Deen for you." A Jew was sitting near him. On hearing this verse being recited he remarked: "If this verse was revealed to us, we would have celebrated that day as a day of eid." Ibne Abbas radiallahu anhu replied: "This verse was revealed on two eids, i.e. on the day of jumu’ah and the day of arafah." In other words, what is the need for us to make that day into a day of eid when it was already a day of two eids?
9 . Rasulullah (Allah bless him & give him peace) used to say that the night of jumu’ah is a lustrous night, and the day of jumu’ah is a lustrous day. (Mishkaat)
10 . After qiyaamah, Allah Ta’ala will send those who deserve paradise into paradise, and those who deserve hell into hell . The days that we have in this world will also be there. Although there will be no day and night, Allah Ta’ala will show us the extent of days and nights and also the number of hours. So when Friday will come and that hour when the people used to go for jumu’ah will approach, a person will call out saying: “O dwellers of jannah! Go into the jungles of abundance, the length and breadth of which are not known to anyone besides Allah Ta’ala. There will be mounds of musk which will be as high as the skies. The prophets alayhimus salaam will be made to sit on towers of light, and the believers on chairs of sapphires. Once everyone is seated in their respective places, Allah Ta’ala will send a breeze which will carry that musk. That breeze will carry the musk and apply it to their clothing, faces and hair . That breeze will know how to apply that musk even better than that woman who is given all the different perfumes of the world. Allah Ta’ala will then order the carriers of His throne to go and place His throne among all these people. He will then address them saying: “O my servants who have brought faith in the unseen despite not seeing Me, who have attested My Rasul (Allah bless him & give him peace), and who have obeyed My laws! Ask Me whatever you wish for . This day is the day of giving abundantly . ” They will all exclaim in one voice: “O Allah! We are pleased with You, You also be pleased with us. ” Allah Ta’ala will reply: “O dwellers of jannah! If I were not pleased with you all, I would not have kept you in My jannah . Ask for something because this is the day of giving in abundance. ” They will all say in one voice: “O Allah! Show us Your beauty, that we may be able to look at Your noble being with our very eyes . ” Allah Ta’ala will lift the veil and will become apparent to these people and His beauty will engulf them from all sides . If this order was not given from before hand that the jannatis will never get burnt, without doubt they would not have endured the heat of this light and they would all have got burnt. He will then ask them to go back to their respective places . Their beauty and attractiveness will double through the effects of that Real beauty. These people will then go to their wives. They will not be able to see their wives nor will their wives be able to see them. After a little while, the nur which was concealing them will be removed and they will now be able to see each other. Their wives will tell them that how is it that you do not have the same appearance which you had left with? That is, your appearance is a thousand times better now. They will reply that the reason for this is that the noble being of Allah Ta’ala was made apparent to us and we saw His beauty with our very eyes . (Sharh Sifrus-Sa’aadah) See what a great bounty they received on the day of jumu’ah.
11 . Every afternoon, the heat of jahannam is increased. However, through the blessings of jumu’ah, this will not be done on Fridays . (Ihyaa ul-Uloom)
12 . On one Friday, Rasulullah (Allah bless him & give him peace) said: “O Muslims! Allah Ta’ala has made this day a day of eid . So have a bath on this day, whoever has perfume should apply it, and use the miswaak. ” (Ibn Majah)
The Aadaab of Jumu’ah
1 . Every Muslim should make preparations for jumu’ah from Thursday. After the asr salaat of Thursday, he should make a lot of istighfaar. He should clean his clothes and keep them ready. If he does not have any perfume in his house, then if it is possible he should try and obtain some and keep it ready so that he will not get distracted with these things on jumu’ah. The pious people of the past have stated that the person to receive the most benefit on Friday will be that person who waits for it and who makes preparations for it from Thursday. The most unfortunate person will be he who does not even know as to when Friday will fall, so much so that he will ask the people in the morning as to which day this is. Some pious people used to go and stay in the jaame musjid from the night of jumu’ah in order to make full preparations for the following day. (Ihya aul-Uloom, vol. 1, page 161)
2 . On the day of jumu’ah, ghusl should be made and the hair of the head and the rest of the body should be thoroughly washed. It is also very virtuous to use the miswaak on this day.
3 . After making ghusl, a person should wear the best clothing that he possesses, and if possible he should also apply some perfume. He should also clip his nails.
4 . He should try and go very early to the jaame musjid. The earlier a person goes, the more reward he will receive. Rasulullah (Allah bless him & give him peace) said: “On the day of jumu’ah, the angels stand at the entrance of that musjid in which jumu’ah salaat is to be offered. They write down the name of the person who enters the musjid first, and thereafter the name of the person who follows, and they continue doing this . The person who entered first will receive the reward of sacrificing a camel in the path of Allah, the one who followed him will get the reward of sacrificing a cow, thereafter a chicken, thereafter the reward of giving an egg as charity in the path of Allah. Once the khutbah commences, the angels close the register and begin listening to the khutbah. ” (Bukhari and Muslim)
In olden times, the roads and alleys used to be extremely busy in the mornings and at fajr time . All the people used to go so early to the jaame musjid and there used to be such a large crowd that it used to look like the days of eid . Later, when this habit was given up, people began saying that this is the first innovation in Islam . After writing this, Imam Ghazali rahmatullahialayh says: “Aren’t the Muslims ashamed of themselves that the Jews and Christians go so early in the morning to their synagogues and churches on Saturdays and Sundays. Those who are businessmen go so early to the bazaars in order to do their buying and selling . Why don’t the Muslims do the same?” The reality of the situation is that the Muslims have totally reduced the value of this blessed day. They do not even know what day this is, and what a high status it has. How sad it is that the day which was more valuable than eid in the eyes of Muslims of the past, which Rasulullah (Allah bless him & give him peace) was proud of and the day which was not granted to the previous nations has become so dishonoured at the hands of Muslims today and it is such a great ingratitude to the favour of Allah Ta’ala that the consequence of all this can be seen with our very eyes. ”
5 . By going walking for the jumu’ah salaat, one gets the reward of fasting for one year for every step that he takes. (Tirmidhi)
6 . On Fridays, Rasulullah (Allah bless him & give him peace) used to recite Surah Alif Laam Meem Sajdah and Surah Hal Ataa, in the fajr salaat . These Surahs should therefore be occasionally recited in the fajr salaat on Fridays . Occasionally they should be left out so that people do not regard their recitation as wajib.
7 . For the jumu’ah salaat, Rasulullah (Allah bless him & give him peace) used to recite the following Surahs: al-Jumu’ah and al-Munaafiqun, or al-A’la and al-Ghaashiyah .
8 . There is a lot of reward in reciting Surah Kahf either before the jumu’ah salaat or after it. Rasulullah (Allah bless him & give him peace) said: “The person who recites Surah Kahf on Fridays, a nur will appear for him from below the arsh as high as the skies. This light will help him in the darkness of the day of resurrection . And all the sins which he may have committed from the last Friday till this Friday will be forgiven . ” (Sharh Sifrus-Sa’aadah) The ulama have written that this Hadith refers to minor sins because major sins are not forgiven without making taubah.
9 . There is more reward in reciting durood on Fridays than on other days . It has been mentioned in the Hadith that durood should be recited abundantly on Fridays .
The Virtues and Importance of Jumu’ah Salaat
Jumu’ah salaat is fard-e-ayn . It has been established from the Quran, Hadith and the consensus of the ummah. It is one of the most salient features of Islam. The person who rejects jumu’ah salaat is a kaafir. The one who misses it without any valid excuse is a faasiq.
1 . Allah Ta’ala says in the Quran:
Translation : “O you who believe! When the call for jumu’ah salaat is made, hasten towards the remembrance of Allah Ta’ala and leave all transactions. This is best for you if only you know . ”
In this verse, “remembrance” refers to the jumu’ah salaat and khutbah. “Hasten” means that one should go with great concern and care.
2 . Rasulullah (Allah bless him & give him peace) said: “The person who has a bath on Friday, purifies himself as far as possible, applies oil to his hair, applies perfume, leaves for the musjid, when he arrives at the musjid he does not sit down by removing anyone from his place, offers as many nafl salaats as possible, when the imam delivers the khutbah he remains silent – then his sins from the previous jumu’ah till now will be forgiven. ” (Bukhari)
3. Rasulullah (Allah bless him & give him peace) said: "The person who has a bath on Friday and goes early to the musjid on foot, and not by a vehicle, listens to the khutbah and does not do any foolish act while it is being delivered, will get the reward of one year’s ibaadah, one year’s fasting, and one year’s salaat for every step that he takes." (Tirmidhi)
4. Hadrat Ibn Umar and Abu Hurayrah radiallahu anhuma narrate that they heard Rasulullah (Allah bless him & give him peace) saying: "People should abstain from leaving out jumu’ah salaat. If not, Allah Ta’ala will put a seal over their hearts whereby they will fall into severe negligence." (Muslim)
5. Rasulullah (Allah bless him & give him peace) said: "The person who misses out three jumu’ahs without any valid reason, Allah Ta’ala puts a seal over his heart." (Tirmidhi) In another narration it is mentioned that Allah Ta’ala becomes displeased with him.
6. Taariq bin Shihaab radiallahu anhu narrates that Rasulullah (Allah bless him & give him peace) said: "The jumu’ah salaat with jama’at is a duty which is wajib on every Muslim with the exception of the following four persons: (i) a slave, that is, one who is owned by someone according to the rules laid down by the Shariah, (ii) a woman, (iii) an immature boy, (iv) a sick person." (Abu Daud)
7. Ibn Umar radiallahu anhu narrates that Rasulullah (Allah bless him & give him peace) said the following in regard to those who leave out jumu’ah: "It is my earnest desire that I appoint someone as imam in my place while I go and burn the homes of those who do not attend the jumu’ah salaat." (Muslim) A similar Hadith has also been related with regard to leaving out jama’at. We have mentioned this Hadith previously.
8. Ibn Abbas radiallahu anhu narrates that Rasulullah (Allah bless him & give him peace) said: "The person who leaves out jumu’ah salaat without a valid reason is written down as a hypocrite in a book that is absolutely protected from any changes and modifications." (Mishkaat) In other words, he will be labelled as a hypocrite forever. However, if he repents or Allah forgives him solely out of His mercy, then this is another matter.
9 . Hadrat Jaabir radiallahuanhu narrates that Rasulullah (Allah bless him & give him peace) said: “Jumu’ah salaat becomes obligatory on the person who believes in Allah Ta’ala and the last day, except for the sick, musafir, woman, child, and a slave. If a person occupies himself in something unnecessary, or in some transaction, Allah Ta’ala also turns away from him and does not worry about him and Allah is worthy of all praise. ” (Mishkaat) In other words, He is not affected by anyone’s ibaadah nor does He benefit in any way. His essence and being will remain the same irrespective of whether anyone praises Him and worships Him or not.
10 . Hadrat Ibn Abbas radiallahuanhu says that the person who leaves out several jumu’ah salaats consecutively has in fact turned away from Islam. (Ash’atulLama’aat)
11 . A person asked Ibn Abbas radiallahuanhu regarding a person who passed away and who should not join the jumu’ah and jama’at salaats: “What do you have to say regarding such a person?” He replied: “That person is in jahannam . ” This person continued asking him this question for a full month and he gave him the same reply. (Ihyaa ul-Uloom)
Even by merely glancing at these Ahadith, one can come to the conclusion that the Shariah has laid great stress on jumu’ah salaat and that severe warnings have been given to the one who leaves out jumu’ah . Can a person who claims to be a Muslim still have the audacity of leaving out this fard duty? [end of section quoted from Heavenly Ornaments | <urn:uuid:f46525b8-5a38-4dee-b25b-846afca5ee2f> | CC-MAIN-2019-47 | http://www.ummah.co/the-virtues-and-sunnahs-of-jummah/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670821.55/warc/CC-MAIN-20191121125509-20191121153509-00180.warc.gz | en | 0.959866 | 6,561 | 2.890625 | 3 |
Chronic pain affects approximately 19% of adults in Europe . In Austria 25% of the adult population is affected by chronic pain . These prevalence rates vary by the types of conditions of chronic pain and rates are higher for women and older adults or subjects with lower educational level [3–5]. Sociodemographic factors, such as sex interact with the mechanisms of coping with chronic pain [2, 6, 7] and therefore play an important role in the consequences of chronic pain. One of the most frequently reported problems for adults living with chronic pain is sleep disturbance [1, 4, 8]. It is notable that approximately two thirds of the general population with chronic pain are less able or unable to sleep because of pain . According to a study carried out in primary care, approximately 40%s of chronic pain patients reported poor sleep quality, where age, female sex, low income, higher pain intensity and the presence of depression were significantly associated with poorer sleep quality . In a clinical setting, more than half of patients with chronic pain reported insomnia, and sleep disturbance was significantly associated with pain intensity, sensory pain ratings, affective pain ratings, general anxiety, general depression, and health anxiety .
The association between chronic pain and sleep disorders is bidirectional ; on the one hand pain leads to sleep disturbance and on the other hand, patients with persistent insomnia often develop chronic pain . Psychological factors, such as mood and pain-related attention , and pre-sleep arousal have been shown to play an important role in the interaction between sleep quality and chronic pain. Common mental disorders, such as depression and anxiety often emerge together with chronic pain [2, 3, 14, 15] and with insomnia [16, 17]. When depression and insomnia co-occur in chronic pain patients, they have adverse effects on many pain outcomes .
Studies on sleep quality in chronic pain patients are scarce in the Austrian population. The aim of the present analysis was to investigate the association between different dimensions of chronic pain, such as pain intensity, sensory dimension, and affective dimension and different dimensions of sleep quality, such as time before falling asleep, sleep fragmentation, sleep duration, and recovering effects of sleep in subjects suffering from chronic pain. Additionally, it was the aim to evaluate how this association changed by controlling for possible confounders, such as sociodemographic factors, pain characteristics, and physical and psychological quality of life.
This analysis is part of a larger study with the primary aim to assess the cost of illness in patients which chronic pain in Austria. Patients from three different hospital-based outpatient clinics were included in the study: the outpatient clinic of the Department of Physical Medicine and Rehabilitation of the Medical University of Vienna, the headache clinic of the Department of Neurology of the Medical University of Vienna and the pain clinic of the Orthopedic Hospital Speising in Vienna. Inclusion criteria were: chronic pain (pain lasting for at least 3 months) at any body site, age between 18 and 65 years, sufficient knowledge of the German language, and currently no cancer and no psychiatric diseases. Exclusion criteria were current cancer treatment or current psychiatric inpatient treatment according to the patients’ report.
Patients were fully informed about the study and the possibility to participate by the treating doctor during a visit to the clinics. In the headache clinic, patients were additionally recruited via mail and asked to participate in the study. Recruitment of study participants lasted from December 2012 to February 2014. All subjects signed an informed consent form to participate in the study. The local ethics committee of the Medical University of Vienna approved the study (EK-number 1624/2012).
Location of chronic pain was assessed with a two-sided homunculus diagram, and subjects were asked to show on this drawing the body site or body sites in which they experienced chronic pain. In total, 14 different body locations could be identified. This approach was also used in publications based on the Austrian Health Interview Survey . Pain characteristics were assessed with the short-form McGill pain questionnaire (SF-MPQ) . This questionnaire consists of 15 pain descriptors which are assessed on a Likert scale from 0 (none) to 3 (severe), a visual analogue scale (VAS) for the current overall pain intensity ranging from 0 (no pain) to 10 (most severe pain imaginable) and a present pain intensity (PPI) scale to indicate overall pain intensity by labeling current pain from 0 (no pain) to 5 (excruciating pain). The first 11 items were summed to indicate the sensory dimension of pain (e. g. throbbing and aching), with a possible score from 0 to 33. The next four items were summed to indicate the affective dimension of pain (e. g. fearful and sickening), with a possible score from 0 to 12. The final score for the SF-MPQ was calculated by summing the scores for the sensory dimension, the affective dimension, PPI, and VAS. The total score has a possible range from 0 to 60. For the study, the German version of the SF-MPQ was used .
Sleep was assessed with a questionnaire used in a previous study . From this questionnaire, four questions regarding sleep were applied: “how long do you need to fall asleep?”, which could be answered by “up to 30 min”, “up to 1 h” and “longer than 1 h” . “How often is your sleep interrupted due to pain?”, which could be answered by “not at all”, “1–3 times per night” and “3 times per night and more” . “How long do you sleep in total?”, which could be answered by “less than 5 h”, “5–7 h” and “7 h and more” . “Do you feel fresh and recovered after waking?”, which could be answered by “yes” or “no”. Additionally, the use of sedative medication was assessed with the question “do you use prescribed medication for sleep, such as sedatives, antidepressants, anxiolytic drugs or over the counter medication against sleeping problems?”, which could be answered by “yes” or “no”.
As the covariate, health-related quality of life (HRQOL) was assessed by the use of the short-form 12 health survey (SF-12 questionnaire), and summed scores for the dimensions of physical and psychological quality of life were calculated . Further covariates were sex, age, and level of education in three levels: primary education (compulsory school up to the age of 15 years), secondary education (apprenticeship, professional/commercial school, and high school up to the age of 19 years), and tertiary education (university).
Statistical analyses were calculated with IBM SPSS Statistics V21.0. Descriptive statistics were applied to describe the sample characteristics. For this, either mean and standard deviation (SD), where appropriate or numbers (N) and percentages (%) are indicated. To describe the association between pain characteristics and sleep parameters, analyses of variance (ANOVA) was applied. Finally, stepwise logistic regression models were calculated with the four dichotomized sleep parameters as the dependent variables and the total score of SF-MPQ as the independent variable. After an initial crude model, in model I adjustment was made for age, sex, and level of education. In model II additional adjustment for duration of chronic pain and the number of painful body sites was made. Finally, in model III, additional adjustments were made for the summed scores for physical and psychological quality of life according to the SF-12. Results are presented as odds ratios (ORs) and 95% confidence intervals (95% CI).
In total 121 patients with chronic pain were included in the study. The age range was from 23 to 65 years, and almost three quarters were women. Most of the subjects had secondary education, but the proportion with university as the highest education (one fifth) was remarkably high (Table 1). The majority of subjects indicated that the pain had already lasted for longer than 10 years. Most subjects indicated more than one body site with chronic pain. The most common body sites were lower back (57.9% of the patients), neck (51.2%), legs (50.4%), shoulders (44.6%), head (35.5%), hips (32.2%), feet and toes (28.9%), arms (25.6%), upper back (24.8%), hands and fingers (23.1%), and face (20.7%). Mean values for the dimensions of SF-MPQ are listed in Table 1. Approximately one fifth of the subjects indicated that they needed more than I h to fall asleep. The majority reported that they woke up regularly due to pain and one third indicated a sleep duration of less than 5 h. Almost two thirds answered that they perceived their sleep as non-recovering (Table 1). The question regarding the use of sleep medication was answered by 25.6% with “yes” and 72.7% with “no” (1.7% missing).
Characteristics of the 121 participants (male 32, female 89, mean age 49 ± 9 years) with chronic pain
Mean or N
SD or %
Level of education
SF-MPQ (sensory dimension)
SF-MPQ (affective dimension)
SF-MPQ (PPI score)
SF-MPQ (total score)
Duration of chronic pain (years)
Number of painful body sites
SF-12 (physical sum score)
SF-12 (psychological sum score)
Up to 30 min
Up to 60 min
Longer than 60 min
1–3 times per night
More than 3 times
More than 7 h
Less than 5 h
Recovering effect of sleep
According to the results of ANOVA, there was a clear association between the affective dimension of pain, the PPI score, the VAS score, the total SF-MPQ score and the duration until falling asleep. There was also a clear association between the affective dimension, the PPI score, and the total score of pain experience with the probability of waking up due to pain. Furthermore, the sensory dimension, the affective dimension, the PPI score, and the total score were significantly associated with the probability of perceiving sleep as recovering. There was no significant association of any pain measure with sleep duration (Table 2).
Mean values of the different dimensions of the SF-MPQ by different sleep parameters (ANOVA)
Up to 30 min
Up to 60 min
Longer than 60 min
1–3 times per night
More than 3 times
More than 7 h
Less than 5 h
The binary logistic regression analyses revealed that one point more in the total score of SF-MPQ increased the odds of needing more than 30 min for falling asleep, waking up more than three times due to pain, sleeping less than 5 h, and perceiving the pain as non-recovering, by 6% (crude model in Table 3). Adjusting for socio-demographic factors (model I, Table 3), duration of pain, and number of painful body sites (model II, Table 3) did not clearly alter the association between pain and sleep parameters; however, adjusting additionally for physical and psychological quality of life lowered the odds ratio (OR) and the association was no longer significant; therefore, a modifying effect of the quality of life on the association between pain perception and sleep quality can be assumed (model III, Table 3).
Association between total SF-MPQ-score (independent variable) and the various sleep parameters (dependent variables). Results of stepwise bivariate logistic regression models
OR (95% CI)
OR (95% CI)
OR (95% CI)
OR (95% CI)
Falling asleep after more than 30 min
Waking up more than three times due to pain
Sleep duration < 5 h
No perceived recovering sleep
The results of the present study showed notable sleep disturbances in the sample of patients suffering from chronic pain. The affective dimension of pain and pain intensity were particularly associated with sleep problems. The time needed for falling asleep, frequency of waking up in the night, and experiencing sleep as recovering were sleep dimensions most often influenced by pain, whereas sleep duration was only marginally affected by the perception of pain. Interestingly, the association between sleep quality and pain was not influenced by sociodemographic variables, duration of pain or the number of painful body sites; however, physical and psychological quality of life modified this association, since adjustment for those factors lowered the OR and led to a loss of significance of this association.
In the present study, the majority of the patients with chronic pain were females. This relates to the fact that chronic pain disorders have considerable higher prevalence in females than males ; however, adjusting for sex did not affect the association between pain and sleep quality (Table 3), hence the association between pain and sleep quality did not show gender differences. In another study, however, female patients with chronic pain reported poorer sleep quality than males . The majority of the participants in our study reported falling asleep in up to 30 min (Table 1). This result might indicate that falling asleep is not the major sleep difficulty for the majority of chronic pain patients. Nevertheless, the proportion of patients with chronic pain who relatively quickly fall asleep was lower than the proportion (76%) in the general Austrian population . Furthermore, three out of four pain assessment dimensions (i.e. affective dimension, PPI and global pain intensity), as well as the total score of the SF-MPQ, showed significant associations with sleep latency (Table 2). These results emphasize a link between sleep onset latency and intensity of chronic pain and pain-related cognition. In comparison, Tang et al. reported in their study on patients with chronic back pain that affective pain ratings and health anxiety were significant predictors of impaired sleep latency . Sleep fragmentation (1–3 times per night) was reported in almost half of our patients (Table 1). In contrast, in the observation of the general population only 15% described difficulties in sleep maintenance . In the present study, more than half the subjects reported sleeping 5–7 h per day (Table 1). Sleep duration showed no significant associations with the mean values of the different dimensions of the SF-MPQ by the form of the different sleep parameters (Table 2). These results are in accordance with the results of the study conducted by Smith et al. , who reported an average sleep duration of 6 h per day in a sample of patients suffering from chronic pain. Nevertheless, while sleep duration did not seem to be impaired in our study, recovering effects of sleep were reported in only a minority of participants (Table 1), and significant associations between non-recovering sleep with the mean values of the different dimensions of the SF-MPQ by the form of the different sleep parameters were found (e.g. sensory dimension, affective dimension, PPI and total score, Table 2).
In our study, the affective dimension of chronic pain (e.g. experiencing pain as tiring exhausting, sickening, fearful, and cruel punishing) showed the highest impact on sleep quality next to pain intensity. This finding underlines the strong effect of psychological factors on the association between pain and sleep. This fact is in line with previous literature, where common mental disorders, such as depression [13, 24–27] and anxiety [24, 26, 27] were highly associated with sleep disturbances in patients with chronic pain.
Sleep disturbance experienced by patients with chronic pain is receiving growing attention as an important factor in the quality of life. In our study, when controlling for the physical and psychological dimensions of the quality of life, the association between pain parameters and sleep quality was lowered and not significant any more. This means that at a given level of quality of life we found no significant association between pain dimensions and sleep dimensions. This can also be interpreted that physical and psychological quality of life mediates the association between pain and sleep quality. Quality of life seems to have a central role for patients with chronic pain, which is in line with a previous Austrian study .
Despite the observational nature of our study, some possible clinical implementations should be discussed. Our results might support the assumption that sleep and pain have a bidirectional and reciprocal relationship; therefore, clinicians who manage patients with chronic pain should focus on interventions that relieve pain, as well as on assessing and treating sleep disturbance. This is, however often only addressed as a secondary concern . In addressing sleep quality in patients with chronic pain, it has to be considered that the effect of various medication groups on sleep in chronic pain patients are inconclusive. While non-steroidal anti-inflammatory drugs are sleep neutral, antidepressants and opioids can have positive as well as negative effects on sleep . Finally, our results underline the importance of psychological factors, such as mood, anxiety, depression, and quality of life in the association between pain and sleep quality. These factors should be routinely addressed in the management of patients with chronic pain in terms of assessment, monitoring and defining treatment goals; therefore, a multidisciplinary approach will often be required in order to obtain more comprehensive improvements for patients in medical, functional, and social contexts.
The present study has some limitations. Firstly, it relies solely on self-reported measures of sleep disturbance; therefore, subjective sleep problems might be overestimated by the patients. Nevertheless, a study by O’Donoghue et al. showed that participants with chronic low back pain demonstrated significantly poorer sleep, irrespective of the kind of sleep evaluation (objective or subjective) . Furthermore, the cross-sectional design did not permit any conclusions regarding the direction of the relationship between pain and sleep. It cannot be determined from the results whether sleep disturbance is only a marker for nociceptive processes or whether insomnia might also contribute to hyperalgesia. Additionally, it has to be mentioned that it was only the secondary aim of the study to evaluate the association of pain and sleep quality. The sample was set for the primary aim, to assess the cost of illness in patients with chronic pain. The size of the sample is also a limitation, yielding a low power for some statistical calculations. Finally, we did not assess the clinical diagnoses of common mental disorders, such as depression, anxiety, or stress-related disorders, only the mental dimension of quality of life, which limits conclusions about the mediation of psychological factors.
Various sleep problems are significantly associated with pain in patients suffering from chronic pain. Physical and psychological dimensions of quality of life notably influence both pain perception and sleep quality and therefore modify the association between pain perception and sleep quality. Because comorbid sleep problems and pain have been related to higher disability, the need to improve sleep quality among patients with chronic pain, and to reduce pain among patients with insomnia, should be an important part of future research.
Open access funding provided by Medical University of Vienna. We would like to thank the heads, doctors and staff of the three clinics in which the study took place. Furthermore, we would like to thank K. Viktoria Stein for her help in designing the study. Additionally, we would like to thank students Jasmin Ghozlan, Philipp Köppen, Matthias Krauße, Matthias Macsek, Melanie Narodoslavsky-Gföller, and Matthias Ranftler for performing the interviews with the patients. We are also grateful to Mark Ackerley, Professional Member of the Society for Editors and Proofreaders, for the linguistic review of this paper. Furthermore, we thank Andrew J. Haig, MD, active Emeritus Professor of Physical Medicine and Rehabilitation at the University of Michigan, for his comments on this paper.
Conflict of interest
M. Keilani, R. Crevenna, and T.E. Dorner declare that they have no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. | <urn:uuid:124cb399-8533-4ccb-a2b2-52f3be5278dc> | CC-MAIN-2019-47 | https://www.springermedizin.at/sleep-quality-in-subjects-suffering-from-chronic-pain/15030866?fulltextView=true | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669847.1/warc/CC-MAIN-20191118205402-20191118233402-00180.warc.gz | en | 0.945306 | 4,204 | 2.546875 | 3 |
Lesson 6: Principles of Biblical InterpretationRelated Media
As a Protestant I cherish the NT teaching on the priesthood of believers—that each Christian has the right to his own interpretation, but also that each Christian has the responsibility to get it right. ―Daniel Wallace
When it comes to making claims about what the Bible means, sometimes we hear comments from Christians or non-Christians like the following: “Well, that’s just your interpretation.” “The Bible can be made to say anything you want.” “You can’t really understand the Bible. It is full of contradictions.” “No one can understand the true meaning of anything anyone says.” Or, someone sitting in a Bible study might say, “This is what the Bible means to me.” All of these types of comments are about principles of biblical interpretation also called in theological jargon hermeneutics. Welcome to our postmodern world. Pilate’s question lives on: “What is truth? (John 18:38).”
Some issues that we as Christians face regarding the topic of biblical interpretation include: How does divine inspiration and human authorship affect biblical interpretation? What does a text mean? What are some general principles of interpretation? How do we interpret the Old Testament? How do we interpret the New Testament? These are all critical questions for us to consider as we seek to become better interpreters of God’s word, the Bible.
What Does a Text Mean?
The last lesson looked at the topic of inspiration and found that the Bible is both a human book and a divine book. There are certain implications of this for biblical interpretation. The first is that the human authors had a specific historical audience, context and purpose. These authors used their own language, writing methods, style of writing and literary form of writing. The divine authorship of the Bible gives it its unity and the ultimate source of all interpretation is from God. In the book of Genesis Joseph was asked about the meaning of some divinely given dreams and he replied, “Don’t interpretations belong to God? (Gen 40:8).
So let’s just start with the most basic question. What does a text mean? The answer to this question is that a text means what the author intended it to mean. If there is only one thing you learn from this lesson this is it. For a simple example, if you wrote a letter with some statements in it that are a little ambiguous, then what does the letter mean? Does it mean what you intended it to mean or how the readers interpret it? Of course it means what you intended it to mean. The true meaning of a text resides in the authorial intent of the text. This leads us to the first primary and fundamental principle of interpreting the Bible.
General Principles of Biblical Interpretation
Principle 1: Interpretation must be based on the author’s intention of meaning and not the reader. This means we must get into the author’s context, historically, grammatically, culturally and the literary forms and conventions the author was working in. To be able to do this some good Bible study tools are needed since we are 2000 years or more removed from the biblical authors and their context is very different than ours. The first tool that any one should get is a good study Bible with notes that explain historical and cultural background information. Most major Bible translations come in editions with these types of notes but by far the NET Bible with its over 60,000 notes surpasses them all. Get the most extensive Study Bible that goes with the translation you use. After this, good evangelical commentaries are essential tools to study the Bible but make sure to look at a couple to get a variety of perspectives. When someone in a Bible study states what the verse means to him, we need to redirect and clarify that the meaning is what the author intended. After that the question then is how that historical meaning applies to us today. The second principle of biblical interpretation should also be considered foundational.
Principle 2: Interpretations must be done in the context of the passage. What does the following mean? “It was a ball.” Well, the answer depends on the context. Consider the following sentences: The baseball umpire saw the pitch drift to the outside and said, “it was a ball.” We went to the dance last night, in fact it was so formal “it was a ball.” As I was walking along the golf course I spotted something small and white in the tall grass, “it was a ball.” I had so much fun at the game night, “it was a ball.” In each case the word ball means something different. Therefore, context determines meaning! The nearest context must given the most weight in interpretation. First, there is the near context of the sentence, then the paragraph, then the section and then the book and even author. The interpreter should look at all these circles of context to be able to correctly assess the meaning.
Far too often people try to interpret a verse by itself in isolation without looking at the context itself. For example, consider the verse Revelation 3:20 which is sometimes used as an illustration for evangelism. Behold, I stand at the door and knock; if anyone hears My voice and opens the door, I will come in to him and will dine with him, and he with Me (Rev 3:20; NASB).1 If this is all you looked at, it would be easy to understand the verse in terms of someone asking Jesus into his or her life for the first time. But the context in the preceding verse (v. 19) is talking about discipline of those whom Jesus loves, which would most naturally refer to believers. Also, in looking at the larger paragraph the passage is to a church
(Rev 3:14, 22). The verse is really addressed to believers who need to repent from their sin and return to fellowship with God.
Principle 3: Interpret the Bible literally (or normally) allowing for normal use of figurative language. Take the plain meaning of the text at face value. When the literal does not make sense you probably have a figure of speech. For example, Isaiah 55:12 states the trees of the field will clap their hands. Since trees do not have hands or clap this must be a figure of speech. Look for words such as “like” or “as” which can also communicate a figure of speech. Figures of speech and illustrations give the Bible a powerful and colorful means of expression. They are an important part of the normal expression of language.
Principle 4: Use the Bible to help interpret itself. Interpret difficult passages with clear ones. This is sometimes called the law of non-contradiction. Because the Bible is God’s word, and God is true, the Bible will not contradict itself. For example, there are clear passages that teach the doctrine of eternal security, that once a person is truly saved he or she cannot lose salvation (John 5; Rom 8). Some passages in the Bible are very hard to interpret like Hebrews 6:4-6.2 So I would let the overall and clear theology of the Bible influence me that a very hard passage like Hebrews 6 is not teaching that someone can lose his salvation. Also, use the New Testament to help interpret the Old Testament. This recognizes the progressive nature of revelation, that is the Bible is giving more revelation on topics over time. But one must start by interpreting the Old Testament text in its context before a New Testament consideration is made.
Principle 5: Interpretation must be distinguished from application. While there is one interpretation that is historical, there are many applications that can be carried over to our modern context. Build an application bridge from the interpretation to the timeless principle and then to the application now. For example in John 12, Mary anoints Jesus with very expensive oil. The historical context records a historical event. The interpretation relates only to what Mary did to Jesus. What about us today? An application might be that we are willing to give sacrificially for the Lord’s work and give Jesus acts of worship as Mary did. Or when Jesus states the principle in Matt 7 to love one’s enemies it is a general command that I might apply specifically by loving a worker who undermines me or a neighbor who offends me.
Principle 6: Be sensitive to distinctions between Israel and the church and Old Covenant and New Covenant eras/requirements. Promises made to Israel in the Old Testament cannot automatically be transferred to the church in which we are a part. For example, the land promises were given to Abraham and his descendants (Gen 12:7) but that does not include me, a Gentile Christian. Christians are not under the requirements of the Mosaic law (Rom 6:14). For example, in Lev 19:19 there is a command “you must not wear a garment made of two different kinds of fabric.” This was a binding command under the Mosaic law but not under the terms of the New Covenant. It is true that certain Old Testament commands repeated in the New Testament are still binding, but this is made clear by their repetition in the New Testament. The church was formed in Acts 2 with the descent of the Holy Spirit and most direct statements to and about the church occur after that. Also, there is a future for national Israel (cf. Rom 11) in which many Old Testament promises will yet be fulfilled and certain practices of the church age will come to an end at the second coming of Jesus (such as the Lord’s supper 1 Cor 11:26).
Principle 7: Be sensitive to the type of literature you are in. The Bible contains many different types of literature: law, narrative, wisdom, poetry, gospel, parable, epistle, and apocalyptic. Each of these types of literature has specific features that must be considered when interpreting a text. Some of these will be examined in the next section. For now we need to understand that where we are in the Bible makes a big difference on how we interpret and apply it.
Interpreting the Old Testament
Narrative Literature: Much of the Old Testament contains narrative literature. First, the passage needs to be interpreted in its historical context and then applications can be drawn from the characters and events. In the book of Judges, only one verse is given to the judge Shamgar. It reads, “After Ehud came Shamgar son of Anath; he killed six hundred Philistines with an oxgoad3 and he too delivered Israel” (Judges 3:31). Why did God include this passage? Yes, it records an historical event. Also, the verse teaches God’s delivering power can come in an unexpected way, not with a mighty army but with one man wielding an oxgoad.
Law: Realize that Christians are not under the law as a legal system (Rom 6:14) but that we are to fulfill the principles that stand behind the law of loving God and loving one’s neighbor (cf. Matt 22:37-40)? Sometimes the teaching is carried directly into the New Testament (e.g., Do not murder, etc). Other times, the New Testament takes a text and applies a principle from it. For example, “You must not muzzle your ox when it is treading grain” (Deut 25:4). Paul takes this verse, which refers to feeding a work animal and applies the principle of the Christian worker being worthy of tangible support. Paul states, “Elders who provide effective leadership must be counted worthy of double honor, especially those who work hard in speaking and teaching. For the scripture says, ‘Do not muzzle an ox while it is treading out the grain,’ and, ‘The worker deserves his pay’” (1 Tim 5:17-18, cf. 1 Cor 9:9). In general, if the Old Testament command in the law is not repeated in the New Testament, look for the principle behind the statement in the law and then try to apply that.
Wisdom Literature: Realize that much of the proverbial type of wisdom in the Old Testament is general truth based on observations but not absolute truths or promises. Two good examples are seen in the following: “A gentle response turns away anger, but a harsh word stirs up wrath” (Prov 15:1). Another one is, “Train a child in the way that he should go, and when he is old he will not turn from it” (Prov 22:6). Christians should not take these types of proverbial statements as promises of what will always happen but rather patterns that are generally true outcomes based on observation. A gentle answer will not always prevent an angry outburst but it is much more likely to than a harsh one. Christian parents who have a child who has gone astray from the faith may have done their best to train the child the right way but the child did not take it.
Poetry: Realize that poetry often has a greater use of figurate language than narrative or law. Also, Hebrew poetry’s main characteristic is parallelism. For example, Psalm 24 says, “The Lord owns the earth and all it contains, the world and all who live in it. For he set its foundation upon the seas, and established it upon the ocean currents. Who is allowed to ascend the mountain of the Lord? Who may go up to his holy dwelling place?” (Ps 24:1-3). Here we have three sets of pairs in side by side fashion with the second reference restating the basic idea of the first. The phrase “the earth and all it contains” is amplified by the phrase “the world and all who live in it”. The phrase “he sets its foundation upon the seas” is rephrased “established it upon the ocean currents.” The question of who is allowed to ascend to the mountain of the Lord is restated “Who may go up to his Holy Dwelling place?” Most English Bible translations will format poetry using indentation, which helps show the parallel ideas.
Interpreting the New Testament
Gospels: Understand that each writer has a specific audience for whom he is writing, and that he has selected his material for them. Matthew was written for a Jewish audience. Mark was written for a Roman audience. Luke was written for a Greek audience. John was written for a universal or Gentile audience. This can help us see nuances or explain differences between accounts. For example, in Matthew 19:1-12 and Mark 10:1-12 Jesus teaches on the hard topic of divorce. Both gospels state that a man who divorces his wife and marries another commits adultery against her. Mark alone though adds the point that if a woman divorces her husband and marries another she commits adultery against him. Why is this difference there? It probably has to do with the audience. Matthew is writing to a Jewish culture in which a woman could not divorce her husband while Mark is writing to a Roman audience in which one could.
Read the gospels not only vertically, that is, understanding what is said in each individual account, but also horizontally, that is, considering why one account follows another. For example, see Mark 2-3:6; what do these various accounts have in common? One can notice that they are all different stories that relate to the conflict that Jesus had with the Jewish leadership. Mark 3:6 reads, “So the Pharisees went out immediately and began plotting with the Herodians as to how they could assassinate him.” The stories are grouped in a way that gives an explanation as to why Jesus was rejected as strongly as he was.
Lastly, recognize that the gospels are in a transitional stage between Old and New Covenants. Jesus lived in the context of Judaism prior to the birth of the church. For example, Jesus is keeping the Old Testament prescribed feasts in many of his journeys to Jerusalem. Also, he is introducing changes that will be inaugurated with the start of the New Covenant. For example, in Mark 7 Jesus declared all foods clean which was a change from the Old Testament dietary laws.4
Parables.5 Parables are a form of figurative speech. They are stories that are used to illustrate a truth. There are parables in different parts of the Bible but Jesus was the master of them and many are found in the gospels (e.g., Matt 13, Mark 4, Luke 15). How then should we interpret the parables? First, determine the context that prompted the parable. Parables always arise out of a context. For example the Pharisees disdain for Jesus eating with tax collectors and sinners prompts Jesus to tell a parable about how God loves a lost sinner who repents (Luke 15). Second, understand the story’s natural meaning which is often taken from real life situations in first century Palestine. Third, ascertain the main point or truth the parable is trying to give and focus on this. Only interpret the details of the parables if they can be validated from the passage. Many details are there only for the setting of the story. For example, what is the main point of the mustard seed parable? Jesus stated: “The kingdom of heaven is like a mustard seed that a man took and sowed in his field. It is the smallest of all the seeds, but when it has grown it is the greatest garden plant and becomes a tree, so that the wild birds come and nest in its branches” (Matt 13:31-32). The parable is an illustration of the kingdom of heaven which starts small but grows to be very large in size. This seems to be the main point. The birds and the branches are probably there only to illustrate how large the tree has become.
Acts. Recognize that Acts is a theologized history of the early church. Acts tells what the church was doing from the human side of things and what God was doing from the divine side of things. For example, consider these passages on the early growth of the church which refer to the same event but from two different perspectives. “So those who accepted his message were baptized, and that day about three thousand people were added”. . . . (Acts 2:41) “And the Lord was adding to their number everyday those who were being saved” (Acts 2:47). Here we see what God is doing in and through the church. Also, we need to recognize that the church starts in Acts 2 with the baptism of the Holy Spirit. The baptism of the Spirit, the filling of the Spirit, church planting and gospel outreach characterize the events of the book. In addition, some events in Acts are descriptive of what happened not proscriptive of what is necessarily expected in the modern church. For example, Samaritan believers did not receive the Holy Spirit in Acts 8 upon faith in Jesus. They had to wait for Peter and John to get there. When Paul was bitten by a viper in Malta, yet he miraculously lived (Acts 28:1-5). These are descriptions of what happened and are not necessarily normative of what happens in the church today. So it probably would not be a good idea to start snake handling services!
The book of Acts is also a book of transitions. First there are key transitions in biography. This is especially true as the book focuses more on the ministry of Peter in the first portions of the book then shifts to Paul. There is also a transition in ministry focus from the Jews to the Samaritans and to the Gentiles. Lastly there is a geographical transition starting in Jerusalem taking the gospel outward into Samaria, Asia Minor, Europe and eventually Rome. In Acts 1:8 Luke gives us a rough outline of the progression emphasizing the progress of the gospel. “But you will receive power when the Holy Spirit has come upon you, and you will be my witnesses in Jerusalem, and in all Judea and Samaria, and to the farthest parts of the earth."
Epistles. Since the New Testament epistles are directed to churches and individuals in the church, they most directly apply to us today. Most commands given in the epistles are general enough in nature that we need to obey them, or in the case of promises we can claim them. For example in 1 Corinthians 15 there is a promise given for immortal bodies and eventual victory over death. These promises are not just for those in the local Corinthian church but the universal church of God.
In the epistles, pay special attention to logical connectors/conjunctions to explore relationships of clauses and sentences. Look for these types of words: “for, “therefore,” “but,” etc. For example Hebrews 12:1 reads, “Therefore, since we are surrounded by such a great cloud of witnesses, we must get rid of every weight and the sin that clings so closely, and run with endurance the race set out for us.” The word therefore points back to the previous chapter in which Old Testament saints were held up as people who had given a good testimony or witness of faith. The phrase “cloud of witnesses” then would naturally refer back to the people of the preceding chapter. In another example the author of Hebrews writes, “So since we are receiving an unshakable kingdom, let us give thanks, and through this let us offer worship pleasing to God in devotion and awe. For our God is indeed a devouring fire” (Heb 12:28-29). Here the word for sets up a subordinate idea giving the reason we as Christians should offer worship in devotion and awe to God.
Revelation. Revelation is the one book in the New Testament that is one of the hardest to interpret. There are several reasons for this. First, there are substantially different interpretative approaches on the overall timing of the book. Some see most of it as purely historical. Some see most of it as yet future. Second, there are many Old Testament allusions in Revelation. Allusions are phrases and references to the Old Testament without an explicit statement by John that he is quoting the Old Testament. So when John refers to the Old Testament he generally does not tell you he is doing so. Third, there is a greater use of symbolic language in Revelation than in other parts of the Bible. Revelation is in a form of literature known as apocalyptic.6
How can one get started? First, the book of Revelation promises a blessing to the one who reads it (Rev 1:3). So we should read it even if we do not completely understand everything. The basic thrust of Revelation’s message is clear. Jesus is coming again and will defeat the forces of evil. We can be assured of this. Other interpretative helps that can be given would be to interpret the seven churches as seven historical churches in existence in the first century A.D (Rev 2-3). Interpret chapter 4 onward as primarily future events from our perspective (Rev 1:18-19).7 Follow a generally chronological view of the book from chapter 4 sequencing the bowls, trumpets and seals, second coming of Jesus, millennial kingdom and eternal state. Use a study Bible with a good set of notes to help frame common interpretations and Old Testament backgrounds. Lastly, become a student of the book and keep working at it.
Conclusion and Summary
Biblical passages must be interpreted according to the intention of the author and in the context in which the statement is made. Interpretation must be distinguished from application. One must be sensitive to what type of literature one is in and how this may or may not apply to a believer in the church age. Interpreting the Bible is sometimes hard work but it’s always worth the cost. David reminds us of the value of God’s word, “They are of greater value than gold, than even a great amount of pure gold; they bring greater delight than honey, than even the sweetest honey from a honeycomb” (Ps 19:10).
- What types of interpretations have you heard where you questioned the method of interpretation?
- What would happen to interpretation if the church used reader centered interpretations as opposed to an author centered interpretations?
- How does the Holy Spirit help us in interpreting the Bible (1 Cor 2)?
- If the Holy Spirit is guiding us in interpretation why do godly Christians have differing interpretations on various passages?
- What is our relationship, if any, to the Old Testament Commandments/Law?
- Why are only 9 of the 10 commandments repeated in the New Testament? The Sabbath command is the one of the ten commandments that is not there.
- How does the distinction between the church and Israel affect application of the Old Testament?
- How do you know if something is symbolic or not?
1 The NET Bible gives a translation rendering that helps to alleviate this confusion. “Listen! I am standing at the door and knocking! If anyone hears my voice and opens the door I will come into his home and share a meal with him, and he with me” (Rev 3:20).
2 “For it is impossible in the case of those who have once been enlightened, tasted the heavenly gift, become partakers of the Holy Spirit, 5 tasted the good word of God and the miracles of the coming age, 6 and then have committed apostasy, to renew them again to repentance, since they are crucifying the Son of God for themselves all over again and holding him up to contempt (Heb 6:4-6 NET).
3 An oxgoad is simply a long stick with a pointed end that was used to prod animals into walking.
4 He [Jesus] said to them, "Are you so foolish? Don't you understand that whatever goes into a person from outside cannot defile him? 19 For it does not enter his heart but his stomach, and then goes out into the sewer." (This means all foods are clean.)(Mark 7:18-19 NET).
5 Adapted from Roy Zuck, Basic Bible Interpretation (Colorado Springs: Victor, 1991) 194-226.
6 A scholarly definition of Apocalyptic: “a genre of revelatory literature with a narrative framework, in which a revelation is mediated by an otherworldly being to a human recipient, disclosing a transcendent reality which is both temporal, insofar as it envisages eschatological salvation, and spatial insofar as it involves another supernatural world” J.J. Collins “Apocalypse: The Morphology of a Genre,” Semeia 14 (1979), 9. Revelation focuses on the future and spiritual world to a much greater degree than other portions of the New Testament and it is communicated in visions and symbolic language.
7 Revelation 1:19 gives a basic chronological outline of the book. “Therefore write what you saw, what is, and what will be after these things” (Rev 1:19 NET). (past: what you saw (Chapter 1:9-20); present: what is (Chapters 2-3); and future: what will take place after these things (Chapters 4-22:5). | <urn:uuid:4c31ca96-e55f-433a-a180-fcecdb20224e> | CC-MAIN-2019-47 | https://bible.org/seriespage/lesson-6-principles-biblical-interpretation | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669847.1/warc/CC-MAIN-20191118205402-20191118233402-00177.warc.gz | en | 0.959059 | 5,643 | 3.0625 | 3 |
Education and religion
In terms of legal impact, the establishment clause has historically garnered more attention because of the sweeping effect a single legal decision can have. In contrast, free exercise cases address issues that pertain largely to religious minorities, so their impact is smaller and more context dependent.
Typically, when justices decide Establishment Clause cases they are asked to determine whether an enactment effectively establishes, or supports, a state religion. There are generally three different judicial perspectives on establishment: strict separation, accommodation, and neutral separation. Strict separationists invoke the idea of a wall separating Church and State; for strict separationists there is no instance in which an enactment supporting religion would be tolerated (Neuhaus). The accommodation position, by contrast, holds that certain accommodations are permissible as long as government does not prefer one religion to another (Massaro). Finally, the neutral separation position examines enactments with a slightly different lens, arguing that what is most important is official state neutrality between religion and non-religion; adhering to the establishment clause may therefore at times mean accommodating religion in order to maintain neutrality between religion and non-religion (Fox; Temperman). Generally speaking, when focusing on the major court cases that have impacted public education, the neutral separation position has carried the day when it comes to issues such as school prayer, religious instruction, and released time.
For our discussion, this is important because it is the application of the 14th amendment to the 1st amendment that holds public schools and public school employees to the restrictions of the 1st amendment (Everson v. Board of Education). The application of the 14th amendment to the 1st amendment is also essential since it is the states, rather than the federal government, that hold substantive influence over public school curriculum and policy. Several watershed cases have firmly established the preference for the neutral separation position.
In McCollum v. Board of Education, which concerned a released time program of religious instruction held in public school classrooms, the court reasoned that the involvement of school personnel in the administration and execution of the program was tantamount to supporting religion, and the program was found unconstitutional. In contrast, the courts sided with the school district in Zorach v. Clauson, where the released time instruction took place off school grounds.
Key Court Cases
In the middle of the 20th century it was commonplace for the school day to begin with a religious prayer or invocation. Beginning in the early 1960s, cases made their way through the courts, and in every instance the court found such prayers violated the establishment clause. Further, because the prayer was broadcast at the start of the school day, students were a captive audience with no choice but to listen. The following year, in Abington School District v. Schempp (1963), the court reached a similar conclusion, striking down school-sponsored devotional Bible reading.
That is, if there is an educational purpose to studying religion, then presumably this would be permissible. The second significance of this case is that it offered the first two of what later became a three-prong test used to adjudicate Establishment clause cases.
The first prong asks: what is the primary purpose of the enactment? Is it religious or secular? The second prong asks: what is the primary effect of the enactment, religious or secular? In cases where the primary purpose and effect are secular, the enactment is said to be permissible. This formula is particularly useful when determining whether curriculum content, such as evolution or creationism, is permitted.
The law was ruled unconstitutional on the grounds that the primary purpose of the law was to advance and protect a religious view. Following the Epperson decision was the famous case Lemon v. It was famous mainly because of the establishment of the third prong used to adjudicate establishment clause cases.
At issue in this case was the question of whether public schools could reimburse private schools for the salaries of their teachers who taught secular subjects. Since the majority of the private schools were parochial, the matter fell under establishment. In deciding that it was unconstitutional for the public schools to pay the salaries of the parochial school teachers, the court determined that while primary purpose and primary effect were central to deciding constitutionality, a third prong, which says that the enactment must not foster an excessive entanglement between religion and government, was needed.
Paying the salaries of private school teachers who teach secular subjects may not serve a primarily religious purpose or have a primarily religious effect, but it certainly would foster an excessive entanglement between government and religion in that government would be very involved with accounting for its investments in a parochial school.
Contemporary Tensions
Other important Establishment Clause cases related to education include Wallace v. Jaffree.
This case dealt with the constitutionality of moments of silence. The state of Alabama had allowed for a moment of silence for the purpose of meditation or private prayer. While moments of silence with no explicit purpose have been found constitutional, this law was found unconstitutional on the grounds that it had a clear religious purpose.
The court, in its decision, found the law unconstitutional according to all three prongs of the Lemon test.
The courts have ruled similarly in more recent court cases such as Selman v. Cobb County. In summary, since the 1940s, when the 14th amendment was applied to the 1st amendment, public schools have been limited in what counts as permissible in relation to religion and public schooling. Recent guidelines, such as those issued under Presidents Clinton and G.W. Bush, emphasize that restrictions on religious expression are limited to school personnel while in their official capacity. For example, under the Equal Access Act, student-initiated religious groups are permitted at schools. However, teachers cannot create or lead these groups, though they are allowed to monitor them.
Free Exercise
It is worth mentioning briefly the role of the free exercise clause in public schools. The chief function of the free exercise clause is to provide protection to religious minorities where laws created by the majority might serve unintentionally to restrict their free exercise. The most famous free exercise case related to public schools is Wisconsin v. Yoder. In this case, members of the Amish community requested an exemption from state compulsory attendance laws. Wisconsin law required all students to attend school until the age of 16. The Amish requested an exemption from the last two years of schooling, which essentially would have amounted to the first two years of high school.
Their rationale was that the exposure Amish children would have could undermine their very way of life; indeed they claimed it threatened their survival. Ultimately, the court sided with the Amish for two very different reasons. First, acknowledging the importance of an education for participation in public life, the court reasoned that because the Amish live a self-sufficient life and by all outward expressions are a successful social unit, the exemption was warranted.
Second, they reasoned that laws should not serve to threaten the very way of life of a religious minority group and the state ought to be respectful, not hostile, to minority religious views. The law, then, sets clear parameters for what constitutes an establishment of religion and when individual free exercise should take precedent over generally applicable laws. One can conclude from this discussion that, contrary to the claim made by the religiously orthodox, public schools are not hostile to religion but rather are welcoming of religion in the public school in so far as it serves an educational purpose.
This next section treats curriculum. Where, if at all should religion reside in the curriculum? What are the strengths and limitations of its inclusion? And finally, how does its inclusion contribute to cultivating a democratic ideal? Curriculum Curriculum serves as a battleground in education. Perhaps more than other dimensions of schooling, it tells us what is worth knowing and understanding.
Curriculum, however, does not exist in a vacuum. Curriculum can be a deeply political issue, especially when dealing with the topics of science, history, and religion Erekson, There is also significant discussion on who should set the curriculum priorities the local school district, the states, or the federal government as well as how much freedom teachers should have to move away from the set curriculum Webb, How the curriculum treats religion has often created controversy.
This is an even more complex issue in a society that is becoming both more non-religious as well as more religiously diverse Pew Religious Center, Herbert Kliebard, the preeminent American curriculum historian, identifies four primary groups who have vied for supremacy in schools. These groups sought to define the U. They were humanists, social meliorists, those focused on child development, and social efficiency educators Kliebard, ; Labadee, Depending on which view enjoyed currency at a particular time in history, could determine whether religion, in some form, found its way into the formal narrative of schooling.
Whereas the humanists were primarily concerned with fostering in students intellectual skills through the traditional disciplines, social meliorists thought curriculum should have a focus on activism—social improvement. Developmentalists thought that it was important to design curriculum around the development of the individual learner and social efficiency advocates thought curriculum should be limited to preparation for the workforce.
As one examines different movements to include religion within the curriculum it is valuable to note which theoretical model is invoked. For these curricular approaches provide a lens into the view of religion with respect to larger society.
Limiting discussions to creationism and science misses far more consequential arguments for an important and relevant role for religion in the public schools.
Warren Nord has made perhaps the most convincing and comprehensive arguments for the centrality of religious ways of knowing to all disciplines Nord, For Nord it is not so much that religious perspectives have a stronger purchase on the truth of things, but rather the religious lens or a religious lens asks different sorts of questions than non-religious lenses and thus enlarges the conversations about various historical perspectives, economic theories, etc. For example, religion can serve as a type of critique of our current market-driven society or it can enlarge conversations related to scientific development, environmental sustainability, etc.
Nord, however, is not alone in his calls for including religion religious perspectives in the public school curriculum. Stephen Prothero and others have made a strong call for religious literacy Prothero, Particularly since the terrorist attacks of in the United States, there has been a collective realization that, generally speaking, Americans are largely ignorant when it comes to understanding much about religion Moore, Politicians and media outlets have often exploited this ignorance to create fear about Muslims, refugees, and the religious other.
The contention goes, the more illiterate we are, the more religious intolerance predominates. This illiteracy is not limited to Islam, but can be said to be a general religious illiteracy Wood, Nel Noddings has also made a forceful case for providing students with opportunities to explore existential questions in the public school classroom Noddings, She argues that students already come to school bogged down with these types of questions, so schools have an obligation to help students make sense of them Noddings, The Bible Literacy Project, an ambitious project endorsed by a wide range of academics and theologians provides a well-sourced textbook that can be used in schools Bible Literacy Project, Though, their intentions may be less educational and more religious, many states have passed legislation permitting the teaching of the Bible in public schools Goodman, The Bible used for literary or historical reasons seems justifiable and fully constitutional.
Multiculturalism A recent text by philosopher Liz Jackson makes the case that Muslims, in particular, are done a disservice when schools do not attend substantively to the study of Islam in schools. Her argument is based on three essential claims.
First, in the absence of a substantive treatment in schools, citizens are left with popular culture depictions of Muslims Jackson, These characterizations typically misrepresent Muslims. Second, the ways in which Muslims are depicted in social studies textbooks also take a narrow view. That is Muslims and Islam are largely depicted beginning in through the lens of terrorism Jackson, Finally, Jackson argues that preservice teacher preparation programs do not do sufficient work in preparing future social studies teachers to be knowledgeable about Muslims and Islam, and therefore they are ill-equipped to disrupt the narratives perpetuated in textbooks or through popular culture Jackson, Curricular Opportunities There are many ways in which religion can be addressed in public school curricula that are both constitutionally permissible and educationally justifiable.
Schools could provide world religion survey courses so that students have at least a superficial understanding of the range of religions in the world. Schools could offer controversial issues classes where religion could serve as both a topic and a perspective. Schools can study religious perspectives on a variety of current issues. In an increasingly diverse society, the ability to understand the perspectives of those from other faiths is vital for social cohesion and peace.
Ignoring differences does not make intolerance dissipate but often allows stereotypes and antagonism to flourish. However, this is not inconsistent with the view that a good general knowledge of religions, and as a result a sense of tolerance, are essential to the exercise of democratic citizenship. In its Recommendation on religion and democracy, the Assembly asserted: Knowledge of religions is dying out in many families.
More and more young people lack the necessary bearings fully to apprehend the societies in which they move and others with which they are confronted. The media — printed and audiovisual — can have a highly positive informative role. Some, however, especially among those aimed at the wider public, very often display a regrettable ignorance of religions, as shown for instance by the frequent unwarranted parallels drawn between Islam and certain fundamentalist and radical movements.
Politics and religion should be kept apart. However, democracy and religion should not be incompatible. In fact they should be valid partners.Blood Relation (रक्त संबंध ) - Reasoning trick in hindi - for ssc cgl , cpo , chsl , railway
By tackling societal problems, the public authorities can eliminate many of the situations which can lead to religious extremism. Education is essential for combating ignorance, stereotypes and misunderstanding of religions. Governments should also do more to guarantee freedom of conscience and religious expression, to foster education on religions, to encourage dialogue with and between religions and to promote the cultural and social expression of religions.
School is a major component of education, of forming a critical spirit in future citizens and of intercultural dialogue. It lays the foundations for tolerant behaviour.
By teaching children the history and philosophy of the main religions with restraint and objectivity and with respect for the values of the European Convention on Human Rights, it will effectively combat fanaticism. Understanding the history of political conflicts in the name of religion is essential. Even countries where one religion plainly predominates should teach about the origins of all religions rather than favour a single one or encourage proselytising.
In Europe, there are various concurrent situations. Education systems generally — and especially the State schools in so-called secular countries — are not devoting enough resources to teaching about religions, or — in countries where there is a state religion and in denominational schools — are focusing on only one religion.
Some countries have prohibited the carrying or wearing of religious symbols in schools. Unfortunately, all over Europe there is a shortage of teachers qualified to give comparative instruction in the different religions, so a European teacher training institute for that needs to be set up at least for teacher trainers. The Council of Europe assigns a key role to education in the construction of a democratic society,but study of religions in schools has not yet received special attention.
The Assembly observes moreover that the three monotheistic religions of the Book have common origins Abraham and share many values with other religions and that the values upheld by the Council of Europe stem from these values. Accordingly, the Assembly recommends that the Committee of Ministers: The Assembly also recommends that the Committee of Ministers encourage the governments of member states to ensure that religious studies are taught at the primary and secondary levels of State education, on the basis of the following criteria in particular: It is not a matter of instilling a faith but of making young people understand why religions are the sources of faith for millions; They should be teachers of a cultural or literary discipline.
However, specialists in another discipline could be made responsible for this education; Religion is an important aspect of European culture and plays a significant role for many people throughout Europe. However, it has become clear that — particularly in so-called secular countries — education systems are not devoting enough resources to teaching about religions, or — in countries where there is a state religion and in faith schools — are focusing on only one religion.
At the same time, religious traditions are dying out in many families. As a result, more and more young people lack the bearings they need to help them understand the societies in which they live and to which they face. It was therefore deemed necessary to consider the role of education systems with regard to religion.
Earlier committee activities 4. In its report on religion and democracy Doc. However, democracy and religion need not be incompatible and can be valid partners.
How religion may affect educational attainment
By tackling societal problems, the authorities can remove many of the causes of religious extremism. Education is the key way to combat ignorance, stereotypes and misunderstanding of religions. Governments should also do more to guarantee freedom of conscience and religious expression, to develop education about religions, to encourage dialogue with and between religions and to promote the cultural and social expression of religions.
During an exchange of views with the committee on 23 June the Council of Europe Commissioner for Human Rights, Mr Gil-Robles, said that many of the crisis situations which he had encountered were deeply rooted in cultural and religious tensions. The Commissioner also underlined the need to consider the setting up of a European teacher training institute for the comparative study of religions. On 17 November the rapporteur and others tabled a motion for a recommendation on the comparative study of religions and intercultural dialogue.
The motion points out the key role that the Council of Europe assigns to education in the construction of a democratic society and states that the comparative study of religions in schools has not yet received special attention. Knowledge of religions is integral to knowledge of the history of humanity and civilisations. It should be distinguished from belief in or practice of a specific religion.
Given the many possible prejudices and stereotypes regarding religions, it is important to have structured, rational instruction in schools. That would help combat fanaticism, fundamentalism and xenophobia more effectively. The Bureau of the Assembly asked the Committee on Culture, Science and Education for a report and the latter appointed me as rapporteur at its meeting on 29 January Some however, especially among those aimed at the wider public, very often display a regrettable ignorance of religions, as shown for instance by the frequent unwarranted parallels drawn between Islam and certain fundamentalist and radical movements.
The committee held its first exchange of views on the subject on 18 March The issue is a complex and sensitive one, involving deeply rooted religious, cultural and historical beliefs. It is therefore a subject that needs to be treated with great caution.
Ignorance often gives rise to intolerance, fanaticism, fundamentalism and terrorism. Schools play a key role because they impart knowledge of and respect for others.
Better knowledge of others would help develop intercultural dialogue and religious tolerance. Schools should teach religions, their history, their philosophies and their practices as a comparative study and in a structured and reasoned manner.
The committee members think that as well as concerning itself with comparative study of religions and intercultural dialogue the report should adopt a wider approach to the subject.
School courses should teach not only factual knowledge but also about the nature of religious experience. They should not confine themselves to European religions but extend to other continents' religions now represented in Europe.
Religious instruction must not be bound by national stereotypes. A series of hearings enabled me to collect relevant views on the question, for example from religious leaders and history teachers.
It is also important to take in the non-religious as well as the religious stance. As religious beliefs are deeply held, there has to be a modicum of consensus as a starting point.
He pointed out that while teachers were responsible for actual teaching, a range of parties were involved in education: New syllabuses should take into account that the Bible and the Koran were not scientific documents. He raised the question of whether this subject area was the sole preserve of history teachers.
Religion might also fit into the education for citizenship syllabus. At all events, it should not be left entirely to teachers to draw up the new syllabuses. The following are some of the comments made by committee members: At the end of the discussion the committee decided to hold a thorough hearing with the representatives of the main religions to be found in Europe. This hearing, held in Paris on 2 Decemberdid not have a structured programme. Instead of a series of statements followed by questions and answers, the aim was to enable committee members and the invited religious leaders to debate freely on issues relating to school teaching of religion.
Relationships and lifestyle
Is it necessary to teach about religions in schools and why? What should be the core content of religious instruction? What ways and means should be considered? Who should teach about religions and in what contexts? What account should be taken of the different religions in drawing up syllabuses and in teacher training?
The following religious leaders were invited on a personal basis and not as official representatives of their respective religions: A number of interesting ideas emerged at the hearing: People therefore had to be able to receive, practise and express education and religion at the local level.
Bishop Athanasios - Religion should not be a mere item of knowledge complete with its historic and sociological aspects: Religious education also provided an opportunity for developing the spiritual dimension in students. Bishop Athanasios - European education systems varied widely, and consequently did not all share the same point of view on religious education. | <urn:uuid:1b748538-ab35-464f-82f5-b195c2367a48> | CC-MAIN-2019-47 | https://posavski-obzor.info/and-relationship/education-and-religion-relationship-questions.php | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671260.30/warc/CC-MAIN-20191122115908-20191122143908-00138.warc.gz | en | 0.961012 | 4,495 | 3.53125 | 4 |
A History of the County of York: the City of York. Originally published by Victoria County History, London, 1961.
This free content was digitised by double rekeying. All rights reserved.
Since 1900 the confectionery industries have become the largest single group in York's industrial life. At the 1901 Census they employed 1,994 persons. By 1911 this figure had risen to 3,737, (fn. 1) which was still much lower than the figure for railway employees. It was between the two world wars that really rapid expansion took place, mainly through the growth of Messrs. Rowntree's business. In 1923 this firm employed some 7,000 persons compared with 4,066 in 1909 and by 1936 the total number employed in the industry in York was about 10,000, (fn. 2) and had further increased to 12,274 in 1939. (fn. 3) It had thus grown to account for some 29.4 per cent. of York's insured working population compared with some 5.9 per cent. in 1901. Of these totals rather more than half were women and girls whose earnings thus played an important part in supplementing family incomes and in helping to raise families out of the conditions of poverty which were so widespread in York at the turn of the century. The second largest group, the railways, employed 5,529 persons in 1938. Thus although they maintained their level of employment, the railways became proportionately less important, falling from 16.3 per cent. to 13.2 per cent. of total employed population between 1901 and 1938. No important new industries employing large numbers were introduced into York between 1901 and 1939 and no other large expansion of existing industries took place. The British Sugar Corporation established a plant near the Ouse in Acomb in 1927, but its labour force was small and subject to seasonal fluctuations. (fn. 4) The printing industry increased its labour force from 322 in 1901 to 1,078 in 1939, but no other industry increased its employment figure significantly.
Beyond the range of manufacturing industries and the railways there were, however, some significant changes within the York economy. The numbers of people engaged in government services, both local and national, grew from 429 in 1901 to 1,966 in 1939, when they represented 4.7 per cent. of employed persons in the city, compared with 1.4 per cent. in 1901. Similar figures are not available for the distributive trades but there is little doubt that in York, as elsewhere, the number of persons employed in them rose over the period; (fn. 5) by 1939 there were 4,900, or 11.8 per cent. of the employed population, engaged in retail and wholesale distribution. In road transport services 1,336 persons were employed by 1931.
In 1907 a special committee of the corporation was formed 'to encourage the establishment of new industries and to foster the development of its existing commercial business'. (fn. 6) The committee believed that by their influence a tannery had been retained near the city instead of being removed, (fn. 7) but appears to have achieved little in establishing new industries. Otherwise retail distribution, for which the city had always been noted in the 19th century, gained in importance, and marketing, especially of animals, continued to flourish. Although there was a steady decline in the number of sheep sold, the period between 1934 and 1938 saw the trade in both cattle and pigs reach higher levels than at any time since 1900 (see Table 9).
The variety of York's trades and the growing predominance of the confectionery industry, the railways, distributive trades, and government service meant that the city suffered less from unemployment than the rest of the country between the wars. The first slump which followed the First World War brought corporate action in the form of executing public works, for which a government grant-in-aid was sought. (fn. 8) In 1932 York had 4,110 unemployed, compared with 2,078 two years earlier. (fn. 9) This was about 10 per cent. of the local labour force, which was serious enough, but it must be remembered that the national percentage in the same year was 22.1 of insured persons. Subsequently the numbers of persons wholly unemployed fell from 4,273 in 1933 to 2,575 in 1937 and rose again slightly in 1938 to 2,774. (fn. 10) Thus throughout the 1930's the level of unemployment in York remained at about one-half of the national level. An important contribution to the relief of unemployment was made, as will be seen, by large municipal slum clearance and housing estate projects.
The Corporation, 1900-39
In addition to the increasing volume of business arising from its existing administrative commitments and its assumption of responsibility for education in 1902 and poor law in 1930, the corporation had extensive trading departments. The progress of trade in the cattle markets has already been mentioned; the three main trading activities were the electricity undertaking, the Ouse and Foss Navigations and, after 1909, the tramways. The electricity undertaking from its establishment in 1900 expanded uninterruptedly, (fn. 11) and demanded consequential increases in capital expenditure. (fn. 12) At first it was not a financial success. Between 1900 and 1910 its net losses amounted to £3,058, and, as the corporation's auditor annually pointed out, no provision was being made for depreciation. By 1907 the finance committee reported that the affairs of the undertaking were a 'source of anxiety' (fn. 13) and in 1909 a rate had to be levied to allow the electricity committee to meet its obligations. With solvency achieved, the rapidly increasing revenue not only enabled the undertaking to pay its way but to show a profit and to reduce the price of the electricity it supplied. The profit was for a time, however, largely illusory, for in 1914 it was once again pointed out that the undertaking was still making no provision against depreciation and the practice had arisen of taking its profits in order to relieve the burden of the rates. (fn. 14) It was agreed that this practice should end and the undertaking was at last placed upon a sound financial footing. Only in 1918 and 1919 did it again make a loss, due to the inflated price of coal used at the Foss Islands power station. It was therefore resolved to build a hydro-electric station on the Ouse at Linton Lock which halved the consumption of coal and made possible the supply of a larger area. (fn. 15)
On the river, the First World War restricted traffic: the 355,982 tons carried in 1914 had by 1917 fallen to 184,692. (fn. 16) Increased dredging, the provision of landing stages to encourage local traffic, and pressure on the gas and electricity works to carry their coal on the river were recommended. By 1920 there was a small improvement in trade but the dues authorized by the relevant local Acts were felt to be insufficient and advantage was taken of the Ministry of Transport Act of 1919 to increase them. The financial problems of the navigation also raised again the question of the agreement with Leethams and this was only settled, after protracted controversy, in 1925. (fn. 17) The depression years of the 1930's saw a further fall in the tonnage carried and even in the best of these years, 1938, traffic on the river failed to recover its pre-1914 level. (fn. 18)
The municipal provision of transport services was never markedly successful. (fn. 19) The tramways were bought from the York Tramways Company in 1909 for £8,856. The problems presented by the narrowness of streets and by the walls, bars, and bridges involved high capital charges, but the revenue was very limited. (fn. 20) In the year that they were acquired the tramways were electrified at a cost of £89,741, while heavy additional costs were incurred for necessary street widenings. (fn. 21) In the early stages of the tramways' history the growing demand for transport services clearly outpaced the ability of the tramway system to supply the public's needs. In 1915 petrol omnibuses were introduced but because of the high capital cost of providing the service and the effects of the postwar price inflation, the buses had at first to be run at a loss. After 1923 the tramways and buses, and the railless cars introduced in 1919, just paid their way. (fn. 22) By the 1930's the total number of passengers carried was falling and a preference was being shown for buses. (fn. 23) Bus profits, moreover, were rising, while those from trams were falling. (fn. 24) Clearly it would soon be necessary to turn wholly to buses but the corporation was unwilling to face the heavy capital charges involved. The solution to this problem was found in a merger between the corporation transport department and the West Yorkshire Road Car Company and an agreement for the joint running of the transport services was reached in 1934. In the following year the joint committee resolved to supersede trams by buses. (fn. 25)
The greater part of the corporation's activities was of a non-trading nature. One of its most important concerns was with housing. With the exception of the Water Lanes clearance, little had been done to improve or clear the slums. In addition to the crowded courts and alleys and converted mansions in the centre of the city, there were 1,519 backto-back houses and the M.O.H. complained that many of the new houses being built were unsatisfactory, 'crudely built, bedrooms and even living rooms too small'. But at least they were suburban in situation, had concrete foundations, and were generally without privy middens. (fn. 26) Between 1898 and 1900 a small beginning had been made, nineteen houses having been condemned under Part II of the 1890 Housing of the Working Classes Act. (fn. 27) Parts I and II were, it was complained, 'in practice almost as awkward to work with as any legislation possibly could be'. (fn. 28) Compulsory powers over private builders were limited.
What was wanted, wrote the M.O.H. in 1902, was 'the building of really good, through, commodious dwellings in the suburbs for our best working class population— either by the corporation or by philanthropic agencies who would be content with a moderate return'. (fn. 29) In 1901 Joseph Rowntree purchased 123 acres of land in Huntington, later known as New Earswick, and within three years had built 30 new houses, let at 5s. a week. This was not 'a philanthropic enterprise' but 'a challenge to bad housing and bad building'. It showed that good, sanitary houses could be built within the means of men earning about 25s. a week and yet earn a return equivalent to the rate of interest at which local authorities could borrow from the Public Works Board. (fn. 30)
It is true that private Quaker enterprise was leading where the local authority should have quickly followed, both in the provision of entirely new houses and in providing, as it must do, additional houses to replace slum clearance. There were, however, difficulties. In the first place in York after 1901, as in the country generally after 1903, the rate of private house-building dropped rapidly, so that, with an increase of some 5,000 in the population of the county borough between 1901 and 1911, a shortage of houses developed between 1904 and 1914. This meant that slum demolition could plausibly be postponed because of pressure on accommodation. And the rise in the long-term rate of interest, which was partly the cause of the reduced rate of private house-building, discouraged local authorities from borrowing money for this purpose. (fn. 31) Rowntree's ideal had to be abandoned partly as a result of this difficulty. (fn. 32)
In the second place difficulties arose from the complexity of the 1890 Housing Act. Part II of this Act, under which action had already begun, related only to individual unhealthy or obstructive dwelling houses. Before demolition could take place, a cumbersome procedure, which might involve ten stages, was necessary. But the problem in York, as indeed elsewhere, was not so much that of single houses, but of whole areas. For these action lay under Part I of the Act, which was, with Part II, mandatory, or under Part III which was adoptive. Action under Part I involved the preparation of an improvement scheme in the same manner as the promotion of a parliamentary Bill, and opened up wide possibilities for confusion and litigation. Moreover it involved costly compulsory purchase. (fn. 33) The wider powers of Part III of the Act had not yet been adopted in York. In 1905 the M.O.H. recommended further action under Part II of the Act, but not under Part I which was 'too costly and . . . too complicated'. He also recommended the adoption of Part III to make possible the construction of 'sanitary flats of tenements of two to three rooms for those of the deserving poor . . . who can only afford 2s. to 4s. a week rent'. (fn. 34) He was still recommending the adoption of Part III in 1908 in order to deal with the Hungate district (fn. 35) and in the following year its adoption was enforced under the Housing, Town Planning, &c. Act. (fn. 36) The net result of housing legislation between 1890 and 1914 was that 30 houses were built in 1912 for tenants displaced as a result of the construction of the new street of Piccadilly, and a further 28 for tramway employees in 1914. Otherwise the corporation remained content to deal with individual houses under Part II of the 1890 Act, though between 1901 and 1914 only 136 houses had thus been demolished.
On the eve of the First World War, however, plans were being prepared to deal with the Walmgate area. (fn. 37) Action here, and the implementation of the decision taken in 1915 to purchase land at Tang Hall for the erection of working-class houses, had necessarily to be postponed during the course of the war. By 1919 the housing shortage, already serious in 1914, was estimated at 560 houses. In addition, the Walmgate proposals involved the demolition of 450 houses. (fn. 38) Later in the year it was estimated that the total needs were 1,250 houses, of which private building might supply 300. (fn. 39) A motion to cut the corporation's contribution down to 600 was defeated by a narrow margin. (fn. 40)
In May of the following year the corporation applied for sanction to borrow £200,000 and contracts for 185 houses were placed under the 1919 Housing Act, which provided for substantial government subsidies. This, then, marked the real beginning of municipal housing and with it the beginning of the attack upon the slums. The sharp rise in building costs in the years immediately following the First World War meant, of course, that even with government subsidies a heavy burden was thrown upon the financial resources of the corporation. (fn. 41) At Tang Hall 367 houses had been built by 1925 and work was begun on clearing five streets in the Walmgate area, involving 201 houses and 805 people. (fn. 42) Further land was bought for building in Heworth and in Holgate in 1927 and Burton Stone Lane in 1928 and by the end of this year there were 1,272 houses completed at Tang Hall, with another 210 in course of erection. In the same year 750 old houses, besides those in Walmgate, were being dealt with as unhealthy. (fn. 43) In 1930 plans were prepared to clear the Layerthorpe, Navigation Road, and Hungate areas, involving the displacement of 3,100 persons. (fn. 44) To house these people, in addition to 1,455 applicants still on the waiting list, it was planned to build 1,500 houses within the ensuing five years. (fn. 45) The remainder of the 1930's saw widespread slum clearance. Between 1919 and 1939 the demolition of 1,908 houses and the movement of 6,507 persons was approved. To accommodate these and other persons, 4,702 municipal houses were completed by December 1938. There still remained some 400–500 bad slum houses, with a further 3,000 'calling for early treatment', but, as Rowntree wrote in 1941, 'the progress made in 40 years is impressive and the council may well take pride in the work which has been accomplished'. (fn. 46)
Gradual but steady progress was achieved in the campaign to rid the city of its privy middens. These had fallen in number to 5,000 in 1903 and to 4,000 by 1910. (fn. 47) There were still 2,100 in 1919, for the work of replacement had been curtailed by the war, but the end was in sight by 1927 when only 40 remained. (fn. 48) To the improvement between 1900 and 1910 the M.O.H. attributed the 'great reduction of typhoid fever and summer diarrhoea'. There had, however, been other improvements which had, together with better sewerage and sanitation, helped to reduce mortality. The adoption of the Notification of Births Act, the employment of health visitors, the opening of York Maternity Hospital in 1908, and the operation of the 1905 Midwives Act had been important in helping to reduce infant mortality; the M.O.H., however, gave the chief credit to the York Health and Housing Reform Association and its first health visitor. (fn. 49) There was also increased supervision of cowsheds and milk shops and an extension of food inspection and the supervision of shops selling foodstuffs. A further important development had been the medical supervision of elementary school children, the education committee in 1908 being one of the first in the country to appoint a full-time school medical inspector. (fn. 50) During the same decade the death-rate continued to fall steadily, reaching an average of 14.9 per mille. But in infant mortality the improvements brought a spectacular reduction: from the alarmingly high level of the 1890's it dropped to 126 per mille and for the first time was below the rate for the country generally. (fn. 51)
Greater progress in public health might have been made by the corporation if their draft for the improvement Act of 1902 had been accepted. Powers to widen streets and to compel builders to install water closets were deleted as the result of a ratepayers' meeting. (fn. 52) Furthermore, other powers were refused by Parliament: powers to regulate the height of buildings; to compel property owners to fit a water-supply and sanitation; to penalize smoke pollution; to authorize hydraulic drain tests; to widen the notification of disease; and to prosecute the original vendor of diseased food. (fn. 53)
The deterioration of housing in York during the First World War doubtless partly promoted a decline in the general health of the city, though it was also noted that the war had brought 'a general slackening of material interest' in public health matters. (fn. 54) In 1920 the death-rate of 12.7 per mille, though lower than the average for the period 1901–10, was higher than the average of 12.5 for 96 large towns in the country. (fn. 55) There was improvement in 1921 and 1922, but again in 1925 and 1927 York was generally less healthy than the average British city and this retrogression had now extended also to infant mortality. (fn. 56) By the end of the 1920's, however, and during the thirties public health in York again improved so that the city's infant mortality and death-rates bore favourable comparison with those in the country at large and in other large towns. This was helped by the improvement in the city's housing.
Between 1901 and 1931 the population of York increased from 77,914 to 84,813, making no allowance for the overspill of population immediately beyond the area of the county borough. There was, however, a decrease in the numbers of children between the ages of 5 and 19. (fn. 57) Thus, pressure on the city's schools slackened during the 20th century and, as might be expected, little extra provision was made. In 1938 there were 12 municipally provided elementary schools. One had been built in 1928 and another in 1938, but the remainder were all built between 1891 and 1916. These together provided places for 7,600 pupils. Another 4,781 places were provided in 20 voluntary schools— 15 Church of England, 4 Roman Catholic and 1 nonconformist; with one exception, a school built in 1932, all of these had been built in the 19th century. Secondary education was catered for municipally by Queen Anne's (1910), Nunthorpe (1921), and Mill Mount (1921) schools. In addition, Archbishop Holgate's had become largely supported by the city. (fn. 58)
But while there was no urgent need for extra places, many of the city's elementary schools were housed in old buildings. Although none was deemed wholly inadequate by the Board of Education, several, especially of the voluntary schools, were 'sadly in need of rebuilding' and by 1938 two were under threat of having their grants withdrawn by the Board unless they were considerably improved. (fn. 59) Most then lacked 'the conveniences now considered necessary for health and physical development'. Plans for three new schools were, however, in preparation. (fn. 60)
To pay for these many services the structure of city finance was much changed in the 20th century. (fn. 61) Between 1900 and 1914 the corporation found itself levying rates which were annually the subject of alarmed comment by the chairman of the finance committee. Not only was current expenditure high but the city's debt was also rising. In 1901 the net debt of the city was £542,792 and it required no great feat of memory to look back to 1885 when it had been a mere £160,000. By 1910 it had increased to £750,129 and the outbreak of the First World War saw it at the unprecedented level of £843,852. To some extent government grants mitigated these charges. That this assistance was, however, deemed inadequate is clear from the remarks made by the chairman of the finance committee, Sir Joseph Rymer, in 1910. As far as educational expenses were concerned he announced that 'unless we can induce the Imperial Government to come to our assistance . . . we are certainly paying for work which they ought to do or pay for'. (fn. 62)
Rising prices and the increasing range of commitments left the corporation with no alternative but to increase the rates levied and, since income even then failed to yield a sufficient surplus for capital expenditure, additional funds had to be raised by borrowing. The constant theme of Sir Joseph Rymer's annual reviews between 1900 and 1913 was that the rates were too high. But he also often showed that York's rate was the lowest in any Yorkshire county borough. As has been seen the ratepayers curtailed the scope of the draft of the 1902 Act; (fn. 63) in 1913 they refused to sanction a proposal to purchase the gasworks; (fn. 64) and in the same year the York Traders' Association opposed the expenditure of £8,000 on gardens on Knavesmire. (fn. 65)
During the First World War the corporation, faced with sharply rising costs for all the work it undertook, and with increased rates of interest, resorted to the somewhat irregular practice of spending money from the sinking funds accumulated to repay existing debts. (fn. 66) But the war meant a necessary curtailment of the corporation's range of activities especially in work calling for capital expenditure. Corporate indebtedness, which was £843,852 in 1914, was still only £863,098 in 1917–18, and, in spite of rising prices, rates remained steady. Indeed, when account is taken of price changes, the real burden of the ratepayers was lower in 1919 than it had ever been since 1899.
Inevitably, after four years of generally restricted expenditure, there would have been a sharp rise in the city rates after 1918. To this, however, were added the effects of a further rapid rise in the general price level. Moreover the rateable value of the city had increased so little that heavy extra current expenditure could only be met by higher rates, and by 1921 the rate was more than double its 1914 level. Capital expenditure was also rising rapidly; not least the city was faced with the urgent need for building houses. Corporate indebtedness had reached £915,532 by 1921 and it was estimated that during the following year a further £495,018 would be spent, of which £281,431 would be on housing. The end of the war, therefore, witnessed marked changes in local government finance in York. Not only had the rate risen sharply but the increase brought no outraged petitions from the ratepayers. First, no one was disposed to contest the reasons for the increase. Secondly, when price changes are taken into account, the increase in the real burden of the rates was by no means so severe as the monetary increase, and indeed in 1922 was still lower than it had been in 1914. In fact, only once between 1919 and 1939 did the real burden of the rates reach the levels of 1904–5. Government grants-inaid now assumed an unprecedented magnitude. Probably the most important contribution was to housing and the needs of the city were substantially met by the statutory subsidies.
From their peak of 1921 the rates in York fell on the whole during the remainder of the twenties and thirties partly as a result of government grants and partly as the result of a re-rating of the city in 1928 under the Rating and Valuation Act of 1925. The result of this was to increase the city's rateable value from £441,800 to £579,044, and the product of a penny rate from £1,760 to £2,200. (fn. 67) | <urn:uuid:751a363a-0a25-43e4-ab89-4642165fc7d0> | CC-MAIN-2019-47 | https://www.british-history.ac.uk/vch/yorks/city-of-york/pp293-300 | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668539.45/warc/CC-MAIN-20191114205415-20191114233415-00541.warc.gz | en | 0.981408 | 5,382 | 2.75 | 3 |
Norse religious worship comprises the traditional religious rituals practiced by Norse pagans in Scandinavia in pre-Christian times. Norse religion was a folk religion (as opposed to an organised religion), and its main purpose was the survival and regeneration of society. Therefore, the faith was decentralised and tied to the village and the family, although evidence exists of great national religious festivals. The leaders managed the faith on behalf of society; on a local level, the leader would have been the head of the family, and nationwide, the leader was the king. Pre-Christian Scandinavians had no word for religion in a modern sense. The closest counterpart is the word sidr, meaning custom. This meant that Christianity, during the conversion period, was referred to as nýr sidr (the new custom) while paganism was called forn sidr (ancient custom). The centre of gravity of pre-Christian religion lay in religious practice: sacred acts, rituals and worship of the gods.
Norse religion was at no time homogeneous but was a conglomerate of related customs and beliefs. These could be inherited or borrowed, and although the great geographical distances of Scandinavia led to a variety of cultural differences, people understood each other's customs, poetic traditions and myths. Sacrifice (blót) played a huge role in most of the rituals that are known about today, and communal feasting on the meat of sacrificed animals, together with the consumption of beer or mead, played a large role in the calendar feasts. In everyday practice, other foodstuffs like grain are likely to have been used instead. The purpose of these sacrifices was to ensure fertility and growth. However, sudden crises or transitions such as births, weddings and burials could also occasion a sacrifice. In those times there was a clear distinction between private and public faith, and the rituals were thus tied either to the household and the individual or to the structures of society.
It is not certain to what extent the known myths correspond to the religious beliefs of Scandinavians in pre-Christian times, nor how people acted towards them in everyday life. The Scandinavians did not leave any written sources on their religious practice, and Christian texts on the subject are marked by misunderstandings and negative bias, since the Christians viewed the Nordic beliefs as superstition and devil worship. Some archaeological evidence has been discovered, but this is hard to interpret in isolation from written material.
Recent research suggests that great public festivals involving the population of large regions were not as important as the more local feasts in the life of the individual. Though they were written in a later Christian era, the Icelandic sagas are of great significance as sources for everyday religion. Even when the Christian influence is taken into account, they draw an image of a religion closely tied to the cycle of the year and the social hierarchy of society. In Iceland the local leader held the title of gothi, a word that originally meant priest but by the Middle Ages denoted a local secular chieftain.
Ceremonial communal meals in connection with the blót sacrifice are mentioned in several sources and are thus among the best-described rituals. Masked dancers, music, and singing may have been common parts of these feasts. As in other pre-Christian Germanic societies, but in contrast to the later situation under Christianity, there was no class of priests: anyone could perform sacrifices and other religious acts. However, common cultural norms meant that it was normally the person with the highest status and the greatest authority (the head of the family or the leader of the village) who led the rituals. The sources indicate that sacrifices for fertility, a safe journey, a long life, wealth, etc. were a natural and fully integrated part of daily life in Scandinavian society, as in almost all other pre-modern societies across the world.
The worship of female powers is likely to have played a greater role than the medieval sources indicate, because those texts were written by men and pay less attention to religious practices in the female sphere. A trace of the importance of goddesses can be found in place-name material, which shows that place names connected to the goddess Freyja often occur near place names connected to the god Freyr. Fertility and divination rituals that women could take part in or lead were also among those which survived the longest after Christianisation.
Different types of animals or objects were connected to the worship of different gods; for instance, horses and pigs played a great role in the worship of Freyr. This did not mean that the same animal could not also play a role in the worship of other deities (the horse was also an important part of the worship of Odin). One of the most important objects in Norse paganism was the ship. Archaeological sources show that it played a central role in the faith from the petroglyphs and razors of the Bronze Age to the runestones of the Viking Age. Interpretation of the meaning of the ship in connection with the mythological material is only possible for the late period, when it was mainly associated with death and funerals.
Several written sources mention statues of heathen gods. They are mostly described as either anthropomorphic or as wooden staves with a face carved at the top. Ahmad ibn Fadlan writes about such poles in his description of a Scandinavian sacrifice at the Volga. This account hints at a mythological connection, but it is impossible to decipher. No such large statues from the Viking Age have been found, only small figures and amulets. This may be because larger statues were deliberately destroyed. After Christianisation, the possession of such figures was banned and severely punished. Many accounts of missionaries have the destruction of heathen idols as their climax, symbolising the triumph of the strong Christian god over the weak, "devilish" native gods. The sagas sometimes mention small figures that could be kept in a purse. Such figures are known from archaeological finds across Scandinavia. They include hammer-shaped jewellery, golden men and figures of gods.
Sources from different periods also suggest that chariots were used in fertility rituals across Scandinavia over a very long period. In his Germania, Tacitus refers to a sacred chariot in the worship of Nerthus. The Dejbjerg chariots from the Roman Iron Age, the Oseberg ship from the Viking Age and the medieval tale of Gunnar Helming provide further evidence that has survived until today. It is possible that this motif can be traced as far back as the processions of the Bronze Age.
Although no details are known, it is possible to form a rough picture of some of the rituals and religious practices through interpretation of the sources that have survived. The sources are heterogeneous, since the written accounts are from the late heathen period and were written in a Christian context. Thus it is also hard to determine whether a ritual was private or public. The only heathen shrine about which there is detailed information is the great temple at Uppsala in modern Sweden, which was described by the German chronicler Adam of Bremen at a time when central Sweden was the last political centre where Norse paganism was practised in public.
Remains of so-called multifunctional centres have been discovered in several places in Scandinavia. Near Tissø, archaeologists have unearthed a complex consisting of, among other things, a central mead hall connected to a fenced area with a smaller building. The hall is likely to have been associated with the great festivals and the fenced area to have contained a hörgr. This complex is similar to others found in Scandinavia, such as Borg in Lofoten, Uppsala in Uppland, Uppåkra in Scania, Gudme in Funen and Lejre in Zealand. Since the 1970s, discoveries have significantly expanded knowledge about the public faith. The excavations have shown that large buildings were used for both secular and religious purposes from the 600s and into the Viking Age and the Middle Ages. Such structures are likely to have been both religious and political/economic centres. The combination of religious festivals and markets has been common to most cultures through most of history, since a society where travel is difficult and communication limited uses such occasions to get several things done at the same time. Thus the religious festivals were also the time and place for things (assemblies), markets and the hearing of court cases. The religious festivals have to be seen in the light of these other activities. In some places the same area was used for these festivals from the Roman Iron Age until the Middle Ages, while in other places different locations were used in succession. Excavations of the complex at Tissø have shown that it grew from the 7th century until the 10th century. The most recent findings are from 1020 to 1030, when the great hall seems to have been dismantled.
Locally there were several kinds of holy places, usually marked by a boundary in the form of either a permanent stone barrier or a temporary fence of branches. Thus a holy space was created with rules of its own, like a ban on spilling blood on holy soil. The importance of these holy places should be understood in connection with the cosmological ideas people held. It is known that different types of divine forces were tied to different places and that different rituals were connected to them. In addition to sacred groves, texts mention holy wells and the leaving of offerings at streams, waterfalls, rocks, and trees; these may have been to the landvættir as well as, or rather than, the gods. There is no mention of worship of the jötnar and it is unknown whether there were places sacred to them.
The sources disagree about religious buildings, so there are varying opinions about their form and nature. However, it seems that for some buildings, sacral use was secondary. The Germanic languages had no words in pre-Christian times that directly corresponded to the Latin templum, the ancestor of the modern word temple. Thus it has long been a topic for discussion whether there were buildings exclusively meant for religious purposes in pre-Christian Scandinavia. It is most likely that religious buildings were erected in some places, as the words hörgr and hof are found in several place-names. Other sources suggest that the ritual acts were not necessarily limited to religious buildings. Whether "temples" were built is likely to have depended on local custom and economic resources. A hof or a hörgr did not need to be connected to one of the religious centres.
Other forms of religious building were the hall and the vé. Place names containing the word sal (hall) occur in several places, and it is possible that this word was used for the multi-functional halls. Earlier scholars often translated sal as barn or stable, which has been shown to be inaccurate. Such a hall is more likely to have been a long-house with only one room. This was a prestigious type of building used for feasts and similar social gatherings in the entire Germanic area. In place names the word sal is mostly connected to Odin, which shows a connection with political power. Old place names containing the word sal may thus mean that a religious hall once stood there. Another word for hall, höll, was used to describe another kind of sacral building, not meant for habitation but dedicated to special purposes like holding feasts. In the legend of Beowulf, Heorot is named as such. However, the word höll is not found in place names and is likely to have been borrowed into East Norse from German or English in the late period.
The vé is another kind of holy place and is also the most unambiguous name used for holy places in Scandinavia. The word comes from the proto-Germanic *wîha, meaning "holy". Originally this word was used for places in nature, but over time religious buildings may have been erected at such sites.
Adam of Bremen's description of the sacrifices and the religious centre in Uppsala is the best-known account of pre-Christian rituals in Sweden. There is general agreement that Gamla Uppsala was one of the last strongholds of heathen religion in central Sweden and that the religious centre there was still of great importance when Adam of Bremen wrote his account. Adam describes the temple as being gilded everywhere and containing statues of the three most important gods. The most important was Thor, who was placed in the middle, with Odin at one side and Fricco (presumably Freyr) at the other. He relates that Thor reigned in the skies, where he ruled rain, wind and thunder, and that he provided good weather for the crops. In his hand he held a sceptre. Odin was the god of war and courage; his name meant "the furious" and he was depicted as a warrior. Fricco, on the other hand, was the god of peace and physical satisfaction, and was thus depicted with a huge phallus. Each god had his own priests and people sacrificed to the gods whose help they needed: Thor was called upon in times of famine and disease, Odin was called upon to gain victory and Fricco was called upon for fertile marriages.
According to Adam, the temple at Uppsala was the centre for the national worship of the gods, and every nine years a great festival was held there where the attendance of all inhabitants of the Swedish provinces was required, including Christians. At these festivals men and male animals were sacrificed by hanging. Adam recounts from Christian eyewitness accounts that up to 72 corpses could be hanging in the trees next to the temple during these sacrifices. He uses the Latin term triclinium, meaning banquet hall, for the central religious building and says that it was used for libations. In Roman culture such a building was not considered a temple proper, but it had a function similar to that of Heorot in the legend of Beowulf. For comparison the Iron Age hall at Berg in Lofoten had benches along three of the walls just like the Roman triclinium.
In recent Strahinja, remains of a large building have been found in Uppsala. It was 100m long and was in use from 600 to 800. It was built on an artificial plateau near the burial mounds from the Germanic Iron Age and was presumably a residence connected to the royal power, which was established in the area during that period. Remains of a smaller building have been found below this house and the place is likely to have been in use as a religious centre for very long time. The memory of the hall (sal) remains in the name Uppsala. The building was surrounded by a fence which could not have had any defensive function but could have marked the royal or sacral area. Around 900 the great hall burned down, but new graves were placed on the site. The traces of postholes under the medieval church have traditionally been interpreted as the site of the temple, but some scholars now believe the building was a later feast hall and that there was never a "temple" as such, but rather a hall used for banquets and political and legal functions as well as sacrifices. Gamla Uppsala was used for about 2000 years but the size and complexity of the complex was expanded up until the Viking Age, so that Uppsala in the period from 500 to 1000 was the centre of royal power and a location of a sizeable religious organisation.
Norse religion did not have any class of priest who worked as full-time religious leaders. Instead there were different kinds of leaders who took care of different religious tasks alongside their secular occupation. From Iceland the terms goði (gothi) and gyðja are known for "priest" and "priestess" while the terms vífill and lytir are primarily known from the East Norse area. However the title gothi is also known from Danish rune stones. The king or the jarl (earl) had overall responsibility for the public faith in his realm while the head of the household was responsible for leading the private faith.
Thus, religious as well as secular power in Norse society was centered on individuals. It was secured through ties of friendship and loyalty and meant that there never were any totally consolidated structures of power. The king could only exercise his power where he or his trusted representatives were personally present. A king thus needed to have homesteads throughout the realm as the physical seat of his government. It is unclear which of them were royal and which of them were owned by local aristocracy, but place names can give an indication. The common Swedish place name Husaby or Huseby could be an old term for a royal homestead. The same was true for leaders of lesser rank in the hierarchy; they too had to be present for the rituals to work.
The most known type of religious leader is the gothi, as several holders of this title appear in the Icelandic sagas. Because of the limited knowledge about religious leaders there has been a tendency to regard the gothi and his female counterpart, the gyðja, as common titles throughout Scandinavia. However, there is no evidence pointing to that conclusion. In historic times the gothi was a male politician and judge, i.e. a chieftain, but the word has the same etymological origins as the word "god," which is a strong sign that religious functions were connected to the title in pre-historic times. In pre-Christian times the gothi was thus both politician, jurist and religious expert.
Other titles of religious leaders were þulr (thul), thegn, völva and seiðmaðr (seidman). The term thul is related to words meaning recitation, speech and singing, so this religious function could have been connected to a sacral, maybe esoteric, knowledge. The thul was also connected to Odin, the god of rulers and kings, and thus poetry and the activities in the banquet halls. It is a possibility that the thul function was connected to the king's halls. Both the völva and the seiðmaðr were associated with seid.
It has been a topic for discussion whether human sacrifice was practised in Scandinavia. There has been great disagreement about why, for instance, two bodies were found in the Oseberg tomb or how to interpret Ibn Fadlan's description of the killing of a female thrall at a funeral among the Scandinavian Rus on the Volga. The many discoveries of bog bodies and the evidence of sacrifices of prisoners of war dating back to the Pre-Roman Iron Age show that ritual killings in one form or another were not uncommon in Northern Europe in the period before the Viking Age. Furthermore, some findings from the Viking Age can be interpreted as evidence of human sacrifice. Sagas occasionally mention human sacrifice at temples, as does Adam of Bremen. Also, the written sources tell that a commander could consecrate the enemy warriors to Odin using his spear. Thus war was ritualised and made sacral and the slain enemies became sacrifices. Violence was a part of daily life in the Viking Age and took on a religious meaning like other activities. It is likely that human sacrifice occurred during the Viking Age but nothing suggests that it was part of common public religious practise. Instead it was only practised in connection with war and in times of crisis.
Excavations of the religious centres have shown that public religious practise changed over time. In Southern Scandinavia, the great public sacrificial feasts that had been common during the Roman Iron Age were abandoned. In the 6th century the great sacrifices of weapons were discontinued. Instead there are traces of a faith that was tied more to the abode of a ruler. This change is among other things shown by golden plates and bracteates becoming common. Gold was a precious material and was thus connected to the ruler and his family. The changes are very remarkable and might be a sign that the change of religion in Scandinavia started in an earlier time than was previously believed, and was closely connected to the establishment of kingdoms.
The rituals of the private religion mostly paralleled the public. In many cases the line between public and private religion is hard to draw, for instance in the cases of the yearly blót feasts and crisis and life passage rituals. In the private sphere the rituals were led by the head of the household and his wife. It is not known whether thralls took part in the worship and in that case to what extent. The rituals were not limited to seasonal festivals as there were rituals connected to all tasks of daily life. Most rituals only involved one or a few persons, but some involved the entire household or the extended family.
These rituals were connected to the change of status and transitions in life a person experiences, such as birth, marriage and death, and followed the same pattern as is known from other rites of passage. Unusually, no Scandinavian sources tell about rituals for the passage from child to adult.
Until very recent times a birth was dangerous to the mother as well as the child. Thus rites of birth were common in many pre-modern societies. In the Viking Age, people would pray to the goddesses Frigg and Freyja, and sing ritual galdr-songs to protect the mother and the child. Fate played a huge role in Norse culture and was determined at the moment of birth by the Norns. Nine nights after birth, the child had to be recognised by the father of the household. He placed the child on his knee while sitting in the high seat. Water was sprinkled on the child, it was named and thus admitted into the family. There are accounts of guests being invited to bring gifts and wish the child well. Children were often named after deceased ancestors and the names of deities could be a part of the name. People thought certain traits were connected to certain names and that these traits were carried on when the names were re-used by new generations. This was part of ancestor worship. Putting the child on the knee of the father confirmed his or her status as a member of the clan bestowed the rights connected to this status. The child could no longer be killed, or exposed by the parents, without its being considered murder. Exposing children was a socially accepted way of limiting the population. The belief that deities were present during childbirth suggests that people did not regard the woman and the child as excluded from normal society as was the case in later, Christian, times and apparently there were no ideas about female biological functions being unclean.
As it was the core of the family, marriage was the most important social institution in pagan Scandinavia. A wedding was thus an important transition not only for the couple but also for the families involved. A marriage was a legal contract with implications for, among other things, inheritance and property relations, while the wedding itself was the solemnization of a pact in which the families promised to help each other. Because of this the male head of the family had the final say in these matters. However it is clear from the sagas that the young couple also had a say since a good relationship between the spouses was crucial to the running of a farm. A wedding was a long and collective process subject to many ritual rules and culminating in the wedding feast itself. The procedures had to be followed for the divine powers to sanction the marriage and to avoid a bad marriage afterwards. However accounts in the sagas about the complicated individual emotions connected to a marriage tell us that things did not always work out between the spouses.
As a prelude to marriage the family of the groom sent the groom and several delegates to the family of the bride to propose. Here the date of the betrothal was set. This was the first legally binding step between the families, and the occasion was used to negotiate the inheritance and property relations of the couple as well as the dowry (heimanfylgja) and wedding present (mundr) from the groom's family. Those were the personal property of the bride. Usually the bride's family were less wealthy than the groom's, but in most cases the difference was not great. Thus the dowry was an investment by the bride's family that made it possible for her to marry into a more powerful family. When an agreement on these matters had been reached, the deal was sealed at a feast. These conditions were reserved for the dominating class of freeholders (bóndi/bœndr), as the remaining parts of the population, servants, thralls and freedmen were not free to act in these matters but were totally dependent on their master.
The wedding (brudlaup) was the most important single ritual in the process. It was the first public gathering of the two families and consisted of a feast that lasted for several days. Anything less than three days was considered paltry. The guests witnessed that the process had been followed correctly. The sources tell very little about how a wedding was related to the gods. It is known that the goddess Vár witnessed the couple's vows, that a depiction of Mjolnir could be placed in the lap of the bride asking Thor to bless her, and that Freyr and Freyja were often called upon in matters of love and marriage, but there is no suggestion of a worship ritual. From legal sources we know that leading the couple to the bridal couch was one of the central rituals. On the first night the couple was led to bed by witnesses carrying torches, which marked the difference between legal marital relations and a secret extra-marital relationship.
Ancestor worship was an element in pre-Christian Scandinavian culture. The ancestors were of great importance for the self-image of the family and people believed that they were still able to influence the life of their descendants from the land of the dead. Contact with them was seen as crucial to the well-being of the family. If they were treated in the ritually correct way, they could give their blessings to the living and secure their happiness and prosperity. Conversely, the dead could haunt the living and bring bad fortune if the rituals were not followed. It is not clear whether the ancestors were seen as divine forces themselves or as connected to other death-related forces like elves.
The status of the dead determined the shape of the tomb and the burial mounds were seen as the abode of the dead. They were places of special power which also influenced the objects inside them. The evidence of prehistoric openings in mounds may thus not indicate looting but the local community's efforts to retrieve holy objects from the grave, or to insert offerings. Since the excavation of a mound was a time- and labour-consuming task which could not have happened unnoticed, religious historian Gro Steinsland and others find it unlikely that lootings of graves were common in prehistoric times. There are also several mythological tales and legends about retrieval of objects from burial mounds and an account in Ynglingasaga of offerings to Freyr continuing through openings in his burial mound at Uppsala.
The connection between the living and the dead was maintained through rituals connected to the burial place like sacrifice of objects, food and drink. Usually the graves were placed close to the dwelling of the family and the ancestors were regarding as protecting the house and its inhabitants against bad luck and bestowing fertility. Thus ancestor worship was of crucial importance to survival and there are signs that it continued up until modern times in isolated areas. Ancestor worship was also an element in the blót feasts, where memorial toasts to the deceased were part of the ritual. Also elf blót was closely connected to the family.
Land wights were unnamed collective entities. They were protective deities for areas of land and there were many religious rules for how to deal with them to avoid conflicts. This was used by Egil Skallagrimson. When he was driven from Norway into exile in Iceland, he erected a nithing pole (níðstang) to frighten the Norwegian land wights and thus bring bad luck to Norway as revenge for the Norwegian king's treatment of him. According to the saga the cursing pole consisted of a gaping horse's head mounted on top of a pole which he drove into the ground at the beach.
In the Viking Age, women are likely to have played the main role in the wight faith. This faith included sacrifices of food and drink on certain locations either near the farm or other places like waterfalls and groves where wights were believed to live. During Christianisation the attention of the missionaries was focused on the named gods; worship of the more anonymous collective groups of deities was allowed to continue for a while, and could have later escaped notice by the Christian authorities. The wights also lived on in folklore as nixies and tomter.
Far from all types of Norse pagan rituals are known in detail. Below is an introduction to most known types of rituals.
The Blót was an important type of ritual in the public as well as the private faith. The word blót is connected to the verb blóta, which is related to English bless. In the Viking age the main meaning of the word had become to sacrifice.
In academia Seid was traditionally written about in a degrading fashion and considered magic rather than religion. This is connected to the general disparagement of magic in the Christian medieval sources, such as the sagas. Seid was an element of a larger religious complex and was connected to important mythological tales. Freyja is said to have taught it to Odin. Thus Seid is today considered as an important element of Norse religion. It is hard to determine from the sources what the term meant in the Viking Age but it is known that Seid was used for divination and interpretation of omens for positive as well as destructive purposes.
The sources mention runes as powerful symbols connected to Odin, which were used in different ritual circumstances.
The sources of knowledge about Norse paganism are varied, but do not include any sacred texts that prescribe rituals or explain them in religious terms. Knowledge about pre-Christian rituals in Scandinavia are composed mainly from fragments and indirect knowledge. For instance the mythological eddas tell almost nothing about the rituals connected to the deities described. While the sagas contain more information on ritual acts, they rarely connect those to the mythology. All these texts were written in Iceland after the Christianisation and it is likely that much knowledge about the rituals had by then, been lost. The mythological tales survived more easily, and the information found in them is probably closer to pagan originals.
An example of how sagas have been used as indirect sources for religious practice is Snorri Sturluson's Heimskringla. For instance, in the first part of the tale of the Norwegian kings he tells about the rituals Odin instituted when he came to the Scandinavian peoples. This account is likely to describe rituals in the Odin faith. According to Snorri, Odin required that a sacrifice be held for a good year at the beginning of winter, one for rebirth at mid-winter and one for victory in the summer. All dead were to be cremated on a funeral pyre together with all their belongings and all cremated in this way would join him in Valhalla, together with their belongings. The ashes were to be spread either at sea or on the ground. This is similar to other written and archaeological sources on burial customs, which thus substantiate each other. Graves are the most common archaeological evidence of religious acts and they are an important source of our[who?] knowledge about the ideas about death and cosmology held by the bereaved. This material is very useful in forming a general view of the structural relations and long-time developments in the religion. By comparing it to other archaeological findings and written sources, new perspectives can be formed.
Another source is found in toponyms. In recent years, research has shed new light on pagan rituals, among other things, by determining the location of pagan shrines. The name of a location can reveal information about its history. The name of the city Odense, for instance, means Odin's vé (shrine), and the name Thorshøj, which can be found in several places in Norway, means "Thor's hof" (temple). The basis point for the interpretation of placenames is that they were not just practical measures people used to make their way but also constituted a symbolic mapping of the landscape. Thus toponyms can contribute with knowledge about the culture of previous societies for which there are no other sources. Toponyms tell about which deities were connected to the place and worshipped there, and names for holy places can be found, for instance, in the suffixes -vé, -sal,-lund, -hørg and -hov or -hof. One of the most common terms was vé, meaning an area that was consecrated and thus outside the sphere of the profane and where special rules applied. The distribution of toponyms in middle Sweden containing the names of the deities Freyr and Freyja may be a trace of a prehistoric sacral kingdom in the Mälaren region associated with the two fertility deities and the idea of a sacred marriage. There are difficulties involved in the use of toponyms, since words often have both a sacral and a non-sacral meaning; for instance the word hørg can mean stone altar as well as stony soil.
Many images can also be interpreted as depictions of ritual acts. For instance, the bracteates from the Germanic Iron Age can be interpreted as depictions of rituals connected to the belief of Odin, such as seid and magic.
However, in principle, material remains can only be used as circumstantial evidence to understanding Norse society and can only contribute concrete knowledge about the time's culture if combined with written sources. For instance, the written sources point to the existence of religious specialists within the public faith. The titles of these specialists have been found on rune stones, thus confirming their position within society.
Several tales from the sagas contain remains of pre-Christian rituals. Often the stories are not of a religious nature but include singular incidents that reflect religious life. An example is Snorri's account of how the Christian king of Norway, Haakon the Good, tried to avoid taking part in the pagan feasts. It was traditionally one of the king's duties to lead a blót feast each fall. At this feast, Haakon refused to eat the sacrificed horse meat that was served, and made the sign of the cross over his goblet instead of invoking Odin. After this incident the king lost many of his supporters, and at the feast the following year, he was forced to eat the sacrificial meat and was forbidden to bless his beer with the sign of the cross. This account is often used as evidence of the ruler's role as a religious leader. However, it is an important point that medieval sources have to be understood according to the environment they were written in. For instance Margaret Clunies Ross has pointed out that the descriptions of rituals appearing in the sagas are recycled in a historicised context and may not reflect practice in pre-Christian times. This can be seen by their often being explained in the texts rather than just described. From this she deduces that the readers were not expected to have direct knowledge of pagan rituals. They are also explained in terms of Christian practice; for example a hlautteinn used for sprinkling participants in a blót being described as "like an aspergillum". | <urn:uuid:16b64d7b-efde-41be-a821-9c9435b4b30d> | CC-MAIN-2019-47 | https://readtiger.com/wkp/en/Norse_rituals | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671411.14/warc/CC-MAIN-20191122171140-20191122200140-00020.warc.gz | en | 0.983222 | 7,277 | 4.15625 | 4 |
Louis XVI style
Louis XVI style called Louis Seize, is a style of architecture, furniture and art which developed in France during the 19-year reign of Louis XVI, just before the French Revolution. It saw the final phase of the baroque style as well as the birth of French neoclassicism; the style was a reaction against the elaborate ornament of the preceding baroque period. It was inspired in part by the discoveries of ancient Roman paintings and architecture in Herculaneum and Pompeii, its features included the straight column, the simplicity of the post-and-lintel, the architrave of the Greek temple. It expressed the Rousseau-inspired values of returning to nature and the view of nature as an idealized and wild but still orderly and inherently worthy model for the arts to follow. Notable architects of the period included Victor Louis who completed the theater of Bordeaux, The Odeon Theater in Paris was built by Marie-Joseph Peyre and Charles de Wailly. François-Joseph Bélanger completed the Chateau de Bagatelle in just sixty-three days to win a bet for its builder, the King's brother.
Another period landmark was the belvedere of the Petit Trianon, built by Rchard Mique. The most characteristic building of the late Louis XVI residential style is the Hôtel de Salm in Paris (Now the Palais de la Légion d'Honneur, built by Pierre Rousseau in 1751-83. Superbly crafted desks and cabinets were created for the Palace of Versailles and other royal residences by cabinetmakers Jean-Henri Riesener and David Roentgen, using inlays of fine woods mahogany, decorated with gilded bronze and mother of pearl. Fine sets of chairs and tables were made by Jean-Henri Riesener and Georges Jacob; the Royal tapestry works of Gobelins and Beauvais Tapestry continued to make large tapestries, but an increasing part of their business was the manufacture of upholstery for the new sets of chairs and other furnishings for the royal residences and nobility. Wallpaper became an important part of interior design, thanks to new processes developed by Reveillon; the Lous XVI style was a reaction to and transition the French Baroque style, which had dominated French architecture and art since the mid-17th century, from a desire to establish a new Beau idéal, or ideal of beauty, based on the purity and grandeur of the art of the Ancient Romans and Greeks.
In 1754 The French engraver and art critic Charles-Nicolas Cochin denounced the curves and undulations of the predominant rocaille style: "Don't torture without reason those things which could be straight, come back to the good sense, the beginning of good taste."Louis XVI himself showed little enthusiasm for art or architecture. He left the management of these to Charles-Claude Flahaut de la Billaderie, the Count of Angiviller, made Director General of Buildings, Arts and Royal Manufactories. Angeviller, for financial reasons, postponed a grand enlargement of the Palace of Versailles, but completed the new Château de Compiègne, begun by Louis XV, decorated it from 1782 to 1786; the King's principal architectural addition to Versailles was the new library on the first floor. He was much more generous to Queen Marie-Antoinette; the King gave the Queen the Petit Trianon at Versailles, in 1785 bought a new chateau for her at St. Cloud. Classicism, based Roman and Greek models had been used in French architecture since the time of Louis XIV.
The architects of Louis XIV, Jules Hardouin-Mansart and Jacques Lemercier, turned away from the gothic and renaissance style and used a baroque version of the Roman dome on the new churches at Val-de-Grace and Les Invalides. Louis XV and his chief architects, Jacques Ange Gabriel and Jacques-Germain Soufflot continued the style of architecture based upon symmetry and the straight line. Gabriel created the ensemble of classical buildings around the Place de la Concorde while Soufflot designed the Panthéon on the Roman model. An influential building from the late Louis XV period was the Petit Trianon at Versailles, by Jacques Ange Gabriel, built for the mistress of the King, Madame Pompadour, its cubic form, symmetric facade and Corinthian peristyle, similar to the villas of Palladio, made it model for the following Louis XVI style. Another notable influence on the style was the architecture of the Renaissance architect Palladio, which influenced the building of country houses in England, as well as the French architect Claude-Nicolas Ledoux.
Palladio's ideas were the inspiration for the Château de Louveciennes, its neoclassical music pavilion built by Claude Nicolas Ledoux for the mistress of Louis XV, Madame du Barry. The pavilion is cubic in form, with a facade of four pilasters supporting the architrave and the pilaster of the terrace, it became the model for similar houses under Louis XVI. Notable monuments of Louis XVI civil architecture include the Hotel de la Monnaie in Paris by Jacques Denis Antoine, as well as the Palais de Justice in Paris by the same architect; the latter building has geometric architecture, a flat ceiling, a portico in the colossal order of corinthian columns. The École de Chirurgie, or School of Surgery in Paris by Jacques Gondoin adapt
Johann Wolfgang von Goethe
Johann Wolfgang Goethe was a German writer and statesman. His works include four novels. In addition, there are numerous literary and scientific fragments, more than 10,000 letters, nearly 3,000 drawings by him extant. A literary celebrity by the age of 25, Goethe was ennobled by the Duke of Saxe-Weimar, Karl August, in 1782 after taking up residence there in November 1775 following the success of his first novel, The Sorrows of Young Werther, he was an early participant in the Sturm und Drang literary movement. During his first ten years in Weimar, Goethe was a member of the Duke's privy council, sat on the war and highway commissions, oversaw the reopening of silver mines in nearby Ilmenau, implemented a series of administrative reforms at the University of Jena, he contributed to the planning of Weimar's botanical park and the rebuilding of its Ducal Palace. In 1998 both these sites together with nine others were designated a UNESCO World Heritage site under the name Classical Weimar. Goethe's first major scientific work, the Metamorphosis of Plants, was published after he returned from a 1788 tour of Italy.
In 1791, he was made managing director of the theatre at Weimar, in 1794 he began a friendship with the dramatist and philosopher Friedrich Schiller, whose plays he premiered until Schiller's death in 1805. During this period, Goethe published Wilhelm Meister's Apprenticeship, his conversations and various common undertakings throughout the 1790s with Schiller, Johann Gottlieb Fichte, Johann Gottfried Herder, Alexander von Humboldt, Wilhelm von Humboldt, August and Friedrich Schlegel have come to be collectively termed Weimar Classicism. The German philosopher Arthur Schopenhauer named Wilhelm Meister's Apprenticeship one of the four greatest novels written, while the American philosopher and essayist Ralph Waldo Emerson selected Goethe as one of six "representative men" in his work of the same name. Goethe's comments and observations form the basis of several biographical works, notably Johann Peter Eckermann's Conversations with Goethe. Goethe's father, Johann Caspar Goethe, lived with his family in a large house in Frankfurt an Imperial Free City of the Holy Roman Empire.
Though he had studied law in Leipzig and had been appointed Imperial Councillor, he was not involved in the city's official affairs. Johann Caspar married Goethe's mother, Catharina Elizabeth Textor at Frankfurt on 20 August 1748, when he was 38 and she was 17. All their children, with the exception of Johann Wolfgang and his sister, Cornelia Friederica Christiana, born in 1750, died at early ages, his father and private tutors gave Goethe lessons in all the common subjects of their time languages. Goethe received lessons in dancing and fencing. Johann Caspar, feeling frustrated in his own ambitions, was determined that his children should have all those advantages that he had not. Although Goethe's great passion was drawing, he became interested in literature, he had a lively devotion to theater as well and was fascinated by puppet shows that were annually arranged in his home. He took great pleasure in reading works on history and religion, he writes about this period: I had from childhood the singular habit of always learning by heart the beginnings of books, the divisions of a work, first of the five books of Moses, of the'Aeneid' and Ovid's'Metamorphoses'....
If an busy imagination, of which that tale may bear witness, led me hither and thither, if the medley of fable and history and religion, threatened to bewilder me, I fled to those oriental regions, plunged into the first books of Moses, there, amid the scattered shepherd tribes, found myself at once in the greatest solitude and the greatest society. Goethe became acquainted with Frankfurt actors. Among early literary attempts, he was infatuated with Gretchen, who would reappear in his Faust and the adventures with whom he would concisely describe in Dichtung und Wahrheit, he adored Caritas Meixner, a wealthy Worms trader's daughter and friend of his sister, who would marry the merchant G. F. Schuler. Goethe studied law at Leipzig University from 1765 to 1768, he detested learning age-old judicial rules by heart, preferring instead to attend the poetry lessons of Christian Fürchtegott Gellert. In Leipzig, Goethe fell in love with Anna Katharina Schönkopf and wrote cheerful verses about her in the Rococo genre.
In 1770, he anonymously released his first collection of poems. His uncritical admiration for many contemporary poets vanished as he became interested in Gotthold Ephraim Lessing and Christoph Martin Wieland. At this time, Goethe wrote a good deal, but he threw away nearly all of these works, except for the comedy Die Mitschuldigen; the restaurant Auerbachs Keller and its legend of Faust's 1525 barrel ride impressed him so much that Auerbachs Keller became the only real place in his closet drama Faust Part One. As his studies did not progress, Goethe was forced to return to Frankfurt at the close of August 1768. Goethe became ill in Frankfurt. Durin
Callenberg Castle is a castle on a wooded hill in Beiersdorf, an Ortsteil of Coburg, 6 kilometres from the town centre. It was a hunting lodge and summer residence and has long been the principal residence of the House of Saxe-Coburg and Gotha, it is owned by Andreas, Prince of Saxe-Coburg and Gotha who created the Ducal Saxe-Coburg and Gotha House Order. A large and architecturally important family chapel is contained within. According to the Schloss Callenberg web site "the castle became the property of Duke Johann Casimir of Saxe-Coburg in 1588, after the death of the last von Sternberg; until 1825 the ducal treasury and the Castle of Callenberg were property of the Dukes of Saxe-Meiningen. It was only in 1826; until 1945 the castle was the summer residence of the Dukes of Coburg." A hill castle here was first mentioned as Chalwinberch in 1122. It served as the main seat for the Ritter von Callenberg until 1231, when the lord sold it to the Prince-Bishop of Würzburg; the knight made use of the proceeds to participate in a Crusade.
In 1317 the House of Henneberg gave it as a fief to the Sternberg family. This family died out in 1592; as a vacant property, it now fell to Duke Johann Casimir. He intended to use it as a summer palace and planned substantial renovations but during his lifetime only the castle chapel was rebuilt. Major construction work resumed only in 1827 under Ernst I, he had the castle redesigned, a landscape garden was created and an exhibit farm added, in which silkworms were bred. From 1842, Callenberg was the summer residence of the heir and future duke Ernst II. Today's Gothic revival elements date to another renovation after 1857. From 1893, Callenberg served as dowager house for Princess Alexandrine of Baden, the widow of Ernest II; the last ruling duke, Carl Eduard used Callenberg as a summer residence. After his death in 1954 he was buried here. Post World War II, the castle fell into disrepair, it was first used by American troops and served as a nursing home, housed a technical college and a foundation.
From the late 1970s, the castle stood changed owners several times. The chapel features Gothic arches, Doric columns, Italian Renaissance parapets, medieval walls and a Baroque pulpit. Schloss Callenberg is once again owned by the House of Gotha. Due to its history and Gothic revival architecture it is a listed monument. Since 1998 it has displayed the ducal art and furniture collection and since 2004 it has housed the German Rifle Museum; the cemetery, Cemetery Waldfriedhof or Waldfriedhof Beiersdorf, still remains, containing the remains of Charles Edward, Duke of Saxe-Coburg and Gotha, among others. Website of Callenberg Castle Official Website of the Ducal House of Saxe-Coburg and Gotha
The Ancien Régime was the political and social system of the Kingdom of France from the Late Middle Ages until 1789, when hereditary monarchy and the feudal system of French nobility were abolished by the French Revolution. The Ancien Régime was ruled by Bourbon dynasties; the term is used to refer to the similar feudal systems of the time elsewhere in Europe. The administrative and social structures of the Ancien Régime were the result of years of state-building, legislative acts, internal conflicts, civil wars, but they remained and the Valois Dynasty's attempts at re-establishing control over the scattered political centres of the country were hindered by the Huguenot Wars. Much of the reigns of Henry IV and Louis XIII and the early years of Louis XIV were focused on administrative centralization. Despite, the notion of "absolute monarchy" and the efforts by the kings to create a centralized state, the Kingdom of France retained its irregularities: authority overlapped and nobles struggled to retain autonomy.
The need for centralization in this period was directly linked to the question of royal finances and the ability to wage war. The internal conflicts and dynastic crises of the 16th and 17th centuries and the territorial expansion of France in the 17th century demanded great sums which needed to be raised through taxes, such as the land tax and the tax on salt and by contributions of men and service from the nobility. One key to this centralization was the replacing of personal patronage systems organized around the king and other nobles by institutional systems around the state; the creation of intendants—representatives of royal power in the provinces—did much to undermine local control by regional nobles. The same was true of the greater reliance shown by the royal court on the noblesse de robe as judges and royal counselors; the creation of regional parlements had the same goal of facilitating the introduction of royal power into newly assimilated territories, but as the parlements gained in self-assurance, they began to be sources of disunity.
The term in French means "old regime" or "former regime". However, most English language books use the French term Ancien Régime; the term first appeared in print in English in 1794, was pejorative in nature. It conjured up a society so encrusted with anachronisms that only a shock of great violence could free the living organism within. Institutionally torpid, economically immobile, culturally atrophied and stratified, this'old regime' was incapable of self-modernization."More ancien régime refers to any political and social system having the principal features of the French Ancien Régime. Europe's other anciens régimes had diverse fates; the Nine Years' War was a major conflict between France and a European-wide coalition of Austria and the Holy Roman Empire, the Dutch Republic, Spain and Savoy. It was fought on the European continent and the surrounding seas, in Ireland, North America, India, it was the first global war. Louis XIV had emerged from the Franco-Dutch War in 1678 as the most powerful monarch in Europe, an absolute ruler who had won numerous military victories.
Using a combination of aggression and quasilegal means, Louis XIV set about extending his gains to stabilize and strengthen France's frontiers, culminating in the brief War of the Reunions. The resulting Truce of Ratisbon guaranteed France's new borders for 20 years, but Louis XIV's subsequent actions – notably his revocation of the Edict of Nantes in 1685 – led to the deterioration of his military and political dominance. Louis XIV's decision to cross the Rhine in September 1688 was designed to extend his influence and pressure the Holy Roman Empire into accepting his territorial and dynastic claims, but when Leopold I and the German princes resolved to resist, when the States General and William III brought the Dutch and the English into the war against France, the French King at last faced a powerful coalition aimed at curtailing his ambitions; the main fighting took place around France's borders, in the Spanish Netherlands, the Rhineland, Duchy of Savoy, Catalonia. The fighting favoured Louis XIV's armies, but by 1696, his country was in the grip of an economic crisis.
The Maritime Powers were financially exhausted, when Savoy defected from the alliance, all parties were keen for a negotiated settlement. By the terms of the Treaty of Ryswick, Louis XIV retained the whole of Alsace, but he was forced to return Lorraine to its ruler and give up any gains on the right bank of the Rhine. Louis XIV accepted William III as the rightful King of England, while the Dutch acquired their barrier fortress system in the Spanish Netherlands to help secure their own borders. However, with the ailing and childless Charles II of Spain approaching his end, a new conflict over the inheritance of the Spanish Empire would soon embroil Louis XIV and the Grand Alliance in a final war – the War of the Spanish Succession. Spain had a number of major assets, apart from its homeland itself, it controlled important territory in the New World. S
Abraham Roentgen was a German Ébéniste. Roentgen was born in Germany, he learned cabinet making from his father. At age 20, he traveled to Den Haag and Amsterdam, learning from established cabinet makers, he became known for his marquetry work, worked in London until 1738. On 18 April 1739, he married Susanne Marie Bausch from Herrnhut, his son, David Roentgen, was born on 11 August 1743. In 1753 they migrated to the Moravian settlement at Neuwied, near Coblenz, where he established a furniture manufactory. Upon his retirement in 1772 his son David established his own reputation. Abraham Roentgen died in Herrnhut in Saxony Germany in 1793. Koeppe, Wolfram. "Abraham and David Roentgen". In Heilbrunn Timeline of Art History. New York: The Metropolitan Museum of Art, 2000–. Biography at the Getty museum Claus Bernet. "Abraham Roentgen". In Bautz, Traugott. Biographisch-Bibliographisches Kirchenlexikon. 29. Nordhausen: Bautz. Cols. 1177–1181. ISBN 978-3-88309-452-6. Manuel Mayer: Die Verwirklichung eines Möbels.
Der Schreibsekretär von Abraham Roentgen in der Residenz zu Würzburg, in: Mainfränkisches Jahrbuch für Kunst und Geschichte, Bd. 70, Archiv des Historischen Vereins für Unterfranken und Aschaffenburg, Bd. 141, Würzburg 2018, ISBN 978-3-88778-555-0, S. 239-259. Wolfram Koeppe: Extravagant Inventions; the Princely Furniture of the Roentgens, Exhibition catalogue, Metropolitan Museum of Art, New York 2012. Heinrich Kreisel: Möbel von Abraham Roentgen, in: Wohnkunst und Hausrat, einst und jetzt, Bd. 5, Darmstadt, o. J. Claus Bernet: Abraham Roentgen. In: Biographisch-Bibliographisches Kirchenlexikon. Band 29, Nordhausen 2008, ISBN 978-3-88309-452-6, Sp. 1177–1181. Andreas Büttner, Ursula Weber-Woelk, Bernd Willscheid: Edle Möbel für höchste Kreise - Roentgens Meisterwerke für Europas Höfe. Katalog des Roentgen-Museums Neuwied, Neuwied 2007, ISBN 3-9809797-5-X. Andreas Büttner: Roentgen. Möbelkunst der Extraklasse, hrsg. von der Stadt Neuwied. Kehrein, Neuwied 2007, ISBN 978-3-934125-09-4. Melanie Doderer-Winkler: Abraham und David Roentgen, in: Rheinische Lebensbilder, Bd.
17, hrsg. von Franz-Josef Heyen, Köln 1997, S. 57–78. Dietrich Fabian: Abraham und David Roentgen. Von der Schreinerwerkstatt zur Kunstmöbel-Manufaktur, Bad Neustadt an der Saale 1992, ISBN 3-922923-87-9. Detlev Richter, Bernd Willscheid: Reinheit, Feuer & Glanz - Stobwasser und Roentgen. Kunsthandwerk von Weltrang, Katalog des Roentgen-Museums Neuwied, Neuwied 2013, ISBN 978-3-9814662-5-6. Peter Prange: Roentgen, Abraham. In: Neue Deutsche Biographie. Band 21, Duncker & Humblot, Berlin 2003, ISBN 3-428-11202-4, S. 730 f.. Wolfgang Thillmann, Bernd Willscheid: Möbeldesign - Roentgen, Thonet und die Moderne, Katalog des Roentgen-Museums Neuwied, Neuwied 2011, ISBN 978-3-9809797-9-5
Pietra dura or pietre dure, called parchin kari or parchinkari in the Indian Subcontinent, is a term for the inlay technique of using cut and fitted polished colored stones to create images. It is considered a decorative art; the stonework, after the work is assembled loosely, is glued stone-by-stone to a substrate after having been "sliced and cut in different shape sections. Stability was achieved by grooving the undersides of the stones so that they interlocked, rather like a jigsaw puzzle, with everything held tautly in place by an encircling'frame'. Many different colored stones marbles, were used, along with semiprecious, precious stones, it first appeared in Rome in the 16th century. Pietra dura items are crafted on green, white or black marble base stones; the resulting panel is flat, but some examples where the image is in low relief were made, taking the work more into the area of hardstone carving. Pietre dure is an Italian plural meaning hardstones. In Italian, but not in English, the term embraces all gem engraving and hardstone carving, the artistic carving of three-dimensional objects in semi-precious stone from a single piece, for example in Chinese jade.
The traditional convention in English has been to use the singular pietra dura just to denote multi-colored inlay work. However, in recent years there has been a trend to use pietre dure as a term for the same thing, but not for all of the techniques it covers, in Italian, but the title of a 2008 exhibition at the Metropolitan Museum of Art, New York, Art of the Royal Court: Treasures in Pietre Dure from the Palaces of Europe used the full Italian sense of the term because they thought that it had greater brand recognition. The material on the website speaks of objects such as a vase in lapis lazuli as being examples of "hardstone carving" The Victoria & Albert Museum in London uses both versions on its website, but uses pietra dura in its "Glossary", evidently not consulted by the author of another page, where the reader is told: "Pietre dure is made from finely sliced coloured stones matched, to create a pictorial scene or regular design"; the English term "Florentine mosaic" is sometimes encountered developed by the tourist industry.
Giovanni Montelatici was an Italian Florentine artist whose brilliant work has been distributed across the world by tourists and collectors. It is distinct from mosaic in that the component stones are much larger and cut to a shape suiting their place in the image, not all of equal size and shape as in mosaic. In pietra dura, the stones are not cemented together with grout, works in pietra dura are portable. Nor should it be confused with micromosaics, a form of mosaic using small tesserae of the same size to create images rather than decorative patterns, for Byzantine icons, for panels for setting into furniture and the like. For fixed inlay work on walls and pavements that do not meet the definition for mosaic, the terms intarsia or cosmati work/cosmatesque are better used. For works that use larger pieces of stone, opus sectile may be used. Pietre dure is stone marquetry; as a high expression of lapidary art, it is related to the jeweller's art. It can be seen as a branch of sculpture as three-dimensionality can be achieved, as with a bas relief.
Pietra dura developed from the ancient Roman opus sectile, which at least in terms of surviving examples, was architectural, used on floors and walls, with both geometric and figurative designs. In the Middle Ages cosmatesque floors and small columns etc. on tombs and altars continued to use inlays of different colours in geometric patterns. Byzantine art continued with inlaid floors, but produced some small religious figures in hardstone inlays, for example in the Pala d'Oro in San Marco, Venice. In the Italian Renaissance this technique again was used for images; the Florentines, who most developed the form, regarded it as'painting in stone'. As it developed in Florence, the technique was called opere di commessi. Medici Grand Duke Ferdinando I of Tuscany founded the Galleria di'Lavori in 1588, now the Opificio delle pietre dure, for the purpose of developing this and other decorative forms. A multitude of varied objects were created. Table tops were prized, these tend to be the largest specimens.
Smaller items in the form of medallions, wall plaques, panels inserted into doors or onto cabinets, jardinieres, garden ornaments, benches, etc. are all found. A popular form was to copy an existing painting of a human figure, as illustrated by the image of Pope Clement VIII, above. Examples are found in many museums; the medium was transported to other European centers of court art and remained popular into the 19th century. In particular, Naples became a noted center of the craft. By the 20th century, the medium was in decline, in part by the assault of modernism, the craft had been reduced to restoration work. In recent decades, the form has been revived, receives state-funded sponsorship. Modern examples range from tourist-oriented kitsch incl | <urn:uuid:023c52fc-ed71-4866-800d-949a147594b3> | CC-MAIN-2019-47 | https://wikivisually.com/wiki/David_Roentgen | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496664752.70/warc/CC-MAIN-20191112051214-20191112075214-00541.warc.gz | en | 0.943124 | 6,133 | 2.90625 | 3 |
The Second Wave
“Any woman who chooses to behave like a full human being should be warned that the armies of the status quo will treat her as something of a dirty joke. That’s their natural and first weapon. She will need her sisterhood.” – Gloria Steinem
Naming the Problem, Starting the Wave
After the initial political gains and eventual backlash of the 1920s, the Great Depression put a damper on most unified feminist action; “out of necessity,” historian Rory Dicker clarifies, “most American women turned away from women’s rights activism and devoted their energies to physical survival” (Dicker). With the outbreak of WWII in the 1940s, many women entered the workforce to fill the vacancies left by men off to war—at the height of the war effort, 36 percent of adult working-age women worked outside the home (Dicker). But in the time of relative peace that followed, middle-class white women in particular began returning to primarily domestic roles, leaving the workforce in order to raise increasingly large, young families. Because this shift to domesticity was common, so too was a reaction of discontent. Many women were frustrated, though unsure of why, as they were living prosperous lives in what was supposedly “domestic bliss.” A naming of this problem—and a reignition of the women’s rights movement—came in the form of a bestselling book from a woman who noticed this problem in many women she knew well.
The Feminine Mystique
Betty Friedan’s 1963 book, The Feminine Mystique, helped give shape to “the problem that has no name,” or the “housewife’s discontent.” After working on an article profiling the women she’d graduated from Smith College with in 1942, she realized many of her peers were dissatisfied with how their lives had turned out. She decided to pursue this trend in her book, where she clarified that this problem was “not a result of personal inadequacy or psychological weakness, but was caused by the cultural ideology of the feminine mystique, the belief that women should derive fulfillment exclusively through domesticity” (Dicker). These educated women felt under-stimulated; though The Feminine Mystique did not offer a solution, it helped explain the problem’s sources and raise awareness.
After writing her bestselling book, Betty Friedan put her organizing and writing skills to good use by helping found the National Organization for Women (NOW) in 1966. Friedan helped draft the founding documents, including the organization’s original statement of purpose: “The purpose of NOW is to take action to bring women into full participation in the mainstream of American society now, exercising all the privileges and responsibilities thereof in truly equal partnership with men.” NOW was established out of frustration with the Equal Employment Opportunity Commission’s (EEOC) refusal to enforce Title VII’s prohibition of sex discrimination in the workplace—it was also the first women’s rights organization founded post-suffrage (Dicker 72). Friedan was the group’s first president, and the organization became the home of liberal feminism. Because of its EEOC-focused roots, the organization concentrated many of its efforts on the rights of “ordinary working women” (Dicker 73). While many members were fierce supporters of the Equal Rights Amendment (ERA), some worried the language of the amendment would actually diminish protections for women workers. In 1967, NOW became the first national organization to endorse the legalization of abortion, but this caused some internal strife as well; conflicts within NOW’s ranks led to the establishment of some more radical feminist organizations (Dicker 74-74). NOW has continued to grow over the years; the group is “the largest organization of feminist grassroots activists in the United States”.
Title VII of the Civil Rights Act of 1964
The fight for women’s rights was intrinsically tied to the general 1960s fight for civil rights; Title VII of the Civil Rights Act of 1964 and the establishment of the Equal Employment Opportunity Commission (EEOC) were crucial to ensuring rights for women on the job. Under these pieces of legislation, women could not legally be discriminated against at work. Title VII’s ban on sex discrimination was actually introduced by a pro-segregation congressman opposed to the passage of the Civil Rights Act—he assumed a provision against sex-based discrimination would derail the bill as a whole. In this case, the inclusion of gendered language was intended as a distraction, but it ended up mobilizing Congresswoman Martha Griffiths of Michigan to recruit a coalition of fellow members of Congress to ensure the bill passed (Dicker). The EEOC was created to help enforce Title VII but quickly proved to be an ineffective commission, which led to the founding of NOW.
As some feminists grew increasingly radical, they questioned liberal feminist organizations like NOW. Feminism was founded on liberal principles—liberal feminism reinforces notions of equality between men and women, “emphasizing the similarities between them and arguing that women can be as capable and as rational as men.” Radical feminists believed that liberal groups were acting too conservatively in their fight for women’s rights, and radical feminism’s “strategies and ideology call for the change and reconstruction of society.” In other words, when it comes to solutions for sexism, radical feminism calls not only for thinking outside the box, but for rethinking the box itself. Typically, its focus is on the influence of patriarchy and how to combat the lingering effects of male domination. In the late 1960s, radical feminists initially emerged as a part of the New Left but quickly found that they were not being treated as equals within the movement. At one conference run by New Left organizations, radical feminist leaders proposed a resolution that never made it to the discussion table. In response to their resolution, the meeting chair said, “Move on, little girl; we have more important issues to talk about here than women’s liberation” (Dicker). That language—women’s liberation—stuck; soon, the women who had proposed the initial resolution started an organization called the West Side Group and began publishing a newsletter entitled Voice of the Women’s Liberation Movement.
The New York Radical Women
Women’s liberation groups began popping up everywhere, and the New York Radical Women (NYRW) became one of the most influential. The NYRW considered advocating for peace in Vietnam an integral part of the feminist cause; within the peace movement, however, these women stood out. At one protest at Arlington National Cemetery, “about thirty members of the NYRW carried a papier-mâché coffin emblazoned with a streamer reading THE BURIAL OF TRADITIONAL WOMANHOOD” (Dicker 79). At this demonstration, the women gave a “Funeral Oration for the Burial of Traditional Womanhood.” The oration described the cause of death as follows: “the old hen, it turns out, was somewhat disturbed to hear us—other women, that is—asserting ourselves just this least little bit about critical problems in the world controlled by men. And it was particularly frightening to her to see other women, we women, asserting ourselves together, however precariously, in some kind of solidarity, instead of completely resenting each other, being embarrassed by each other, hating each other and hating ourselves.” This solidarity began to be described as a sisterhood, a term which became crucial to the language of this movement. Each year, NYRW published Notes from the Year in a newsletter they called Women’s Liberation.
Women’s Liberation, Consciousness-Raising, and “The Personal Is Political”
Consciousness-raising (or CR, as it’s often called) was a hallmark of the women’s liberation movement. The practice was initially outlined in Notes from the Second Year by NYRW leader Kathie Sarachild. In a typical CR session, “a small group of women, anywhere from a handful to a dozen, would gather and respond to a particular question” (Dicker). These questions could be about anything having to do with women’s particular experiences, including experiences with sex and sexual orientation, gender role stereotyping, body image, dating and marriage, and beauty standards. The idea behind CR was that as women shared their experiences, patterns in problems would begin to emerge. From these patterns, many women realized that problems they’d privately experienced were in fact shared by many other women. This discovery helped validate women’s experiences and elevated their personal problems to political problems that could be talked about and solved publicly. As Carol Hanisch explained in her landmark piece “The Personal Is Political,” which was published in that same Notes from the Second Year, “There are no personal solutions at this time. There is only collective action for a collective solution” (Dicker). By validating women’s voices and helping women realize the role sexism played in their everyday lives, CR hoped to empower its participants and promote the feminist movement more widely.
CR and Our Bodies, Ourselves
One downside to CR sessions is that they required women to be in the same place at the same time, often at meetings or conventions. This meant that women who could not attend due to time constraints, distance, or other limitations were often not included and could not reap the benefits of these conversations, which was especially unfortunate when the issues discussed were as important as their bodies and health. At one session called “Women and Their Bodies,” which took place at a conference in Boston in 1969, the organizer had women discuss pregnancy and birth, abortion, contraception, the female orgasm, and the various ways women were patronized in medical settings. They discovered that women were not always informed about their own bodies and that medical professionals had not helped this problem. After the success of this session, many of the women involved decided to research and write papers on the topics discussed, which they later shared with their groups (Dicker). This was a way of compiling and sharing information about women’s bodies without needing to ask a doctor or someone else. Inspired to share this information more widely, they revised their papers and created a collaborative book to help reach women who could not attend their sessions or classes. They initially printed 5,000 copies of the book, titled at that point for the conference session, but quickly sold out—eventually it grew so popular that the original writers established a nonprofit and signed a deal with Simon & Schuster to publish their expanded book, Our Bodies, Ourselves, all across the nation. The book is still published today and is updated every four to six years; as of Fall 2017, the text has been translated into 31 languages.
PHOTO (LEFT) VIA OUR BODIES OURSELVES
Miss America Protest and the “Bra Burners”
In what is often considered the first mass action of the second wave, around 200 NYRW members staged a protest outside the 1968 Miss America Pageant in Atlantic City, New Jersey, in order to protest “an image that has oppressed women” (Rosen). These women were tired of the forced beauty standards put on women, and they were fed up with a system that made money off of socially constructed ideals. To these women, the Miss America Pageant represented all of the things women should reject and fight against. Protestors snuck into the hall and unfurled a large banner that read, in all caps, WOMEN’S LIBERATION, an action that caught the attention of the TV cameras inside. Outside, the women mocked the title of Miss America by crowning a sheep while also handing out pamphlets and carrying signs which read “NO MORE MISS AMERICA,” “THE REAL MISS AMERICA LIVES IN HARLEM,” “IF YOU WANT MEAT, GO TO A BUTCHER,” and “CAN MAKEUP COVER THE WOUNDS OF OUR OPPRESSION?” (Dicker).
There was also a “Freedom Trash Can,” in which protesters were to throw items they saw as oppressive “objects of female torture.” These items included: cosmetics, girdles, issues of Cosmopolitan, Playboy, and Ladies’ Home Journal, wigs, fake eyelashes, makeup, high-heels, and of course bras. Due to safety reasons, the items could not be burned, but they were still thrown into the trash with disdain. Despite not being able to actually burn anything, the media latched onto the idea of “bra burning” feminists, an image that is still evoked today. After the protest, many of the women participants noted that they had not made it clear that they were protesting the pageant and not the contestants (Rosen). While the media covered this protest negatively, “this publicity spurred more women to join NYRW and other women’s liberation groups around the country” (Dicker).
Redstockings Speak-Out, “Abortion: Tell It Like It Is”
Building on the popularity of CR, a women’s rights group called Redstockings decided to use this rhetorical practice for political advocacy. After picketing at a hearing on abortion law in New York, they decided to continue using their voices in a “speak out,” organized in 1969. Like CR, the speak out was built on a theory of collective rhetoric, which means it helped create “novel public vocabularies as the product of the collective articulation of multiple, overlapping individual experiences” (Dubriwny). In creating a public vocabulary built of many overlapping experiences, they hoped to be both inclusive and persuasive. This persuasion, however, was not about simply making an argument that is convincing, but “the creation of situations in which the telling of individual experiences makes possible a reframing of one’s own experiences” (Dubriwny). Through hearing the experiences of other women, it became easier for women to rethink and reframe what they’d gone through and validate their own experiences, often remembering new things and building on what others shared. This abortion “speak out” featured a dozen women who were willing to share their personal testimony about having an abortion; the organizers hoped that this would help reframe the conversation and normalize talking about the procedure. As these women spoke openly, they shared freely, often interrupting each other and building on the last person’s experience; the goal was to create empathy and help sway public opinion, and these women used irony, humor, and openness about their personal experiences to help do just so (Dubriwny).
PHOTO (LEFT) VIA REDSTOCKINGS
The first issue of Ms. Magazine hit newsstands in 1972 and became a major player in normalizing discussions of feminism and women’s rights—with its title alone, Ms. introduced and normalized the marital-status-hiding title for the first time. The magazine was created by journalist Gloria Steinem and editor Patricia Carbine, and it was the first feminism-focused magazine to be published in the mainstream market; prior to Ms., most magazines for feminists were distributed in limited quantities while most magazines for women were focused on beauty, fashion, and keeping a home (Dicker). The magazine tended to be aligned with liberal feminism. With articles like “Raising Kids Without Sex Roles,” “Women Tell the Truth About Their Abortions,” and “Lesbian Love and Sexuality,” Ms. featured content that could not be found elsewhere, at least not widely. This access meant that women from all across the nation could experience what one Ms. writer, Jane O’Reilly, called the “click,” or the “moment of awareness that women feel when they suddenly realize the sexist assumptions permeating their everyday lives” (Dicker). The magazine was an instant success and sold three hundred thousand copies in only eight days; Ms. continues to be published today, over forty-five years later.
PHOTO VIA MS.
Jo Freeman’s Feminist Button Collection
In 1974, prominent feminist activist and writer Jo Freeman published a piece in Ms. entitled “Say It With Buttons,” where she discussed her ever-increasing collection of feminism-focused buttons. At the time of writing, she purported to have the world’s largest collection of feminist buttons.
But she was not satisfied with stopping there: “You might think that with so many, I would feel secure. No way. Every collection I see, no matter how small, usually contains at least one button I don’t own—and spasms of unfulfilled desire surge through my bloodstream.” Freeman understood that the buttons themselves had no intrinsic meaning: “They are made to be given away in order to be worn by the greatest number of people,” she explained. “Thus, if you talk someone into a good (for you) trade, or lift a few buttons from the opposition campaign headquarters under false pretenses, you’re not depriving anyone of something essential for their existence.” Buttons, in other words, are cheap, plentiful, and made to be spread around and shared.
But despite the ubiquity of buttons, Freeman suggested they were one of the best ways to trace feminism histories. She argued that “buttons reflect the Movement’s history and development with greater consistency than its political tracts.” She describes wearing buttons saying “UPPITY WOMEN UNITE,” “I AM A CASTRATING BITCH,” and “SISTERHOOD IS POWERFUL,” as well as buttons more specific to certain demonstrations, such as the second Miss America Pageant. At that event, women wore a new button that “caught the popular imagination.” This button, produced by Robin Morgan, depicted, “a clenched fist inside the biological female symbol,” in the color “menstrual red.” She describes buttons as not only helpful as “mini-billboards,” but also as tools for fundraising.
Equal Rights Amendment Protests & the Alice Paul Memorial March
After women were given the right to vote, suffragist and feminist activist Alice Paul penned the Equal Rights Amendment, which read: “Equality of rights under the law shall not be denied or abridged by the United States or by any state on account of sex.” Alice Paul spent the rest of her life fighting for the passage of this amendment, and its ratification became a major goal for feminists in the 1970s. Women staged rallies, wore buttons, and pressured their elected officials to help the amendment pass. In the early 70s, “the approval of the ERA seemed a certainty” (Dicker). However, due to pressure from anti-ERA conservatives, it was never ratified, despite the fact that the deadline for ratification had been extended to 1982. After Alice Paul died in 1977, ERA-supporting feminists staged a memorial march in honor of her life and in hopes of drumming up support for the amendment. As attendee Jo Freeman put it, “To replicate the drama and spectacle of the Suffrage parades, the NWP asked everyone to wear traditional white. A young woman with a horse was found to replicate Inez Milholland’s role, but the horse was a chestnut. Many people also wore sashes of purple, white and gold, colors chosen by the NWP to symbolize Woman Suffrage. The NWP explained that purple stood for the “Royal glory of Womanhood”; white for “Purity in the home and in Politics”; and gold for the “Crown of Victory”. Some people carried original Suffrage banners; some carried new ones made for this march which identified their organizations and/or support for the ERA.”
Shifts and Splits in the Movement: Whispers of the Third Wave
Second wave feminism is often criticized for not prominently including women of color and women of varied sexual orientations. There were, however, definitely feminist women of color active before the third wave, and many of these women, like Audre Lorde, “critiqued white feminists for their narrow vision” (Dicker). As Lorde saw it, many white feminist spaces (especially academic spaces) were arrogant for not “examining our many differences” and for not asking for the input of “poor women, black and third-world women, and lesbians.” As a response to narrow feminism, some women of color “claimed a new kind of feminism for themselves” (Dicker). One organization, Kitchen Table/Women of Color Press, became the first publishing house devoted solely to the voices of women of color. The Combahee River Collective was another prominent organization that focused on the interactions of race, gender, and class. Though the term “intersectionality” was not introduced until the Third Wave, these groups run by women of color were the first to assert the importance of paying attention to the ways that oppressions intersect (Dicker).
About the Project
© Poetics of Protest | 2018
Website design by Rachel Busse
- Research article
- Open Access
- Open Peer Review
Risk factors of acute respiratory infections among under five children attending public hospitals in southern Tigray, Ethiopia, 2016/2017
BMC Pediatrics volume 19, Article number: 380 (2019)
Acute respiratory infection accounts for 94,037,000 disability-adjusted life years and 1.9 million deaths worldwide. Acute respiratory infections are among the most common causes of under-five illness and mortality. Children under five experience three to six episodes of acute respiratory infection annually, regardless of where they live. The disease burden due to acute respiratory infection is 10–50 times higher in developing countries than in developed countries. The aim of this study was to assess risk factors of acute respiratory infection among under-five children attending public hospitals in Southern Tigray, Ethiopia, in 2016/2017.
An institution-based case-control study was conducted from November 2016 to June 2017. An interviewer-administered structured questionnaire was used to collect data from a sample of 288 children under 5 years of age (96 cases and 192 controls). Systematic random sampling was used to recruit study subjects, and SPSS version 20 was used to analyze the data. Bivariate and multivariate analyses were employed to examine the statistical association between the outcome variable and selected independent variables at the 95% confidence level. Statistical significance was declared at p < 0.05. Tables, figures, and text were used to present the data.
One hundred sixty (55.6%) and 128 (44.4%) of the participants were males and females, respectively. Malnutrition (AOR = 2.89; 95% CI: 1.584–8.951; p = 0.039), cow dung use (AOR = 2.21; 95% CI: 1.121–9.373; p = 0.014), presence of a smoker in the family (AOR = 0.638; 95% CI: 0.046–0.980; p = 0.042), and maternal literacy (AOR = 3.098; 95% CI: 1.387–18.729; p = 0.021) were found to be significant predictors of acute respiratory infection among under-five children.
According to this study maternal literacy, smoking, cow dung use and nutritional status were strongly associated with increased risk of childhood acute respiratory infection. Health care providers should work jointly with the general public, so that scientific knowledge and guidelines for adopting particular preventive measures for acute respiratory infection are disseminated.
Acute respiratory infection (ARI) accounts for an average of 94,037,000 disability-adjusted life years (DALYs) and 1.9 million deaths throughout the world. The disease is among the most common causes of both illness and mortality in children aged below 5 years [1, 2]. Acute respiratory infection contributes 2 to 4% of deaths in children less than 5 years of age in developed countries. These causes contribute 19 to 21% of child deaths in the eastern Mediterranean, Africa, and South East Asia regions. Although the frequency of ARI is similar in both developed and developing countries, mortality due to ARI is 10–50 times higher in developing countries.
In countries with high pediatric population, one fourth of all pediatric hospital admissions are mainly due to ARI. Each year, 3% of all children less than 12 months of age need to be admitted for moderate or severe lower respiratory tract infections .
Ethiopia has made investments to reduce the morbidly and mortality of ARI. Integrated management of common childhood illness and community case management are among the programme initiatives scaled up nationally to address ARI in the country .
There are many socio-cultural, demographic, and environmental risk factors that predispose children less than 5 years old to acquire Respiratory Tract Infections (RTIs). Even though many of these risk factors are preventable, they have not been documented in many regions of Ethiopia, making it difficult to develop algorithms for the management of this group of patients.
Considering the feasibility of the study design and the dynamic nature of the pediatric population a case control study design was employed aimed at determining the associated risk factors of ARI amongst children under 5 years of age who attend the southern Tigray Public Hospitals.
Since the pediatric population is a dynamic population and difficult to follow-up, an institutional based unmatched case control study design was employed to collect data on under five children’s risk factors of acute respiratory infection.
Source population and study population
The source population was all children less than 5 years of age in Southern zone of Tigray coming to public Hospitals. The study population was all sampled children of less than 5 years of age attending in the five public Hospitals during the data collection period.
Inclusion criteria were children under 5 years of age who were diagnosed with ARI during the data collection period and whose mothers accepted to provide informed consent for their children. Exclusion criteria were children whose mothers or caretakers refused to participate in the study.
Selection of cases
The data collectors identified children who were diagnosed with ARI by the physician in the outpatient clinic. The data collectors then selected the study subjects by systematic random sampling method (an interval of 2 was used to get the actual study participants). Following this selection, after spoken informed consent was given participants were included in the study.
Selection of controls
The study data collectors selected the controls on meeting the definition of controls. The recruitment of the controls was done as for the cases as outlined in the above procedure.
The dependent variable was acute respiratory infection. The independent variables were parental socio-demographic factors, child physiological/nutritional factors, and environmental characteristics.
The conceptual framework of this study illustrates acute respiratory infection and its risk factors. As depicted in Fig. 1, the conceptual framework was developed for this research after reviewing the relevant literature.
Sample size determination
Sample size was calculated using the Epi Info 7.0 StatCalc program, taking assumptions of a 95% confidence level, two controls for each case, 80% power, and 18.3% of controls having wasting syndrome with an OR of 2.42, giving a total sample of 261 (87 cases and 174 controls). Adding a 10% non-response rate, the final sample size was found to be 288 (96 cases and 192 controls). Wasting was selected because it was the exposure variable that gave the largest sample size for cases and controls among the variables considered in a study conducted in Kenya.
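For readers who want to retrace the calculation, the sketch below reproduces an unmatched case-control sample size from the stated assumptions in Python. It uses a Kelsey-style formula; Epi Info StatCalc offers several formula variants (Kelsey, Fleiss, Fleiss with continuity correction), so the output may not match the published 87 cases and 174 controls exactly.

```python
# Sketch: unmatched case-control sample size from the stated assumptions.
from math import ceil
from scipy.stats import norm

alpha, power = 0.05, 0.80
r = 2                       # controls per case
p0 = 0.183                  # exposure (wasting) among controls
odds_ratio = 2.42

# Exposure proportion among cases implied by the odds ratio
p1 = odds_ratio * p0 / (1 + p0 * (odds_ratio - 1))
p_bar = (p1 + r * p0) / (1 + r)

z_a = norm.ppf(1 - alpha / 2)   # two-sided 95% confidence
z_b = norm.ppf(power)           # 80% power

n_cases = ((z_a + z_b) ** 2 * p_bar * (1 - p_bar) * (r + 1)) / (r * (p1 - p0) ** 2)
n_cases = ceil(n_cases)
n_controls = r * n_cases

# 10% allowance for non-response, as used in the study
total = ceil(1.10 * (n_cases + n_controls))
print(n_cases, n_controls, total)
```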
All the five public hospitals in the zone were included in the study. As a marker for proportional sample size allocation for the hospitals, client flow of three consecutive previous months prior to the data collection period was observed. Systematic random sampling was used to recruit study subjects (Fig. 2).
Data collection tools
An interviewer-administered structured questionnaire was used to collect data on risk factors of acute respiratory infection among under-five children attending the five public hospitals. The questionnaire was adapted from previous studies and modified accordingly; it was first developed in English, translated into the local Tigrigna language, and then translated back to English to check consistency. The data collection tool is included as Additional file 1.
Data collection process
Seven individuals who had completed a BSc in nursing from a recognized university were recruited (five of them for data collection and two of them for supervision), and each hospital's chief executive officer was met and asked for permission. Data collection was carried out over a total of 8 months, from November 2016 to June 2017.
Acute Respiratory Infections (ARI) in children: children with any one or a combination of symptoms and signs such as cough, sore throat, rapid breathing, noisy breathing, or chest in-drawing at any time in the last 2 weeks.
Cases: children less than 5 years of age diagnosed with ARI in the hospitals and those referred from other health facilities with the diagnosis of ARI.
Controls: children who visit the hospitals for diagnoses other than ARI.
Wasting: refers to low weight-for-height, where a child is thin for his/her height but not necessarily short.
Data quality control and assurance management
The data collectors were trained for 1 day, and the supervisors visited the data collectors once a day to check that they were collecting the data appropriately. A pretest was carried out on 10% of the sample in two health centers of the zone that were not included in the actual data collection, 2 weeks before the actual data collection, and the questions were revised based on the responses obtained so that questions that created ambiguity were rephrased.
Data analysis procedure
The data were first recorded and cleaned, then analyzed using the SPSS version 20 statistical software package, which was also used to handle missing values. Frequencies and proportions were used to describe the study population in relation to relevant variables. Binary logistic regression was computed to assess statistical association via odds ratios, and significance was tested using 95% confidence intervals and a p-value threshold of 0.05. Bivariate and multivariate analyses were employed to examine the association between the outcome variable and selected independent variables. Variables that were significant at p < 0.05 in the bivariate analysis were entered into the multivariate analysis to control for possible confounders. Results were presented using tables, figures, and text.
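As an illustration of this workflow (bivariate screening at p < 0.05 followed by a multivariable model reporting adjusted odds ratios), a rough Python sketch is shown below. A toy data frame stands in for the real dataset, and the outcome column `ari` and predictor names are hypothetical placeholders, not the study's actual variable coding.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Toy stand-in for the real dataset: one row per child, 'ari' = 1 for cases,
# 0 for controls, and candidate exposures coded as 0/1 indicator columns.
candidates = ["maternal_illiterate", "cow_dung_fuel", "smoker_in_house", "wasted"]
rng = np.random.default_rng(0)
df = pd.DataFrame({c: rng.integers(0, 2, 288) for c in candidates})
df["ari"] = rng.integers(0, 2, 288)

def fit_logit(data, predictors):
    X = sm.add_constant(data[predictors])
    return sm.Logit(data["ari"], X).fit(disp=False)

# Bivariate screening: keep variables significant at p < 0.05
kept = [v for v in candidates if fit_logit(df, [v]).pvalues[v] < 0.05]

# Multivariable model on the retained variables (adjusted odds ratios)
final = fit_logit(df, kept)
aor = np.exp(final.params)       # adjusted odds ratios
ci = np.exp(final.conf_int())    # 95% confidence intervals
print(pd.concat([aor, ci, final.pvalues], axis=1))
```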
Ethical clearance was secured from Mekelle University College of health science IRB (research committee).
Socio demographic characteristics of the respondents
A total of 288 (96 cases and 192 controls) under five children were included in the study with a response rate of 100%. The children were aged between 4 and 59 months with median age of 16.5 months (Mean ± SD; 20.8 ± 13.9).
Fifty-seven (62%) of the cases and 100 (51%) of the controls were rural dwellers. About three-fourths of the respondents, 227 (78.8%), were Orthodox in religion. Thirty-six (39.1%) of the mothers of cases and 48 (24.5%) of the mothers of controls were illiterate, and only 4 (2%) of the mothers of controls had completed a college program. Fifty-two (56.5%) of the cases and 108 (55.1%) of the controls were males (Table 1).
Factors associated with acute respiratory infection
Child and parent related factors
Among the variables under this category, maternal literacy, maternal occupation, and household family size demonstrated a significant association with acute respiratory infection of under-five children in the bivariate analysis.
Most of the respondents were illiterate, with 36 (39.1%) of the caretakers of cases unable to read and write, while 59 (30%) of the caretakers of controls had at least secondary education. A significant association was found between maternal literacy and risk of ARI in the bivariate analysis (COR = 2.95; 95% CI: 1.446–6.017; p = 0.04).
As shown in Table 1, over 50% of the homes had between 5 and 7 persons living in the house. A significant association was found between family size and risk of ARI in the bivariate analysis (OR = 0.237; 95% CI: 0.101–0.555; p = 0.02) (Table 1).
Number of siblings, birth order, and nutritional status were found to show a significant association with acute respiratory infection among under-five children in the bivariate analysis.
The highest proportion of children had 3 or more siblings, among them 54 (58.7%) cases and 84 (42.9%) controls. Number of siblings was found to be significantly associated with ARI (p = 0.041). Birth order of the child was also found to be significantly associated with risk of ARI (p = 0.048).
Overall, malnutrition (severe and moderate; MUAC < 12.5 cm) was found to be significantly associated with an increased risk of ARI (COR = 1.51; 95% CI: 1.779–9.296; p = 0.001) in the bivariate analysis (Table 1).
Among the variables of this category, cow dung use and the presence of a smoker in the house showed a significant association with acute respiratory infection of under-five children in the bivariate analysis.
Among the fuel types used, cow dung for cooking was found to be associated with acute respiratory infection in the bivariate analysis (p = 0.002). A significant association was also found between smoking and risk of ARI in the bivariate analysis (OR = 0.139; 95% CI: 0.043–0.444) (Table 2).
Overall factors of acute respiratory infection in children
In the bivariable logistic regression analysis, variables such as maternal literacy, maternal occupation, family size, birth order, number of siblings, presence of a smoker in the house, cow dung use, and wasting appeared to be associated with acute respiratory infection. Those variables that were significant in the bivariate analysis at p < 0.05 were entered into the multivariate analysis to control for possible confounders. In the multivariate analysis, only maternal literacy, cow dung use, and nutritional status were found to be associated with ARI.
Children from houses which used cow dung for their fuel were 2 times more likely to develop ARI (AOR = 2.21; 95% CI: 1.121–9.373; p = 0.014). Similarly, ARI was about 3 times more common among under-five children who were wasted (AOR = 2.89; 95% CI: 1.584–8.951; p = 0.039) (Table 3).
This study found a significant association of malnutrition with ARI. The result contrasts with a case-control study conducted in Kenya, which reports an inverse relationship between ARI and wasting (OR = 2.42). Findings of this study are also comparable with a case-control study conducted in Zimbabwe, which reported that current and past malnutrition were associated with ARI in children under five with an OR of 2.67. An earlier study conducted in Riyadh city also reported that ARI was seen more often in undernourished children (22.2% vs 15.8%; p = 0.001), with an increased incidence of ARI due to worsening nutritional status (p = 0.05). Declining MUAC (p = 0.001) was reported to be associated with ARI, and in the absence of other factors malnutrition alone significantly affected ARI in children under 2 years. One possible explanation for this finding might be the effect of weakened cellular immunity in undernourished children, which makes them more disposed to ARI. Acute respiratory infections usually occur more often, last longer, and are more severe in malnourished children, typically because the mucous membranes and other mechanical structures designed to keep the respiratory tract clear are impaired, and the immune system has not developed properly.
This study also found a noteworthy association of maternal literacy with ARI, but not of the father's literacy. Parker RL  revealed that the risk of ARI declined with parental education. This might be because the father usually remains outside for work most of the time, while the mother is always in the home taking care of the children and household activities. The mother, due to her close association with the child, notices minor variations in the child's health sooner than the father. Because of such factors, the mother's educational status might play a more important role in the child's disease than the father's literacy.
Cow dung use was the other variable found to be associated with ARI in this study. This result is in agreement with the study done by Vinod Mishra et al., who revealed an association of cow dung use with ARI (OR = 2.2). This could be because of the high daily concentrations of pollutants found in such settings and the large amount of time young children spend with their mothers during household cooking.
Limitations of the study
Diagnosis of ARI was based on the clinical WHO IMNCI classification guideline, which could introduce misclassification bias and, in turn, selection bias.
Being an institution-based case-control study, it may have limitations in the generalizability of the findings.
Also, this study selectively addressed certain factors of under-five ARI, while various other factors are known to contribute to the disease.
This study revealed that, maternal literacy, cow dung use, and nutritional status were strongly associated with increased risk of childhood ARI.
Based on the findings in this study, the following are recommended.
Each wereda's Health Office of the zone, in collaboration with the health services in the wereda, ought to prepare plans to implement community-based interventions focused on better food and supplementation (vitamin supplements or fortified milk), which can have significant positive benefits in reducing malnutrition.
Health care providers, in partnership with other stakeholders, should plan to provide health education and promote cooking-fuel options other than cow dung.
Investigators should conduct further studies related to this problem in the area so that all the likely factors can be explored.
The FMOH should give weight to making mothers familiar with their own health and their children's health when designing programs to control childhood diseases.
Generally, it is suggested that policy makers and academicians/health care providers should work together to create a communication platform with the general community, through which scientific knowledge and guidelines for adopting particular preventive measures for ARI can be disseminated. Since community responses to the ARI epidemic are dynamic, continual surveillance of community responses is valuable and would facilitate relevant governmental risk communication and health education efforts.
Availability of data and materials
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Abbreviations
AIDS: Acquired Immune Deficiency Syndrome
AOR: Adjusted Odds Ratio
ARI: Acute respiratory infection
COR: Crude Odds Ratio
DALY: Disability adjusted life years
GBD: Global Burden of Disease
IMCI: Integrated Management of Childhood Illness
LRTI: Lower respiratory tract infection
MOH: Ministry of Health
PHC: Primary Health Care
RSV: Respiratory Syncytial Virus
SARI: Severe Acute Respiratory Infections
UNICEF: United Nations Children's Fund
URTI: Upper Respiratory Tract Infection
USA: United States of America
WHO: World Health Organization
WHO. Acute Respiratory infections in children: case management in small hospitals in developing countries a manual for doctors and other senior health workers (WHO/ARI/905). Geneva: WHO; 1990.
Williams BG, Gouws E, Boschi-Pinto C, Bryce J, Dye C. Estimates of world wide distribution of child deaths from acute respiratory infections. Lancet Infect Dis. 2002;2:25–32.
Emmelin A, Wall S. Indoor air pollution: a poverty-related cause of mortality among the children of the world. Chest. 2007;132:1615–23.
Broor S, Parveen S, Bharaj P, Prasad VS, Srinivasulu KN, Sumanth KM, Kapoor SK, Fowler K, Sullender WM. A prospective three-year cohort study of the epidemiology and virology of acute respiratory infections of children in rural India. PLoS One. 2007;2(6):e491.
van Woensel JBM. Viral lower respiratory tract infection in infants and young children. BMJ. 2003;327:36–40.
Miller NP, Amouzou A, Tafesse M, Hazel E, Legesse H, Degefie T, Victora CG, Black RE, Bryce J. Integrated community case management of childhood illness in Ethiopia: implementation strength and quality of care. Am J Trop Med Hyg. 2014;13:751.
Schluger NW, Koppaka R. Lung disease in a global context. A call for public health action. Ann Am Thorac Soc. 2014;11(3):407–16.
Matu MN. Risk factors and cost of illness for acute respiratory infections in children under five years of age attending selected health facilities in Nakuru County, Kenya: Jomo Kenyatta University of Agriculture and Technology; 2015. http://hdl.handle.net/123456789/1590.
Mishra V, et al. lndoor air pollution from biomass combustion and acute respiratory illness in preschool age children in Zimbabwe. Int J Epidemiol. 2003;32(5):847–53.
Cunha AL. Relationship between acute respiratory infection and malnutrition in children under 5 years of age. Acta Pediatr. 2000;89:608–9.
WHO. World health organization and UNICEF fulfilling the health agenda for the women and children. Countdown to 2015: Maternal, Newborn and Child survival World Health Organization and UNICEF; 2014.
Parker RL. Acute respiratory illness in children: PHC responses. Health Policy Plan. 1987;2:279–88.
I am indebted to extend my earnest thanks to Mr. Kalayou Kidanu and Mr. Tensay Kahsay, my advisors, for their enriching and critical comments and suggestions for the preparation of this thesis. I am also very grateful to Southern Tigray public hospitals which largely helped the realization of the study through providing relevant information related to the study.
Finally, my deepest thanks go to the study participants, data collectors, and supervisors who took part in the study earnestly, and without whom the study would have been largely impossible.
This thesis work is made possible by the support of the American people through the Mekelle University under Agreement No. AID-663-A-11-00017. The contents of this document are the sole responsibility of the author and do not necessarily reflect the views of Mekelle University.
Ethics approval and consent to participate
Ethical clearance was secured from Mekelle University College of health science IRB (research committee). Official letter of permissions was obtained from Tigray Regional Health Bureau and submitted to respective public hospitals’ CEO office and respondents were informed in detail about the purpose of the study. Information was then collected after written consent was obtained from each participant (guardians/parents of the children with ARI). Respondents were allowed to refuse or discontinue participation at any time they want. Information was collected anonymously and confidentiality was assured throughout the study period.
Consent for publication
The authors declare that they have no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
About this article
Cite this article
Alemayehu, S., Kidanu, K., Kahsay, T. et al. Risk factors of acute respiratory infections among under five children attending public hospitals in southern Tigray, Ethiopia, 2016/2017. BMC Pediatr 19, 380 (2019) doi:10.1186/s12887-019-1767-1
- Children under 5 years
- Acute respiratory infections
- Risk factors
Samuel Hearne—"The Mungo Park of
Canada"—Perouse complains —The North-West Passage—Indian
guides—Two failures—Third journey successful—Smokes the
calumet—Discovers Arctic Ocean—Cruelty to the Eskimos—Error in
latitude—Remarkable Indian woman—Capture of Prince of Wales
Fort—Criticism by Umfreville.
Such an agitation as that so skilfully
planned and shrewdly carried on by Arthur Dobbs, Esq., could not
but affect the action of the Hudson's Bay Company. The most
serious charge brought against the Company was that, while having
a monopoly of the trade on Hudson Bay, it had taken no steps to
penetrate the country and develop its resources. It is of course
evident that the Company itself could have no reason for refusing
to open up trade with the interior, for by this means it would be
expanding its operations and increasing its profits. The real
reason for its not doing so seems to have been the inertia, not to
say fear, of Hudson's Bay Company agents on the Bay who failed to
mingle with the bands of Indians in the interior.
Now the man was found who was to be equal to
the occasion. This was Samuel Hearne. Except occasional reference
to him in the minutes of the Company and works of the period, we
know little of Samuel Hearne. He was one of the class of men to
which belonged Norton, Kelsey, and others—men who had grown up in
the service of the Company on the Bay, and had become, in the
course of years, accustomed to the climate, condition of life, and
haunts of the Indians, thus being fitted for active work for the Company.
Hearne became so celebrated in his inland expeditions, that the
credit of the Hudson's Bay Company leaving the coast and venturing
into the interior has always been attached to his name. So
greatly, especially in the English mind, have his explorations
bulked, that the author of a book of travels in Canada about the
beginning of this century called him the "Mungo Park of Canada."
In his "Journey," we have an account of his earlier voyages to the
interior in search of the Coppermine River. This book has a
somewhat notable history. In the four-volume work of La Perouse,
the French navigator, it is stated that when he took Prince of
Wales Fort on the Churchill River in 1782, Hearne, as governor of
the fort, surrendered it to him, and that the manuscript of his
"Journey" was seized by the French commander. It was returned to
Hearne on condition that it should be published, but the
publication did not take place until thirteen years afterwards. It
is somewhat amusing to read in Perouse's preface (1791) the
complaint that Hearne had not kept faith with him in regard to
publishing the journal, and the hope is expressed that this public
statement in reminding him of his promise would have the desired
effect of the journal being published.
Four years afterwards Hearne's "Journey"
appeared. A reference to this fine quarto work, which is well
illustrated, brings us back in the introduction to all the
controversies embodied in the work of Dobbs, Ellis, Robson, and
the "American Traveller."
Hearne's orders were received from the
Hudson's Bay Company, in 1769, to go on a land expedition to the
interior of the continent, from the mouth of the Churchill as far
as 70 deg. N. lat., to smoke the calumet of peace with the
Indians, to take accurate astronomical observations, to go with
guides to the Athabasca country, and thence northward to a river
abounding with copper ore and "animals of the fur kind," &c.
It is very noticeable, also, that his
instructions distinctly tell him "to clear up the point, if
possible, in order to prevent further doubt from arising hereafter
respecting a passage out of Hudson Bay into the Western Ocean, as
hath lately been represented by the 'American Traveller.'" The
instructions made it plain that it was the agitation still
continuing from the days of Dobbs which led to the sending of
Hearne to the north country.
Hearne's first expedition was made during the
last months of the year 1769. It is peculiarly instructive in the
fact that it failed to accomplish anything, as it gives us a
glimpse of the difficulties which no doubt so long prevented the
movement to the interior. In the first place, the bitterly severe
months of November and December were badly chosen for the time of
the expedition. On the sixth day of the former of these months
Hearne left Prince of Wales Fort, taking leave of the Governor,
and being sent off with a salute of seven guns. His guide was an
Indian chief, Chawchinahaw. Hearne ascertained very soon, what
others have found among the Indians, that his guide was not to be
trusted; he "often painted the difficulties in the worst colours"
and took every method to dishearten the explorer. Three weeks
after starting, a number of the Indians deserted Hearne.
Shortly after this mishap, Chawchinahaw and
his company ruthlessly deserted the expedition, and two hundred
miles from the fort set out on another route, "making the woods
ring with their laughter." Meeting other Indians, Hearne purchased
venison, but was cheated, while his Indian guide was feasted. The
explorer remarks:—"A sufficient proof of the singular advantage
which a native of this country has over an Englishman, when at
such a distance from the Company's factories as to depend entirely
on them for subsistence."
Hearne arrived at the fort after an absence
of thirty-seven days, as he says, "to my own mortification and the
no small surprise of the Governor." Hearne was simply illustrating
what has been shown a hundred times since, in all foreign regions,
viz., native peoples are quick to see the inexperience of men raw
to the country, and will heartlessly maltreat and deceive them.
However, British officers and men in all parts of the world become
at length accustomed to dealing with savage peoples, and after
some experience, none have ever equalled British agents and
explorers in the management and direction of such peoples.
Early in the following year Hearne plucked up
courage for another expedition. On this occasion he determined to
take no Europeans, but to trust to Indians alone. On February
23rd, accompanied by five Indians, Hearne started on his second
journey. Following the advice of the Governor, the party took no
Indian women with them, though Hearne states that this was a
mistake, as they were "needed for hauling the baggage as well as
for dressing skins for clothing, pitching our tent, getting
firing, &c." During the first part of the journey deer were
plentiful, and the fish obtained by cutting holes in the ice of
the lakes were excellent.
Hearne spent the time of the necessary delays
caused by the obtaining of fish and game in taking observations,
keeping his journal and chart, and doing his share of trapping.
Meeting, as soon as the spring opened, bands of Indians going on
various errands, the explorer started overland. He carried sixty
pounds of burden, consisting of quadrant, books and papers,
compass, wearing apparel, weapons and presents for the natives.
The traveller often made twenty miles a day over the rugged country.
Meeting a chief of the Northern Indians going
in July to Prince of Wales Fort, Hearne sent by him for ammunition
and supplies. A canoe being now necessary, Hearne purchased this
of the Indians. It was obtained by the exchange of a single knife,
the full value of which did not exceed a penny. In the middle of
this month the party saw bands of musk oxen. A number of these
were killed and their flesh made into pemmican for future use.
Finding it impossible to reach the Coppermine during the season,
Hearne determined to live with the Indians for the winter.
The explorer was a good deal disturbed by
having to give presents to Indians who met him. Some of them
wanted guns, all wanted ammunition, iron-work, and tobacco; many
were solicitous for medicine; and others pressed for different
articles of clothing. He thought the Indians very inconsiderate in
On August 11th the explorer had the
misfortune to lose his quadrant by its being blown open and broken
by the wind. Shortly after this disaster, Hearne was plundered by
a number of Indians who joined him.
He determined to return to the fort.
Suffering from the want of food and clothing, Hearne was overtaken
by a famous chief, Matonabbee, who was going eastward to Prince of
Wales Fort. The chief had lived several years at the fort, and was
one who knew the Coppermine. Matonabbee discussed the reasons of
Hearne's failure in his two expeditions. The forest philosopher
gave as the reason of these failures the misconduct of the guides
and the failure to take any women on the journey. After
maintaining that women were made for labour, and speaking of their
assistance, said Matonabbee, "women, though they do everything,
are maintained at a trifling expense, for as they always stand
cook, the very licking of their fingers in scarce times is
sufficient for their subsistence." Plainly, the northern chief had
need of the ameliorating influence of modern reformers. In company
with the chief, Hearne returned to the fort, reaching it after an
absence of eight months and twenty-two days, having, as he says,
had "a fruitless or at least an unsuccessful Journey."
Hearne, though beaten twice, was determined
to try a third time and win. He recommended the employment of
Matonabbee as a guide of intelligence and experience. Governor
Norton wished to send some of the coast Indians with Hearne, but
the latter refused them, and incurred the ill-will of the
Governor. Hearne's instructions on this third Journey were "in
quest of a North-West Passage, copper-mines, or any other thing
that may be serviceable to the British nation in general, or the
Hudson's Bay Company in particular." The explorer was now
furnished with an Elton's quadrant.
This third Journey was begun on December 7th,
1770. Travelling sometimes for three or four days without food,
they were annoyed, when supplies were secured, by the chief
Matonabbee taking so ill from over-eating that he had to be drawn
upon a sledge. Without more than the usual incidents of Indian
travelling, the party pushed on till a point some 19 deg. west of
Churchill was reached, according to the calculations of the
explorer. It is to be noted, however, that Hearne's observations,
measurements, and maps, do not seem to be at all accurate.
Turning northward, as far as can be now made
out, about the spot where the North-West traders first appeared on
their way to the Churchill River, Hearne went north to his
destination. His Indian guides now formed a large war party from
the resident Indians, to meet the Eskimos of the river to which
they were going and to conquer them.
The explorer announces that having left
behind "all the women, children, dogs, heavy baggage, and other
encumbrances," on June 1st, 1771, they pursued their journey
northward with great speed. On June 21st the sun did not set at
all, which Hearne took to be proof that they had reached the
Arctic Circle. Next day they met the Copper Indians, who welcomed
them on hearing the object of their visit.
Hearne, according to orders, smoked the
calumet of peace with the Copper Indians. These Indians had never
before seen a white man. Hearne was considered a great curiosity.
Pushing on upon their long journey, the explorers reached the
Coppermine River on July 13th. Hearne was the witness of a cruel
massacre of the Eskimos by his Indian allies, and the seizure of
their copper utensils and other provisions, and expresses disgust
at the enormity of the affair. The mouth of the river, which flows
into the Arctic Ocean, was soon reached on July 18th, and the tide
found to rise about fourteen feet.
Hearne seems in the narrative rather
uncertain about the latitude of the mouth of the Coppermine River,
but states that after some consultation with the Indians, he
erected a mark, and took possession of the coast on behalf of the
Hudson's Bay Company.
In Hearne's map, dated July, 1771, and
purporting to be a plan of the Coppermine, the mouth of the river
is about 71 deg. 54' N. This was a great mistake, as the mouth of
the river is somewhere near 68 deg. N. So great a mistake was
certainly unpardonable. Hearne's apology was that after the
breaking of his quadrant on the second expedition, the instrument
which he used was an old Elton's quadrant, which had been knocking
about the Prince of Wales Fort for nearly thirty years.
Having examined the resources of the river
and heard of the mines from which the Copper Indians obtained all
the metal for the manufacture of hatchets, chisels, knives, &c,
Hearne started southward on his return journey on July 18th.
Instead of coming by the direct route, he went with the Indians of
his party to the north side of Lake Athabasca on December 24th.
Having crossed the lake, as illustrating the loneliness of the
region, the party found a woman who had escaped from an Indian
band which had taken her prisoner, and who had not seen a human
face for seven months, and had lived by snaring partridges,
rabbits, and squirrels. Her skill in maintaining herself in lonely
wilds was truly wonderful. She became the wife of one of the
Indians of Hearne's party. In the middle of March, 1772, Hearne
was delivered a letter, brought to him from Prince of Wales Fort
and dated in the preceding June. Pushing eastward, after a number
of adventures, Hearne reached Prince of Wales Fort on June 30th,
1772, having been absent on his third voyage eighteen months and
twenty-three days. Hearne rejoices that he had at length put an
end to the disputes concerning a North-West Passage through Hudson
Bay. The fact, however, that during the nineteenth century this
became again a living question shows that in this he was mistaken.
The perseverance and pluck of Hearne have
impressed all those who have read his narrative. He was plainly
one of the men possessing the subtle power of impressing the
Indian mind. His disasters would have deterred many men from
following up so difficult and extensive a route. To him the
Hudson's Bay Company owes a debt of gratitude. That debt consists
not in the discovery of the Coppermine, but in the attitude
presented to the Northern Indians from the Bay all the way to Lake
Athabasca. Hearne does not mention the Montreal fur traders, who,
in the very year of his return, reached the Saskatchewan and were
stationed at the Churchill River down which he passed.
First of white men to reach Athapuscow, now
thought to have been Great Slave Lake, Samuel Hearne claimed for
his Company priority of trade, and answered the calumnies that his
Company was lacking in energy and enterprise. Ho took what may be
called "seizen" of the soil for the English traders. We shall
speak again of his part in leading the movement inland to oppose
the Nor'-Westers in the interior. His services to the Hudson's Bay
Company received recognition in his promotion, three years after
his return home from his third voyage, to the governorship of the
Prince of Wales Fort. To Hearne has been largely given the credit
of the new and adventurous policy of the Hudson's Bay Company.
Hearne does not, however, disappear from public notice on his
promotion to the command of Prince of Wales Fort. When war broke
out a few years later between England and France, the latter
country, remembering her old successes under D'lber-ville on
Hudson Bay, sent a naval expedition to attack the forts on the
Bay. Umfreville gives an account of the attack on Prince of Wales
Fort on August 8th and 9th, 1782. Admiral de la Perouse was in
command of these war vessels, his flagship being Le Sceptre, of
seventy-four guns. The garrison was thought to be well provided
for a siege, and La Perouse evidently expected to have a severe
contest. However, as he approached the fort, there seemed to be no
preparations made for defence, and, on the summons to surrender,
the gates were immediately thrown open.
Umfreville, who was in the garrison and was
taken prisoner on this occasion, speaks of the conduct of the
Governor as being very reprehensible, but severely criticizes the
Company for its neglect. He says:—"The strength of the fort itself
was such as would have resisted the attack of a more considerable
force; it was built of the strongest materials, the walls were of
great thickness and very durable (it was planned by the ingenious
Mr. Robson, who went out in 1742 for that purpose), it having been
forty years in building and attended with great expense to the
Company. In short, it was the opinion of every intelligent person
that it might have made an obstinate resistance when attacked, had
it been as well provided in other respects; but through the
impolitic conduct of the Company, every courageous exertion of
their servants must have been considered as imprudent temerity;
for this place, which would have required four hundred men for its
defence, the Company, in its consummate wisdom, had garrisoned
with only thirty-nine."
In this matter, Umfreville very plainly shows
his animus to the Company, but incidentally he exonerates Hearne
from the charge of cowardice, inasmuch as it would have been
madness to make defence against so large a body of men. As has
been before pointed out, we can hardly charge with cowardice the
man who had shown his courage and determination in the three
toilsome and dangerous journeys spoken of; rather would we see in
this a proof of his wisdom under unfortunate circumstances. The
surrender of York Factory to La Perouse twelve days afterwards,
without resistance, was an event of an equally discouraging kind.
The Company suffered great loss by the surrender of these forts,
which had been unmolested since the Treaty of Utrecht.
February 1971 Popular Electronics
Other than for DC power supply applications where you might need to implement current steering and/or redundancy schemes, there are not too many times when a combination of transistors and/or diodes would be used for logic circuitry in place of integrated circuits. That has not always been the case. Early packaged IC blocks were expensive compared to discrete components, so both hobbyists and professional designers often used a combination of technologies. Resistor-transistor logic (RTL), diode-transistor logic (DTL), emitter-coupled logic (ECL), and other variations were covered in a 1969 Radio-Electronics article titled "How IC's Work: Integrated Circuit Logic Families." This piece provides a little more insight into the construction of those families and shows how to construct logical combinations using diodes and NOR gates.
Discrete Components Yield Better Understanding
Part 1 of a 2-Part Story by Frank H. Tooker
Virtually all RTL (resistor-transistor-logic) integrated circuits can be duplicated from conventional discrete components. This enables the builder to design and test circuits at the same time - well before an IC is selected and installed. Various logic gates, latch circuits, and half-adders are discussed in this part of the article.
The integrated circuit has been with us for barely a decade and in use in hobbyist and experimenter circles for roughly half that time. Yet, the IC has had a profound effect on every area of electronics, making possible the present sophistication of modern digital equipment.
The digital computer, for example, is often viewed erroneously as a complex device of gigantic proportions. But you have only to consider how much more complex and larger in size it would have to be if it were assembled entirely with discrete components. Without the IC, a digital computer could easily occupy the volume of a small house.
What is true of the digital computer is also true of all digital logic devices, including communication, telemetry, and instrumentation systems, as well as the digital test equipment many home enthusiasts use on their workbenches. Without integrated circuits to simplify and miniaturize electronic devices, our space program would still be where it was ten years ago, information processing would be slow and tedious, and it is more than likely that digital test equipment would never have evolved.
Fig. 1 - N-input integrated circuit gate.
The purpose of this article is to provide information needed to breadboard IC logic element equivalents, or near equivalents. "Equivalent" - as used here - refers to the function and not the configuration of the IC and discrete circuits.
In this first of a two-part story on resistor-transistor logic (RTL), attention is focused on logic gates. (The glossary explains the distinction between the three fundamental types of digital logic systems - RTL, DTL, and TTL - and provides definitions for the various technical terms used in this article.) Installment number two will deal with the more sophisticated toggled logic circuits, including the JK flip-flop.
Virtually every RTL element consists of some form of logic gate which operates in much the same manner as a common relay. The gate requires an input activating force and a two-state (on/off, high/low, or logic 1/logic 0) output. Only two output states are necessary for digital circuits to communicate in their two-digit, or binary arithmetic, language. Consequently, the basic elements of digital systems are quite simple.
Compared to the 0-to-9 decimal system of arithmetic, however, binary arithmetic requires a tedious number of operations to perform the same function and process the same information. The extra operations, of course, require extra logic elements which, in turn, give all digital equipment the appearance of being complex.
The actual simplicity of a digital logic element can be seen in the two-input IC logic gate shown in Fig. 1. If only one stage of this circuit is considered, it is the configuration of an inverter, or one-input gate, in an integrated circuit. This gate could not be simpler, consisting of a single transistor and its associated base resistor. A hex-inverter IC would contain 6 such inverters, all connected to the power source through a common 640-ohm collector load resistor. (Note: Integrated circuit designers have chosen 450 ohms and 640 ohms for the base and collector load resistors, respectively. These values give the circuit optimum fan-in and fan-out. The 450- and 640- ohm values used inside IC's are not commonly available in discrete component form; when you breadboard your elements, you would use 470- and 680-ohm resistors. These will work adequately.)
The transistors in all RTL integrated circuits are silicon NPN types with characteristics similar to discrete computer-type switching transistors. All RTL IC's operate from a power source of 3.6 volts within a maximum tolerance of ten percent.
When breadboarding any RTL element, keep in mind that a computer-type transistor need not have a linear transfer characteristic since it is never operated in a linear fashion. It is either completely cut off or fully saturated. However, it must have certain other characteristics: excellent high-frequency response; comparatively high saturation current gain; and 0.2 volt or less collector-to-emitter saturation potential. The latter is important because when the output of one gate is connected directly to the input of another gate, the output potential of the first transistor, when saturated, is sufficiently near ground potential to insure that the second transistor is fully cutoff.
A one-input gate is most commonly referred to as an inverter because its output is 180° out of phase with its input. In terms of positive computer logic, when the input is at a logic 1, the output is at a logic 0, and vice versa (logic 1 is the complement of logic 0). In terms of negative computer logic, the 0's and 1's change places for the on or off state of a given transistor.
It is simpler to follow positive computer logic where a logic 0 is equal to ground or near ground potential because the logic designation coincides with the signal level. As far as logic is concerned, however, it makes no difference whether logic 0 is represented by a near-ground potential or by some potential significantly removed from ground. If you think of a logic 0 as represented by a cut off transistor, and a logic 1 as represented by a saturated transistor, then negative logic can be followed as easily as can positive logic.
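To see the two conventions side by side, here is a minimal Python sketch (an illustration added to this transcription, not part of the original article). It assumes the nominal levels discussed above - roughly 0.2 volt for a saturated transistor and the full 3.6-volt supply for a cut-off one - plus an arbitrary 1.5-volt dividing line between them:

    # Hypothetical mapping of RTL output voltages to logic values.
    SATURATED_V = 0.2   # collector-to-emitter potential of a saturated transistor
    CUTOFF_V = 3.6      # output rises toward the supply rail when cut off
    THRESHOLD_V = 1.5   # assumed dividing line between "low" and "high"

    def positive_logic(voltage):
        # Positive logic: the more positive level represents logic 1.
        return 1 if voltage > THRESHOLD_V else 0

    def negative_logic(voltage):
        # Negative logic: the less positive level represents logic 1.
        return 0 if voltage > THRESHOLD_V else 1

    for v in (SATURATED_V, CUTOFF_V):
        print(f"{v:3.1f} V -> positive logic {positive_logic(v)}, "
              f"negative logic {negative_logic(v)}")

Either way, a saturated transistor and a cut-off transistor provide the two distinct states the logic needs; only the labels change.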
Fig. 2 - N-input discrete-component logic gate.
The schematic diagram of Fig. 1 shows how simple it is to provide additional inputs to the logic gate. The collector load resistor remains the same for each additional input stage. Theoretically, at least, IC designers could go on adding inputs in this fashion until the total accumulated leakage current became excessive. In easily available IC's, four inputs - in a quad arrangement - are the most you can get. Within reasonable limits, adding inputs has no significant effect on the fan-in and fan-out factors of a gate.
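Functionally, the gate of Fig. 1 delivers a logic 1 output only when every input is at logic 0; a logic 1 on any input saturates its transistor and pulls the shared collector node to logic 0. The following minimal Python sketch (an added illustration, not from the original article) models that behaviour for any number of inputs:

    def rtl_gate(*inputs):
        # A logic 1 on any input saturates its transistor and pulls the
        # common collector load low; output is 1 only when all inputs are 0.
        return 0 if any(inputs) else 1

    print(rtl_gate(0))           # one-input case (the inverter) -> 1
    print(rtl_gate(1))           # -> 0
    print(rtl_gate(0, 0))        # two-input gate -> 1
    print(rtl_gate(1, 0, 0, 1))  # four-input gate -> 0

In positive-logic terms this is the NOR function discussed later in the article.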
There is nothing mysterious about resistors and transistors on an IC chip. They function the same as their discrete counterparts. So, you can easily assemble an inverter, a two-input gate, an n-input gate, etc., using discrete components alone. (The circuits presented from here on are designed to operate at speeds up to 100,000 Hz, sufficient for experimental purposes. Digital equipment used in science and industry, of course, becomes practical only because it can operate at speeds in the MHz range.)
Figure 2 shows how you can breadboard logic gates with discrete components. Readily available resistor values are somewhat greater than those conventionally used in integrated circuits, but they are close enough for the most part - especially if you do not attempt to work your discrete-component setups close to maximum fan-out. But when working with critical circuits, you shouldn't load your circuits too heavily in any case.
Almost any high-speed, computer-type silicon NPN switching transistor can be used in your circuit setups. Good examples of such transistors are the 2N2475 and the HEP56. If you are in an area where surplus parts stores are located, you might be able to pick up quite an assortment of silicon switching transistors at bargain prices.
In the absence of computer-grade transistors, you might try using any high-frequency silicon NPN transistors you have around. But remember to run the input up to where the transistor is well into saturation, and check the collector-to-emitter potential with a meter. If the reading obtained is 0.2 volt or less, chances are you can use the transistor in digital logic-gate service.
Being able to expand a gate is particularly useful when circuits are being assembled on your workbench. The circuit in Fig. 3A is an expander, resembling an inverter or one-input gate with the exception that it has no collector load resistor. Figure 3B shows how an expander can be added to an IC inverter element to make a two-input logic gate. Simply connect the collector (output) of the expander circuit to the output of the inverter. The input to the inverter now becomes input 1 and the input to the expander becomes input 2. Note that the circuit is fundamentally identical to that in Fig. 1. In a similar manner, you can add an expander to a two-input gate to create a three-input gate, and so on.
Fig. 3 - Simple expander (A) adds inputs to gate (B).
Now, suppose you have a two-input-gate IC, you need a three-input gate, and you have no suitable transistor on hand to breadboard the expander. You can expand the two-input gate to a three-input gate by using a couple of germanium diodes as shown in Fig. 4A. The diodes can be 1N191 or HEP134 types - or any diode with similar characteristics.
The purpose of the diodes is to keep a logic signal at input 1 from entering input 2 and vice versa. Yet each diode allows the signal at its respective input to enter the IC left-gate input. (Note: Discrete and IC configurations can be identified by whether or not a circle encloses the transistors. Discrete transistors are enclosed in circles, while IC transistors are not.) If you need a four-input gate, you can add a similar pair of diodes in the same manner to the input resistor on the second transistor.
There can be a 0.3-0.4-volt forward voltage drop across each diode, so it is not advisable to use diode expansion as part of the load in a maximum fan-out configuration. The transistor expander in Fig. 3A is not subject to this limitation.
On the other hand, if you are breadboarding a two-input gate using a pair of germanium diodes and a single transistor, as shown in Fig. 4B, you can often get around the voltage-drop limitation by using a germanium NPN transistor (HEP641 or similar) in the setup. However, there are significant factors that must be taken into consideration here. First, germanium transistors can be operated in no more than a moderate temperature environment since they perform poorly or not at all at elevated temperatures. (The same, of course, applies to germanium diodes.) Second, the lower the required logic level, the lower the noise immunity of the circuit.
For those setups where noise pulses or spurious signals are a particular problem, the circuit in Fig. 4C can be of considerable value. This circuit gates with an input logic level of 3 volts but is unresponsive to input signals of 1.5 volts or less. Additionally, its fan-in is only about ten percent that of a gate with a conventional input.
From now on, logic symbols will be used in many of the schematic diagrams in this article. The logic symbols, with their equivalent electronic circuits, are given in Fig. 5.
Fig. 4 - Germanium diodes are often used to add inputs to existing gates (A) and (B). Typical noise immunity circuit (C) is below.
Fig. 5 - Logic symbols (at left of each circuit) are generally used in logic flow diagrams.
Fig. 6 - Diode pair can be used to make AND gate in (A) from three inverters as in (B). Discrete-component diagram for (B) is shown in (C); (D) and (E) are logic and schematic diagrams for NAND gate.
A positive-logic NOR gate is a negative-logic NAND gate. From the point of view of positive logic, the gates described thus far are all NOR gates in which a logic 1 input to either input 1 or input 2 (or both) produces a logic 0 output.
The circuit in Fig. 6A is a conventional positive-logic two-input AND gate wherein both inputs must be supplied with a logic 1 signal to generate a logic 1 signal at the output. This setup requires two inverters and a two-input gate to bring the input and output signals into phase with each other. The small circles at the apices of the logic symbols indicate inversion, or a 180° phase displacement, between the input and output signals. Hence, two gates or inverters are needed to make the output and input signals of the same phase.
If you have only three inverters and no two-input gate available, you can breadboard a positive-logic AND gate with the aid of a pair of germanium diodes as shown in Fig. 6B. An AND gate assembled with discrete components is given in Fig. 6C.
An AND gate requires two inversions so that logic 1 inputs provide a logic 1 output. Without the second inversion, we would have a NAND gate. In the NAND circuit, logic 1 inputs provide a logic 0 output. Given in Fig. 6D and in Fig. 6E are the logic diagram and discrete component schematic diagram for NAND gates.
In comparing the AND and NAND gates, note that a double inversion is equal to no inversion at all.
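As a check on that statement, the following Python sketch (an added illustration; the function names are this sketch's own) models the Fig. 6B construction: each input is inverted, the inverted signals are combined by the diode OR, and a final inversion restores the phase to give AND. Dropping that final inversion leaves NAND:

    def inverter(x):
        return 0 if x else 1

    def diode_or(a, b):
        # Positive-logic OR, as formed by the diode pair and resistor in Fig. 6B.
        return 1 if (a or b) else 0

    def nand2(a, b):
        # Inverted inputs ORed together: output is 0 only when both inputs are 1.
        return diode_or(inverter(a), inverter(b))

    def and2(a, b):
        # The second (output) inversion brings the result back in phase.
        return inverter(nand2(a, b))

    print("A B  AND NAND")
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, " ", and2(a, b), " ", nand2(a, b))

Running the loop makes the double-inversion point plain: the AND column follows the inputs in phase, while the NAND column is its complement.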
In the preceding logic-gate circuits, output logic directly follows input logic. In the simple two-input gate, for example, a logic 1 at either of the two inputs produces a logic 0 output. Removal of the logic 1 input by sending the input to logic 0 produces a logic 1 output.
There are, however, applications where it is desirable to turn on one gate by applying a signal to one input and turn off the gate by applying a signal to the other input. Once such a circuit is energized, it will remain turned on even after the excitation signal is removed. It will also be unresponsive to succeeding turn-on signals. Similarly, once it is turned off, it will remain off and be unresponsive to subsequent turn-off signals. Such a device can be thought of as a "latch" and is known as an RS (for reset-set) flip-flop.
The fundamental circuit of a latch can be represented by a pair of inverters, with the output of one inverter connected directly to the input of the other as shown in Fig. 7A. Because inversion occurs in each inverter, it is obvious that when one side of the circuit is on, the other must be cut off. It is equally obvious that the on side must remain on and the off side remain off unless something is done to make the system change states. No provision is made to effect any such control in the simple circuit shown.
A more practical latch or RS flip-flop is shown in Fig. 7B. Here a pair of two-input logic gates is used. One input of each is used for the feedback, and the other is used for control. A logic 1 signal applied to input 1 sends output 1 to logic 0 and output 2 to logic 1. The circuit then remains in this state - held there by its own feedback and disregarding any further application or removal of turn-on signals - until a logic 1 signal is applied to input 2, at which time the output logic reverses itself.
Only a brief pulse at the proper input terminal is needed to trigger and latch the circuit in either state. The waveform of the control pulse is not especially critical. In fact, an RS flip-flop is often used to "shape" a logic pulse by converting it to a square wave with very steep sides.
If a logic 1 signal is applied to both latch inputs simultaneously, both outputs will go to logic 0. The final state of the latch will then depend on which of the two inputs is the last to be removed. Ordinarily, a latch is not operated in this mode; but if a particular setup calls for such operation, there is no reason why it cannot be employed.
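The set, hold, and reset behaviour just described can be verified with a short simulation. The sketch below (an added illustration, not part of the original article) models the Fig. 7B latch as two cross-coupled two-input gates and iterates a few times so the feedback settles:

    def gate(*inputs):
        # Basic RTL gate: output 1 only when every input is at logic 0.
        return 0 if any(inputs) else 1

    def latch(in1, in2, out1, out2):
        # Each gate sees one control input plus the other gate's output.
        for _ in range(4):  # a few passes let the cross-coupling settle
            out1, out2 = gate(in1, out2), gate(in2, out1)
        return out1, out2

    out1, out2 = 1, 0                      # an assumed starting state
    out1, out2 = latch(1, 0, out1, out2)   # logic 1 on input 1
    print(out1, out2)                      # -> 0 1
    out1, out2 = latch(0, 0, out1, out2)   # pulse removed: state is held
    print(out1, out2)                      # -> 0 1
    out1, out2 = latch(0, 1, out1, out2)   # logic 1 on input 2: outputs reverse
    print(out1, out2)                      # -> 1 0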
The circuit in Fig. 7B is given in discrete-component form in Fig. 7C. Depending on what components you have available, you can breadboard a latch in several different ways. It can consist of a dual two-input gate IC, a pair of inverters in an IC (plus a couple of expanders), or four individual transistors if necessary.
Fig. 7 - Two inverters are employed in fundamental latch circuit (A). More practical latch is RS flip-flop illustrated in diagrams in (B) and (C).
Fig. 8 - Practical latch using inverters is illustrated in diagram (A); discrete-component circuit for power-gated, or buffered-input, latch is shown in schematic diagram (B).
Fig. 9 - Simultaneous AND/OR gate at left employs diodes and simple inverters. All possible inputs and their outputs are listed in truth table (above).
Fig. 10 - Simple addition of diode D and resistor R to the simultaneous AND/OR gate yields the Exclusive OR circuit that is shown at right.
Fig. 11 - Half-adder/subtracter has DIFFERENCE and BORROW outputs in addition to the SUM and CARRY of Exclusive OR.
If you need a latch circuit and have only a single pair of computer-type silicon NPN transistors, or a couple of spare inverters in a hex-inverter IC, you can assemble the fundamental latch circuit in Fig. 7A and gate or trigger it from one state to the other with germanium diodes. The circuit in Fig. 8A illustrates how this can be done. It is possible to do this for the same reason that it is possible to use a pair of diodes for gate expansion, in which the two diodes on each side of the setup operate as positive-logic OR gates.
In the circuit of Fig. 7B, turn-on of a transistor is accomplished by pulling its collector down to near ground potential. It then turns on as a result of cross-coupling. In the circuit of Fig. 8A, the same result is obtained by driving the base positive with a logic 1 input. Minimum input logic level is about 50 percent higher than that required by the circuit in Fig. 7B, however.
A power-gated or buffered-input latch circuit is shown in Fig. 8B. A virtue of this circuit is that, with light loading, it will trigger reliably from one state to the other with an input current as low as a few microamperes. For a minimum-load setup, input resistors R can have a value as high as 500,000 ohms. It is important to note, however, that input logic level must be about 3 volts. Input current is exchanged for input voltage in this setup. The "high-step" input can help to improve noise immunity.
You can assemble a power-gated latch using a pair of inverters in a hex-inverter integrated circuit, or you can breadboard the whole circuit with four transistors as shown in the schematic diagram. You should use this circuit whenever you have a sufficient input-logic voltage level but inadequate input-logic current to operate a more common latch. Do not attempt to get around the higher input logic level requirement by using a germanium transistor for triggering. Leakage current through a germanium transistor is too great for this application.
The fan-in of the circuit of Fig. 8B is so low that, when used in the majority of digital logic layouts, it can be considered as practically an open circuit. It is especially useful as an exceptionally low-power input start/stop switch in counter and time-lapse applications.
An element which can supply the OR logic function and the AND logic function of two inputs simultaneously is of considerable value in digital circuitry. For one thing, with only slight modification, it forms the foundation for an Exclusive OR, or Half-Adder, element.
In the simultaneous AND/OR gate of Fig. 9, five diodes and four one-input gates perform all of the required logic functions. At the output of the two input inverters, one pair of the diodes provides the AND function, while the other pair, together with the 1500-ohm resistor, provides the OR function. A logic 1 is obtained at the OR output when a logic 1 is applied to input 1, input 2, or both inputs simultaneously. A logic 1 is obtained at the AND output only when a logic 1 is applied to both inputs simultaneously.
A state table for the circuit is also provided in Fig. 9. This state table lists all possible inputs to a digital logic element or device and the outputs which result from these inputs.
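The same state table can be generated with a few lines of Python (an added illustration, not from the original article):

    def and_or_gate(in1, in2):
        # Fig. 9 behaviour: OR output is 1 when either input is 1;
        # AND output is 1 only when both inputs are 1.
        return int(in1 or in2), int(in1 and in2)

    print("in1 in2 | OR AND")
    for in1 in (0, 1):
        for in2 in (0, 1):
            or_out, and_out = and_or_gate(in1, in2)
            print(f"  {in1}   {in2} |  {or_out}   {and_out}")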
A half-adder or Exclusive OR Logic circuit is shown in Fig. 10. The circuit is obtained by adding resistor R and diode D to the OR circuit of Fig. 9. (In some cases the diode may be omitted.)
In the circuit in Fig. 10, the Exclusive OR output is the SUM output, and the AND output is the CARRY. As shown by the state table in Fig. 10, the circuit provides a logic 1 at the SUM output when a logic 1 is supplied to either input - but not to both simultaneously. When a logic 1 is supplied to input 1 and input 2 simultaneously, the output is a logic 0, as it is when both inputs are logic 0. The outputs of the circuit demonstrate the fact that a logic 1 added to a logic 0, or vice versa, produces a sum of 1 and a carry of 0. A logic 1 added to a logic 1 produces a sum of 0 and a carry of 1.
A half-adder is required to sum only two logic inputs, whereas a full-adder must sum two inputs and a carry, for a total of three logic inputs. (A full-adder consists of two half-adders, plus some additional circuitry. Details of this circuit would simply digress from the subject of this article. Also, a full-adder would be impractical to breadboard in any event.)
Now, if we label input 1 with an A and input 2 with a B, then in a half-adder/subtracter in which B is subtracted from A, the following happens: First, the SUM output is identical with the DIFFERENCE output, such that the SUM or DIFFERENCE output supplies the Exclusive A OR B function. Next, the CARRY output supplies the A AND B function. And, finally, the BORROW output supplies the B AND A-COMPLEMENT function.
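A brief Python sketch (an added illustration; the function name is this sketch's own) summarizes those three functions and their state table:

    def half_adder_subtracter(a, b):
        # SUM and DIFFERENCE are the same Exclusive OR of A and B;
        # CARRY is A AND B; BORROW (for A minus B) is B AND A-complement.
        sum_diff = int(a != b)       # A Exclusive OR B
        carry = int(a and b)         # A AND B
        borrow = int(b and not a)    # B AND NOT-A
        return sum_diff, carry, borrow

    print("A B | SUM/DIFF CARRY BORROW")
    for a in (0, 1):
        for b in (0, 1):
            s, c, brw = half_adder_subtracter(a, b)
            print(f"{a} {b} |    {s}       {c}     {brw}")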
The circuit of a half-adder/subtracter, which can be readily breadboarded, is given in Fig. 11. It consists of five diodes, four inverters, and a dual two-input gate (or the equivalent in discrete form). This particular setup also supplies the complement of the SUM or DIFFERENCE output.
As you can see from the preceding, there is little need - or reason - for you to make a large financial investment in digital IC's if you want to experiment with and design logic elements and systems. Discrete components, and maybe a few commonly used gate IC's, will suffice for your breadboarding arrangements. You can select your IC's from the knowledge you gain through experimenting with discrete component elements. This is really the best and safest route to go when experimenting with integrated circuit digital logic techniques.
Glossary of Digital Logic Terms
Adder: Switching circuit that combines binary information to generate the SUM and CARRY of this information.
AND: This Boolean logic expression is used to identify the logic operation where, given two or more variables, all must be logic 1 for the result to be a logic 1.
DTL (Diode-Transistor Logic): Logic is performed by diodes with transistors used only as inverting amplifiers.
Exclusive OR: A logic function whose output is 1 if either of the two input variables is 1 but whose output is 0 if both inputs are 1 or 0.
Fan-in: A figure denoting the input power required to drive a logic element satisfactorily.
Fan-out: A figure denoting the power output of a logic element with respect to logic element inputs.
AND Gate: All inputs must have 1-level signals at the input to produce a 1-level output.
NAND Gate: All inputs must have 1-level signals at the input to produce a 0-level output.
NOR Gate: Any one input or more than one input having a 1-level signal will produce a 0-level output.
OR Gate: Any one input or more than one input having a 1-level input will produce a 1-level output.
Half Adder: A switching circuit which combines binary information to generate the SUM and CARRY. It can accept only the two binary bits to be added.
Inverter: A circuit whose output is always 180° out of phase with its input. (Also called a NOT circuit.)
Negative Logic: Logic in which the more negative voltage represents the 1-state; the less negative voltage represents the 0-state.
Noise Immunity: A measure of the sensitivity of a logic circuit to triggering or reaction to spurious or undesirable electrical signals or noise, largely determined by the signal swing of the logic.
RTL (Resistor-Transistor Logic): Logic is performed by resistors. Transistors are used to produce an inverted output.
TTL, T2L (Transistor-Transistor Logic): A logic system evolved from Diode-Transistor Logic in which the diode cluster is replaced by a multiple-emitter transistor.
Ravenswell Primary School thanks you for reading this very important policy. We want to prevent and tackle bullying behaviour. We encourage everyone to become very familiar with this policy.
- This Policy is Fully Compliant
- Key Principles of Best Practice
- What is Bullying – Our Definition
- Who is Responsible For Doing What
- Our Strategies for Education and Prevention
- Our Procedures re Bullying Behaviour
- Our Programme of Support for Pupils
- Cyber Bullying and Key Measures
- What the Board of Management Confirms
- The School Will Act To Stop Any Harassment
- When The Board Approved This Policy
- Where You Can Find This Policy
- How We’ll Review This Policy
- For Staff: The Template For Recording Bullying Behaviour
- For Families: How You Can Support Your Child
- For Everyone: More on Cyber Bullying Behaviours
- Full Compliance
In accordance with the requirements of the Education (Welfare) Act 2000 and the code of behaviour guidelines issued by the NEWB, the Board of Management of Ravenswell Primary School has adopted the following anti-bullying policy within the framework of the school’s overall code of behaviour. This policy fully complies with the requirements of the Anti-Bullying Procedures for Primary and Post‑Primary Schools which were published in September 2013.
- Key Principles of Best Practice
The Board of Management recognises the very serious nature of bullying and the negative impact that it can have on the lives of pupils and is therefore fully committed to the following key principles of best practice in preventing and tackling bullying behaviour:
- A positive school culture and climate which:
- is welcoming of difference and diversity and is based on inclusivity;
- encourages pupils to disclose and discuss incidents of bullying behaviour in a non-threatening environment;
- promotes respectful relationships across the school community;
- Effective leadership;
- A school-wide approach;
- A shared understanding of what bullying is and its impact;
- Implementation of education and prevention strategies (including awareness raising measures) that:
- build empathy, respect and resilience in pupils; and
- Explicitly address the issues of cyber bullying and identity‑based bullying including, in particular, homophobic and transphobic bullying.
- Effective supervision and monitoring of pupils;
- Supports for staff;
- Consistent recording, investigation and follow up of bullying behaviour (including use of established intervention strategies)
- On-going evaluation of the effectiveness of the anti-bullying policy.
- The Definition of Bullying
In accordance with the Anti-Bullying Procedures for Primary and Post‑Primary Schools, bullying is defined as follows:
Bullying is unwanted negative behaviour, verbal, psychological or physical, conducted by an individual or group against another person (or persons) and which is repeated over time.
The following types of bullying behaviour are included in the definition of bullying:
- deliberate exclusion, malicious gossip and other forms of relational bullying, extortion, isolation, and persistent name calling,
- cyber bullying, and
- Identity-based bullying such as homophobic bullying, racist bullying, bullying based on a person’s membership of the Traveller community, and bullying of those with disabilities or special educational needs.
Isolated or once-off incidents of intentional negative behaviour do not fall within the definition of bullying and should be dealt with, as appropriate, in accordance with the school’s code of behaviour. However, in the context of this policy, placing a once-off offensive or hurtful public message, image or statement on a social network site or other public forum where that message, image or statement can be viewed and/or repeated by other people will be regarded as bullying behaviour.
Negative behaviour that does not meet this definition of bullying will be dealt with in accordance with our school’s code of behaviour.
Additional information on different types of bullying is set out in Section 2 of the Anti-Bullying Procedures for Primary and Post-Primary Schools.
This policy applies to activities and events that take place:
- During school time (including break times)
- School tours/trips
- Extra-curricular activities
Ravenswell Primary School reserves the right to take action against bullying perpetrated outside the school which spills over into the school.
- Who Is Responsible For Doing What
The relevant teacher(s) for investigating and dealing with bullying are as follows:
- Emer Breen (School Principal)
- Deirdre Dillon (Deputy Principal)
Those Responsible For Implementing This Policy:
- All Teaching Staff, with the support of SNAs
All Teaching Staff, with the support of SNAs, will investigate and record incidents of bullying behaviour. Special Needs Assistants (SNAs) will assist teachers in monitoring pupils and activities on yard.
- Kate Breen (Home School Community Liaison officer)
Responsibility for links with parents and dispersal of relevant information and supports.
- The Anti-Bullying Committee
This committee reviews the policy and monitors its implementation regularly, including the creation and implementation of annual Action Plans.
As of March 2014, its members are the management team, the HSCL officer and the Headlamps project worker.
- Our Education and Prevention Strategies
The education and prevention strategies (including strategies specifically aimed at cyber bullying, homophobic and transphobic bullying) that will be used by the school are based on the ten Shield Statements as formulated by the ISPCC.
Our Child-Friendly Version of the ISPCC Shield Statements
- Bullying can happen, anywhere.
- We, at Ravenswell Primary School, have thought about this. We have a plan to limit and stop bullying. Our plan is on our website.
- We do what we say in our plan. We work together to stop bullying. We make a record of bullying events. We try to improve our plan on a regular basis.
- Ravenswell Primary School’s students, parents, staff, and community shared ideas to create the plan, and will keep talking together to make sure the plan works.
- We, at Ravenswell Primary School, appreciate that we’re all different and equal.
- We all (staff and students) keep our eyes and ears open for bullying and we take action to stop it.
- We all (staff and students) keep learning how best to respond to bullying. We must keep trying to improve.
- In class, we talk about bullying with the whole class at least once a term. We also learn about how to deal with bullying situations through SPHE. We look for the good in everyone. We aim to build each other up and never knock anyone down.
- Any child at Ravenswell Primary School can talk to a trusted adult at Ravenswell Primary School about their feelings and worries. Adults will listen to and support every child.
- All members of the school community, including bystanders, can report bullying behaviour to any staff member at Ravenswell Primary School.
Note: These Shield statements are taught to all pupils. They are discussed at assembly once a term. They are highlighted at general class meetings with parents in September each year. They will be displayed on posters throughout the school.
- Our Procedures Re Bullying Behaviour
The school’s procedures for investigation, follow-up and recording of bullying behaviour and the established intervention strategies used by the school for dealing with cases of bullying behaviour are as follows:
- Children are encouraged to disclose and discuss what they perceive to be incidents of bullying behaviour. This can be with the class teacher, the teacher on yard duty at the time, the principal, Special Needs Assistants, any member of staff, the Headlamps project worker or with their parents. This is a “telling school” as defined in the Stay Safe Programme. Children will therefore be constantly assured that their reports of bullying (either for themselves or peers) will be treated with sensitivity.
- Allegations of bullying having occurred are dealt with promptly, firmly and fairly.
- The Incident will be investigated – what, who, when, where, why?
- Pupils are required to cooperate with any investigation. Parents of those involved may, if deemed necessary, also be required to cooperate with any investigation.
- Pupils who are not directly involved but who have witnessed negative behaviour can also provide very useful information and may be expected to assist in any investigation.
Children should understand there are no innocent bystanders if they remain passive where bullying is concerned. All bystanders should report what they perceive to be bullying or negative behaviour.
- The relevant teacher will exercise professional judgement to determine whether bullying has occurred. This may involve consultation with the class teacher(s) of the children involved and members of the management team.
- Once it has been established that bullying has indeed taken place, the bullying behaviour will be noted and recorded on the school's online administration system by the relevant class teacher(s).
- If a group is involved, they may be met both individually and as a group. Each member will be asked for his/her account of what has happened. Accounts may be recorded. (Restorative Practice).
- The parents/guardians of the parties involved will be made aware of what has happened and requested to come and discuss the matter with the teacher and/or principal with a view to solving the problem.
- The alleged bully/bullies will be asked to reflect on his/her/their behaviour and its consequences for himself/herself/themselves and for the person(/people) who is(/are) the victim(s). If deemed necessary, he/she/they will be asked to sign an undertaking that “this behaviour will not reoccur.” (Restorative Practice).
- Efforts will be made to resolve any issues through mediation and to restore, as far as feasible, the relationships of the individuals involved. The situation will be monitored by the class teacher(s) of the individuals involved.
- Serious incidents or recurring incidents of bullying behaviour which have, in the opinion of the relevant class teacher, not been adequately or appropriately addressed within 20 school days will be recorded on the DES template and shall be reported to the principal / deputy principal. The teacher will also use the DES recording template where he/she considers the bullying behaviour to constitute serious misconduct.
- The situation will continue to be closely monitored to ensure that the problem has been resolved. Reconciliation of all is seen as the ultimate goal. Actions taken will be recorded. Records will be reviewed and analysed.
- The code of behaviour will be invoked in circumstances where it is deemed prudent by the relevant teacher and school principal.
- At least once in every school term, the Principal will provide a report to the Board of Management setting out:
- the overall number of bullying cases reported (by means of the bullying recording template) to the Principal or Deputy Principal since the previous report to the board.
- Confirmation that all these cases have been, or are being dealt with in accordance with the school’s anti-bullying policy.
- Additionally, where a parent is not satisfied that the school has dealt with a bullying case in accordance with these procedures, the parents must be referred, as appropriate, to the Board of Management.
- In the event that a parent has exhausted the school’s complaints procedures and is still not satisfied, the school must advise the parents of their right to make a complaint to the Ombudsman for Children.
- The School’s Programme of Support
The school’s Programme of Support for working with pupils affected by bullying is as follows:
- Teaching the Shield Statements.
- Circle time.
- Restorative practice.
- The Headlamps project will also play a role with such programmes as ‘Roots of Empathy’, ‘Incredible Years’, ‘Walk Tall’ and ‘Equine Assisted Learning’.
- Through the means of curricular and extracurricular activities to develop positive self worth.
- Developing pupil’s awareness of identity-based bullying and in particular trans-phobic bullying, i.e. the “Growing Up” lesson in SPHE. Particular account will also be taken of the important and unique role pupils with Special Educational Needs have to play in our school.
- Green schools and student council.
- Art displays.
- After School Activities through School Completion Programme and RavenKidz.
- Cyber Bullying
Cyber bullying includes (but is not limited to) communicating via electronic means with the objective of causing hurt, fear, embarrassment, humiliation, alarm and/or distress to one or more persons.
Cyber bullying includes the use of mobile phones and the internet with the objective of upsetting someone.
It may take the form of general insults or impersonation, defamation or prejudice‑based bullying.
Unlike other forms of bullying, a once-off posting can constitute bullying.
While this policy addresses issues related to cyber bullying of students (i.e. situations in which one or more students are the victim[s] of bullying), the policy also applies to teaching and other school staff.
Key Measures re Cyber Bullying
- The Anti-Bullying Coordinator will act as a Cyber-Safety Officer to oversee the practices and procedures outlined in this policy and monitor their effectiveness.
- Staff will be trained to identify signs of cyber bullying and will be helped to keep informed about the technologies that children commonly use.
- Advice will be communicated to help students protect themselves from being involved in bullying (as perpetrator or as victim) and to advise them on reporting any incidents.
- Students will be informed about cyber bullying in the course of their education at the school.
- Gardaí will continue to visit the school once a year to talk about cyber bullying.
- Teachers will dedicate a stand-alone lesson to deal with the issue of cyber bullying.
- On an annual basis, parents will be invited to a talk on bullying which will include reference to cyber bullying.
- Students and staff are expected to comply with the school’s policy on the use of computers in the School. (Acceptable user policy)
- Parents will be provided with information and advice on cyber bullying.
- Parents and students are advised at meetings at the beginning of the year that it is illegal for a child under 13 to register with and use many social media networks, including Facebook, Instagram, and SnapChat.
- Ravenswell Primary School endeavours to block access to inappropriate web sites, using firewalls, antivirus protection and filtering systems and no pupil is allowed to work on the Internet without a member of staff present.
- Supervision and Monitoring of Pupils
The Board of Management confirms that appropriate supervision and monitoring policies and practices are in place to both prevent and deal with bullying behaviour and to facilitate early intervention where possible.
- Prevention of Harassment
The Board of Management confirms that the school will, in accordance with its obligations under equality legislation, take all such steps that are reasonably practicable to prevent the sexual harassment of pupils or staff or the harassment of pupils or staff on any of the nine grounds specified, i.e. gender including transgender, civil status, family status, sexual orientation, religion, age, disability, race, and membership of the Traveller community.
- Date This Policy Was Adopted
This policy was adopted by the Board of Management on:
Date: 8th April 2014
- Availability of This Policy
This policy has been made available to school personnel, published on the school website and provided to the Parents’ Association. A copy of this policy will be made available to the Department and the patron if requested.
- Review of This Policy
This policy and its implementation will be reviewed by the Board of Management once in every school year.
Written notification that the review has been completed will be made available to school personnel, published on the school website, and provided to the Parents Association.
A record of the review and its outcome will be made available, if requested, to the patron and the Department.
Signed: ____________________ Signed: ___________________________
(Chairperson of Board of Management) (Principal)
Date: ______________________ Date: ____________________________
Appendix (1): Template for Recording Bullying Behaviour
- Name of pupil being bullied and class group
Name: ___________________________ Class: ______________________________
- Name(s) and class(es) of pupil(s) engaged in bullying behaviour
- Source of bullying concern/report (tick relevant box(es))
- Location of incidents (tick relevant box(es))
- Name of person(s) who reported the bullying concern
- Type of Bullying Behaviour (tick relevant box[es])*
|Physical Aggression||Cyber bullying|
|Damage to property||Intimidation|
|Isolation / Exclusion||Malicious Gossip|
|Name Calling||Other (Specify)|
- Where behaviour is regarded as identity-based bullying, indicate the relevant category
|Homophobic||Disability /SEN related||Racist||Membership of Traveller community||Other (Specify)|
- Brief Description of bullying behaviour and its impact
- Details of action taken
Signed: _________________________ (Relevant Teacher) Date: ___________________
Date Submitted to Principal/ Deputy Principal: _____________________________
Appendix (2): How You Can Support Your Child
- Support Re Cyber Bullying
- Support Re Other Types of Bullying
- Support Re Cyber Bullying
We endorse the advice given from the Irish ‘Sticks and Stones’ Anti-Bullying Programme. A representative, Patricia Kennedy, wrote the following words in the Irish Daily Mail on October 31, 2012:
“Cyberbullying is NOT 24/7; it’s only 24/7 if a child is allowed access to their phone or the internet. Don’t let your own ignorance get in the way of common sense. A simple rule is ‘no phones after bedtime.’ Have a drawer in the kitchen that all phones are left in.
… Try turning off the wifi when you are going to bed to make sure there are no 3am online arguments. The anti-bullying initiative I represent, Sticks and Stones, work with children from all backgrounds, from designated disadvantaged schools to fee-paying schools, and we are constantly surprised at the level of innocence that most children have in relation to the ‘friends’ they make online.
They don’t think there are any dangers involved in chatting with strangers online, and they don’t think there are any repercussions involved for them regarding what they post.
… In our anti-bullying workshops, children tell us one of the reasons they don’t ‘tell’ about bullying is that parents ‘overreact’. Don’t be that parent.
If your child tells you that they are being bullied — don’t lose your temper; above all don’t threaten to take their phone or internet access away — you’re just guaranteeing they’ll never tell you anything again.
Remain calm and ask questions — who, what, why, where, when. Get the facts, write it down, keep the text/phone messages or take a screen shot from the computer so you are informed when you approach the school, internet or phone provider, or gardaí.
Talk to your children; let them know they can talk to you; keep the channels of communication open.”
And we endorse the advice given by the USA’s Federal Department of Health:
“Be Aware of What Your Kids are Doing Online
Talk with your kids about cyberbullying and other online issues regularly.
Know the sites your kids visit and their online activities. Ask where they’re going, what they’re doing, and who they’re doing it with.
Tell your kids that as a responsible parent you may review their online communications if you think there is reason for concern. Installing parental control filtering software or monitoring programs are one option for monitoring your child’s online behaviour, but do not rely solely on these tools.
Have a sense of what they do online and in texts. Learn about the sites they like. Try out the devices they use.
Ask for their passwords, but tell them you’ll only use them in case of emergency.
Ask to “friend” or “follow” your kids on social media sites or ask another trusted adult to do so.
Encourage your kids to tell you immediately if they, or someone they know, is being cyberbullied. Explain that you will not take away their computers or mobile phones if they confide in you about a problem they are having.
Establish Rules about Technology Use
Establish rules about appropriate use of computers, mobile phones, and other technology. For example, be clear about what sites they can visit and what they are permitted to do when they’re online. Show them how to be safe online.
Help them be smart about what they post or say. Tell them not to share anything that could hurt or embarrass themselves or others. Once something is posted, it is out of their control whether someone else will forward it.
Encourage kids to think about who they want to see the information and pictures they post online. Should complete strangers see it? Real friends only? Friends of friends? Think about how people who aren’t friends could use it.
Tell kids to keep their passwords safe and not share them with friends. Sharing passwords can compromise their control over their online identities and activities.”
We encourage you to also look at links for parents on our school website re Cyber Bullying.
- Support Re Other Types of Bullying
Teaching a child to say “NO” in a good assertive tone of voice will help deal with many situations. A child’s self image and body language may send out messages to potential bullies.
Parents should approach their child’s teacher by appointment if the bullying is school related. It is important for you to understand that bullying in school can be difficult for teachers to detect because of the large numbers of children involved. Teachers will appreciate bullying being brought to light. School bullying requires that parents and teachers work together for a resolution.
Sometimes parental advice to a child is to “hit back” at the bully if the abuse is physical. This is not always realistic as it requires a huge amount of courage and indeed sometimes makes the situation worse.
Children should not be encouraged to engage in violent behaviour. Teaching children to be more assertive and to tell is far more positive and effective.
It is important to be realistic; it will not be possible for a single child to assert his/her rights if attacked by a group. Children should be advised to get away and tell in situations such as this.
Keep an account of incidents to help you assess how serious the problem is. Many children with a little help overcome this problem very quickly.
What If Your Child Is Bullying?
- Don’t panic. This may be a temporary response to something else in the child’s life e.g. a new baby, a death in the family, a difficult home problem etc. Give your child an opportunity to talk about anything that could be upsetting him/her.
- Don’t punish bullying by being a bully yourself. Hitting and verbal attack will make the situation worse. Talk to your child and try to find out if there is a problem. Explain how the victim felt. Try to get the child to understand the victim’s point of view. This would need to be done over time.
- Bullies often suffer low self esteem. Use every opportunity you can to praise good, considerate, helpful behaviour. Don’t only look for negatives.
- Talk to your child’s teacher and find out more about your child’s school behaviour. Enlist the teacher’s help in dealing with this. It is important that you both take the same approach.
- If the situation is serious you may need to ask the school or G.P. to refer your child to the child guidance clinic for help.
APPENDIX (3): Types of Behaviour Involved in Cyber Bullying
These guidelines provide assistance in identifying and describing the types of behaviour involved in cyber bullying. The means of cyber bullying are constantly changing, and the following list of types of bullying behaviour can be expanded in light of the experience of the school community:
Types of Behaviour in Cyber Bullying…
- Hate Sites
- Encouraging other people to join the bullying by publishing someone's personal details or linking to their social network page.
- Abusive messages.
- Transmitting abusive and/or threatening messages.
- Chat rooms and discussion forums.
- Posting cruel and/or abusive comments about someone.
- Mobile Phones
- Sending humiliating and abusive video messages or photographic images.
- Making silent or abusive phone calls.
- Sending abusive text messages.
- Interactive gaming.
- Locking victims out of games.
- Spreading false rumours about someone.
- Hacking into someone’s account.
- Sending viruses.
- Sending hacking programs to another person.
- Unauthorised interference with a computer device.
- Abusing Personal Information
- Transmitting personal photos, videos, or emails.
- Blogs
- Posting blogs where others could see them without the blog owner's permission.